Guess What I'm Thinking

I fell down the rabbit hole of Artbreeder, a platform which uses a generative adversarial network to seamlessly “breed” images together. The website allows users to (lightly) control the process and document the results. Enough human images have been fed in that it can generate plausible imaginary people, just as well as the GAN at thispersondoesnotexist.com. But when applied to landscapes, objects, creatures, and textures, it makes results just as grotesque as the ones from DeepDream back in 2015. Instead of DeepDream’s screaming dog-oriented psychedelia, Artbreeder’s images often hover just out of the reach of plausibility, making things, arrangements, and planes that seem to call out to be realized.
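Under the hood, this kind of “breeding” is usually interpolation between points in the generator’s latent space: each parent image corresponds to a latent vector, and children are decoded from points in between. A minimal sketch of that blending step, assuming nothing about Artbreeder’s actual internals (the function and dimensions here are illustrative; spherical interpolation is a common choice because Gaussian latent codes concentrate near a hypersphere):

```python
import numpy as np

def blend_latents(z1, z2, t):
    """Spherical interpolation (slerp) between two latent vectors.

    t=0 returns z1, t=1 returns z2; intermediate t values trace an arc
    between the two codes rather than a straight chord, which tends to
    keep blended samples on the region of latent space the generator
    was trained on.
    """
    z1, z2 = np.asarray(z1, dtype=float), np.asarray(z2, dtype=float)
    # Angle between the two latent vectors (after normalizing)
    cos_omega = np.dot(z1 / np.linalg.norm(z1), z2 / np.linalg.norm(z2))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return z1  # (nearly) parallel vectors: nothing to blend
    return (np.sin((1 - t) * omega) * z1 + np.sin(t * omega) * z2) / np.sin(omega)

# Two hypothetical "parent" latent codes; a child is decoded from the midpoint.
rng = np.random.default_rng(0)
parent_a = rng.standard_normal(512)
parent_b = rng.standard_normal(512)
child = blend_latents(parent_a, parent_b, 0.5)
```

Artbreeder’s sliders for mixing “genes” can be pictured as exposing the `t` parameter (and per-attribute directions) of something like this, with a trained generator turning the blended vector back into pixels.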

While the site currently allows uploads only of human portraits, it is reasonable to assume that the field will open, and you will be able to mix your own images. Given the pace at which the field grows, I would expect that before long you will have the ability to do the same in 3D, and then 4D (if the server farms don’t give up and explode). This would make the work of the contemporary architecture student rather easier, since such a system could quickly and reliably generate multi-colored booleaned voids.

When I set about Artbreeding I naturally started with landscapes. If you want to do so too, I would advise, for the time being, steering away from creating in Landscapes, which operates out of a shallow pool of fairly stock photography and painting, and going straight to General, where you can mix “genes” of snakes, clothes, and “medium objects” with your landscapes. The results, being fairly low resolution, are best viewed at small scale, where your mind starts to fill in the blanks to make them plausible – how would this image come to be?

In the process of scrolling through and blending images, you find a characteristic blend of exhilaration and dismay that will need its own word sooner or later, as such platforms proliferate; you see jaw-dropping things that completely lack the assumed scaffolding of work behind them, be that geological action over millennia or the blood, sweat, and tears in a painting on canvas. The work still exists – in every image uploaded to the system, in the design of the system itself, in the physical work of computation – but it has a tenuous relationship to what is output.

To put it another way, each individual image seems cheap, on pace with the cheapening of music – you do not make it yourself, or even pay for it upfront, or even wait an appreciable time for it – and so you do not reckon with its individual value. It has been suggested, most recently in connection with WeWork’s algorithmic generation of office space, that algorithmic design will serve to sever the designer from tedious, repetitive, and predetermined work. Who would wish that any further interns put work into detailing parking lots? But then, how much of the value of design is secreted away in tedious, repetitive, and predetermined work? If you were to obey the call of reverse-engineering the products of Artbreeder, you would be staking endless time on transcribing a whim, one that might as readily have been spit out any other way.

And worst of all, after Artbreeding for a while, you go out in public and everything seems Artbred: minutely varied instances of the same templates thrown plausibly here and there. How much effort just to repeat what is fundamentally the same!

(October 2019)