Earlier this year, Karen X. Cheng, a video director in San Francisco, was commissioned by a client to make an augmented reality dress for special events. The idea was to “hide” an animation of a beating heart within the dress, which an Instagram filter would reveal whenever a phone camera was pointed at the wearer’s chest.
Cheng, who specializes in creating videos using complex editing techniques, enlisted a Mexican artist named Yunuen Esparza to help bring the dress to life. Ordinarily, Cheng would have sent Esparza a written description or some reference photos from Google to explain her vision. This time, however, she turned to tech’s most powerful new art-generating machine for help: Dall-e 2.
Trained on roughly 650 million pictures and captions scraped from the internet, Dall-e 2—the second-generation artificial intelligence program launched by the Sam Altman–led research lab OpenAI—can produce an original image of almost anything humans are capable of describing in words (so long as the words don’t mention public figures, sex acts or violence).