Dall-E, named in homage to Salvador Dalí and Wall-E, is an artificial intelligence program from the American research lab OpenAI, co-founded by Elon Musk. The program has been trained to generate an image from a text caption.
This neural network is capable of translating absurd concepts and of conceiving objects that don’t exist. It could prove a major boon for the future of design, potentially exceeding the limits of our imagination.
If you talk about flowers in a text, Dall-E will create pictures of flowers. These flowers may be inspired by reality, but won’t necessarily be copies. In fact, this AI program produces its own graphic worlds.
Dall-E makes use of the GPT-3 language model, which, in its full version, has over 175 billion parameters. At its release it was the largest language model ever built, far outstripping the 17 billion parameters of the previous record-holder, Microsoft's Turing-NLG. Dall-E itself draws on a 12-billion-parameter version of GPT-3 to generate images from text descriptions.
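To make the caption-to-image workflow concrete, here is a minimal sketch of how a client might package a text description as a generation request. The function name, parameter names, and default values are illustrative assumptions, not OpenAI's actual API; a real service call (network, authentication) is omitted.

```python
# Hypothetical sketch of a text-to-image request payload.
# build_image_request and its parameters are assumptions for illustration,
# not an actual OpenAI client function.

def build_image_request(caption, n=1, size="1024x1024"):
    """Package a text caption as a text-to-image generation request.

    caption -- the text description the model should illustrate
    n       -- how many candidate images to request
    size    -- requested output resolution as "WIDTHxHEIGHT"
    """
    return {"prompt": caption, "n": n, "size": size}

# Example: a caption describing an object that does not exist.
request = build_image_request("an armchair in the shape of an avocado", n=4)
print(request["prompt"])  # the caption is passed through verbatim
```

In a real system, a payload like this would be sent to the model's endpoint, which would return the generated images; the point here is simply that the only creative input is the caption itself.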
The impact of Dall-E on creative industries could provoke a major shake-up for professionals in the sector. This potential is so enormous that it is also raising questions. What becomes of illustrators, designers and other artists, for example? A content-generating model can produce numerous proposals in moments, leaving humans unable to keep pace.
A tool to be honed