OpenAI has added a new “outpainting” function to its text-to-image AI model DALL-E, which allows the system to create new visuals that extend beyond the borders of any image.
In the example above, you can see how DALL-E, with the help of human prompting, “imagines” what is outside the frame of Johannes Vermeer’s portrait “Girl with a Pearl Earring”. Note that even from the limited information provided by the portrait, the system matches Vermeer’s style, mimicking the shadows and highlights of the original.
In the timelapse below, you can also see how August Kamp, the artist responsible, had to stretch the image one small section at a time, often rerunning DALL-E's generations until she got the result she wanted. Not obvious from this video but definitely worth highlighting: the system does not generate these extensions on its own. As with all text-to-image AI, the model requires humans to describe the new visuals they want.
Outpainting can certainly be used to enhance original content, but many DALL-E users are playing with the feature simply to see what's outside the frame of popular images. (Scroll down for my absolute favorite example…)
From a broader perspective, outpainting doesn't really extend the core capabilities of text-to-image AI systems, but it does show how OpenAI plans to carve out its place in the growing market for them: by making usability a key pitch to users.
Many text-to-image AI models can perform the same essential function as outpainting, but, like DALL-E before this update, they require a bit of manual fiddling. Making outpainting as easy as possible helps DALL-E stand out from growing competition from smaller but comparable systems like Midjourney and Stable Diffusion.
DALL-E is currently available through a beta program with more than one million users. Every beta user gets 50 free image generations in their first month and 15 free generations each month after that; additional generations can be purchased at 115 for $15.
In the meantime, though: what if the Quaker Oats guy were a busty barmaid? Wonderful: