DALL-E’s latest trick: extending the boundaries of paintings

OpenAI is introducing a new feature to its text-to-image generator. Here's how it works.
An example of artwork with an outpainting-filled background. August Kamp / OpenAI / Johannes Vermeer


OpenAI, developer of the AI text-to-image generator DALL-E 2, has just announced a new feature for the app called “outpainting.” It allows users to extend existing images and works of art with AI-generated content. It’s pretty exciting and hugely expands the capabilities of the tool. 

DALL-E 2 is one of the most popular text-to-image generators available at the moment. With more than a million users, it’s no wonder that content created with it seems to be everywhere. (Many other text-to-image generators are either in a closed beta, like Stable Diffusion; not available to the public, like Google’s Imagen; or much more limited in scope, like Craiyon.) 

DALL-E 2 takes a text prompt, like “an astronaut riding a horse in the style of Andy Warhol,” and generates nine 1,024-pixel by 1,024-pixel images that illustrate it. It uses a process called “diffusion,” in which it starts with randomly generated noise and then repeatedly edits it to match the salient features of the prompt as closely as possible. 
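
To make the diffusion idea a little more concrete, here is a toy sketch in Python. It is emphatically not OpenAI’s model: the “denoiser” below simply compares the image against a known target, whereas a real diffusion model uses a trained neural network to predict the noise at each step. The 64-by-64 gradient “target” is an invented stand-in for whatever the prompt describes.

```python
# Toy illustration of diffusion-style generation: start from pure noise and
# repeatedly nudge the image toward a target. In a real model like DALL-E 2,
# a trained network predicts the noise; here we cheat and use the target.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "the image the prompt describes" -- just a simple gradient.
target = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)

# Step 1: begin with randomly generated noise.
image = rng.standard_normal((64, 64))

# Step 2: iteratively edit the noise so it looks more like the target.
for step in range(50):
    predicted_noise = image - target       # a real model would *learn* this
    image = image - 0.1 * predicted_noise  # remove a little noise each step

print("mean absolute error after denoising:", np.abs(image - target).mean())
```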

Until now, users were limited in the size and aspect ratio of what they could create with DALL-E 2. The AI program could only generate 1,024-pixel by 1,024-pixel squares; anything larger or a different shape was out of the question. It was possible to use a feature called “inpainting” to modify details in existing artworks, but actually creating a bigger canvas involved manually stitching different sections together in an app like Photoshop. (For different aspect ratios, you could crop your image, but that reduced the overall resolution.)

Now with outpainting, the only limit users face, other than the content filters, is the number of credits they have. (Everyone gets 50 free generation credits during their first month and 15 to use every month after that. Blocks of 115 additional credits can be purchased for $15.) Generating an initial image takes one credit, as does every additional outpainted section. 

Outpainting works as an extension to DALL-E 2. Users select the 1,024-pixel by 1,024-pixel square area into which they want to extend the image and can specify additional prompts to guide the AI. For example, to add more of a background to the astronaut on a horse, you could change the prompt to “an astronaut riding a horse on the moon with stars in the background in the style of Andy Warhol.” 
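
In rough terms, “extending the image” means placing the original square on a larger canvas and leaving the new region blank for the model to fill in. The Python sketch below shows that setup using the Pillow imaging library; the file names and canvas size are made up for illustration, and this is not how the DALL-E 2 app works internally.

```python
# Rough sketch of preparing an image for outpainting: put the original
# 1,024 x 1,024 picture on a wider, transparent canvas. The transparent
# right half marks the area the model would be asked to fill in, guided
# by the (possibly updated) text prompt. File names are hypothetical.
from PIL import Image

original = Image.open("astronaut_on_horse.png").convert("RGBA")  # 1024 x 1024

# New canvas twice as wide, fully transparent to start with.
canvas = Image.new("RGBA", (2048, 1024), (0, 0, 0, 0))

# Keep the original on the left; the empty right half is the region to extend.
canvas.paste(original, (0, 0))
canvas.save("astronaut_to_outpaint.png")
```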

For each outpainted section, DALL-E 2 offers up four possibilities to choose from. If none of them work for the image, you can have it try again. 

Most impressively, outpainting “takes into account the image’s existing visual elements—including shadows, reflections, and textures.” This means that any details added “maintain the context” of the image and can really look like part of a coherent whole. 

In OpenAI’s announcement of outpainting, there’s a timelapse showing Girl with a Pearl Earring by Johannes Vermeer being extended to around 20 times its original size. Instead of a simple portrait, it shows a young woman standing in a cluttered house. It’s fascinating to see: so long as you don’t look too closely, it really does look like an extension of the original painting. The overall style and mood are spot on. It’s almost like an imaginary behind-the-scenes shot.

If you want to try outpainting, you will need to sign up for DALL-E 2. OpenAI is currently operating a rolling waitlist, and you can join it here.