Unlocking useful and valuable image generation with a natively multimodal model capable of precise, accurate, photorealistic outputs.
At OpenAI, we have long believed image generation should be a primary capability of our language models. That’s why we’ve built our most advanced image generator yet into GPT‑4o. The result—image generation that is not only beautiful, but useful.
From the first cave paintings to modern infographics, humans have used visual imagery to communicate, persuade, and analyze—not just to decorate. Today’s generative models can conjure surreal, breathtaking scenes, but struggle with the workhorse imagery people use to share and create information. From logos to diagrams, images can convey precise meaning when augmented with symbols that refer to shared language and experience.
GPT‑4o image generation excels at accurately rendering text, precisely following prompts, and leveraging 4o’s inherent knowledge base and chat context—including transforming uploaded images or using them as visual inspiration. These capabilities make it easier to create exactly the image you envision, helping you communicate more effectively through visuals and advancing image generation into a practical tool with precision and power.
We trained our models on the joint distribution of online images and text, learning not just how images relate to language, but how they relate to each other. Combined with aggressive post-training, the resulting model has surprising visual fluency, capable of generating images that are useful, consistent, and context-aware.
A picture is worth a thousand words, but sometimes generating a few words in the right place can elevate the meaning of an image. 4o’s ability to blend precise symbols with imagery turns image generation into a tool for visual communication.
Because image generation is now native to GPT‑4o, you can refine images through natural conversation. GPT‑4o can build upon images and text in chat context, ensuring consistency throughout. For example, if you’re designing a video game character, the character’s appearance remains coherent across multiple iterations as you refine and experiment.
GPT‑4o’s image generation follows detailed prompts closely. While other systems struggle with roughly 5-8 objects, GPT‑4o can handle 10-20 distinct objects, and its tighter binding of objects to their attributes and relations allows for finer control.
GPT‑4o can analyze and learn from user-uploaded images, seamlessly integrating their details into its context to inform image generation.
Native image generation enables 4o to link its knowledge between text and images, resulting in a model that feels smarter and more efficient.
Training on images reflecting a vast variety of image styles allows the model to create or transform images convincingly.
Our model isn’t perfect. We’re aware of several limitations, which we will work to address through model improvements after the initial launch.

We’ve noticed that GPT‑4o can occasionally crop longer images, like posters, too tightly, especially near the bottom.

Like our other text models, image generation can also make up information, especially in low-context prompts.

When generating images that rely on its knowledge base, it may struggle to accurately render more than 10-20 distinct concepts at once, such as a full periodic table.


The model sometimes struggles to render non-Latin text; characters can be inaccurate or hallucinated, especially in longer or more complex passages.

We’ve noticed that requests to edit specific portions of a generated image, such as fixing typos, are not always effective; edits may also alter other parts of the image in ways that were not requested, or introduce new errors. We’re working on bringing more precise editing control to the model.
We’re aware of a bug where the model struggles to maintain consistency when editing faces in user-uploaded images, but we expect this to be fixed within the week.

The model is known to struggle when asked to render detailed information at very small sizes.
In line with our Model Spec, we aim to maximize creative freedom by supporting valuable use cases like game development, historical exploration, and education—while maintaining strong safety standards. At the same time, it remains as important as ever to block requests that violate those standards. Below are evaluations of additional risk areas where we’re working to enable safe, high-utility content and support broader creative expression for users.
Provenance via C2PA and internal reversible search
All generated images come with C2PA metadata, which identifies an image as coming from GPT‑4o, to provide transparency. We’ve also built an internal search tool that uses technical attributes of generations to help verify whether content came from our model.
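To illustrate the kind of signal provenance tooling can look at, here is a minimal Python sketch that scans a PNG’s chunk list for the `caBX` chunk, which the C2PA specification uses to embed manifests in PNG files. This is an illustrative assumption about one file format only; it merely detects the chunk’s presence and is no substitute for full manifest and signature validation with a proper C2PA SDK or the `c2patool` CLI.

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"  # fixed 8-byte PNG file signature

def png_chunk_types(data: bytes):
    """Yield the 4-byte type code of every chunk in a PNG byte stream."""
    if not data.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    pos = len(PNG_SIG)
    while pos + 8 <= len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])  # payload length
        yield data[pos + 4:pos + 8]                          # chunk type
        pos += 8 + length + 4                                # header + payload + CRC

def has_c2pa_manifest(data: bytes) -> bool:
    """True if the PNG carries a 'caBX' chunk (C2PA JUMBF container)."""
    return b"caBX" in png_chunk_types(data)
```

A real verifier would additionally parse the JUMBF box inside that chunk and validate the manifest’s cryptographic signature; chunk presence alone proves nothing about authenticity.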
Blocking the bad stuff
We’re continuing to block requests for generated images that may violate our content policies, such as child sexual abuse materials and sexual deepfakes. When images of real people are in context, we have heightened restrictions regarding what kind of imagery can be created, with particularly robust safeguards around nudity and graphic violence. As with any launch, safety is never finished and is rather an ongoing area of investment. As we learn more about real-world use of this model, we’ll adjust our policies accordingly.
For more on our approach, visit the image generation addendum to the GPT‑4o system card.
Using reasoning to power safety
Similar to our deliberative alignment work, we’ve trained a reasoning LLM to work directly from human-written and interpretable safety specifications. We used this reasoning LLM during development to help us identify and address ambiguities in our policies. Together with our multimodal advancements and existing safety techniques developed for ChatGPT and Sora, this allows us to moderate both input text and output images against our policies.
4o image generation rolls out starting today to Plus, Pro, Team, and Free users as the default image generator in ChatGPT, with access coming soon to Enterprise and Edu. It’s also available to use in Sora. For those who hold a special place in their hearts for DALL·E, it can still be accessed through a dedicated DALL·E GPT.
Developers will soon be able to generate images with GPT‑4o via the API, with access rolling out in the next few weeks.
Creating and customizing images is as simple as chatting using GPT‑4o – just describe what you need, including any specifics like aspect ratio, exact colors using hex codes, or a transparent background. Because this model creates more detailed pictures, images take longer to render, often up to one minute.
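For developers who plan to script this once API access arrives, the specifics above (exact colors as hex codes, aspect ratio, transparency) can be folded into a prompt programmatically. The helper below and the commented-out call are an illustrative sketch, not documented usage: the model identifier for 4o image generation in the API is an assumption, since it has not been announced here.

```python
def build_image_prompt(subject: str, hex_color: str, aspect: str,
                       transparent: bool = False) -> str:
    """Compose a prompt that spells out the specifics GPT-4o can honor."""
    parts = [subject, f"primary color {hex_color}", f"aspect ratio {aspect}"]
    if transparent:
        parts.append("transparent background")
    return ", ".join(parts)

prompt = build_image_prompt(
    "flat vector logo of a hummingbird", "#1E90FF", "1:1", transparent=True
)

# Hypothetical call via the official OpenAI Python SDK once access rolls out;
# the model name below is a placeholder, not a documented identifier:
# from openai import OpenAI
# client = OpenAI()
# result = client.images.generate(model="<4o-image-model>", prompt=prompt)
```

Keeping the specifics in one structured string makes it easy to iterate on a design while holding colors and framing constant between generations.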