
OpenAI is releasing two new AI models, o3 and o4-mini, designed to reason step by step through difficult coding and image problems.
OpenAI has introduced two advanced AI models that simulate human-like reasoning, improving their ability to tackle intricate coding problems and visual tasks. The move comes as the company accelerates its innovation pipeline to counter growing competition from rivals in the U.S. and China.
OpenAI unveiled two new models Wednesday: o3, which uses extended computation to solve complex science, math, and coding challenges, and o4-mini, a lighter, more efficient alternative. Both are now available to paid users.
OpenAI says o3 and o4-mini are its first reasoning models that can use all of ChatGPT’s tools, including web search, image generation, and image analysis. They are also the first to incorporate visuals directly into their reasoning, allowing them to work with blurry pictures, rotate them, or zoom in as needed.
In response to the success of DeepSeek’s open-source R1, OpenAI CEO Sam Altman announced plans to release an open reasoning AI model in the coming months. Separately, Altman indicated that GPT-5, the company’s next-generation model, is on track for release in a few months.
Coding has become one of the fastest-growing uses for generative AI, making it an important area for OpenAI.