OpenAI's blog post claims that GPT-5 beats its previous models on several coding benchmarks, including SWE-Bench Verified (scoring 74.9 percent), SWE-Lancer (GPT-5-thinking scored 55 percent), and Aider Polyglot (scored 88 percent), which test the model's ability to fix bugs, complete freelance-style coding tasks, and work across multiple programming languages.
During the press briefing on Wednesday, OpenAI post-training lead Yann Dubois prompted GPT-5 to "create a beautiful, highly interactive web app for my partner, an English speaker, to learn French." He tasked the AI to include features like daily progress tracking and a variety of activities like flashcards and quizzes, and noted that he wanted the app wrapped up in a "highly engaging theme." After a minute or so, the AI-generated app popped up. While it was just one on-rails demo, the result was a sleek site that delivered exactly what Dubois asked for.
"It's a great coding collaborator, and also excels at agentic tasks," Michelle Pokrass, a post-training lead, says. "It executes long chains and tool calls effectively [which means it better understands when and how to use functions like web browsers or external APIs], follows detailed instructions, and provides upfront explanations of its actions."
OpenAI also says in its blog post that GPT-5 is "our best model yet for health-related questions." In three OpenAI health-related LLM benchmarks — HealthBench, HealthBench Hard, and HealthBench Consensus — the system card (a document that describes the product's technical capabilities and other research findings) states that GPT-5-thinking outperforms previous models "by a substantial margin." The thinking version of GPT-5 scored 25.5 percent on HealthBench Hard, compared with o3's 31.6 percent score. These scores are validated by two or more physicians, according to the system card.
The model also allegedly hallucinates less, according to Pokrass. Hallucination, a common issue for AI, is when a model confidently provides false information. OpenAI's safety research lead Alex Beutel adds that they've "significantly decreased the rates of deception in GPT-5."
"We've taken steps to reduce GPT-5-thinking's propensity to deceive, cheat, or hack problems, though our mitigations are not perfect and more research is needed," the system card says. "In particular, we've trained the model to fail gracefully when posed with tasks that it cannot solve."
The company's system card says that after testing GPT-5 models without access to web browsing, researchers found its hallucination rate (which they defined as the "percentage of factual claims that contain minor or major errors") to be 26 percent lower than GPT-4o's. GPT-5-thinking has a 65 percent reduced hallucination rate compared to o3.
For prompts that could be dual-use (potentially harmful or benign), Beutel says GPT-5 uses "safe completions," which prompts the model to "give as helpful an answer as possible, but within the constraints of remaining safe." According to Beutel, OpenAI did over 5,000 hours of red teaming and tested with external organizations to make sure the system was robust.
OpenAI says it now boasts nearly 700 million weekly active users of ChatGPT, 5 million paying business users, and 4 million developers utilizing the API.
"The vibes of this model are really good, and I think that people are really going to feel that," head of ChatGPT Nick Turley says. "Especially average people who haven't been spending their time thinking about models."
