GPT-5 is worse than GPT-4.1-mini for text and worse than Sonnet 4 for coding
It seems that OpenAI have got the PR machine working amazingly. The Cursor CEO said it's the best, as did Simon Willison (https://simonwillison.net/2025/Aug/7/gpt-5/).
But I've found it terrible. For coding (in Cursor), it's slow, often fails at tool calls (no MCP, just stock Cursor tools), and stored some new application state in globalThis, something no model has ever attempted in over a year of very heavy Cursor / Claude Code use.
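For anyone wondering why that's a red flag, here's a hypothetical sketch of the pattern (not the actual code it wrote) next to what you'd normally expect:

```typescript
// Hypothetical sketch of the anti-pattern, not the code GPT-5 actually produced.

// What it reached for: stashing mutable app state on the global object,
// so every module (and every test) shares it and can clobber it invisibly.
(globalThis as any).appCache = new Map<string, string>();
(globalThis as any).appCache.set("lastResult", "ok");

// What you'd normally expect: state owned by one module and exposed explicitly.
const cache = new Map<string, string>();

export function remember(key: string, value: string): void {
  cache.set(key, value);
}

export function recall(key: string): string | undefined {
  return cache.get(key);
}
```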
For a summarization/insights API that I work on, it was way worse than gpt-4.1-mini. I tried both mini and full GPT-5, with different reasoning settings. It didn't follow instructions, and output was worse across all my evals, even after heavy prompt adjustment. I did a lot of sampling and the results were consistently bad.
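The comparison was roughly shaped like the sketch below. This is a hedged reconstruction: the prompt, documents, and scoring are placeholders (my real evals are domain-specific), and it assumes the openai Node SDK's chat.completions.create with its reasoning_effort parameter.

```typescript
// Rough sketch of the comparison setup -- prompts, data, and scoring are
// placeholders; only the model names and reasoning_effort are real parameters.
import OpenAI from "openai";

const client = new OpenAI();

async function summarize(
  model: string,
  doc: string,
  effort?: "low" | "medium" | "high"
): Promise<string> {
  const res = await client.chat.completions.create({
    model,
    messages: [
      { role: "system", content: "Summarize the document and list the key insights." },
      { role: "user", content: doc },
    ],
    // reasoning_effort only applies to reasoning-capable models (gpt-5 family),
    // so it is omitted for the gpt-4.1-mini baseline.
    ...(effort ? { reasoning_effort: effort } : {}),
  });
  return res.choices[0].message.content ?? "";
}

// For each document, generate a baseline and a candidate summary, then hand
// both to whatever eval/scoring harness you use (stubbed here as a log line).
async function compare(docs: string[]): Promise<void> {
  for (const doc of docs) {
    const baseline = await summarize("gpt-4.1-mini", doc);
    const candidate = await summarize("gpt-5-mini", doc, "medium");
    console.log({ baselineChars: baseline.length, candidateChars: candidate.length });
  }
}
```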
Am I the only one? Has anyone seen actual real-world benefits of GPT-5 vs other models?
I feel like they should have let GPT-5 overlap in experimental mode for a month or so. It took a while to get the kinks out of GPT-4 before people trusted it. Just switching it on is really hurting their brand.
The fact they didn’t do this makes me think their finances are in very bad shape.
I agree, I just don't understand how the team at Cursor can say this:
“GPT-5 is the smartest coding model we've used. Our team has found GPT-5 to be remarkably intelligent, easy to steer, and even to have a personality we haven’t seen in any other model. It not only catches tricky, deeply-hidden bugs but can also run long, multi-turn background agents to see complex tasks through to the finish—the kinds of problems that used to leave other models stuck. It’s become our daily driver for everything from scoping and planning PRs to completing end-to-end builds.”
The cynic in me thinks that Cursor had to give positive PR in order to secure better pricing...
I tried it with cursor-agent, their CLI, and it generated better code than expected. YMMV. It was more thoughtful and strategic than the other frontier models.
Planning was OK for me: much slower than Sonnet, but comparable in quality. But some of the code it produces is just terrible. Maybe the routing layer sends some code-generation tasks to a much smaller model, but then I don't get why it's so slow!
The only thing that seems better to me is the parallel tool calling.
GPT-5 isn’t really a brand-new model in the way people think. From what I’ve seen, the goal was more about reducing costs and unifying the interface than releasing a totally different architecture. Under the hood it is still routing to models we already know, just picking what it thinks will give the “best” result for the request.
That can be fine for a lot of general use cases, but if you’re working in specific domains like coding agents or high-precision summarization, that routing can actually make results worse compared to sticking with a model you know performs well for your workload.
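As a toy illustration of that routing idea (entirely my guess at the shape, not OpenAI's actual implementation): a dispatcher scores each request and forwards it to either a cheap fast model or a heavier reasoning model.

```typescript
// Toy sketch of the routing idea described above -- purely illustrative.
// A crude heuristic decides whether a request goes to the cheap fast model
// or the heavier reasoning model; the caller only ever sees one "model".
type ModelId = "fast-cheap-model" | "deep-reasoning-model";

interface ChatRequest {
  prompt: string;
  userRequestedThinking?: boolean;
}

function routeModel(req: ChatRequest): ModelId {
  const looksHard =
    req.userRequestedThinking === true ||
    req.prompt.length > 4000 ||
    /\b(prove|debug|refactor|step by step)\b/i.test(req.prompt);
  return looksHard ? "deep-reasoning-model" : "fast-cheap-model";
}

// Because the decision is internal, two superficially similar prompts can get
// very different quality -- which is exactly the risk for specialized workloads.
function handle(req: ChatRequest): string {
  return `dispatching to ${routeModel(req)}`;
}
```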
That's not what OpenAI are claiming. They're claiming there are two new flagship models and a router that routes between them.
"GPT‑5 is a unified system with a smart, efficient model that answers most questions, a deeper reasoning model (GPT‑5 thinking) for harder problems, and a real‑time router that quickly decides which to use"
It solved a huge bug I've been struggling with.
Had Sonnet 4 not been able to?
No, it kept going in circles... I spent like 3 weeks trying to fix it. Got access to GPT-5 yesterday and all the major bugs are resolved.
Interesting. I tried it on some failing unit tests, but it made the problem worse. Sonnet was able to fix both the failing tests and the new problems GPT-5 introduced. I used Claude Code for Sonnet and Cursor Agent for GPT-5. Maybe Cursor Agent is just bad?
I don't know, I use Roo Code.
Sure.
And yet the media keeps using the term "exponential improvement"...