Thanks for the correction, looks like I misremembered. But the limits are low enough with Sonnet that, I imagine, you can barely do anything serious with Opus on the Pro plan.
Personally I bit the bullet and went with the Max plan for Claude Code. After tax it costs me $108, less than I earn from one billable hour. I have been punishing it for the last two months; it defaults to Opus 4.5, and while I occasionally hit my session limit (it resets after an hour or so), I can't even scratch the surface of my monthly usage limit.
I've noticed a lot of these posts tend to go Codex vs. Claude, but since the author is someone who does AI workshops, I'm curious why Cursor is left out of this post (and posts like this more generally).
From my personal experience I find Cursor to be much more robust, because rather than "either/or" it's both, and I can switch depending on the time, the task, or whatever the newest model is.
The same way people often try to avoid vendor lock-in elsewhere in software, Cursor gives you that freedom. But maybe I'm on my own here, as I don't see it come up naturally in posts like these very much.
I got a student subscription to Cursor and, after giving it a good 6 hours, I've abandoned it.
I extremely dislike the way it goes forth and bolts. I don't trust these tools enough to just point them in a direction and say go; I like to be a human in the loop. Perhaps the use case I was working on was difficult (quite an old React Native library upgrade across a medium-sized codebase), but I eventually cracked it with Claude; Cursor with both Anthropic and Gemini models left me with an absolute mess.
Even when I repeatedly asked in the prompt to keep me in the loop, it kept going haywire.
Speaking from personal experience and from talking to other users: the vendors' own agents/harnesses are just better, and they are customized for their own models.
What kinds of tasks do you find this to be true for? For a while I was using Claude Code inside the Cursor terminal, but I found it to be basically the same as just using the same Claude model in Cursor itself.
Presumably the harness can't be doing THAT much differently, right? Or rather, which responsibilities of the harness could differentiate one harness from another?
This becomes clearer for me with harder problems or long running tasks and sessions. Especially with larger context.
Examples that come to mind are how the context is filled up and how compaction works. Both Codex and Claude Code ship improvements here that are specific to their own models, and I'm not sure how that is reflected in tools like Cursor.
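To make the compaction point concrete, here is a deliberately naive sketch of one common strategy: summarize the oldest turns and keep the most recent ones verbatim. This is not how Codex or Claude Code actually implement it; `count_tokens` and `summarize` are hypothetical helpers standing in for whatever tokenizer and recap step a real harness uses.

```python
# Illustrative only: a naive compaction pass, not either vendor's implementation.
def compact(messages, max_tokens, count_tokens, summarize, keep_last=6):
    """Fold older turns into a summary once the transcript nears the context limit.

    `count_tokens` and `summarize` are hypothetical helpers: a tokenizer and an
    extra LLM call (or heuristic) that produces a terse recap of the old turns.
    """
    total = sum(count_tokens(m["content"]) for m in messages)
    if total <= max_tokens or len(messages) <= keep_last:
        return messages
    old, recent = messages[:-keep_last], messages[-keep_last:]
    return [{"role": "system", "content": "Summary of earlier work: " + summarize(old)}] + recent
```

How aggressively to summarize, what to pin (system prompt, plan, open files), and when to trigger the pass are exactly the kinds of model-specific tuning decisions the comment above is pointing at.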
Heya, author here! That's a great question! I fully understand the vendor lock-in concern, but I'll just quickly note that when it comes to a first workshop I do whatever makes the person most comfortable. I let the attendee choose the tool they want — with a slight nudge towards Codex or Claude Code for reasons I'll mention below. But if they want to do the workshop in Cursor, VS Code, or heck MS Paint — I'll try to find a way to make it work as long as it means they're learning.
I actually started teaching these workshops by using Cursor, but found that it fell short for a few reasons.
Note: The way my workshops work is that you have three hours to build something real. It may be scoped down to a single feature, a small app, or a high-quality prototype, but you'll walk away with what you wanted to build. More importantly, you'll have learned the fundamentals of working with AI in the process, so you can continue on your own and see meaningful results. We go through various exercises to really understand good prompting (since everyone thinks they're good at it but they rarely are), how to build context for models, and explore the landscape of tools that you can use to get better results. A lot of that time is actually spent in a Google Doc that I've prepped with resources — and the work we do there makes the code practically write itself by the time we're done.
Here's a short list of why I don't default to Cursor:
1. As I noted in another comment, the model performance is just so much better [^1] when accessed directly through Codex and Claude Code, which means more promising results more quickly. Previously the workshops took 3-4 hours just to finish; now it's a solid 3 with time to ask questions afterwards. You can't beat this experience, because it gives the student more time to pause and ask questions, let what they've done sink in, and not spend time trying to understand the tools just to see results.
1a. Setting up Cursor took people a long time. The process for getting a good setup is fairly involved — especially for someone non-technical. This may not be as big of a deal for developers already using Cursor — but even they don't know many of the settings and tweaks needed to make Cursor great out of the box.
2. The user experience of dropping a prompt into Codex/Claude Code and watching it start solving a problem is pretty amazing. I love GUIs — I spend my days building one [^3] — but the TUI melting everything away to just being chat is an advantage when you have no mental model for how this stuff works.
3. As I said in #1, the results are just better. That's really the main reason!
Not to toot my own horn, but the process works. These are all testimonials in the words of people who have attended a workshop, and I'm very proud of how people not only learn during the workshop but how it sets them off on a good path afterwards. [^2] I have people messaging me 24 hours later to tell me that they built an app their partner has wanted for years, or that they've completed the app we started and it does everything they dreamed of, and I hear more progress over the weeks and months after because I urge them to keep sending me their AI wins. (It's truly amazing how much they grow, and I now have attendees teaching ME things — the ultimate dream of a teacher, knowing you gave them the nudge they needed.)
Hope that helps and isn't too much of an ad — I really just want to make it clear that I try to do what works best and if the best way to help people learn changes I will gladly change how I work. :)
I feel you brother/sister. I actually pay for Claude Code Max and also for the $20/mo Cursor plan. I use Claude Code via the VSCode extension running within the Cursor IDE. 95% of my usage is Claude Code via that extension (or through the CLI in certain situations) but it's great having Cursor as a backup. Sometimes I want to have another model check Claude's work, for example.
GitHub Copilot also lets you use both models, Codex and Claude, plus Gemini on top.
Cursor has this "tool for kids" vibe. It's also more about the past ("tab, tab, enter" low-level coding) versus the future ("implement task 21" high-level delegating).
With Claude Code I'll ask it to read a couple of files and do x similar to existing thing y. It takes a few moments to read the files and then just does it. All done in a minute or so.
I tried something similar with Codex and it spent 20 minutes reading around in bits of this file and that. I didn't bother letting it finish. Is this normal? Do I have something misconfigured? This was a couple of months ago.
I tried so hard to make Codex work, after the glowing reviews (not just from Internet randos/potential-shills, though; people I know well, also).
It's objectively worse for me on every possible axis than Claude Code. I even wondered if maybe I was on some kind of shadow-ban nerf-list for making fun of Sam Altman's WWDC outfit in a tweet 20 years ago. (^_^)
I don't love Claude's over-exuberant personality, and prefer Codex's terse (arguably sullen) responses.
But they both fuck up often (as they all do), and unlike Claude Code (Opus, always), Codex has been net-negative for me. I'm not speed-sensitive, I round-robin among a bunch of sessions, so I use the max thinking option at all times, but Codex 5.1 and 5.2 write just worse code for me, and worse than that, they're worse at code review, to the point that it negated whatever gains I had gotten from it.
While all of them miss a ton of stuff (of course), and LLM code review just really isn't good unless the PR is tiny — Claude just misses stuff (fine; expected), while Codex comes up with plausible edge-case database query concurrency bugs that I have to look at, and squint at, and then think hmm fuck and manually google with kagi.com for 30 minutes (LIKE AN ANIMAL) only to conclude yeah, not true, you're hallucinating bud, to which Codex is just like. "Noted; you are correct. If you want, I can add a comment to that effect, to avoid confusion in future."
So for me, head-to-head, Claude murders Codex — and yet I know that isn't true for everybody, so it's weird.
What I do like Codex for is reviewing Claude's work (and of course I have all of them review my own work, why not?). Even there, though, Codex sometimes flags nonexistent bugs in Claude's code — less annoying, though, since I just let them duke it out, writing tests that prove it one way or the other, and don't have to manually get involved.
I must be doing something wrong. When I last tried to use Codex 5.2 (via Cursor), no amount of prompting could get it to stop aggressively asking me for permission to do things. This seems to be the opposite of the article's claim, which is that Codex is better for long-running, hands off tasks.
Heya, I'm the author of the post! This was probably unintentional but I think you're making a really valuable observation that will be helpful to others.
The models Cursor provides to use in their product are intermediated versions of models that companies like OpenAI and Anthropic offer. They are technically using Codex, but not in the way that they would be if you were in a tool like Codex (CLI) or Claude Code.
If you ask Cursor to solve a tough problem, Cursor will break the problem down into a different problem before sending that request to OpenAI so they can use Codex. They do this for two reasons:
1. To save money. By restructuring the prompt they can use fewer tokens, which saves them money on running Cursor, since they are the ones paying for those tokens out of your subscription cost.
2. [Based on things the Cursor team has said] They believe they can construct a better intermediate prompt that is more representative of the problem you want to solve.
This extra level of abstraction means that you are not getting the best results when you use a tool like Cursor. OpenAI and Anthropic are running their harnesses, Codex CLI and Claude Code, at a loss (because VC), but providing better results. It's not the best way to make money, but it's a great way to build mindshare and hopefully win customers for life. (People are fickle and cheap though, so I doubt this is a customers-for-life strategy the way people keep buying the same brand of deodorant once they start buying Dove.)
Happy to answer any questions you may have, but mostly I would highly suggest trying out Codex CLI and Claude Code to get a better feel for what I'm saying — and also to get more out of your AI tools. :)
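To illustrate only the kind of intermediation being described above, here is a hypothetical sketch. To be clear, this is not Cursor's actual pipeline, just the general shape of "rewrite the request with a cheaper model before forwarding it to the frontier model"; both `cheap_model` and `frontier_model` are made-up callables.

```python
# Hypothetical sketch of prompt intermediation; NOT Cursor's actual implementation.
# `cheap_model` and `frontier_model` are placeholder callables that take a prompt
# string and return a completion string.
def intermediated_request(user_prompt: str, cheap_model, frontier_model) -> str:
    rewritten = cheap_model(
        "Rewrite the following request as a compact, self-contained coding task, "
        "dropping anything irrelevant so it uses as few tokens as possible:\n\n"
        + user_prompt
    )
    # The frontier model only ever sees the rewritten, cheaper-to-process prompt.
    return frontier_model(rewritten)
```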
This blog post lacks almost any form of substance.
It could've been shortened to: Codex is more hands-off, I personally prefer that over Claude's more hands-on approach. Neither is bad. I won't bring you proof or examples; this is just my opinion based on my experience.
Heya, author here! Admittedly this was a quick blog post I fired off, much shorter than my usual writing.
My goal wasn't to create a complete comparison of both tools — but to provide a little theory about a behavior I'm seeing. You're (absolutely) right that it's a theory, not a study, and I made sure to state that in the post. :)
Mostly though, the conclusion describes pretty succinctly why I wrote the post: as a way to get more people to try more of the tools so they can adequately form their own conclusions.
> I think back to coworkers I’ve had over the years, and their varying preferences. Some people couldn’t start coding until they had a checklist of everything they needed to do to solve a problem. Others would dive right in and prototype to learn about the space they would be operating in.
> The tools we use to build are moving fast and hard to keep up with, but we’ve been blessed with a plethora of choices. The good news is that there is no wrong choice when it comes to AI. That’s why I don’t dismiss people who live in Claude Code, even though I personally prefer Codex.
> The tool you choose should match how you work, not the other way around. If you use Claude, I’d suggest trying Codex for a week to see if maybe you’re a Codex person and didn’t know it. And if you use Codex, I’d recommend trying Claude Code for a week to see if maybe you’re more of a Claude person than you thought.
> Maybe you’ll discover your current approach isn’t the best fit for you. Maybe you won’t. But I’m confident you’ll find that every AI tool has its strengths and weaknesses, and the only way to discover what they are is by using them.
Hey! Didn't mean my comment negatively towards you in any way, though I now realize it might've come across as such. Blogs with opinions based on experiences alone are absolutely fine, thanks for sharing.
What I did mean is to indicate that your blog felt like a HN comment to me, where I generally expect a HN link to be news or facts that subsequently spark a discussion.
At the end of your post I guess I was hoping or expecting facts or examples, indicating it was engaging enough to read to the end.
Happy holidays!
It’s funny because my use of Claude Code is the opposite. I use slash commands with instructions to find context, and basically never interact with it while it is doing its thing.
> Codex is more hands off, I personally prefer that over claude's more hands-on approach
Agree, and it's a nice reflection of the two companies' goals. OpenAI is about AGI, and they have insane pressure from investors to show that that is still the goal; hence Codex: when it works, they can say "look, it worked for 5 hours!", disregarding that 90% of the time it's just pure trash.
Anthropic/Boris, meanwhile, is more about value now: more grounded/realistic, providing a more consistent, hence more trustable/intuitive, experience that you can steer (even if Dario says the opposite). The ceiling/best-case scenario of a Claude Code session is maybe a bit lower than Codex's, but with less variance.
Well, if you had tried using GPT/Codex for development you would know that the output from those 5 hours would not be 90% trash; it would be close to 100% pure magic. I'm not kidding. It's incredible as long as you use a proper analyze-plan-implement-test-document process.
I checked out Codex after the glowing reviews here around September/October and it was, all in all, a letdown (this was writing greenfield modules in a larger existing codebase).
Codex was very context-efficient, but also slow (even though I used the highest thinking effort), and it barely adapted to the wider codebase at all (even when I pointed it at the files to reference / get inspired by). Lots of defensive programming, hacky implementations, and not adapting to the codebase style and patterns.
With Claude Code and starting each conversation by referencing a couple existing files, I am able to get it to write code mostly like I would’ve written it. It adapts to existing patterns, adjusts to the code style, etc. I can steer it very well.
And now with the new cheaper, faster Opus it's also quite an improvement. If you kicked off Sonnet with a long list of constraints (e.g. 20), it would often ignore many. Opus is much better at “keeping more in mind” while writing the code.
Note: yes, I do also have an agent.md / claude.md. But I also heavily rely on warming the context up with some context dumping at conversation starts.
All Codex conversations need to be caveated with the model used, because it varies significantly. Codex requires very little tweaking, but you do need to select the highest-thinking model if you're writing code, and I recommend the highest-thinking NON-code model for planning. That's really it. It takes tasks up to 5-20 minutes, but it's usually great.
Then I ask Opus to take a pass and clean up to match codebase specs and it’s usually sufficient. Most of what I do now is detailed briefs for Codex, which is…fine.
I will jump between a ChatGPT window and a VS Code window with the Codex plugin. I'll create an initial prompt in ChatGPT, which will ask the coding agent to audit the current implementation, then draft an implementation plan. The plan bounces between Chat and Codex about 5 times, with Chat telling Codex how to improve. Then Codex implements and creates an implementation summary, which I give to Chat. Chat then asks for a couple of additional fixes, and then it's done.
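For readers who want to see the shape of that loop, here is a rough API-level approximation. It is not the commenter's actual setup (they bounce between the ChatGPT UI and the Codex VS Code plugin by hand); it assumes the OpenAI Python SDK, and the model names and task are placeholders.

```python
# Rough approximation of the manual "bounce the plan between Chat and Codex" loop
# described above. Not the commenter's actual setup; model names are placeholders.
from openai import OpenAI

client = OpenAI()
PLANNER = "gpt-5.2-codex"  # placeholder: the "Codex" side
CRITIC = "gpt-5.2"         # placeholder: the "Chat" side

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

task = "Audit the current auth middleware and draft a plan for adding refresh tokens."
plan = ask(PLANNER, task)
for _ in range(5):  # roughly 5 rounds of critique, as in the workflow described above
    feedback = ask(CRITIC, "Review this implementation plan and say how to improve it:\n\n" + plan)
    plan = ask(PLANNER, "Revise the plan based on this feedback:\n\n" + feedback + "\n\nPlan:\n" + plan)
print(plan)
```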
Why non-thinking model? Also 5-20 minutes?! I guess I don’t know what kind of code you are writing but for my web app backends/frontends planning takes like 2-5 minutes tops with Sonnet and I have yet to feel the need to even try Opus.
I probably write overly detailed starting prompts but it means I get pretty aligned results. It does take longer but I try to think through the implementation first before the planning starts.
In my experience Sonnet > Opus, so it's no surprise you don't “need” Opus. They charge a premium for Sonnet now instead.
A lot of (carefully hedged) pro Codex posts on HN read suspect to me. I've had mixed results with both CC and Codex and these kinds of glowing reviews have the air of marketing rather than substance.
If only fair comparisons weren't so costly, in both time and money.
For example, I have a ChatGPT and a Gemini subscription, and thus could somewhat quickly check out their products, and I have looked at a lot of the various Google AI dev ventures, but I have not yet found the energy/will to get more into Gemini CLI specifically. Antigravity with Gemini 3 pro did some really wonky stuff when I tried it.
I also have a Windsurf subscription, which lets me try any frontier model for coding (well, most of the time, unless there's some sort of company beef going on). I have often used it to check out Anthropic models, with much less success than Codex with GPT-5.1 and later – but of course, that's without using Claude Code (which I subscribed to for a month, idk, 6 months ago; it seemed fine back then, but not mind-blowingly so).
Idk! Codex (mostly using the vscode extension) works really well for me right now, but I would assume this is simply true across the board: Everything has gotten so much better. If I had to put my finger on what feels best about codex right now, specifically: Least amount of oversights and mistakes when working on gnarly backend code, with the amount of steering I am willing to put into it, mostly working off of 3-4 paragraph prompts.
I’ve been using frontier Claude and GPT models for a loooong time (all of 2025 ;)) and I can say anecdotally the post is 100% correct. GPT codex given good enough context and harness will just go. Claude is better at interactive develop-test-iterate because it’s much faster to get a useful response, but it isn’t as thorough and/or fills in its context gaps too eagerly, so needs more guidance. Both are great tools and complement each other.
Heya, I'm the author! I can promise you that I am 0% affiliated with OpenAI and have no qualms with calling them out for the larger moral, ethical, and societal questions that have emerged with the strategy they've pushed.
I do earnestly believe their models are currently the best to work with as software developers, but as I state in my post I think this is the state of the world today and have no premonition for that being true forever.
Same questions apply to Anthropic, Google, etc, etc — I'm not paid by anyone to say anything.
For what it's worth I just switched from claude code to codex and have found it to be incredibly impressive.
You can check my history to confirm I criticize sama far too much to be an OpenAI shill.
The usage limits on Claude have been making it too hard to experiment with. Lately, I get about an hour a day before hitting session/weekly limits. With Codex, the limits are higher than my own usage so I never see them.
Because of that, everyone who is new to this will be focused on Codex and write their glowing reviews of the current state of AI tools in that context.
Yeah. I can excuse bad writing, I can tolerate evangelism. I don't have patience for both.
As the author of the post I think it was a nice quick post to share my perspective of a behavior I’ve been seeing across many (but not all) developers recently, but I’m always open to feedback for how to improve my writing!
And as I mentioned here (https://news.ycombinator.com/item?id=46392900) I have no affiliation with any of the organizations, nor care to evangelize any of them. Nobody pays me to write, I’m just a guy on the internet sharing his thoughts, building software, and teaching people how to use AI better with any tool people want to use. :)
Exactly my thoughts. Most of these posts are what I'd call "paid posts".
I've been using Claude code most of the year, and codex since soon after it released:
It's important to separate vibes coding from vibes engineering here. For production coding, I create fairly strict plans -- not details, but sequences, step requirements, and documented updating of the plan as it goes. I can run the same plan in both, and it's clear that codex is poor at instruction following because I see it go off plan most of the time. At the same time it can go on its own pretty far in an undirected way.
The result is when I'm doing serious planned work aimed for production PRs, I have to use Claude. When it's experimental and I don't care about quality but speed and distance, such as for prototyping or debugging, codex is great.
Edit: I don't think codex being poor at instruction following is inherent, just where they are today
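For anyone curious what "sequences, step requirements, and documented updating of the plan" can look like in practice, here is one hypothetical way to picture it. Neither tool requires this shape; it is just a convention you could enforce through a prompt or a plan file, and all the names below are made up.

```python
# Hypothetical illustration of a "strict plan": ordered steps, explicit requirements,
# and status/notes fields the agent is instructed to keep updated as it goes.
from dataclasses import dataclass, field

@dataclass
class Step:
    title: str
    requirements: list[str]
    status: str = "todo"   # agent updates: "todo" -> "in-progress" -> "done"
    notes: str = ""        # agent documents deviations here instead of silently going off-plan

@dataclass
class Plan:
    goal: str
    steps: list[Step] = field(default_factory=list)

plan = Plan(
    goal="Add CSV export to the reporting endpoint",
    steps=[
        Step("Audit existing report serializers", ["list every currently supported format"]),
        Step("Implement CSV serializer", ["reuse the existing serializer interface", "no new dependencies"]),
        Step("Tests", ["unit tests for quoting/escaping", "one end-to-end request test"]),
    ],
)
```

The point of the `status` and `notes` fields is the "documented updating as it goes" part: deviating from the plan has to leave a written trace.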
Respectfully I don’t think the author appreciates that the configurability of Claude Code is its performance advantage. I would much rather just tell it what to do and have it go do it, but I am much more able to do that with a highly configured Claude Code than with Codex which is pretty much just set at the out of the box quality level.
I spend most of my engineering time these days not on writing code or even thinking about my product, but on Claude Code configuration (which is portable so should another solution arise I can move it). Whenever Claude Code doesn’t oneshot something, that is an opportunity for improvement.
Heya, I'm the author of the post and I just wanted to say I do appreciate the configurability! As I mentioned in the post, I have been that kind of developer in the past.
> This is a perfect match for engineers who love configuring their environments. I can’t tell you how many full days of my life I’ve lost trying out new Xcode features or researching VS Code extensions that in practice make me 0.05% more productive.
And I tried to be pretty explicit about the idea that this is a very personal choice.
> Personally — and I do emphasize this is a personal decision — I‘d rather write a well-spec’d plan and go do something else for 15 minutes. Claude’s Plan Mode is exceptional, and that‘s why so many people fall in love with Claude once they try it.2
For every person who feels like me today, there's someone who feels like you out there. And for every person who feels like you, there's someone like me (today) who finds it not as valuable to their workflow. That's the reason my conclusion was all about getting folks to try out both to see what works for them — because people change, and it's worth finding out who you really are at this moment in time.
Anyhow, I do think that Codex is also very configurable — I was just trying to emphasize that it's really great out of the box while Claude Code requires more tuning. But that tuning makes it more personal, which as you mention is a huge plus! As I've touched on in a few posts [^1] [^2], Skills are a big deal to me, because they allow people to achieve high levels of customization without having to be the kind of developer that devotes a lot of time to creating their perfect setup. (Now supported in both Claude Code and Codex.)
I don't want this to turn into a bit of a ramble so I'll just say that I agree with you — but also there's a lot of nuance here because we're all having very personal coding experiences with AI — so it may not entirely sound like I agree with you. :)
Would love to hear more about your specific customizations, to make sure that I'm not missing out on anything valuable. :D
[1]: https://build.ms/2025/10/17/your-first-claude-skill/ [2]: https://build.ms/2025/12/1/scribblenauts-for-software/
Hey, I'm not very familiar with Claude Code. Can you explain what configuration you're referring to?
Is this just things like skills and MCPs, or something else?
Skills, MCPs, /commands, agents, hooks, plugins, etc. I package https://charleswiltgen.github.io/Axiom/ as an easily-installable Claude Code plugin, and AFAICT I'm not able to do that for any other AI coding environment.
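For anyone who (like the question above) isn't sure what these extension points look like, here is a minimal MCP server exposing a single tool. It assumes the official MCP Python SDK (`pip install mcp`) and its FastMCP quickstart shape; the server name and the tool are made up, and it's worth checking the SDK docs for the current API before relying on this.

```python
# Minimal sketch of an MCP server with one tool, assuming the MCP Python SDK's
# FastMCP interface. The server name and the convention checks are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("team-conventions")

@mcp.tool()
def lint_commit_message(message: str) -> str:
    """Check a commit message against our (hypothetical) team conventions."""
    subject = (message.splitlines() or [""])[0]
    problems = []
    if len(subject) > 72:
        problems.append("subject line is longer than 72 characters")
    if subject and not subject[0].isupper():
        problems.append("subject line should start with a capital letter")
    return "OK" if not problems else "; ".join(problems)

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an agent harness can call it
```

Claude Code, Codex, and most other harnesses can then be pointed at a server like this as an MCP tool provider; the registration step differs per tool.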
You can do basically all that with codex, although claude might have slightly more convenient tooling. The end result will be the same anyway.
That hasn't been my experience, although I'm happy to accept that I'm the problem. Apparently they've released their skills support (?), so I should try again. https://developers.openai.com/codex/skills
OpenCode, Pi are even more configurable.
It's hard to compare the two tools because they change so much and so fast.
Right now, as an example, claude code with opus 4.5 is a beast, but before that, with sonnet 4.0, codex was much better.
Gemini CLI, on the other hand, with gemini-flash-3.0 (which is strangely good for a "small and fast" model), is very good (but the CLI and the user experience are not on par with Codex or Claude yet).
So we need to keep those tools under constant observation. Currently (after gemini-flash-3.0 came out), I tend to submit the same task to Claude (with Opus) and Gemini to understand the behaviour. Gemini is surprising me.
Heya, author here! I completely agree with you — and why the post is titled Codex vs. Claude Code (Today). I also have this very specific disclaimer in the second paragraph to note that this post is a reflection of a moment in time. :D
> Before we continue, I need to make a disclaimer: This post is about the Claude Code and Codex, on December 22, 2025. Everything in AI changes so fast that I have almost no expectations about the validity of these statements in a year, or probably even 3-6 months from now.
That said I do what you do and try different models when I want to see if things have changed. I run my own private little benchmarks with a few complex real world tasks, and I really love seeing how things are progressing — both in terms of quality but also the novel quirks that are introduced, changed, or removed. :)
I don't think the comparison to programming languages holds, maybe very tenuously at best. Coding assistants evolve constantly, you can't even be talking about "Codex" without specifying the time range (ie, Codex 2025-10) because it's different from quarter to quarter. Same with CC.
I believe this is the main source of disagreement / disappointment when people read opinions / reviews, then proceed to have an experience very different from expected.
Ironically, this constant improvement/evolution erodes product loyalty -- personally, I'm a creature of habit and will stay with a tool past its expiry date; with coding assistants / sota llms, I cancel and switch subscriptions all the time.
Heya, author of the post here! I think you're right in everything you've said, but I want to note that the programming language comparison was meant to be metaphorical more than literal. Everything is changing so fast (as I mention in the post a few times), but I have seen some (far from all) people get locked into Claude Code or Codex in a way where they won't even consider alternatives, the same way people who chose Ruby to start their career now identify as Ruby developers.
My goal was to open people's minds just a little bit by saying exactly what you're getting at — everything is moving fast and we should be reassessing often. A meaningful difference is that you can start a codebase with Claude Code and then switch to Codex with almost no friction, while you can't just migrate a TypeScript app to Python in 15 minutes.
All that's to say, we agree!
The process you have described for Codex is scary to me personally.
It takes only one extra line of code in my world (finance) to have catastrophic consequences.
Even though I am using tools like Claude/Cursor, I make sure to review every small bit they generate, to the level where I ask for a plan with steps and then have each step performed one at a time, asking me for feedback; only when I give approval/feedback does it either proceed to the next step or iterate on the previous one. On top of that, I manually test everything I send for PR.
Because there is no value in just sending a PR vs. sending a verified/tested PR.
With that said, I am not sure how much of your code is getting checked in without supervision, as it's very difficult for people to review weeks' worth of work at a time.
just my 2 cents
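The step-by-step approval workflow described above can be pictured as a small gate loop. This is purely illustrative; `run_agent_step` is a hypothetical stand-in for however you actually invoke Claude/Cursor on a single step.

```python
# Sketch of an approval-gated workflow: propose a step, wait for a human decision,
# and only then proceed, retry with feedback, or stop. `run_agent_step` is a
# hypothetical stand-in for the actual tool invocation.
def supervised_run(steps, run_agent_step):
    for step in steps:
        while True:
            result = run_agent_step(step)
            print(f"\n--- {step} ---\n{result}")
            decision = input("approve / abort / or type feedback to retry: ").strip()
            if decision.lower() == "approve":
                break       # move on to the next step
            if decision.lower() == "abort":
                return      # stop here; nothing further is touched
            step = f"{step}\nReviewer feedback: {decision}"  # iterate on this step
```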
Heya, I’m the author of the post! To be clear I have AI write probably 95% of my code these days, but I review every line of code that AI writes to make sure it meets my high standards. The same rules I’ve always had still apply — to quote @simonw “your job is to deliver code you have proven to work”.
So while I’m enthusiastic about AI writing my code in the literal sense, it’s still my code to understand and maintain. If I can’t do that, then I work with AI to understand what was written — and if I still can’t, then I’ll often give it another go with another approach altogether so I can generate something I can understand. (Most of the time, working together to understand the code works better, because I love to learn and am always open to pushing my boundaries to grow — and this process can be tuned well for self-directed learning.)
And to quote a recent audit: “this is probably one of the cleanest codebases I’ve ever audited.” I say that to emphasize the fact that I care a lot about the code that goes into my codebase, and I’m not interested in building layers of unchecked AI slop for code that goes into my apps.
Spec dev can certainly be effective, but having used Claude Code since its release, I’ve found the pattern of continuous refactoring of design and code produces amazing results.
And I’ll never use OpenAI dev tools because the company insists on a complete absence of ethical standards.
Anthropic is partnered with Palantir though…
This is an interesting opinion but I would like to see some proof or at least more details.
What plans are you using, what did you build, what was the output from both on similar inputs, what's an example of a prompt that took you two hours to write, what was the output, etc?
Heya, author here!
I'll try to answer these one by one, but I will just note that a lot of my prompts are domain specific so it's hard to share those.
- I don't use any plans — my writing is the plan. The Plan Mode in Claude Code is excellent, but as I've switched to Codex (which doesn't have one) I will simply write up a nice long prompt and then add "Please ask any clarifying questions you may have, or for any additional details that you need" — and it works great! I may go back and forth for anywhere from 5-30 minutes depending on what else is needed, but that's basically the experience of using Plan Mode in Claude Code too. (A rough sketch of this clarify-then-implement loop, in code form, follows after this comment.)
- I've built quite a few recent features for my app Plinky [^1]. I've made a few meaningful contributions to my open source project Boutique [^2] (and have been having AI asynchronously sketch out a large new database relationships feature). I built my new blog and my workshops pages [^3] with Codex as well. Truth is I do practically everything in Codex and Claude Code these days, so I'd have more trouble listing what I haven't built lately.
- Plinky's upcoming Reader Mode is a good example of a prompt that took me two hours, but the feature isn't yet in the app so I'd prefer not to share the prompt. But I can share the first draft of the prompt for Boutique's relationships feature since that's open source. [^4] I've been experimenting with using ChatGPT Pulse to make progress on it every day (simply by asking it to!), and much to my surprise it's been designing a new API day by day in a way that's far from perfect but certainly has been very interesting.
The honest truth is that this one did not take two hours and I wrote it on the bus so it's probably not perfect, but the descriptive process is effectively the same. For a feature like Reader Mode you would have to capture more details to scale up to the additional complexity of a domain-specific feature with client and server components, a new download queueing pipeline, amongst other abstractions.
Hope that answers your questions!
[^1]: https://plinky.app [^2]: https://github.com/mergesort/Boutique [^3]: https://build.ms [^4]: https://gist.github.com/mergesort/04a77c47ea4cb6433aa9ade4e1...
I do feel like the Codex CLI is quite a bit behind CC. If I recall correctly it took months for Codex to get the nice ToDo tool Claude Code uses in memory to structure a task into substeps. I also really miss the ability to have the main agent invoke subagents.
All of this can of course be added using MCPs, but it's still friction. The Claude Code SDK is also way better than OpenAI Agents; it's almost no comparison.
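To show what I mean by friction: hooking a subagent in yourself via MCP looks roughly like this. This is only a sketch under my own assumptions — it uses the official Python mcp package's FastMCP helper and shells out to Claude Code's non-interactive `claude -p` mode, and the tool name is something I made up.

```python
# Minimal sketch of an MCP server that exposes a "subagent" tool by
# shelling out to another CLI agent. Assumes the official Python `mcp`
# package (FastMCP helper) and Claude Code's non-interactive `claude -p`
# print mode; the tool name and prompt handling are my own invention.

import subprocess
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("subagents")

@mcp.tool()
def run_claude_subagent(prompt: str) -> str:
    """Run a one-shot Claude Code session and return its output."""
    result = subprocess.run(
        ["claude", "-p", prompt],
        capture_output=True,
        text=True,
        timeout=600,
    )
    return result.stdout if result.returncode == 0 else result.stderr

if __name__ == "__main__":
    # Serves over stdio so the main agent's harness can register it.
    mcp.run()
```

It works, but writing this yourself and then registering it in each harness's config is exactly the friction I mean compared to having subagents built in.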
Also, in general, when I experienced bugs with Codex I was almost always sure to find an open GitHub issue with people already asking about a fix for months.
Still, I like GPT-5.2 very much for coding and general agent tasks, and there is EveryCode, a nice fork of Codex that mitigates a lot of its shortcomings.
You can use Every Code [1] (a Codex fork) for this; it can invoke agents, and not just Codex ones, but Claude and Gemini as well.
[1] https://github.com/just-every/code
Seems like you wrote that at the same time I made my edit. Yes, Every Code is great; however, Ctrl+T is important to get terminal rendering, otherwise it has performance problems for me.
> with people already asking about a fix for months.
OpenAI needs to get access to Claude Code to fix them :)
The general consensus today is that the ToDo tool is obsolete and lowers performance for frontier models (Opus 4.5, GPT-5.2).
I think the author glosses over the real reason why tons of people use Codex over CC: limits. If you want to use CC properly you must use Opus 4.5 which is not even included in the Claude Pro plan. Meanwhile you can use Codex with gpt-5.2-codex on the ChatGPT Plus plan for some seriously long sessions.
Looks like Gemini plans have even more generous limits on the equivalently priced plans (Google AI Pro). I'd be interested in the experiences of people who used Google Antigravity/Gemini CLI/Gemini Code Assist for nontrivial tasks.
Heya, author here! I do agree with you that this is a big downside, but I don’t know if this is the primary reason.
In my experience teaching people, most people don’t actually know much at the time they make this decision. They’ve heard about Cursor, they’ve heard of Claude Code, and they may have heard about Codex. But what they’ve heard is anecdotes and marketing — they don’t yet have hands-on experience.
They make a big choice and then assume that this is how all AI works, because they don’t have a full breadth of context yet. And that’s to be expected! That’s how most things work.
That's why I teach the workshops I do: to make AI accessible, so people can walk through the tradeoffs and make the most informed choices for themselves.
A couple of comments here have said that the post is subtly pro-Codex, but I tried to make my point very explicit: people should try a lot of things and see what works best for them. But it’s very hard to do that without investing a lot of time because the market is so nascent and moving so fast. This post exists to try and nudge people into exploring more of the tools they haven’t tried yet, so they can make their own informed decisions like you have. :)
All that’s to say, people definitely hit limits with Claude Code (as I have done myself) — especially if they’re hesitant to upgrade to Claude Max because they haven’t gotten enough out of Claude Pro. But I think the real reason people make the choices they do starts earlier in the process, even before they get a lot of hands-on experience with Claude Code or Codex.
And it has gotten very bad. I almost never hit the limit on the pro plan before with CC but now it happens very fast.
A small correction: Opus 4.5 is included in the Pro plan nowadays, but yeah, the usage limits for it on the $20 sub are really, really low.
Both Claude Pro and Google Antigravity free tier have Opus 4.5
Opus IS included in Pro plan.
Opus 4.5 is included in the Pro plan.
Thanks for the correction, looks like I misremembered. But limits are low enough with Sonnet that, I imagine, you can barely do anything serious with Opus on the Pro plan.
Personally I bit the bullet and went with the Max plan for Claude Code. After tax it costs me $108, less than I earn from one billable hour. I have been punishing it for the last two months; it defaults to Opus 4.5, and while I occasionally hit my session limit (it resets after an hour or so), I can't even scratch the surface of my monthly usage limit.
I've noticed a lot of these posts tend to go Codex vs Claude, but since the author is someone who does AI workshops, I'm curious why Cursor is left out of this post (and, more generally, posts like this).
From my personal experience I find Cursor to be much more robust, because rather than "either/or" it's both, and I can switch depending on the time, the task, or whatever the newest model is.
It feels like the same way people try to avoid "vendor lock-in" in the software world: Cursor gives you that freedom. But maybe I'm on my own here, as I don't see it come up naturally in posts like these as much.
I got a student subscription to Cursor and, after giving it a good 6 hours, I've abandoned it.
I extremely dislike the way it just goes forth and bolts. I don’t trust these tools enough to point them in a direction and say go; I like to be a human in the loop. Perhaps the use case I was working on then was difficult (quite an old React Native library upgrade across a medium-sized codebase), but I eventually cracked it with Claude; Cursor with both Anthropic and Gemini models left me with an absolute mess.
Even when I repeatedly asked in the prompt to keep me in the loop, it kept running haywire.
Speaking from personal experience and from talking to other users: the vendors' agents/harnesses are just better, and they are customized for their own models.
What kinds of tasks do you find this to be true for? For a while I was using Claude Code inside of the Cursor terminal, but I found it to be basically the same as just using the same Claude model in there.
Presumably the harness can't be doing THAT much differently, right? Or rather, which responsibilities of the harness could differentiate one harness from another?
This becomes clearer for me with harder problems or long-running tasks and sessions, especially with larger context.
Examples that come to mind are how the context window is filled up and how compaction works. Both Codex and Claude Code ship improvements here that are specific to their own models, and I'm not sure how that is reflected in tools like Cursor.
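To make that concrete, here is a rough sketch of the kind of compaction decision a harness has to make. This is purely my own illustration (not how Codex, Claude Code, or Cursor actually implement it), but it shows the knobs involved: what to keep, what to fold into a summary, and when.

```python
# Rough sketch of harness-side context compaction (illustrative only;
# not any vendor's real implementation). When the conversation exceeds
# a token budget, the oldest messages get folded into a summary.

from dataclasses import dataclass

@dataclass
class Message:
    role: str
    content: str

def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return max(1, len(text) // 4)

def summarize(messages: list[Message]) -> str:
    # Stand-in for a model call that would produce a real summary.
    return "Summary of earlier conversation: " + "; ".join(
        f"{m.role} said {m.content[:40]!r}" for m in messages
    )

def compact(history: list[Message], budget: int = 8000, keep_recent: int = 6) -> list[Message]:
    """Fold the oldest messages into a single summary message once the
    estimated token count blows past the budget."""
    total = sum(estimate_tokens(m.content) for m in history)
    if total <= budget or len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [Message("system", summarize(old))] + recent
```

How aggressively you compact, which messages you keep verbatim, and how the summary is produced are exactly the kinds of things each vendor can tune against their own models.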
Heya, author here! That's a great question! I fully understand the vendor lock-in concern, but I'll just quickly note that when it comes to a first workshop I do whatever makes the person most comfortable. I let the attendee choose the tool they want — with a slight nudge towards Codex or Claude Code for reasons I'll mention below. But if they want to do the workshop in Cursor, VS Code, or heck MS Paint — I'll try to find a way to make it work as long as it means they're learning.
I actually started teaching these workshops by using Cursor, but found that it fell short for a few reasons.
Note: The way that my workshops work is that you have three hours to build something real. It may be scoped down like a single feature or a small app or a high quality prototype, but you'll walk away with what you wanted to build. More importantly you'll have learned the fundamentals of working with AI in the process, so you can continue this on your own and see meaningful results. We go through various exercises to really understand good prompting (since everyone thinks they're good but they rarely are), how to build context for models, and explore the landscape of tools that you can use to get better results. A lot of that time is actually spent in a Google Doc that I've prepped with resources — and the work we do there makes the code practically write itself by the time we're done.
Here's a short list of why I don't default to Cursor:
1. As I noted in another comment, the model performance is just so much better [^1] when accessed directly through Codex and Claude Code, which means more promising results more quickly. Previously the workshops took 3-4 hours just to finish; now it's a solid 3 with time to ask questions afterwards. You can't beat this experience, because it gives the student more time to pause and ask questions, let what they've done sink in, and not spend time trying to understand the tools just to see results.
1a. The amount of time it took someone to set up Cursor was also an issue. The process for getting a good setup is pretty long — especially for someone non-technical. This may not be as big of a deal for developers using Cursor, but even they don't know a lot of the settings and tweaks to make to get Cursor working great out of the box.
2. The user experience of dropping a prompt into Codex/Claude Code and watching it start solving a problem is pretty amazing. I love GUIs (I spend my days building one [^3]), but the TUI melting everything away to just being chat is an advantage when you have no mental model for how this stuff works.
3. As I said in #1, the results are just better. That's really the main reason!
Not to toot my own horn, but the process works. These are all testimonials in the words of people who have attended a workshop [^2], and I'm very proud of how people not only learn during the workshop but how it sets them off on a good path afterwards. I have people messaging me 24 hours later telling me they built an app their partner has wanted for years, telling me they've completed the app we started and it does everything they dreamed of, and I hear more progress over the weeks and months after because I urge them to keep sending me their AI wins. (It's truly amazing how much they grow, and I now have attendees teaching ME things — the ultimate dream of being a teacher, knowing you gave them the nudge they needed.)
Hope that helps and isn't too much of an ad — I really just want to make it clear that I try to do what works best and if the best way to help people learn changes I will gladly change how I work. :)
[^1]: https://news.ycombinator.com/item?id=46393001 [^2]: https://build.ms/ai#testimonials [^3]: https://plinky.app
I feel you brother/sister. I actually pay for Claude Code Max and also for the $20/mo Cursor plan. I use Claude Code via the VSCode extension running within the Cursor IDE. 95% of my usage is Claude Code via that extension (or through the CLI in certain situations) but it's great having Cursor as a backup. Sometimes I want to have another model check Claude's work, for example.
GitHub Copilot also lets you use multiple models on top: Codex, Claude, and Gemini.
Cursor has this "tool for kids" vibe. It's also more about the past ("tab, tab, enter" low-level coding) versus the future ("implement task 21" high-level delegating).
Is it just me or is codex slow?
With claude code I'll ask it to read a couple of files and do x similar to existing thing y. It takes a few moments to read files and then just does it. All done in a minute or so.
I tried something similar with codex and it took 20 minutes reading around bits of file and this and that. I didn't bother letting it finish. Is this normal? Do I have something misconfigured? This was a couple of months ago.
On hard projects (really hard, like https://github.com/7mind/jopa), Codex fails spectacularly. The only competition is Claude vs Gemini 3 Pro.
I tried so hard to make Codex work, after the glowing reviews (not just from Internet randos/potential-shills, though; people I know well, also).
It's objectively worse for me on every possible axis than Claude Code. I even wondered if maybe I was on some kind of shadow-ban nerf-list for making fun of Sam Altman's WWDC outfit in a tweet 20 years ago. (^_^)
I don't love Claude's over-exuberant personality, and prefer Codex's terse (arguably sullen) responses.
But they both fuck up often (as they all do), and unlike Claude Code (Opus, always), Codex has been net-negative for me. I'm not speed-sensitive (I round-robin among a bunch of sessions), so I use the max thinking option at all times, but Codex 5.1 and 5.2 just produce worse code for me, and worse than that, they're worse at code review, to the point that it negated whatever gains I had gotten.
While all of them miss a ton of stuff (of course), and LLM code review just really isn't good unless the PR is tiny — Claude just misses stuff (fine; expected), while Codex comes up with plausible edge-case database query concurrency bugs that I have to look at, and squint at, and then think hmm fuck and manually google with kagi.com for 30 minutes (LIKE AN ANIMAL) only to conclude yeah, not true, you're hallucinating bud, to which Codex is just like. "Noted; you are correct. If you want, I can add a comment to that effect, to avoid confusion in future."
So for me, head-to-head, Claude murders Codex — and yet I know that isn't true for everybody, so it's weird.
What I do like Codex for is reviewing Claude's work (and of course I have all of them review my own work, why not?). Even there, though, Codex sometimes flags nonexistent bugs in Claude's code — less annoying, though, since I just let them duke it out, writing tests that prove it one way or the other, and don't have to manually get involved.
I must be doing something wrong. When I last tried to use Codex 5.2 (via Cursor), no amount of prompting could get it to stop aggressively asking me for permission to do things. This seems to be the opposite of the article's claim, which is that Codex is better for long-running, hands off tasks.
Heya, I'm the author of the post! This was probably unintentional but I think you're making a really valuable observation that will be helpful to others.
The models Cursor provides in their product are intermediated versions of the models that companies like OpenAI and Anthropic offer. You are technically using Codex, but not in the way you would be in a tool like the Codex CLI or Claude Code.
If you ask Cursor to solve a tough problem, Cursor will break the problem down into a different problem before sending that request to OpenAI so they can use Codex. They do this for two reasons:
1. To save money. By restructuring the prompt they can use fewer tokens, which saves them money running Cursor, since they are the ones paying for the tokens out of your subscription cost.
2. [Based on things the Cursor team has said] They believe they can construct a better intermediate prompt that is more representative of the problem you want to solve.
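To make the shape of that intermediation concrete, here's a rough sketch. It's my own illustration of the general idea, not Cursor's actual code, and every name in it is made up: a middle layer that rewrites your prompt before it ever reaches the model provider.

```python
# Illustrative sketch of prompt intermediation: my mental model of what a
# middle layer can do, not Cursor's real implementation. The rewrite step
# condenses the user's prompt and selects context before the request is
# forwarded to the model provider.

def rewrite_prompt(user_prompt: str, repo_context: list[str]) -> str:
    # Stand-in for the intermediate step: trim context, compress the ask.
    relevant = [snippet for snippet in repo_context if len(snippet) < 2000]
    return (
        "Task (condensed): " + user_prompt.strip()[:500] + "\n\n"
        "Relevant context:\n" + "\n---\n".join(relevant[:3])
    )

def handle_request(user_prompt: str, repo_context: list[str]) -> str:
    intermediate = rewrite_prompt(user_prompt, repo_context)
    # In a real product, this is where the provider's API would be called
    # with `intermediate` instead of your original prompt.
    return intermediate

if __name__ == "__main__":
    print(handle_request(
        "Refactor the download queue to support retries with backoff.",
        ["class DownloadQueue: ...", "def enqueue(item): ..."],
    ))
```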
This extra level of abstraction means that you are not getting the best results when you use a tool like Cursor. OpenAI and Anthropic are running their harnesses (Codex CLI and Claude Code) at a loss (because VC), but providing better results. This is not the best way to make money, but it's a great way to build mindshare and hopefully win customers for life. (People are fickle and cheap, though, so I doubt this is a customers-for-life strategy the way people keep buying the same brand of deodorant once they start buying Dove.)
Happy to answer any questions you may have, but mostly I would highly suggest trying out Codex CLI and Claude Code to get a better feel for what I'm saying — and also to get more out of your AI tools. :)