I am a long time fan of Steve Yegge but he's too much part of the groupthink at this point.
You can't win with Claude Code. I understand that his API key isn't on the PID controller, so he gets a less bad deal, but he's still breaking even with some gee whizz factor.
Agents are like people on a long enough timeline: they will eventually do the lazy thing. But this happens in minutes not years.
If you don't have them on tracks made of iron, you are on a sugar high that will crash.
Formal methods, zero sorry, or it's another bounty for the vibecode cleanup guys.
> Let’s start with the root cause, which is that AI does actually make you 10x more productive, once you learn how.
> But hey, don’t take it from me. Take it from… the Copilot people. According to The Verge and a bunch of other reputable news sources, Microsoft is openly encouraging their employees to use multiple tools, and as a result, Claude Code has rapidly become dominant across engineering at Microsoft.
And what wonders they've achieved with it! Truly innovative enhancements to notepad being witnessed right now! The inability to shut down your computer! I can finally glimpse the 10x productivity I've been missing out on!
It's real, and I've been telling all the people around me who get invested in this sort of exponential growth to be very wary of the impending burnout, which spares no soul hungry to get high on information. Getting high on information is now a thing; it is not cyberpunk fiction anymore, and burnout is a real threat, VR or not. Perhaps one can even burn out on TikTok these days.
> But hey, don’t take it from me. Take it from… the Copilot people. According to The Verge and a bunch of other reputable news sources, Microsoft is openly encouraging their employees to use multiple tools, and as a result, Claude Code has rapidly become dominant across engineering at Microsoft.
Well, that explains the sloppy results Microsoft is delivering lately!
What's with all the AI fear mongering and doomspeak? It feels like people have never been so excited about getting laid off and becoming obsolete. It used to be that workers fought for their rights, but maybe decades of software engineering proliferation have dampened that survival instinct. These days it feels like OpenAI is paying writers to scare as many people as possible into believing they're no longer needed.
I also find it kind of funny that the only perspective on AI is about being laid off. This says a lot about who's writing all these articles. But beware the cobra effect https://en.wikipedia.org/wiki/Perverse_incentive . If the AI boom allows a company to lay off a large number of its engineers, those engineers can also simply band together and just as easily become a competitor unburdened by legacy code, driving down profits for existing businesses. Alternatively they can simply join the customers of various SaaS businesses and roll their own solutions, with the same net effect.
> But if you haven’t used specifically Opus 4.5/4.6 with specifically Claude Code for at least an hour, then you’re in for a real shock. Because all your complaining about AI not being useful for real-world tasks is obsolete.
These hyperbolic takes from Steve are wearing thin.
It wasn't my experience that Opus 4.5/4.6 was a sea change. It was a nice incremental improvement.
> And unfortunately, all your other tools and models are pretty terrible in comparison.
Personally, I like Copilot CLI. $10 a month for 300 requests. Copilot will keep working until it fulfills your request, no matter how many tokens it uses.
Calling all other tools "pretty terrible" without specifics reminds me of crypto FOMO from the 2010s.
Luckily we work for ourselves in our studio, and I have no one to answer to except my business partner and customers, and tech is my domain. But I have concluded "we already build fast enough." Really how much faster do we need to build? Deployments: automated. Tests: automated. Migrations: automated. Frameworks: complete. Stack: stable. Scaling: solved. OKAY so now with AI we can build "MORE!" More of WHAT exactly? What makes our lives better? What makes our customers happier? How about I just directly feed customer support tickets into Claude and let it rip.
I'm increasingly thinking either people were terrible developers, used shit tools to begin with, or are in a mass psychosis. I certainly feel bad for anyone reporting to "the business guy." He never respected you to begin with, and now he literally thinks "why are you so slow? I can build Airbnb in a weekend."
For someone who previously could achieve nothing, these tools are magical, as they can now achieve something. It feels to them like infinity because their base was 0. That alone will create a lot of things they wouldn't have been able to, good for them. However for people who already know what they're doing, I only feel slightly pushed along some asymptote. My bottlenecks simply are not measured in tokens to screen.
The source of the addiction is that an amount of effort is highly likely to result in a fulfilling outcome. That makes you want to make more effort. In the past, a lot of work was very futile, very tedious and often felt hopeless and that made people essentially give up. So this is a very good problem to have. I guess people should monitor their own output and try to pace themselves. But also be grateful that we have these capabilities that allow us to solve so many problems and achieve so many of the things that we want in life.
> Jeffrey Emanuel and his 22 accounts at $4400/month
Paying $4.4k per month for the privilege of writing code is absolute madness. I'm not quite sure how we got to this point, but it's still madness. Maybe Yegge is indeed right, maybe this is just like regular gambling/addiction, which sucks when it comes to being a programmer, but at least it gets the dopamine levels higher.
It's not per se madness; companies pay much more than that for code. Instead it's an empirical question about whether they're getting that value from the code.
The difference is that if those companies were to rely only on the AI part, and hence turn us (computer programmers) into mere copy-pasters or less, then within about one to two years the "reasoning" behind the latest AI models would have become stale, because there would be no new human input. So good luck with that.
But my comment was not about companies, it was just about writing code, about the freedom that used to come from it, about the agency that we used to have. There's no agency and no freedom left when you start paying that much money in order to write code. I guess that can work for some companies, but for sure it won't work for computer programmers as actual human beings (and imo this blog-post itself tries to touch on that aspect).
> all your complaining about AI not being useful for real-world tasks is obsolete... let’s not quibble about the exact productivity boost from AI
No, that's exactly the purpose of this ending up on HN.
> if you give an engineer Claude Code, then once they’re fluent, their work stream will produce nine additional engineers’ worth of value. For someone.
Nope.
> you decide you’re going to impress your employer, and work for 8 hours a day at 10x productivity. You knock it out of the park and make everyone else look terrible by comparison.
Pure junior dev fantasy. Nobody cares about how many hours you "really" work, or what you did as long as you meet their original requirements. They're going to ignore the rest no matter how much you try to talk a big game. This has been true since the beginning of employment.
> In that scenario, your employer captures 100% of the value from you adopting AI. You get nothing, or at any rate, it ain’t gonna be 9x your salary. And everyone hates you now.
Again, nobody cares.
> Congrats, you were just drained by a company. I’ve been drained to the point of burnout several times in my career, even at Google once or twice.
Pointless humblebrag that even we the readers don't care about.
> Now let’s look at Scenario B. You decide instead that you will only work for an hour a day, and aim to keep up with your peers using AI. On that heavily reduced workload, you manage to scrape by, and nobody notices.
This isn't a thing unless you were borderline worthless and junior to begin with.
> In this scenario, your company goes out of business. I’m sorry, but your victory over The Man will be pyrrhic, because The Man is about to be kicked in The Balls, since with everyone slacking off, a competitor will take them out pretty fast.
Hard disagree. The author has clearly never worked outside of a startup or Silicon Valley, where the money is more mature.
Flagged for what is at best extreme ignorance, or more likely ragebait and bad faith hype for a ship that sailed several years ago. I don't know what else to do with these blog posts anymore.
> But if you haven’t used specifically Opus 4.5/4.6 with specifically Claude Code for at least an hour, then you’re in for a real shock. Because all your complaining about AI not being useful for real-world tasks is obsolete. AI coding hit an event horizon on November 24th, 2025. It’s the real deal.
Yeah, it is over for several roles, especially frontend web development, given Opus 4.6 is able to one-shot your React frontend from a Figma design 90% of the time.
Why would I want to hire 10 senior frontend developers at a $200K asking salary at this point, when one AI can replace 9 of them (yes it can) and requires only a single junior-level engineer at a significantly lower price?
This idea is very tempting for companies looking to keep slashing headcount and find 'cost savings' by using AI to do more with fewer employees.
I think Opus 4.5 & 4.6 are an impressive step up in capabilities but I'm really skeptical that this model is replacing the output of 9/10 skilled front end engineers as a project grows beyond the early stages.
> Yeah, it is over for several roles, especially frontend web development
Only if the front end was super simple in the first place, IMO. And also only for the v1, which is still useful, whereas for ongoing development I think AI leads people down a path of tools that cost more to maintain and build on.
It may be that AI leads to framework and architecture choices best suited to AI, with great results up front, and then all the same challenges and costs of quick-and-dirty development by a human. Except 10x faster, so by the time anyone in management realizes the mess they’re in, and the cost/benefit ratio tilts negative even in the short run (not just in the long run that is obvious to engineers), there’s going to be so much more code in that bad style that it’s 10x more expensive for expert humans to fix.
Every time I say I don't see the productivity boost from AI, people always say I'm using the wrong tool, or the wrong model. I use Claude with Sonnet, Zed with either Claude Sonnet 4 or Opus 4.6, Gemini, and ChatGPT 5.2. I use these tools daily and I just don't see it.
The vampire in the room, for me, is feeling like I'm the only person in the room who doesn't believe the hype. Or should I say, being in rooms where nobody seems to care about quality over quantity anymore. Articles like this are part of the problem, not the solution.
Sure, they are great for generating some level of code, but the deeper it goes, the more it hallucinates. My first or second git commit from these tools is usually closer to a working full solution than the fifth one. The time spent refactoring prompts, testing the code, repeating instructions, refactoring naive architectural decisions, and double-checking hallucinations when it comes to research takes more than the time AI saves me. This isn't free.
A CTO this week told me he can't code or brainstorm anymore without AI. We've had these tools for 4 years, and like this guy says, it's either AI or the competition eats you. So, where is the output? Aside from more AI tools, what has been released in the past 4 years that makes it obvious, looking back, that this is when AI became available?
I am with you on this, and you can't win, because as soon as you voice this opinion you get overwhelmed with "you don't have the sauce/prompt" opinions, which rest on an inherent fallacy: they assume you are solving the same problems they are.
I work in GPU programming, so there is no way in hell that JavaScript tools and database wrapper tasks can be on equal terms with generating, for example, Blackwell tcgen05 warp-scheduled kernels.
There's going to be a long tail of domain-specific tasks that aren't well served by current models for the foreseeable future, but there's also no question the complexity horizon of the SotA models is increasing over time. I've had decent results recently with non-trivial CUDA/MPS code. Is it great code, finely tuned? Probably not, but it delivered on the spec and runs fast enough.
In my experience LLMs are useless for GPU compute code, just not enough in the training set.
Yeah, the argument here is that once you say this, people will say "you just don't know how to prompt, I pass the PTX docs together with Nsight output and my kernel into my agent, run an evaluation harness, and beat cuBLAS". And then it turns out that they are making a GEMM on Ampere/Hopper, which is an in-distribution problem for the LLMs.
It's the mindset that, because you are working on something where the tool has a good distribution, it's a skill issue or mindset problem for everyone else who is not getting value from the tool.
Now please get back to coding GPU stuff so we can train our models on your code. Thank you.
Another thing I've never got them to generate is any G-code. Maybe that'll come indirectly from the image/3D generator side, but I was kind of hoping I could generate some motions, since hand-coding coordinates is very tedious. That would be a productivity boost for me. A very, very niche boost, since I rarely need bespoke G-code, but still.
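As an aside on what "generate some motions" might mean in practice, here is a minimal Python sketch that emits standard G0/G1 moves along a circular arc; the radius, feed rate, and step count are arbitrary, made-up values, not anything from the commenter's workflow.

```python
import math

def arc_gcode(cx: float, cy: float, radius: float, steps: int = 36,
              feed: float = 300.0) -> str:
    """Emit G-code for a full circle approximated by short linear moves."""
    lines = ["G21 ; units in millimetres", "G90 ; absolute positioning"]
    for i in range(steps + 1):
        angle = 2 * math.pi * i / steps
        x = cx + radius * math.cos(angle)
        y = cy + radius * math.sin(angle)
        if i == 0:
            lines.append(f"G0 X{x:.3f} Y{y:.3f}")              # rapid move to start point
        else:
            lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed:.0f}")  # feed move along the arc
    return "\n".join(lines)

# Example: a 20 mm radius circle centred at (50, 50).
print(arc_gcode(cx=50.0, cy=50.0, radius=20.0))
```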
Many engineers get paid a lot of money to write low-complexity code gluing things together and tweaking features according to customer requirements.
When the difficulty of a task is neatly encompassed in a 200 word ticket and the implementation lacks much engineering challenge, AI can pretty reliably write the code-- mediocre code for mediocre challenges.
A huge fraction of the software economy runs on CRUD and some business logic. There just isn't much complexity inherent in any of the feature sets.
Complexity is not where the value to the business comes from. In fact, it's usually the opposite. Nobody wants to maintain slop, and whenever you dismiss simplicity you ignore all the heroic hard work done by those at the lower level of indirection. This is what politics looks like when it finally places its dirty hands on the tech industry, and it's probably been a long time coming.
As annoying as that is, we should celebrate a little that the people who understand all this most deeply are gaining real power now.
Yes, AI can write code (poorly), but the AI hype is now becoming pure hate against the people who sit in meetings quietly gathering their thoughts and distilling them down to the simple, almost poetic solutions that nobody but those who do the heads-down work actually cares about.
> A huge fraction of the software economy runs on CRUD and some business logic.
You vastly underestimate what CRUD means when applied in such a direct manner. You're right in some sense that "we have the technology", but we've had this technology for a very long time now. The business logic is pure gold. You dismiss this without realizing how many other thriving and well-established industries operate by doing simple things applied precisely.
Some Ella Fitzgerald for you: https://youtube.com/watch?v=tq572nNpZcw
That's a false dilemma. If that's what you want, you absolutely can use the AI levers to get more time and less context switching, so you can focus more on the "simple and poetic solutions".
Exact same experience.
Here's what I find Claude Code (Opus) useful for:
1. Copy-pasting existing working code with small variations. If the intended variation is bigger then it fails to bring productivity gains, because it's almost universally wrong.
2. Exploring unknown code bases. Previously I had to curse my way through code reading sessions, now I can find information easily.
3. Google Search++, e.g. for deciding on tech choices. Needs a lot of hand holding though.
... that's it? Any time I tried doing anything more complex I ended up scrapping the "code" it wrote. It always looked nice though.
>> 1. Copy-pasting existing working code with small variations. If the intended variation is bigger then it fails to bring productivity gains, because it's almost universally wrong.
This does not match my experience. At all. I can throw extremely large and complex things at it and it nails them with very high accuracy and precision in most cases.
Here's an example: when Opus 4.5 came out I used it extensively to migrate our database and codebase from a one-Postgres-schema-per-tenant architecture to a single schema architecture. We are talking about eight years worth of database operations over about two dozen interconnected and complex domains. The task spanned migrating data out of 150 database tables for each tenant schema, then validating the integrity at the destination tables, plus refactoring the entire backend codebase (about 250k lines of code), plus all of the test suite. On top of that, there were also API changes that necessitated lots of tweaks to the frontend.
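To make the shape of that work concrete, a minimal, hypothetical sketch of a per-tenant-schema to single-shared-schema copy with a basic integrity check might look like the Python below. The schema, table, and column names (tenant_acme, orders, tenant_id, ...) and the psycopg2 usage are illustrative assumptions, not the actual codebase.

```python
import psycopg2  # assumed driver; any Postgres client would do

# Illustrative subset; the migration described above covered ~150 tables.
TABLES = {
    "orders": ["id", "customer_id", "total", "created_at"],
    "customers": ["id", "name", "email"],
}

def migrate_tenant(conn, tenant_schema: str, tenant_id: int) -> None:
    """Copy one tenant's rows into the shared schema and verify row counts."""
    with conn.cursor() as cur:
        for table, cols in TABLES.items():
            col_list = ", ".join(cols)
            # Copy rows into the shared schema, tagging each with tenant_id.
            cur.execute(
                f"INSERT INTO public.{table} (tenant_id, {col_list}) "
                f"SELECT %s, {col_list} FROM {tenant_schema}.{table}",
                (tenant_id,),
            )
            # Basic integrity check: source and destination row counts must match.
            cur.execute(f"SELECT count(*) FROM {tenant_schema}.{table}")
            src_rows = cur.fetchone()[0]
            cur.execute(
                f"SELECT count(*) FROM public.{table} WHERE tenant_id = %s",
                (tenant_id,),
            )
            dst_rows = cur.fetchone()[0]
            if src_rows != dst_rows:
                raise RuntimeError(f"{table}: {src_rows} source vs {dst_rows} migrated rows")
    conn.commit()

# Usage (hypothetical): one call per tenant schema.
# conn = psycopg2.connect("dbname=app")
# migrate_tenant(conn, "tenant_acme", tenant_id=42)
```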
This is a project that would have taken me 4-6 months easily and the extreme tediousness of it would probably have burned me out. With Opus 4.5 I got it done in a couple of weeks, mostly nights and weekends. Over many phases and iterations, it caught, debugged and fixed its own bugs related to the migration and data validation logic that it wrote, all of which I reviewed carefully. We did extensive user testing afterwards and found only one issue, and that was actually a typo that I had made while tweaking something in the API client after Opus was done. No bugs after go-live.
So yeah, when I hear people say things like "it can only handle copy paste with small variations, otherwise it's universally wrong" I'm always flabbergasted.
Interesting. I've had it fail on much simpler tasks.
Example: I was writing a flatbuffers routine which translated a simple type schema to the fbs reflection schema. I was thinking, well, this is quite simple, surely Opus would have no trouble with it.
The output looked reasonable, compiled... and was completely wrong. It seemed to just output random but reasonable-looking indices and offsets. It also inserted, in one part of the code, a literal TODO saying "someone who understands fbs reflection should write this". I had to write it from scratch.
Another example: I was writing a fuzzer for testing a certain computation. In this case, there was existing code to look at (working fuzzers for slightly different use cases), but the main logic had to be somewhat different. Opus managed to do the copy-paste and then messed up the only part where it had to be a bit more creative. Again, showing the limit of where it starts breaking. Overall I actually considered this a success, because I didn't have to deal with the "boring" bit.
Another example: colleague was using Claude to write a feature that output some error information from an otherwise completely encrypted computation. Claude proceeded to insert a global backdoor into the encryption, only caught in review. The inserted comments even explained the backdoor.
I would describe a success story if there were one. But aside from throwing together simple React frontends and SQL queries (highly copy-pasteable, recurring patterns in the training set), I have had literally zero success. There is an invisible ceiling.
I don't understand what invoking the "4 years" timeframe does for your argument here. I don't think anyone is arguing that the usefulness of these AIs for real projects started at GPT 3.5/4. Do you think the capabilities of current AIs are approximately the same as GPT 3.5/4 was 4 years ago (actually, I think the SOTA 4 years ago today might have been LaMDA, as GPT 3.5 wasn't out yet)?
> I don't think anyone is arguing that the usefulness of these AIs for real projects started at GPT 3.5/4
Only not in retrospect. The arguments about "if you're not using AI you're being left behind" did not depend on how people in 2026 felt about those tools retrospectively. Cursor is 3 years old, and OK, 4 years might be an exaggeration, but I've definitely been seeing these arguments for 2-3 years.
Yeah. I started integrating AI into my daily workflows December 2024. I would say AI didn't become genuinely useful until around September 2025, when Sonnet 4.5 came out. The Opus 4.5 release in November was the real event horizon.
“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
― Roy Amara
I'm an AI hipster, because I was confusing engagement for productivity before it was cool. :P
TFA mentions the slot machine aspect, but I think there are additional facets: The AI Junior Dev creates a kind of parasocial relationship and a sense of punctuated progress. I may still not have finished with X, but I can remember more "stuff" happening in the day, so it must've been more productive, right?
Contrast this to the archetypal "an idea for fixing the algorithm came to me in the shower."
I think Yegge hit the nail on the head: he has an addiction. Opus 4.5 is awesome but the type of stuff Yegge has been saying lately has been... questionable, to say the least. The kids call it getting "one-shotted by AI". Using an AI coding assistant should not be causing a person this much distress.
A lot of smart people think they're "too smart" to get addicted. Plenty of tales of booksmart people who tried heroin and ended up stealing their mother's jewelry for a fix a few months later.
I'm a recovering alcoholic. One thing I learned from therapists etc. along the way is that there are certain personality types with high intelligence, and also higher sensitivity to other things, like noise, emotional challenges, and addictive/compulsive behaviour.
It does not surprise me at all that software engineers are falling into an addiction trap with AI.
All this praise for AI... I honestly don't get it. I have used Opus 4.5 for work and private projects. My experience is that all of the AIs struggle when the project grows. They always find some kind of local minimum they cannot get out of, but tell you that this time their solution will work... and it doesn't. This behaviour wastes an enormous amount of my time. In the end I always have to do it myself.
Maybe when AIs are able to say: "I don't know how this works" or "This doesn't work like that at all." they will be more helpful.
What I use AIs for is searching for stuff in large codebases. Sometimes I don't know the name or the file name and describe to them what I am looking for. Or I let them generate a Python/bash script for some random task. Or I use them to find specific things in a file that a regex cannot find. Simple, small tasks.
It might well be that I am doing it totally wrong... but I have yet to see a medium-to-large-sized project with maintainable code that was generated by AI.
At what point does the project outgrow the AI, in your experience? I have a 70k LOC backend/frontend/database/Docker app, and Claude still mostly one-shots the features/tasks I throw at it. Perhaps it's not as good at remembering all the intertwined side effects between functionalities/UIs, and I have to tell it "in the calendar view, we must hide it as well", but that takes little time/effort.
Does it break down at some point to the extent that it simply does not finish tasks? Honest question, as I have seen this sentiment stated before and assumed that sooner or later I'd face it myself, but so far I haven't.
I find that with more complex projects (full-stack application with some 50 controllers, services, and about 90 distinct full-feature pages) it often starts writing code that simply breaks functionality.
For example, we had to update some more complex code to correctly calculate a financial penalty amount. The amount is defined by law and recently received an overhaul, so we had to change our implementation.
Every model we tried (and we have corporate access and legal allowance to use pretty much all of them) failed to update it correctly. Models would start changing parts of the calculation that didn't need to be updated. After saying that the specific parts shouldn't be touched and to retry, most of them would go right back to changing it again. The legal definition of the calculation logic is, surprisingly, pretty clear and we do have rigorous tests in place to ensure the calculations are correct.
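As an aside, the kind of regression tests referred to here can be as simple as pinning the legally defined amounts as fixed expectations, so any edit that touches the parts that shouldn't change fails immediately. A hypothetical pytest sketch, where calculate_penalty, the module path, and the figures are all invented for illustration:

```python
import pytest

from billing.penalties import calculate_penalty  # hypothetical module path

@pytest.mark.parametrize(
    "principal, days_late, expected",
    [
        # Fixed expectations derived from the legal definition (figures invented).
        (1_000.00, 0, 0.00),
        (1_000.00, 30, 40.00),
        (2_500.00, 90, 300.00),
    ],
)
def test_penalty_matches_legal_definition(principal, days_late, expected):
    assert calculate_penalty(principal, days_late) == pytest.approx(expected)
```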
Beyond that, it was frustrating trying to get the models to stick to our coding standards. Our application has developers from other teams doing work on it as well. We enforce a minimum standard to ensure code quality doesn't suffer and other people can take over without much issue. This standard is documented in the code itself but also explicitly written out in the repository in simple language. Even when explicitly prompting the models to stick to the standard and copy-pasting it into the actual chat, they would ignore 50% of it.
The most apt comparison I can make is that of a consultant who always agrees with you to your face but, when doing the actual work, ignores half of your instructions, so you end up running after them trying to minimize the mess and cleanup you have to do. It outputs more code, but the code doesn't meet the standards we have. I'd genuinely be happy to offload tasks to AI so I can focus on the more interesting parts of my work, but from my experience and that of my colleagues, it's just not working out for us (yet).
I'm having trouble at 150k LOC, but I'm not sure the issue is size per se, as opposed to whether the set of relevant context is easy to find. The "relevant" part threatens to bring in disparate parts of the codebase; the "easy to find" part determines whether a human has to manually curate the context.
I think most of us - if not _all_ of us - don't know how to use these things well yet. And that's OK. It's an entirely new paradigm. We've honed our skills and intuition based on humans building software. Humans make mistakes, sure, but humans have a degree and style of learning and failure patterns we are very familiar with. Humans understand the systems they build to a high degree; this knowledge helps them predict outcomes, and even helps them achieve the goals of their organisation _outside_ writing software.
I kinda keep saying this, but in my experience:
1. You trade the time you'd take to understand the system for time spent testing it.
2. You trade the time you'd take to think about simplifying the system (so you have less code to type) into execution (so you build more in less time).
I really don't know if these are _good_ tradeoffs yet, but it's what I observe. I think it'll take a few years until we truly understand the net effects. The feedback cycles for decisions in software development and business can be really long, several years.
I think the net effects will be positive, not negative. I also think they won't be 10x. But that's just me believing stuff, and it is relatively pointless to argue about beliefs.
Some interesting parts in the text, some not so interesting. The author seems to think he's a big deal, though - a month ago, I did not know who he was. My work environment has never heard of him (I'm an SDE at a FAANG). Maybe I'm an outlier and he really does influence expectation management at companies with his writing, or maybe the success (?) of gastown got to him and he thinks he's bigger than he actually is. Time will tell. In any case, the self-glorification in an article like that throws me off for some reason.
Popular blogger from roughly a decade ago. His rants were frequently cited early in my career. I think he’s fallen off in popularity substantially since.
He's early Amazon and early Google, so he's seen two companies super-scale. Few people last through two paradigm shifts, so that's no guarantee of credentials. But at the time he was famous for a specific accidentally-public post that exposed people to how far Bezos's influence ramified through Amazon and how his choices contrasted with Google's approach to platforms.
https://news.ycombinator.com/item?id=3101876
This is a good time to repeat that software engineers need a union. We needed this ten years ago, and we need it a lot more now.
As a European: yes please, America, get a union. Get two, even. You're going too fast, you're way too successful, we can't keep up.
So yes, please adopt our work ethic and legal framework. It's going to help us tremendously.
Are there any software engineering / quality assurance / other IT-related unions in the EU? How do I join one?
At least in France, the general labour legislation is generous enough that people in the industry don't feel the need to unionize. We've got 5 weeks of holidays per year, plus additional days if your work contract has more than 35 hours of work per week.
Needless to say, burnouts are pretty rare. When they do happen, it's mostly because of toxic management which can't fire you because of the legislation, so they just make your life miserable until you snap. I've also seen it happen in some startups where people have to take super long holidays after a successful exit because they've worked insane hours. However, that was mostly self-inflicted.
What concrete interests would you like such a union to protect?
Should a strike happen if devs are told to use Claude, or should a strike happen if devs aren't given access to Claude?
We're certainly in the middle of a whirlwind of progress. Unfortunately, as AI capabilities increase, so do our expectations.
Suddenly, it's no longer enough to slap something together and call it a project. The better version with more features is just one prompt away. And if you're just a relay for prompts, why not add an agent or two?
I think there won't be a future where the world adapts to a 4-hour day. If your boss or customer also sees you as a relay for prompts, they'll slowly cut you out of the loop, or reduce the amount they pay you. If you instead want to maintain some moat, or build your own money-maker, your working hours will creep up again.
In this environment, I don't see this working out financially for most people. We need to decide which future we want:
1. the one where people can survive (and thrive) without stable employment;
2. the one where we stop automating in favor of stable employment; or
3. the one where only those who keep up stay afloat.
Am I getting Steve's point? It's a bit like what happened with the agricultural revolution.
A long time ago, food took effort to find, and calories were expensive. Then we had a breakthrough in cost/per/calories. We got fat, because we can not moderate our food intake. It is killing us.
A long time ago, coding took effort, and programmer productivity was expensive. Then we had a breakthrough in cost/per/feature. Now we are exhausted, because we can not moderate our energy and attention expenditure. It is killing us.
AI takes jobs faster than it creates new ones.
It should be banned in its current form. There are no junior positions available - only those who have lasted even get the chance to use these tools in commercial settings. After the layoffs you will understand how BAD it is :/ (if you are 35+).
Even as a software developer affected by it, I don't think it should be banned. Productivity improvements are how we get richer in aggregate over the long term, even if those impacted (like you & me) might feel the brunt of transitional pain.
It's how some people will get rich. It won't be how you get rich.
>With a 10x boost, if you give an engineer Claude Code, then once they’re fluent, their work stream will produce nine additional engineers’ worth of value.
I keep hearing about this 10x productivity, but where is it materializing? Most developers at my company use Claude Code, but we don't seem to be shipping new features at ten times the rate. In fact, tickets still take roughly the same amount of time to complete.
10x is doing a year's work in ~5 weeks
No shot
I'm seeing that some tickets are "finished" (i.e. ready for PR) more quickly, but they end up needing so many changes and to be re-reviewed so many times it takes longer than it ever did because someone is saying yes to the LLM instead of designing software. When it's clear your review comments are just going back into the maw of the LLM, I've given up trying to guide and now just outright suggest actually workable designs (taking more of my time too), and that merely cuts down the number of times it will come back for review again.
Nothing is more infuriating at work than when you say something to someone in a message or PR comment, and they just paste the LLM response back to you.
He talks about this new tech for extracting more value from engineers as if it were fracking. When engineers become impermeable, you can inject a high-pressure cocktail of AI to get their internal hydrocarbons flowing. It works, but now he feels all pumped out. But the vampire metaphor is hopefully better, in that blood replenishes if you don't take too much. A succubus may be an even better comparison, in that a creative seed is extracted and depleted, then refills over a refractory period.
I’m in Steve’s demographic, showing similar symptoms, and I’m as worried as he is about how we’re going to cope.
It’s a matter of opportunity cost. It used to be that when I rested for an hour, I lost an hour of output. Now, when I rest for an hour, I lose what used to be a day of output.
I need to rewire my brain and learn how to split the difference. There’s no point in producing a lot of output if I don’t have time to live.
The idea that you’ll get to enjoy the spoils when you grow up is false. You won’t. Just produce 5x and take some time off every day. You may even be more likely to reflect, and end up producing the right thing.
He's totally correct about the extraction that companies do (always has been). What I kinda disagree with is the notion that if a company doesn't go the same path as these others, where everyone is "10x'ing" with AI, it will suddenly disappear. I really don't think it will work that way. Yeah, some might, if another company/startup goes after their business and builds faster, but building faster doesn't mean you're building what people want/need. You might be building bloat (Windows/MS) that no one cares about.
Companies still need to know what to build, not just build something/anything faster.
I agree with this.
1. Moats and products have already been built, so it's really about startups that are racing to get products/features to market.
2. I've slowly learnt in my own career that you need to really be careful with picking what you build. It doesn't matter if it's waterfall or a quick agile "experiment", it all takes time and focus. So the more you can design/refine/roadshow/validate your ideas before any code is touched, the better off you'll be.
Nobody wants to admit that we are living through this: https://xkcd.com/1319/
But at scale. Yegge gets close to it in this blog (which actually made me lol, good to see that he is back on form), but shies away from it.
If AI is producing a real productivity boom then we should be seeing a flood of high-quality non-AI related software. If building and shipping software is now easier and faster then all of the software that we have that doesn't quite work right should be displaced by high quality successors. It should be happening right now.
So where is it? Why is all this velocity going into tooling around AI instead? Face it, an entire industry has fallen into the trap of building the automation instead of the product they were trying to automate the production of.
Where is the new high-quality C compiler that compiles the Linux kernel measurably better than gcc? If AI is really increasing productivity, shouldn't we have that instead of a press-oriented hype flop?
After at least a century of labour saving devices being produced and widely adopted in all areas of our lives, how much less time do we spend labouring now?
I am a long time fan of Steve Yegge but he's too much part of the groupthink at this point.
You can't win with Claude Code. I understand that his API key isn't on the PID controller, so he gets a less bad deal, but he's still only breaking even, with some gee-whiz factor.
Agents are like people on a long enough timeline: they will eventually do the lazy thing. But this happens in minutes, not years.
If you don't have them on tracks made of iron, you are on a sugar high that will crash.
Formal methods, zero sorry, or it's another bounty for the vibecode cleanup guys.
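For anyone unfamiliar with the "zero sorry" shorthand: in Lean, `sorry` is a placeholder that lets a proof compile without actually proving anything, which is exactly the kind of lazy shortcut an agent will reach for. A minimal sketch (the theorem and names are purely illustrative, not from the post):

    -- A lazy agent can leave a hole: this compiles (with a warning) but proves nothing.
    theorem add_comm_lazy (a b : Nat) : a + b = b + a := by
      sorry

    -- "Zero sorry" means every goal is actually closed before the work counts as done.
    theorem add_comm_done (a b : Nat) : a + b = b + a := by
      exact Nat.add_comm a b

The point of the iron tracks is that the checker, not the reviewer, is what catches the laziness.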
> Let’s start with the root cause, which is that AI does actually make you more 10x productive, once you learn how.
> But hey, don’t take it from me. Take it from… the Copilot people. According to The Verge and a bunch of other reputable news sources, Microsoft is openly encouraging their employees to use multiple tools, and as a result, Claude Code has rapidly become dominant across engineering at Microsoft.
And what wonders they've achieved with it! Truly innovative enhancements to notepad being witnessed right now! The inability to shut down your computer! I can finally glimpse the 10x productivity I've been missing out on!
It's real, and I've been telling all the people around me who get invested in this sort of exponential growth to be very wary of the impending burnout, which spares no soul hungry to get high on information. Getting high on information is now a thing, it is not cyberpunk fiction anymore, and burnout is a real threat - VR or not. Perhaps one can burn out on TikTok these days.
https://archive.md/ks83q
Let's spare the guy some web traffic.
He's hosted on Medium.
> But hey, don’t take it from me. Take it from… the Copilot people. According to The Verge and a bunch of other reputable news sources, Microsoft is openly encouraging their employees to use multiple tools, and as a result, Claude Code has rapidly become dominant across engineering at Microsoft.
Well, that explains the sloppy results Microsoft is delivering lately!
What's with all the AI fear-mongering and doomspeak? It feels like people have never been so excited about getting laid off and becoming obsolete. It used to be that workers fought for their rights, but maybe decades of software engineering proliferation have dampened that survival instinct. These days it feels like OpenAI is paying writers to scare as many people as possible into believing they're no longer needed.
I also find it kind of funny that the only perspective on AI is about being laid off. This says a lot about who's writing all these articles. But beware the cobra effect: https://en.wikipedia.org/wiki/Perverse_incentive . If the AI boom allows a company to lay off a large number of its engineers, those engineers can just as easily band together and become a competitor unburdened by legacy code, sending profits down for the existing business. Alternatively, they can simply join the customers of various SaaS businesses and roll their own solutions, with the same net effect.
You remember glassholes? These days we have gasholes instead.
> But if you haven’t used specifically Opus 4.5/4.6 with specifically Claude Code for at least an hour, then you’re in for a real shock. Because all your complaining about AI not being useful for real-world tasks is obsolete.
These hyperbolic takes from Steve are wearing thin.
It wasn't my experience that Opus 4.5/4.6 was a sea change. It was a nice incremental improvement.
> And unfortunately, all your other tools and models are pretty terrible in comparison.
Personally, I like Copilot CLI. $10 a month for 300 requests. Copilot will keep working until it fulfills your request, no matter how many tokens it uses.
Calling all other tools "pretty terrible" without specifics reminds me of crypto FOMO from the 2010s.
Luckily we work for ourselves in our studio, and I have no one to answer to except my business partner and customers, and tech is my domain. But I have concluded "we already build fast enough." Really how much faster do we need to build? Deployments: automated. Tests: automated. Migrations: automated. Frameworks: complete. Stack: stable. Scaling: solved. OKAY so now with AI we can build "MORE!" More of WHAT exactly? What makes our lives better? What makes our customers happier? How about I just directly feed customer support tickets into Claude and let it rip.
I'm increasingly thinking either people were terrible developers, used shit tools to begin with, or are in a mass psychosis. I certainly feel bad for anyone reporting to "the business guy." He never respected you to begin with, and now he literally thinks "why are you so slow? I can build Airbnb in a weekend."
For someone who previously could achieve nothing, these tools are magical, because they can now achieve something. It feels like infinity to them because their base was 0. That alone will create a lot of things they wouldn't otherwise have been able to make; good for them. However, as someone who already knows what I'm doing, I only feel slightly pushed along some asymptote. My bottlenecks simply are not measured in tokens to screen.
I'm starting to think Steve Yegge lost it. Or he never had it in the first place.
He never had it. I haven't worked out if he's just a highly skilled troll to be fair.
The source of the addiction is that a given amount of effort is now highly likely to result in a fulfilling outcome. That makes you want to put in more effort. In the past, a lot of work was futile, tedious, and often felt hopeless, and that made people essentially give up. So this is a very good problem to have. I guess people should monitor their own output and try to pace themselves. But also be grateful that we have these capabilities that allow us to solve so many problems and achieve so many of the things that we want in life.
> Jeffrey Emanuel and his 22 accounts at $4400/month
Paying $4.4k per month for the privilege of writing code is absolute madness. I'm not quite sure how we got to this point, but it's still madness. Maybe Yegge is indeed right, maybe this is just like regular gambling/addiction, which sucks when it comes to being a programmer, but at least it gets the dopamine levels higher.
It's not madness per se; companies pay much more than that for code. Instead, it's an empirical question of whether they're getting that much value from the code.
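A rough back-of-envelope, with my own assumptions layered on the thread's numbers (reading the $4,400/month as the total across the 22 seats, and using the $200K salary figure quoted further down in this thread as the comparison point):

    \$4{,}400/\text{mo} \div 22 \text{ seats} = \$200 \text{ per seat per month}
    \$4{,}400/\text{mo} \times 12 = \$52{,}800/\text{yr} \approx 0.26 \times \$200{,}000 \text{ salary}

So the tooling bill is roughly a quarter of one senior salary, before benefits or overhead; whether that's madness depends entirely on whether the resulting code is worth more than that.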
The difference is that if those companies were to rely only on the AI part, and hence turn us (computer programmers) into mere copy-pasters or less, then in about one to two years the "reasoning" behind the latest AI models would become stale, because there would be no new human input. So good luck with that.
But my comment was not about companies; it was just about writing code, about the freedom that used to come from it, about the agency that we used to have. There's no agency and no freedom left when you start paying that much money in order to write code. I guess that can work for some companies, but it surely won't work for computer programmers as actual human beings (and imo this blog post itself tries to touch on that aspect).
> all your complaining about AI not being useful for real-world tasks is obsolete... let’s not quibble about the exact productivity boost from AI
No, that's exactly the purpose of this ending up on HN.
> if you give an engineer Claude Code, then once they’re fluent, their work stream will produce nine additional engineers’ worth of value. For someone.
Nope.
> you decide you’re going to impress your employer, and work for 8 hours a day at 10x productivity. You knock it out of the park and make everyone else look terrible by comparison.
Pure junior dev fantasy. Nobody cares about how many hours you "really" work, or what you did as long as you meet their original requirements. They're going to ignore the rest no matter how much you try to talk a big game. This has been true since the beginning of employment.
> In that scenario, your employer captures 100% of the value from you adopting AI. You get nothing, or at any rate, it ain’t gonna be 9x your salary. And everyone hates you now.
Again, nobody cares.
> Congrats, you were just drained by a company. I’ve been drained to the point of burnout several times in my career, even at Google once or twice.
Pointless humblebrag that even we the readers don't care about.
> Now let’s look at Scenario B. You decide instead that you will only work for an hour a day, and aim to keep up with your peers using AI. On that heavily reduced workload, you manage to scrape by, and nobody notices.
This isn't a thing unless you were borderline worthless and junior to begin with.
> In this scenario, your company goes out of business. I’m sorry, but your victory over The Man will be pyrrhic, because The Man is about to be kicked in The Balls, since with everyone slacking off, a competitor will take them out pretty fast.
Hard disagree. The author has clearly never worked outside of a startup or Silicon Valley, in places where the money is more mature.
Flagged for what is at best extreme ignorance, or more likely ragebait and bad faith hype for a ship that sailed several years ago. I don't know what else to do with these blog posts anymore.
The last time I saw so much blue and orange (see the images) was the era of Battlefield 3/Battlefield 4 box art. I really do miss it tbh.
https://upload.wikimedia.org/wikipedia/en/6/69/Battlefield_3...
https://cdn11.bigcommerce.com/s-yzgoj/images/stencil/1280x12...
Downvoted within 1 minute. This website is a trash fire.
> Downvoted within 1 minute. This website is a trash fire.
Because your comment was completely off-topic.
It's not off-topic. The AI art in the blog post is really ugly and detracts from what's being discussed.
> But if you haven’t used specifically Opus 4.5/4.6 with specifically Claude Code for at least an hour, then you’re in for a real shock. Because all your complaining about AI not being useful for real-world tasks is obsolete. AI coding hit an event horizon on November 24th, 2025. It’s the real deal.
Yeah, it is over for several roles, especially frontend web development, given that Opus 4.6 can one-shot your React frontend from a Figma design 90% of the time.
Why would I want to hire 10 senior frontend developers at a $200K asking salary when one AI can replace 9 of them (yes it can) and require only a single junior-level engineer at a significantly lower price?
This idea is very tempting for companies looking to keep slashing headcount and find 'cost savings' by using AI to do more with fewer employees.
I think Opus 4.5 & 4.6 are an impressive step up in capabilities but I'm really skeptical that this model is replacing the output of 9/10 skilled front end engineers as a project grows beyond the early stages.
> Yeah, it is over for several roles, especially frontend web development
Only if the front end was super simple in the first place, IMO. And also only for the v1, which is still useful, whereas for ongoing development I think AI leads people down a path of tools that cost more to maintain and build on.
It may be that AI leads to framework and architecture choices best suited to AI, with great results up front, followed by all the same challenges and costs of quick-and-dirty development by a human. Except 10x faster: by the time anyone in management realizes the mess they're in, and the cost/benefit ratio tilts negative even in the short run (not just the long run that was already obvious to engineers), there's going to be so much more code in that bad style that it's 10x more expensive for expert humans to fix.