They lied to you. Building software is hard

(blog.nordcraft.com)

136 points | by xiaohanyu 4 days ago

111 comments

  • gdubs 12 hours ago

    One of my all-time favorite quotes is from Zen Mind, Beginner's Mind and it goes: “In the beginner’s mind there are many possibilities, but in the expert’s there are few.”

    There's such a wide divergence of experience with these tools. Oftentimes people will say that anyone finding incredible value in them must not be very good, or that they fall down when you get deep enough into a project.

    I think the reality is that to really understand these tools, you need to open your mind to a different way of working than we've all become accustomed to. I say this as someone who's made a lot of software, for a long time now. (Quite successfully too!)

    In some ways, while the ladder may be getting pulled up on junior developers, I think they're also poised to really utilize these tools in a way that those of us with older, more rigid ways of thinking about software development might miss.

    • bsoles 11 hours ago

      Over the last 25 years of building commercial software, and as a programming enthusiast since I was 15 years old, I came to the conclusion that self-improvement (in the sense of gaining real expertise in a field, building a philosophy of things, and doing the right things) is in direct opposition to creating "value" in the corporate/commercial sense of today.

      Using AI/LLMs, you will perhaps create more commercial value for yourself or your employer, but it will not make you a better learner, developer, creator, or person. Going back to the electronic calculator analogy that people like to refer to these days when discussing AI, I also now think that, yes, electronic calculators actually made us worse at using our brains for complex things, which is the thing I value more than creating profits for some faceless corporation that happens to be my employer at the moment.

      • gdubs 10 hours ago

        Why are you so certain that LLMs/AI can't be used as a tool to learn and grow?

        Like Herbie Hancock once said, a computer is a tool, like an axe. It can be used for terrible things, or it can be used to build a house for your neighbor.

        It's up to people how we choose to use these tools.

        • bsoles 8 hours ago

          > Why are you so certain that LLMs/AI can't be used as a tool to learn and grow?

          Because every other post in here, for example, starts with "I vibe coded..." and not with "I learned something new today on ChatGPT".

          • nerdsniper 7 hours ago

            I’m vibe coding apps that help me explore stuff and learn things. That’s their specific purpose.

    • phicoh 12 hours ago

      There have always been young people who can quickly hack something together with whatever new tools are available. That way of working never lasts, but the tools do last.

      When tools prove their worth, they get adopted into the normal way software is produced. Older people start using them because they see the benefit.

      The key thing about software production is that it is a discussion among humans. The computer is there to help. During a review, nobody is going to look at what assembly a compiler produces (with some exceptions of course).

      When new tools arrive, we have to be able to blindly trust them to be correct. They have to produce reproducible output. And when they do, the input to those tools can become part of the conversation among humans.

      (I'm ignoring editors and IDEs here for the moment, because they don't have much effect on design; they just make coding a bit easier.)

      In the past, some tools have been introduced, gotten hyped, and faded into obscurity again. Not all tools are successful; time will tell.

    • mnky9800n 9 hours ago

      I was talking about this with someone today: before, perhaps, there was an exactness you expected. But what really matters is "good enough." And if AI-written code takes you to "good enough" according to whatever metric you've set, then what exactly is the problem? A lot of the technical part of the job is taking X data, doing an f(x) transformation to that data, and thus Y is born and handed to the next step. So if it passes whatever metric you have set to make sure that going from X to Y handles Z% of the problem space, and doesn't create downstream issues (probably this should be part of your metric), then you have done your job.

      And yes, of course sometimes the job will require you to write the code yourself because that level of precision is necessary. But why should we consider that always to be the case? There are probably new programming languages and paradigms we haven't thought of yet that would make this kind of problem solving more efficient, because right now we are not super effective at juggling both the human's and the machine's problem-space context. Except some experts who say they can orchestrate tens of agents all at once doing whatever.

      I dunno. I think right now is exciting, not hand-wringing. A computer is meant to help you think. Why shouldn't new computational tools bring excitement?
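      A minimal sketch of that "good enough" gate (the transform, the coverage metric, and the bar are all hypothetical; the point is that the output is judged by the metric, not by who or what wrote f(x)):

```python
# Hypothetical pipeline step: transform X into Y, then gate the result on a
# metric rather than on how the transformation was authored (human or AI).
def transform(records):
    # f(x): normalize raw readings into the shape the next step expects.
    return [{"id": r["id"], "value": round(r["raw"] * 0.1, 2)} for r in records]

def coverage(outputs, expected_ids):
    # The "good enough" metric: what fraction of the problem space is handled?
    handled = {o["id"] for o in outputs}
    return len(handled & expected_ids) / len(expected_ids)

X = [{"id": 1, "raw": 12.3}, {"id": 2, "raw": 45.6}]
Y = transform(X)
assert coverage(Y, {1, 2}) >= 0.95  # ship only if the metric clears the bar
```

If the gate (plus a downstream-regression check) passes, the job is done, regardless of whether a person or a model produced the body of transform.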

    • commandlinefan 12 hours ago

      ... and the biggest problem is that the people who _do_ know how hard it is to build software are the ones whose input on the matter is most likely to be discounted as "sour grapes"/"fear of obsolescence".

    • hn_throwaway_99 12 hours ago

      I definitely agree with this. Older folks have to deal with the double whammy of being familiar with what they already know, plus there is a good bit of research that learning and absorbing new things just gets harder past mid-40s or so.

      That said, I don't think this negates what TFA is trying to say. The difficulty with software has always been about focusing on the details while still keeping the overall system in mind, and that's just a hard thing to do. AI may certainly make some steps go faster, but it doesn't change much about what makes software hard in the first place. For example, even before AI, I would get really frustrated with product managers a lot. Some rare gems were absolutely awesome and worth their weight in gold, but many of them were just never willing to go into the details and minutiae that are really necessary to get the product right. With software engineers, if you don't focus on the details the software often just flat out doesn't work, so it forces you to go to that level (and I find that non-detail-oriented programmers tend to leave the profession pretty quickly). But I've seen more than a few situations where product managers manage to skate by without getting to the depth necessary.

      • atmavatar 11 hours ago

        > Older folks have to deal with the double whammy of being familiar with what they already know, plus there is a good bit of research that learning and absorbing new things just gets harder past mid-40s or so.

        Unfortunately, since the tech industry still largely skews young, reticence to chase every new hype cycle also feeds into the perception of an inability to learn new things, even after many prove to be fads (e.g., blockchain).

    • bdangubic 11 hours ago

      This reminds me of talking to my nephew at Thanksgiving years ago. He was studying for an exam after the holidays, and I was looking at his screen, open to a Google Doc that looked like his study notes, except they were being edited as I watched, by someone else. I asked about it and he goes, "we have a single Google Doc where all students collaborate on the study notes." My mind was blown; I was also using Google Docs, but not in a million years would it have crossed my mind to use it the way he and his classmates were. Can't wait to see what new blood "Juniors" brings to the table!

      • AuthAuth 11 hours ago

        All students collaborating on notes kind of defeats the point, no? As I see it, study notes are reminders that link you back to when you were reviewing the material. If you never wrote the notes, you won't get that connection back to the material.

        • bitwize 9 hours ago

          On the one hand, this is the kind of closed mind the zen guy in the root comment was talking about.

          On the other hand, you're probably right...

          • saulpw 6 hours ago

            Perhaps wisdom is closing your mind to common stupidity.

        • bdangubic 5 hours ago

          The shared study notes represent a shared understanding of the topics at hand. Different people grasp concepts in different ways, and seeing how other people think/understand/deduce/... (at least for me) makes a world of difference.

          Like seeing a PR and going "holy s**, would never have dreamed of doing it that way" - I have learned A LOT in a looooong SWE career from that...

      • whattheheckheck 3 hours ago

        Collective cognition is effectively what all knowledge work is. The programmers are the dunces that can't keep it all in their heads and need explicit type systems and databases to manage state, unlike the genius business analysts and SMEs.

  • xiaohanyu 4 days ago

    "If you are looking for that one trick that lets you get ahead and jumpstart your career, my advice to you is: Don’t choose the path of least resistance. When training a muscle, you only get stronger with resistance. The same is true for learning any new skill. It is when you struggle with a specific problem or concept that you tend to remember."

    Pretty nice description.

    • advisedwang 13 hours ago

      As with anything, there's also too much of a good thing though.

      In my own career I switched roles to get more time in an area where I felt I needed more growth and practice. Turns out I never got very good at it, and I was basically just in a role I wasn't great at for 6 years. It was miserable. My lesson is: "if you know you are bad at something, don't make it load-bearing in your life or career".

      • hobs 13 hours ago

        There's a reason that one of the big corporate skills books is StrengthsFinder: fundamentally, playing to your weaknesses isn't a good play; it's that you need to consistently challenge yourself to keep building whatever muscle you choose. You don't want to build strength by lifting 10,000 pounds all at once, but by increasing your load every day.

        In most professions, barely anyone is doing the continual education or paying attention to the "scene" for that profession; if you do that alone, you're probably already in the top 10%.

        • Joel_Mckay 13 hours ago

          "A Specialist knows more and more about less and less until he knows absolutely everything about nothing.

          A Generalist knows less and less about more and more until he knows absolutely nothing about everything"

          Getting paid well doing something you actually enjoy doing is key =3

          https://stevelegler.com/2019/02/16/ikigai-a-four-circle-mode...

  • adam_arthur 12 hours ago

    LLMs have clearly accelerated development for the most skilled developers.

    Particularly when the human acts as the router/architect.

    However, I've found Claude Code and Co only really work well for bootstrapping projects.

    If you largely accept their edits unchanged, your codebase will accrue massive technical debt over time and ultimately slow you down vs semi-automatic LLM use.

    It will probably change once the approach to large scale design gets more formalized and structured.

    We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation.

    Yes, AI will one-shot crappy static sites. And you can vibe code up to some level of complexity before it falls apart or slows dramatically.

    • Sohcahtoa82 9 hours ago

      Agreed.

      What I've found is that AI can be alright at creating a Proof of Concept for an app idea, and it's great as a Super Auto-complete, but anything with a modicum of complexity, it simply can't handle.

      When your code is hundreds of thousands of lines, asking an agent to fix a bug or implement a feature based on a description of the behavior just doesn't work. The AI doesn't work on call graphs; it basically just greps for strings it thinks might be relevant. If you know exactly where the bug lies, it can usually find it with context given to it, but at that point you're just as well off fixing the bug yourself rather than having the AI do it.
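      That grep-versus-call-graph gap is easy to demonstrate with Python's ast module (the snippet and names below are made up for illustration):

```python
import ast

# A made-up code snippet to search through.
source = '''
def parse(data):
    return data.split(",")

def load(text):
    return parse(text)

def unrelated():
    s = "parse error"  # a plain grep for "parse" matches this string too
    return s
'''

# String search, the way an agent greps: matches comments and string
# literals, not just real uses.
grep_hits = [i for i, line in enumerate(source.splitlines(), 1) if "parse" in line]

# Structural search: walk the syntax tree and keep only actual call sites.
call_sites = [
    node.lineno
    for node in ast.walk(ast.parse(source))
    if isinstance(node, ast.Call)
    and isinstance(node.func, ast.Name)
    and node.func.id == "parse"
]

print(grep_hits)   # [2, 6, 9] -- includes the misleading string literal
print(call_sites)  # [6] -- only the genuine call, inside load()
```

On a toy file both work; on hundreds of thousands of lines, the false positives from string matching are exactly what sends an agent down the wrong path.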

      The problem is that you have non-coders creating a PoC, then screaming from the rooftops how amazing AI is and showing off what it's done, but then they go quiet as the realization sets in that they can't get the AI to flesh it out into a viable product. Alternatively, they DO create a product that people start paying to use, and then they get hacked because the code is horribly insecure and hard-codes API keys.

    • athenot 12 hours ago

      > We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation.

      Containment of state happens to benefit human developers too, and keeps complexity from exploding.

      • adam_arthur 12 hours ago

        Yes!

        I've found the same principles that apply to humans apply to LLMs as well.

        Just that the agentic loops in these tools aren't (currently) structured and specific enough in their approach to optimally bound abstractions.

        At the highest level, most applications can be written in simple, plain English (expressed via function names). Both humans and LLMs will understand programs much better when represented this way.
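        As a toy illustration (all names and data hypothetical), a top level whose highest-level function reads almost as a plain-English sentence:

```python
# Hypothetical example: the top-level function reads as plain English, so a
# reader (human or LLM) can follow the program before opening any helper body.
def fetch_orders():
    return [{"id": 1, "total": 40}, {"id": 2, "total": 120}, {"id": 3, "total": 300}]

def keep_only_large_orders(orders, threshold=100):
    return [o for o in orders if o["total"] >= threshold]

def sum_order_totals(orders):
    return sum(o["total"] for o in orders)

def report_large_order_revenue():
    # Reads top-to-bottom: fetch orders, keep only the large ones, sum totals.
    return sum_order_totals(keep_only_large_orders(fetch_orders()))

print(report_large_order_revenue())  # 420
```

Each helper is small and stateless, so it can be written (or regenerated) in isolation with minimal context.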

    • CuriouslyC 12 hours ago

      Valknut is pretty good at forcing agents to build more maintainable codebases. It helps them dry out code, separate concerns cohesively and organize complexity. https://github.com/sibyllinesoft/valknut

    • themafia 12 hours ago

      > accrue massive technical debt

      The primary difference between a programmer and an engineer.

    • lowbloodsugar 12 hours ago

      >If you largely accept their edits unchanged, your codebase will accrue massive technical debt over time and ultimately slow you down vs semi-automatic LLM use.

      Worse, as it's planning the next change, it's reading all this bad code that it wrote before, but now that bad code is blessed input. It writes more of it, and instructions to use a better approach are outweighed by the "evidence".

      Also, it's not tech debt: https://news.ycombinator.com/item?id=27990979#28010192

      • adam_arthur 12 hours ago

        People can take on debt for all sorts of things. To go on vacation, to gamble.

        Debt doesn't imply it's productively borrowed or intelligently used. Or even knowingly accrued.

        So given that the term technical debt has historically been used, it seems the most appropriate descriptor.

        If you write a large amount of terrible code and end up with a money-producing product, you owe that debt back. It will hinder your business or even lead to its collapse. If it were quantified in accounting terms, it would be a liability (though the sum of the parts could still be net positive).

        Most "technical debt" is not buying the code author anything and is materialized through negligence rather than intelligently accepting a tradeoff

        • lowbloodsugar 10 hours ago

          All those examples involved borrowing money. What you're describing as "technical debt" doesn't involve borrowing anything. The equivalent for a vacation would be to take your kids to a motel with a pool, dress up as Mickey Mouse, and tell them it's "Disney World debt". You didn't go into debt. You didn't go to Disney World. You just spent what money you do have on a shit solution. Your kids quite possibly had fun, even.

          > term technical debt has historically been used

          There are plenty of terms that we no longer use because they cause harm.

    • krainboltgreene 9 hours ago

      > LLMs have clearly accelerated development for the most skilled developers.

      Have they so clearly? What's the evidence?

      • thegrim000 8 hours ago

        Most people's "truth" nowadays is what they've heard enough people say is true, not objective data/measures. What people believe is true, and say is true, IS truth, to them.

    • sjdixjjxs 10 hours ago

      > We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation

      Wait till you find out about programming languages and libraries!

      > It will probably change once the approach to large scale design gets more formalized and structured

      This idea has played out many times over the course of programming history. Unfortunately, reality doesn’t mesh with our attempts to generalize.

  • citelao 13 hours ago

    Perhaps this is a bit OT, since the article focuses more on self-development ("When training a muscle, you only get stronger with resistance"), but I wonder about the subtitle:

    > Every week there seems to be a new tool that promises to let anyone build applications 10x faster. The promise is always the same and so is the outcome.

    Is the second sentence true? Regardless of AI, I think that programming (game development, web development, maybe app development) is easier than ever? Compare modern languages like Go & Rust to C & C++, simply for their ease-of-compilation and execution. Compare modern C# to early C#, or modern Java to early Java, even.

    I'd like to think that our tools have made things easier, even if our software has gotten commensurately more complicated. If they haven't, what's missing? How can we build better tools for ourselves?

    • dijit 13 hours ago

      I'm not sure.

      Think of the game hits from the '90s. A room full of people made games which shaped a generation. Maybe it was orders of magnitude harder then, but today it takes multiple orders of magnitude more people to make them.

      Same is true for websites. Sure, the websites were dingy with poor UX and oodles of bugs... but the size of the team required to make them was absolutely tiny compared to today.

      Things are simultaneously the best they've ever been and the worst they've ever been; it's a weird situation to be in, for sure.

      But truthfully; orders of magnitude more powerful hardware was the real unlock.

      Why are Slack and Discord popular? Because it's possible to use multiple gigabytes of RAM for a chat client.

      25 years ago? Multiple gigabytes of RAM put your machine firmly in the "I have unlimited money and am probably a server doing millions of things" class.

      • hibikir 12 hours ago

        Copying a game from the 90s is easier than ever. We see small teams making 90s level games all the time. It just happens that in the market, those are now just indies.

        The market demands not just better, more complicated games, but mostly much higher art budgets. Go look at, say, Super Metroid, and compare it to Team Cherry's games in the same genre, made mostly by three people. Compare Harvest Moon from the 90s with Stardew Valley, made by one person. Compare old-school Japanese RPGs with Undertale, again with a tiny team, whose lead developer is also the lead music composer. And it's not like those games didn't sell: every game I mentioned destroyed the old games in revenue, even though the per-unit price was tiny. Silksong managed to overload Steam on release!

        And it's not just games. I was a professional programmer in the 90s. My team's job involved mostly work that today nobody would ever write, because libraries just do it for you. We just have higher demands than we ever did.

        • ryandrake 10 hours ago

          I wonder if the gaming market is actually demanding more complicated games, or if it's just that complicated games with massive budgets are all the studios are offering, so gamers accept what they're offered.

        • godelski 10 hours ago

          I think you're ignoring multiple critical variables, including what the parent mentioned.

          A pretty obvious one is that there are magnitudes more players these days and many more options for how they can play. Hell, there are even a few billion more people on the planet, so it's more than just the percentage of people owning systems that can play games. I'll let you think about the others because I want to focus on what the parent said, but if top-selling games weren't making at least an order of magnitude more money, that would be a very concerning sign.

          The parent said hardware was a big unlock and this is undoubtedly true. I don't just mean that with better hardware we can do more and I don't think the parent did either. Hardware is an unlock because it enables you to be incredibly lazy. If your players have powerful hardware you can get away with thinking less about optimization. You can get away with thinking less about memory management. You can get away with thinking less about file sizes.

          The hardware inherently makes game development easier. We all know the quake fast inverse square root for a reason. Game development used to be famous for optimization for a reason. It was absolutely necessary. Many old games are famous for pushing the limits of the hardware. Where hardware was the major bottleneck.

          But then look at things like you mentioned. Undertale is also famous for its poor code quality. All the dialogue in a single file using a bunch of switch statements? It's absurd!

          But this is both a great thing and a terrible thing. It's great because it unlocks the door for so many to share their stories and games. But it's terrible because it wastes money, money that the consumer pays. It encourages a "good enough" attitude, where the bar keeps decreasing and faster than hardware can keep up. It is lazy and hurts consumers. It makes a naïve assumption that there's only one program running on a system at a time.

          It's an attitude not limited to the game industry. We ship minimal viable products. The minimum moves, and not always up. It goes down when hardware can pick up the slack or when consumers just don't know any better.

          Things like electron are great, since they can enable developers to get going faster. But at the same time it creates massive technical debt. The fact that billion dollar companies use a resource hog like that is not something to be proud of, it should be mocked and shamed. Needing a fucking browser to chat or listen to music?! It's nothing short of absurd! Consumers don't know any better but why devs celebrate this is beyond me.

          People should move fast and break things. It's a good way to innovate and figure out how things work. But it has a cost. It leaves a bunch of broken stuff in its wake. Someone has to deal with that trash. I don't care much about the startup breaking some things but I sure do care when it's the most profitable businesses on the planet. They can pay for their messes. They create bigger messes. FFS, how does a company like Microsoft solve slow file browsers by just starting it early and running in the background?! These companies do half a dozen rounds of interviews and claim they have the best programmers? I call bullshit.

      • jtolmar 12 hours ago

        Modern AAA games take tons of people because of ballooning scope and graphical-fidelity expectations. Games like Super Mario World have gone from highly technical team efforts to something a person with no training can accomplish solo. (However, 3D tools have lagged behind dramatically. Solo-dev Mario 64 is possible but needs way more specialized knowledge.)

      • pixl97 12 hours ago

        > it's multiple orders of magnitude more people required to make them.

        That's something that seems to eat up AAA games, each person they add adds less of a person due to communication effects and inefficiencies. That and massive amounts of created artwork/images/stories.

        There are a lot of indie game studios that make games much more complicated than what was in the 90s, and have a lot less people than AAA teams.

        And ya, tons of memory has unlocked tons of capability.

      • jefftk 12 hours ago

        > Think of the Game hits from the 90's. A room full of people made games which shaped a generation. Maybe it was orders of magnitude harder then, but today, it's multiple orders of magnitude more people required to make them.

        I think this is more about rising consumer expectations than rising implementation difficulty.

        • ForHackernews 12 hours ago

          Aren't those the same thing? If your consumers demand more, that's more difficult to implement.

      • _trampeltier 13 hours ago

        I guess tools help, but libraries help more, and the whole internet of info and libraries much, much more.

      • anonymous344 12 hours ago

        It's not the making part; it's making a competitive end result. In 2000 you only needed to make something and it was good enough. Now you also need a marketing budget of $10,000 and the skills for that.

        • Jensson 12 hours ago

          > in 2000 only needed to make something and it was good enough. now you need marketing budget of 10 000$ and skills for that also

          You needed much more marketing budget in 2000 than today; I think you have that reversed. There is a reason indie basically wasn't a thing until Steam could do marketing for you.

    • kerblang 12 hours ago

      Short answer: no, things have not gotten easier. The toolchain is insanely huge. A computer science graduate is woefully underprepared for the tool tsunami that will completely swamp their career. Many of these tools are worse than no tool at all.

      Causes: Bubble economics, perverse incentives, lack of objectivity, and more.

      The good news is that huge competitive advantages are available to those who refuse to accept norms without careful evaluation.

    • vbezhenar 12 hours ago

      Modern Java is vastly more complicated than early Java. So I'm not sure I follow your reasoning. Programming nowadays is absurdly complicated. I have 20 years of experience and I can't imagine how new developer could learn it all.

      I don't really know if AI makes programming easier or harder. On one hand, you can explore any topic with AI; this is a super powerful ability when it comes to learning. On the other hand, the temptation to offload your work to AI is big, and if you do that, you'll learn nothing. So it comes down to the type of person, I guess. Some people will use AI to learn and some people will use AI to avoid learning; both behaviours are empowered.

      I have a simple and useless answer for how to solve that. Throw it all out. Start from scratch. Start with a simple CPU. Start with a simple OS. Start with simple protocols. Do not write frameworks. Make the number of layers between your code and hardware as small as possible, so it's actually possible to understand it all. Right now the number of abstraction layers is too big. Of course nobody's going to do that; people will put in more abstraction layers and it'll work, it always works. But that sucks. The software stack was much simpler 20-30 years ago. We didn't even have source control (I was the young developer who introduced Subversion into our company), but we still delivered useful software.

    • HumblyTossed 12 hours ago

      Having learned programming in the 80s (as a teen), I would say it was much easier back then. Programmers have made things vastly more complicated these days.

    • pmichaud 12 hours ago

      I think as many other people who replied to you have said, it's a mixed bag. It's better in some sense, with abstractions and frameworks that sand down sharp edges, and libraries that can do everything. But it's also crushingly more complex. Back in the day you had to know and care about memory allocation and ASM, but all the knowledge you needed was in a manual or two that you owned and could actually know the contents of.

    • drdec 13 hours ago

      Maybe the outcome they had in mind was "it helps, but nowhere near 10x"?

      Also, I'm not sure anyone was making 10x claims about the tools you cite.

    • commandlinefan 12 hours ago

      > programming is easier than ever

      Or does it just seem that way because you've had a whole lifetime to digest it one little bit at a time so that it all seems intuitive now? If "easy to understand and get started with" were the bar for programming capability, we'd have stopped with COBOL.

    • matthewkayin 11 hours ago

      > Compare modern languages like Go & Rust to C & C++, simply for their ease-of-compilation and execution.

      Except that at least for game development, C and C++ are still the go-to tools?

    • cess11 12 hours ago

      I think it is about as hard as it ever was. The tricky part is learning to think through problems in a certain way, when you have that it doesn't matter much whether you're reading hexdumps and slinging low-level code on a 68k chip or clicking about in Godot and watching videos about clicking.

      Crapping out code that does the thing was never the hard part; the hard part is reading the crap someone did and changing it. There are tradeoffs here: perhaps you invest in modeling up front and use more or less formal methods, or you're just great at executing code over and over very fast with small adjustments and interpreting the result. Either way you'll eventually produce something robust that someone else can change reasonably fast when needed.

      The additions to Java and C# are a lot about functional programming concepts, and we've had those since forever way back in the sixties. Map/reduce/filter are old concepts, and every loop is just recursion with some degree of veiling, it's not a big thing whether you piece it together in assembly or Scheme, typing it out isn't where you'll spend most of your time. That'll be reading it once it's no longer yesterday that you wrote it.

      If I were to invent a 10x-meganinja-dev-superpower-tool, it would be focused on static and execution analysis, with strong extendability in a simple DSL or programming language, and decent visualisation APIs. It would not be 'type here to spin the wheels and see what code drops out'; that part is solved many times over already, in WordPress, JAXB-oriented CRMs, and so on. The ability to confidently implement change in a large, complex system is enabled by deterministic, immediate analysis and visualisation.

      Then there are the soft skills. While you're doing it you need to keep bosses and "stakeholders" happy and make sure they do not start worrying about the things you do. So you need to communicate reliably and clearly, in a language they understand, which is commonly pictures with simple words they use a lot every day and little arrows that bring the message together. Whether you use this or that mainstream programming language will not matter at all in this.

    • wrs 13 hours ago

      You missed the word "anyone". Of course tools for programmers have seen huge improvements. The "promise" referred to here is that you don't need to learn programming skills to be an effective programmer.

  • stronglikedan 13 hours ago

    Building software is actually so easy that my 8 year old niece can do it. Shipping software is what's hard.

    • giancarlostoro 13 hours ago ago

      Shipping is easy; shipping stable, functional (let's lump in scalable) software, on the other hand...

      • 9rx 13 hours ago ago

        Also easy. The only hard thing found around software is the people.

        • acedTrex 13 hours ago ago

          This is true, it's easy to ship software to 0 users.

    • catoc 13 hours ago ago

      I understand the “8 year old niece” is hyperbole, but really? Everyone can build apps?

      “Build me a recipe app”, sure.

      Building anything substantial has consistently failed for me unless I take Claude or Codex by the hand and guide them through it step by step.

    • camnora 13 hours ago ago

      Not to mention selling it

    • enos_feedler 13 hours ago ago

      But who are you shipping it to if everyone is building it?

  • mlsu 12 hours ago ago

    Fred Brooks, from "No Silver Bullet" (1986)

    > All software construction involves essential tasks, the fashioning of the complex conceptual structures that compose the abstract software entity, and accidental tasks, the representation of these abstract entities in programming languages and the mapping of these onto machine languages within space and speed constraints. Most of the big past gains in software productivity have come from removing artificial barriers that have made the accidental tasks inordinately hard, such as severe hardware constraints, awkward programming languages, lack of machine time. How much of what software engineers now do is still devoted to the accidental, as opposed to the essential? Unless it is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not give an order of magnitude improvement.

    AI, the silver bullet. We just never learn, do we?

    • idle_zealot 12 hours ago ago

      There are mixed views here. Some are making the claim relevant to the Silver Bullet observation: that LLMs are cutting down time spent on non-essential work. But the view that's really driving the hype is that the machine can do the essential work: design the system for you, implement it, explore the possibility space, make judgments about the tradeoffs, and make decisions.

      Now, can it actually do those things? Not in my estimation. But from the perspective of a less experienced developer it can sure look like it does. It is, after all, primarily a plausibility engine.

      I'm all for investing in integrating these generative tools into workflows, but as of yet they should not be given agency, or even the aesthetic appearance of agency. It's too tempting to the human brain to shut down when it looks like someone or something else is driving and you're just navigating and correcting.

      And eventually, with a few more breakthroughs in architecture maybe this tech actually will make digital people who can do all the programming work, and we can all retire (if we're still alive). Until then, we need to defend against sleepwalking into a future run by dumb plausibility-generators being used as accountability sinks.

      • charcircuit 11 hours ago ago

        >Now, can it actually do those things? Not in my estimation

        Just today I asked my clawbot to generate a daily report for me and it was able to build an entire scraping skill for itself to use for making the report. It designed it along with making decisions along the way including changing data sources when it realized one it was trying was blocking it as a bot.

    • raincole 12 hours ago ago

      I think software was indeed 9/10 accidental activities before AI, and it's probably still mostly accidental activities with current LLMs.

      The essence: query all the users within a certain area and do it as fast as possible

      The accident: spending an hour to survey spatial tree library, another hour debating whether to make our own, one more hour reading the algorithm, a few hours to code it, a few days to test and debug it

      Many people seem to believe implementing the algorithm is "the essence" of software development so they think the essence is the majority. I strongly disagree. Knowing and writing the specific algorithm is purely accidental in my opinion.
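
      As a toy illustration of the essence here (the spatial query itself, stripped of library surveys and debugging), a naive grid-bucket index in Python; the cell size and names are made up, and a real system would reach for an R-tree or a database extension like PostGIS:

```python
from collections import defaultdict

class GridIndex:
    """Toy spatial index: bucket points into fixed-size grid cells so a
    radius query only scans nearby cells instead of every user."""

    def __init__(self, cell=10.0):
        self.cell = cell
        self.buckets = defaultdict(list)

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, user, x, y):
        self.buckets[self._key(x, y)].append((user, x, y))

    def query(self, x, y, radius):
        """Return users within `radius` of (x, y)."""
        r = int(radius // self.cell) + 1
        cx, cy = self._key(x, y)
        hits = []
        for i in range(cx - r, cx + r + 1):
            for j in range(cy - r, cy + r + 1):
                for user, ux, uy in self.buckets.get((i, j), []):
                    if (ux - x) ** 2 + (uy - y) ** 2 <= radius ** 2:
                        hits.append(user)
        return hits

idx = GridIndex()
idx.insert("alice", 1, 1)
idx.insert("bob", 3, 4)
idx.insert("carol", 200, 200)
print(sorted(idx.query(0, 0, 6)))  # ['alice', 'bob']: both within distance 6 of the origin
```

      Everything around those ~25 lines (choosing the structure, testing it, tuning it) is where the hours actually go.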

      • idle_zealot 12 hours ago ago

        Isn't the solution to that standardizing on good-enough implementations of common data structures, algorithms, patterns, etc.? Then those shared implementations can be audited, iteratively improved, critiqued, and so on. For most cases, actual application code should probably be a small core of business logic gluing together a robust set of collectively developed libraries.

        What the LLM-driven approach does is basically the same thing, but with a lossy compression of the software commons. Surely having a standard geospatial library is vastly preferable to each and every application generating its own implementation?

        • raincole 11 hours ago ago

          I mean, of course libraries are great. But the process of creating a standardized, widely accepted library/framework usually involves another kind of accidental complexity: the "designed by committee" complexity. Every user, and every future user, will have different ideas about how it should work and what options it should support. People need to communicate their opinions to the maintainers, and sometimes it can even get political.

          In the end, the 80% of features and options will bloat the API and documentation, creating another layer of accidental activity: every user needs to rummage through the docs, and sometimes the source code, to find the 20% they need. Figuring out how to do what you want with ImageMagick or FFmpeg always involved a lot of reading time before LLMs. (These libraries are so huge that I think most people use more like 2% of them, not 20%.)

          Anyway, I don't claim AI would eliminate all the accidental activities and the current LLM surely can't. But I do think there are an enormous amount of them in software development.

      • etamponi 12 hours ago ago

        If that's the essence, then of course 9/10 is accident. I don't think that's software engineering, though.

        The essence: I need to make this software meet all the current requirements while making it easy to modify in the future.

        The accident: ?

        Said another way: everyone agrees that LLMs make it very easy to build throw away code and prototypes. I could build these kind of things when I was 15, when I still was on a 56k internet connection and I only knew a bit of C and html. But that's not what software engineers (even junior software engineers) need to do.

  • prewett 6 hours ago ago

    I wonder if 3D printing is a good analogy. The promise was "you can print anything you want!" From my observation, the reality is that you can 3D print cheap plastic crap that looks like voxel rendering made manifest. This turns out to be handy in a lot of situations, like making custom jigs for something, but you're not going to be 3D printing custom jewelry, or custom furniture. Sure, you hear stories about how SpaceX is 3D printing rocket engines, but you can't afford a machine like that, and even if you could, you won't be printing custom jewelry with it.

    So, sure, some people are going to be using AI to create professional software, but they aren't going to tell you about all the engines that blew up along the way, and who knows which ones are going to blow up in the future. But custom utility software might get a whole lot more common.

  • didgetmaster 13 hours ago ago

    Anything (software or physical things) that is fast, easy, and cheap to build will never be a financial success for a single company. The minute you get some market traction, your competitors will come in and take away all your customers.

    • charcircuit 12 hours ago ago

      If you were given a copy of the entire software stack that runs YouTube I would bet $1000000 you can't take all of YouTube's customers. Businesses are more than just the software.

      • didgetmaster 10 hours ago ago

        Are you saying that you think that YouTube was fast, easy, and cheap to build?

        • charcircuit 9 hours ago ago

          As AI progresses it will be. What makes YouTube valuable is the company's relationship with advertisers, content creators, and users.

  • dfabulich 12 hours ago ago

    This article includes a graph with a negative slope, claiming that AI tools are useful for beginners, but less and less useful the more coding expertise you develop.

    That doesn't match my experience. I think AI tools have their own skill curve, independent of the skill curve of "reading/writing good code." If you figure out how to use the AI tools well, you'll get even more value out of them with expertise.

    Use AI to solve problems you know how to solve, not problems that are beyond your understanding. (In that case, use the AI to increase your understanding instead.)

    Use the very newest/best LLM models. Make the AI use automated tests (preferring languages with strict type checks). Give it access to logs. Manage context tokens effectively (they all get dumber the more tokens in context). Write the right stuff and not the wrong stuff in AGENTS.md.

    • PaulRobinson 12 hours ago ago

      That sounds exhausting.

      I'd rather spend my time thinking about the problem and solving it than thinking about how to get some software to stochastically select language that appears to be thinking about the problem, only to implement a solution I'm going to have to check carefully.

      Much of the LLM hype cycle breaks down into "anyone can create software now", which TFA makes a convincing argument for being a lie, and "experts are now going to be so much more productive", which TFA - and several studies posted here in recent months - show is not actually the case.

      Your walk-through is the reason why. You've not got magic for free, you've got something kinda cool that needs operational management and constant verification.

      • Throaway1985232 12 hours ago ago

        I’ve seen otherwise intelligent and capable people get so addicted to the convenience and potential of LLMs that they start to lose their ability to go through problems slowly, step by step. It's sad.

      • sgarland 5 hours ago ago

        Agreed. My work is mandating Claude Code usage this week for everyone. I spent all day today getting it to write tickets, code, and tests for something I knew how to do. I don’t understand the appeal. Telling the AI “commit those changes and then push,” then waiting for the result, takes way longer than gcmsg <commit msg> && gp.

  • jackinthehat 10 hours ago ago

    Years ago I watched a very senior engineer refuse to use an IDE debugger because “real understanding means doing it in your head.” He was brilliant - and also spent two days chasing a bug that a junior fixed in 10 minutes by setting a breakpoint. The junior didn't understand more; he just had a better tool for that moment.

    Tools don’t make you wiser or lazier by default — they amplify whatever habits you already have. If you’re using them to avoid thinking, that shows. If you’re using them to explore faster, that shows too.

    Beginner’s mind isn’t about ignorance; it’s about being willing to try leverage where it exists.

  • zkmon 12 hours ago ago

    > With no-code tools you often reach a hard limit where the tool simply does not make sense to use anymore.

    No-code follows the same trend that has abstracted all the generic stuff into infrastructure layers, letting developers focus on Lambda functions while everything at the lower levels is config-driven. This has been happening all along: pushing the developer up to easier, higher layers while absorbing the complexity and algorithmic work into config-driven layers.

    Runtime cost of a Lambda function might far exceed that of a fully hand-coded application hosted on your local server. But there could be other factors to consider.
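
    For concreteness, this is roughly what "focus on Lambda functions" looks like in code: a minimal handler sketch for the Python runtime, assuming an API Gateway-style proxy event (the names and the greeting logic are illustrative). Memory, triggers, and routing all live in configuration, not here:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler; everything around it is config-driven."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally we can invoke it directly with a fake API Gateway event.
resp = handler({"queryStringParameters": {"name": "dev"}}, None)
print(resp["statusCode"], json.loads(resp["body"])["message"])  # 200 hello, dev
```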

    Same with AI. You get a jump-start with full speed, and then you can take the wheel.

    • etamponi 12 hours ago ago

      The point of the article is that the jump-start AI gives you is not the same as the one well-thought-out frameworks give you. What AI writes falls apart and leaves you with the ruins.

  • countWSS 12 hours ago ago

    There is a point in there: long-range analysis and debugging without AI is much harder, and AI spots lots of non-obvious stuff very fast. If we consider "spotting non-obvious flaws" a skill, it will atrophy as beginners learn to use AI to scan code for flaws; it's effective, but it doesn't teach anything. Reading long blocks of code and mentally simulating them is an incredibly valuable skill, and it will find things AI misses (anything too complex, e.g. nested/recursive control flow, async and coroutines/threads interacting, etc.). AI goes for the obvious stuff first and has to be manually pointed at "identify flaws, focusing on X".

  • Tiberium 12 hours ago ago

    Am I missing something or is the actual point of the article just "don't start learning programming by using AI"? The title seems very different from the content.

  • coffeefirst 13 hours ago ago

    One more thing…

    The newbie prototype was never all that hard. You could, in my day, have a lot of fun that first week with Dreamweaver, Visual Basic, or cargo-culted HTML.

    There’s nothing wrong with this.

    But to get much further than that ceiling you probably needed to crack a book.

  • raincole 12 hours ago ago

    I only now realize how spot-on the muscle-training analogy is. In the modern world, very few people are hired for their muscles alone, and actually building muscle costs money for the vast majority.

    This is where I see hand-building software going.

  • ElijahLynn 11 hours ago ago

    I read a quote from somebody in the industry recently that stuck. I don't remember who it was.

    "Writing software is easy, changing it is hard."

    • ryandvm 11 hours ago ago

      Absolutely true. Especially so with poorly abstracted software design.

      This is why so many new teams' first order of business is invariably a suggestion to "rewrite everything".

      They're not going to do a better job or get a better product, it's just the only way they're going to get a software stack that does what they want.

  • vineethy 12 hours ago ago

    I strongly disagree with this article. I think using these tools can directly bring a junior engineer closer to a senior engineer. Telling junior engineers that they have to get better at typing out code in order to be better engineers misses what actually makes someone a better engineer.

    It's worth actually being specific about what differentiates a junior engineer from a senior engineer. There are two things: communication and architecture. The combination of the two makes you a better problem solver. Talking to other people helps you figure out your blind spots and forces you to reduce complex ideas down to their most essential parts. The loop of solving a problem and then seeing how well the solution worked gives you an instinct for what works and what doesn't for any given problem. So how do agents make you better at these two things?

    If you are better at explaining what you want, you can get the agents to do what you want a lot better. So you'd end up being more productive. I've seen junior developers that were pretty good problem solvers improve their ability to communicate technical ideas after using agents.

    Senior engineers develop instincts for issues down the road. So when they begin any project, they'll take this into account and work by thinking through this. They can get the agents to build towards a clean architecture from the get go such that issues are easily traceable and debuggable. Junior developers get better at architecture by using agents because they can quickly churn through candidate solutions. this helps them more rapidly learn the strengths and weaknesses of different architectures.

    • Thanemate 12 hours ago ago

      People don't develop the ability to solve algebraic equations when they see a professor solving it on the whiteboard. That's just the introduction to the methodology. The way people develop problem solving is by solving problems themselves.

      This is why everyone's thirsty for senior/staff engineers who are AI powered right now, because their entire work experience was the typical SWE experience.

      I cannot wait for the industry to hit a highly skilled SWE drought in the next 5 years, so I can swoop in and become the AI-powered engineer who saves the day, because other junior-to-mid SWEs outsourced their problem solving way too early, either by falling for the "don't be left behind" narrative (which is absurd: what about people who will get into CS 6 years from now? Do they miss some metaphorical train?) or because their manager forced them to adopt the tools.

    • sgarland 5 hours ago ago

      > It's worth actually being specific about what differentiates a junior engineer from a senior engineer. There's two things: communication and architecture.

      Uhhh… also skills and abilities? You won’t develop either of those by repeatedly asking an AI to solve problems for you.

    • lowbloodsugar 12 hours ago ago

      What comes to mind is Java vs assembly. Claude is just a really, really high-level language compiler. I work with senior Java devs who have never written assembly.

      On the learning front, I spent the weekend asking Claude questions about Rust, then getting it to write code that achieved the result I wanted. I also now have a much better understanding of the different options, because I got three different working examples and got to tinker with them. It's a lot faster to learn how an engine works when you have a working engine on a dyno than when you have no engine. Claude built me a diesel, a gasoline, and an electric engine, and then I took them apart.

  • anonymous344 12 hours ago ago

    True. I've built a simple app that solves the usual annoyance in that app space of entering/typing a time and date. After years and years, people still pay for it, for which I'm very grateful. I've even seen many M$ companies build their products yet lack the simple sense to ease the user experience with a non-default date and time selector.

  • Bishonen88 11 hours ago ago

    https://www.youtube.com/watch?v=7lzx9ft7uMw

    ^ Everything App for Personal use that I'm thinking about making public in some way

    ~50k LOC across ~400 files. Docker, Postgres, React + Fastify. I'd say between 15 and 20 hours of vibe coding.

    - Tasks, Goals, Habits

    - Calendar showing all of the above with two way google sync

    - Household sharing of markdown notes, goals and more

    - Financial projections, spending, earning, recurring transactions and more

    - Meal tracking with pics, last eaten, star rating and more

    - Gantt chart for goals

    - Dashboard for at a glance view

    - PWA for android with layout optimizations

    - Dark mode

    ... and more

    Could I have done it in the last 5 years? Yes, but it would've taken 3-4 months, if not more. Now we could talk 24/7 about whether it's clean code, super maintainable, etc. The code written by hand wouldn't be either, if it were just me doing a hobby project.

    Shipping is rather straightforward as well thanks to LLM's. They hold your hand most of the way. Being a techie makes this much, much easier...

    I think developers are cooked one way or another. It won't take long now. The same question asked a year ago had a dramatically different answer: AI was helpful to some extent but couldn't code up basic things.

  • worik 12 hours ago ago

    > They make the simple parts of software development simpler, but the complex parts can often become more difficult.

    This is so frustratingly common.

  • yowlingcat 12 hours ago ago

    Something that's been on my mind recently - what if gen AI coding tools are ultimately attention casinos in the same way social media is? You burn through tons of tokens and you pay per token, it feels productive and engaging, but ultimately the more you try and fail, the more money the vendor makes. Their expressed (though perhaps not stated) economic goal may be to keep you in the "goldilocks zone" of making enough progress to not give up, but not so much progress that you 1-shot to the end state without issues.

    I'm not saying that they can actually do that per se; switching costs are so low that if you are doing worse than an existing competitor, you'd lose that volume. Nor am I saying they are deliberately bilking folks -- I think it would be hard to do that without folks cottoning on.

    But, I did see an interesting thread on Twitter that had me pondering [1]. Basically, Claude Code experimented with RAG approaches over the simple iterative grep that they now use. The RAG approach was brittle and hard to get right in their words, and just brute forcing it with grep was easier to use effectively. But Cursor took the other approach to make semantic searching work for them, which made me wonder about the intrinsic token economics for both firms. Cursor is incentivized to minimize token usage to increase spread from their fixed seat pricing. But for Claude, iterative grep bloating token usage doesn't harm them and in fact increases gross tokens purchased, so there is no incentive to find a better approach.

    I am sure there are many instances of this out there, but it does make me inclined to wonder if it will be economic incentives rather than technical limitations that eventually put an upper limit on closed weight LLM vendors like OpenAI and Claude. Too early to tell for now, IMO.

    [1] https://x.com/antoine_chaffin/status/2018069651532787936

    • Throaway1985232 12 hours ago ago

      Well, the first time I got really excited about an LLM was when it told me “yes, if you give me your game ideas and we iterate together, I can handle 100% of the coding.” Lies, pure lies.

  • sumanep 12 hours ago ago

    Who lied to me?

  • 13415 12 hours ago ago

    I have to strongly disagree with that. I was provably never as productive as when I used REALbasic, which was a classic RAD tool. I successfully sold the software made with it for quite a while.

    As most people here probably know, it's now called Xojo and is, in my opinion, both somewhat outdated and expensive. So I'm not recommending it, but credit where it's due: it certainly was due for early versions of REALbasic, when it was still affordable shareware.

    The problem with all RAD tools seems to be that they eventually morph into expensive corporate tools no matter what their origins were. I don't know any cross-platform exception (I don't count Purebasic as RAD and it's also not structured).

    As for AI, it seems to be just the same: the right AI tool accelerates the easy parts so you have more time for the hard parts. Another thing that bothers me a lot is when alleged "professionals" argue against everyday computing for everyone. They're accelerating the death of general computing platforms, and in the end no one will benefit from that.

  • 1970-01-01 4 days ago ago

    We have hard evidence of it becoming easier every damn day. AI is taking these jobs. The models aren't perfect, but the speed tradeoff is so massive that you really can't say it's "hard" to build anything anymore. Nobody is lying.

    • contagiousflow 13 hours ago ago

      What is the hard evidence?

      Edit: What I mean by this is there may be some circumstantial evidence (less hiring for juniors, more AI companies getting VC funding). We currently have no _hard_ evidence that programming has had a substantial speed increase/deskilling from LLMs yet. Any actual __science__ on this has yet to show this. But please, if you have _hard_ evidence on this topic I would love to see it.

      • fullshark 13 hours ago ago

        The closest, I guess, is that hiring of juniors is down, but that's possibly just a post-COVID pullback being credited to AI.

        I definitely think a lot of junior tasks are being replaced with AI, and companies are deciding it's not worth filling junior roles at least temporarily as a result.

        • datsci_est_2015 13 hours ago ago

          I don’t think this is unique to software. Across the US over the past decades there’s been a massive contraction in companies being willing to “train-up” employees. It’s greedy, and it works for their bottom lines. But it’s a tragedy of the commons and a race to the bottom. It also explains the dearth of opportunities for getting into the trades, despite sky-high demand.

          If anything, the expectations for an individual developer have never been higher, and now you're not getting any 22-26 year olds with enough software experience to be anything but a drain on resources when profitability is demanded yesterday.

          Maybe we need to go back to ZIRP if only to get some juniors back on to the training schedule, across all industries.

          For other insanely toxic and maladaptive training situations, also see: medicine in the US.

        • chasd00 13 hours ago ago

          > I definitely think a lot of junior tasks are being replaced with AI

          I think team expansion is being reduced as well. If you took a dev team of 5, armed them all with Claude Code + training on where to use it and where not to I think you could get the same productivity as hiring 2 additional FTE software devs. I'm assuming your existing 5 devs fully adopt the tool and not reject it like a bad organ transplant. Maybe an analogy could be the invention of email reducing the need for corporate typing pools and therefore fewer jr. secretaries ( typists) are hired.

          /I'm just guessing that being a secretary is on the career progression path for someone in the typing pool, but you get the idea.

          Edit: one thing I missed in my email analogy is that when email was invented it was free and available to anyone who could set up sendmail or some other MTA.

        • chasd00 13 hours ago ago

          > I definitely think a lot of junior tasks are being replaced with AI

          One last thing to point out, then my lunch is over: I think AI coding agents are going to hit services/marketplaces like Fiverr especially hard. The AI agents are the new gig economy with respect to code. I spent about $50 on Claude Code pay-as-you-go over the past 3 days to put together a website I've had in the back of my mind for months. Claude Code got it to a point where I can easily pick it up and finish it over a few more nights/weekends. UI/UX is especially tedious for me, and Claude Code was able to take my vague descriptions and make the interface nicely organized and contemporary. The architecture is perfectly reasonable for what I want to do (Auth0 + React + Python (Flask) + Postgres + an OAuth2 integration with a third party), and it got all of that about 95% right on the first try, for $50! Services/marketplaces like Fiverr have to be thinking really hard right now.

    • xmprt 13 hours ago ago

      If you think of building software as just writing the code then sure AI makes things a lot easier. But if software engineering also includes security, setting up and maintaining infrastructure, choosing the right tradeoffs, understanding how to deal with evolving requirements without ballooning code complexity, etc., then AI struggles with that at the moment.

      • fastball 13 hours ago ago

        With infra-as-code, an LLM can also set up and maintain infra. Security is another issue and 100% that still seems to be the biggest footgun with agentic software development, but honestly that is mostly just a prompting/context issue. You can definitely get an LLM to write secure code, it is just arguably not any model's "default".

        • omnimus 12 hours ago ago

          The problem is not whether the LLM writes secure code. The problem is whether you can know and understand that the code is reasonably secure. That requires a pretty deep understanding of the program, and that understanding is (for most people) built by developing the program.

          I'm not sure how it is for others, but for me it's a lot harder to read a chunk of code to understand and verify it than to take the problem head-on with code myself and then maybe consult an LLM.

        • chasd00 13 hours ago ago

          I think the industry is going to end up with exceptional software engineers organizing and managing many average coding assistants. The problem is the vast majority of us are not exceptional software engineers (obviously).

    • themafia 12 hours ago ago

      > but the speed tradeoff

      If you only care about a single metric you can convince yourself to make all kinds of bad decisions.

      > Nobody is lying.

      Nobody is being honest either. That happens all the time.

  • tom2948329494 12 hours ago ago

    > The problem is that while these tools can help you build a simple prototype incredibly quickly, when it comes to building functional applications they are much more limited

    As someone with 0 (zero) Swift skills who has built a very well-functioning iOS app purely with AI, I disagree.

    AI made me infinitely faster, because without it I wouldn't even have tried to build it.

    And yes, I know the limits and security concerns and understand enough to be effective with AI.

    You can build functioning applications just fine.

    It's complexity and novel problems where AI _might_ struggle, but not every piece of software is complex or novel.

    • anonymous344 12 hours ago ago

      Do you make money with it? Like a monthly subscription? Because that's my Achilles heel: how to sync the MySQL backend with Apple's payment system so it knows when a user ordered or cancelled.

      • tom2948329494 12 hours ago ago

        Yes, using Apple's own StoreKit for in-app purchases.

  • threethirtytwo 12 hours ago ago

    Software is the hardest thing on planet earth. That's why there's this concept of bootcamps. No other profession has this concept of "bootcamps".

    Building a plane is easier than building software. That's why they don't have bootcamps for building planes or becoming a rocket engineer. Building rockets or planes as an engineer is a breeze so there's no point in making a bootcamp.

    That's the awesome thing about being a swe, it's so hard that it's beyond getting a university degree, beyond requiring higher math to learn. Basically the only way to digest the concept of software is to look at these "tutorials" on the internet or have AI vibe code the whole thing (which shows how incredibly hard it is, just ask chatGPT).

    My friend became a rocket engineer and he had to learn calculus, physics and all that easy stuff which university just transferred into his brain in a snap. He didn't have to go through an internet tutorial or bootcamp.