AI is an impediment to learning web development

(ben.page)

199 points | by bdlowery 12 hours ago ago

188 comments

  • faizshah 9 hours ago ago

    The copy-paste programmer will always be worse than the programmer who builds a mental model of the system.

    LLMs are just a faster and more frequently wrong version of the copy-paste Stack Overflow workflow; the difference is that now you don't even need to ask the right question to find the answer.

    You have to teach students and new engineers to never commit a piece of code they don't understand. If you stop at "I don't know why this works", you will never be able to get out of the famous multi-hour debug loop that you get into with LLMs, or similarly the multi-day build debugging loop that everyone has been through.

    The real thing LLMs do that is bad for learning is that you don't need to ask the right question to find your answer. This is fine if you already know the subject, but if you don't, you're not getting that reinforcement in your short-term memory, and you will find that things you learned through LLMs are not retained as long as they would be if you had done more of the work yourself.

    • nyrikki 8 hours ago ago

      It is a bit more complicated, as it can be harmful for experts also, and the more reliable it gets the more problematic it becomes.

      Humans suffer from automation bias and other cognitive biases.

      Anything that causes disengagement from a process can be a problem, especially for long-term maintainability and architectural erosion, which is mostly what I actively watch for to avoid complacency with these tools.

      But it takes active effort to avoid for all humans.

      IMHO, writing the actual code has always been less of an issue than focusing on domain needs, details, and maintainability.

      As distrusting automation is unfortunately one of the best methods of fighting automation bias, I try to balance encouraging junior engineers to use tools that boost productivity with making sure they still maintain ownership of the delivered product.

      If you use the red-green refactor method, avoiding generative tools for the test and refactor steps seems to work.

      But selling TDD in general can be challenging, especially given Holmström's theorem and the tendency of people to write implementation tests rather than focusing on domain needs.

      It is a bit of a paradox that the better the tools become, the higher the risk is. I would still encourage people to try the above; just don't make the mistake of copying the prompts required to get from red to green as the domain tests, as there is a serious risk of coupling to the prompts.

      We will see if this works for me long term, but I do think beginners manually refactoring with thought could be an accelerator.

      But only with intentionally focusing on learning why over time.
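
      To make the implementation-test point above concrete, a rough jest-style sketch of what I mean by a domain test versus an implementation-coupled test (the Cart here is a made-up example, not anyone's real code):

        // A minimal hypothetical cart to test against.
        class Cart {
          constructor(items) { this.items = items; }
          total() { return this.items.reduce((sum, item) => sum + item.price, 0); }
          shippingCost() { return this.total() > 100 ? 0 : 10; }
        }

        // Domain test: states a business rule, survives any refactor of the internals.
        test("orders over $100 ship free", () => {
          expect(new Cart([{ price: 120 }]).shippingCost()).toBe(0);
        });

        // Implementation test: coupled to internals (and to whatever prompt produced
        // them), so it breaks on refactors without saying anything about the domain.
        test("shippingCost calls total()", () => {
          const cart = new Cart([{ price: 120 }]);
          const spy = jest.spyOn(cart, "total");
          cart.shippingCost();
          expect(spy).toHaveBeenCalled();
        });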

      • faizshah 7 hours ago ago

        I completely reject this way of thinking. I remember when I was starting out it was popular to say you learn less by using an IDE and you should just use a text editor because you never learn how the system works if you rely on a run button or debug button or a WYSIWYG editor.

        Well, in modern software we stand on the shoulders of many, many giants; you have to start somewhere. Some things you may never need to learn (like, say, learning git at a deep level, when the concepts of add, commit, rebase, pull, push, cherry-pick and reset are enough even if you use a GUI to do it) and some things you might invest in over time (like learning things about your OS so you can optimize performance).

        The way you use automation effectively is to automate the things you don't want to learn about and work on the things you do want to learn about. If you're a backend dev who wants to learn how to write an API in Actix, go ahead and copy-paste some ChatGPT code; you just need to learn the shape of the API and the language first. If you're a Rust dev who wants to learn how Actix works, don't just copy and paste the code: get ChatGPT to give you a tutorial, then write your API yourself and use the docs.

        • yoyohello13 6 hours ago ago

          Using an IDE does stunt learning though. Whether that's a problem is up for debate. But relying on the run button or autocompletion does offload the need to remember how the CLI fits together or to learn library APIs.

          • faizshah 6 hours ago ago

            Do you have any evidence of that?

            From personal experience and from the popularity of rstudio, jupyter etc. the evidence points in the other direction.

            It's because when you start out you just need to learn what code looks like, how it runs (do all the lines execute at once or sequentially?), that you have these boxes that can hold values, etc.

          • FridgeSeal 6 hours ago ago

            Given that for most of my projects the run button is a convenient wrapper over "cargo run", "cargo test -- test-name", or "python file.py", I'm not super convinced by the argument.

            Maybe in C/C++ where build systems are some kind of lovecraftian nightmare?

            • jwrallie 4 hours ago ago

              Some Makefile knowledge does not hurt much, but other than that it starts to become a nightmare.

              Another big difference is the size of the standard library. One can hold in one's head all the information needed to program in C, but I would argue that for C++ or Java it would be too taxing and an IDE is almost a requirement, the alternative being consulting the documentation often.

        • chipotle_coyote 5 hours ago ago

          > If you’re a rust dev who wants to learn how Actix works don’t just copy and paste the code, get ChatGPT to give you a tutorial

          But if you don't know how Actix works, how can you be sure that the ChatGPT-generated tutorial is going to be particularly good? It might spit out non-idiomatic, unnecessarily arcane, or even flat-out wrong information, asserted confidently, and you may not have any good way of determining that. Wouldn't you be better off "using the docs yourself" in the first place, assuming they have a halfway decent "Getting Started" section?

          I know it's easy to draw an analogy between "AI-assisted coding" and autocompleting IDEs, but under the hood they're really not the same thing. Simpler autocompletion systems offer completions based on the text you've already typed in your project and/or a list of keywords for the current language; smarter ones, like LSP-driven ones, perform some level of introspection of the code. Neither of those pretend to be a replacement for understanding "how the system works." Just because my editor is limiting its autocomplete suggestions to things that make sense at a given cursor position doesn't mean I don't have to learn what those methods actually do. An LLM offering to write a function for you based on, say, the function signature and a docstring does let you skip the whole "learn what the methods actually do" part, and certainly lets you skip things like "what's idiomatic and elegant code for this language". And I think that's what the OP is actually getting at here: you can shortcut yourself out of understanding what you're writing far more easily with an LLM than with even the best non-LLM-based IDE.

        • nyrikki 5 hours ago ago

          Strawman on the WYSIWYG editor vs text editor question. That is not an "automated decision-making system".

          > Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct.

          Note this 1998 NASA paper to further refute the IDE/WYSIWYG editor claims.

          https://ntrs.nasa.gov/citations/19980048379

          > This study clarified that automation bias is something unique to automated decision making contexts, and is not the result of a general tendency toward complacency.

          The problems with automation bias have been known for decades, and the studies in the human factors field are quite robust.

          While we are still way too early in the code-assistant world to have much data IMHO, even studies that lean towards positive results for coding assistants call out issues with complacency and automation bias.

          https://arxiv.org/abs/2208.14613

          > On the other hand, our eye tracking results of RQ2 suggest that programmers make fewer fixations and spend less time reading code during the Copilot trial. This might be an indicator of less inspection or over-reliance on AI (automation bias), as we have observed some participants accept Copilot suggestions with little to no inspection. This has been reported by another paper that studied Copilot [24].

          • faizshah 5 hours ago ago

            Some decisions, like "how do I mock a default export in jest again?", are low stakes, while other decisions, like "how should I modify our legacy codebase to use the new grant type", are high stakes.

            Deciding what parts of your workflow to automate is what's important.
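
            (For what it's worth, the low-stakes one above is the kind of thing that's quick to verify anyway; a sketch, assuming a hypothetical ./config module with a default export:)

              // config.test.js - mocking a default export with jest
              jest.mock("./config", () => ({
                __esModule: true,                          // mark the mock as an ES module
                default: { apiUrl: "http://localhost" },   // the mocked default export
              }));

              const config = require("./config").default;

              test("uses the mocked default export", () => {
                expect(config.apiUrl).toBe("http://localhost");
              });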

      • nonrandomstring 7 hours ago ago

        > It is a bit of a paradox that the better the tools become, the higher the risk is

          "C makes it easy to shoot yourself in the foot; C++ makes it harder,
           but when you do it blows your whole leg off". -- Bjarne Stroustrup

    • Buttons840 7 hours ago ago

      You suggest learning the mental model behind the system, but is there a mental model behind web technologies?

      I'm reminded of the Wat talk: https://www.destroyallsoftware.com/talks/wat

      Is it worth learning the mental model behind this system? Or am I better off just shoveling LLM slop around until it mostly works?

      • faizshah 7 hours ago ago

        The modern software space is too complex for any one person to know everything. There’s no one mental model. Your expertise over time comes from learning multiple mental models.

        For example, if you are a frontend developer doing TypeScript in React, you could learn how React's renderer works, or how TypeScript's type system works, or how the browser's event listeners work. Over time you accumulate this knowledge through the projects you work on and the things you debug in prod. Or you can purposefully learn it through projects and studying. We also build up mental models of the way our product and its dependencies work.

        The reason a coworker might appear to be 10x or 100x more productive than you is because they are able to predict things about the system and arrive at solutions faster. Why are they able to do that? It's not because they use vim or type at 200 wpm. It's because they have a mental model of the way the system works that might be more refined than your own.

      • tambourine_man 7 hours ago ago

        Of course there is. The DOM stands for Document Object Model. CSS uses the box model. A lot of thought went behind all these standards.

        JavaScript is weird, but show me a language that doesn’t have its warts.

        • hansvm 7 hours ago ago

          > JavaScript is weird, but show me a language that doesn’t have its warts.

          False equivalence much? Languages have warts. JS is a wart with just enough tumorous growth factors to have gained sentience and started its plans toward world domination.

        • senko 7 hours ago ago

          All languages have their warts. In JavaScript, the warts have their language.

      • qwery 6 hours ago ago

        > Is it worth learning the mental model behind this system?

        If you want to learn javascript, then yes, obviously. You also need to learn the model to be able to criticise it (effectively) -- or to make the next wat.

        > am I better off just shoveling LLM slop around until it mostly works?

        Probably not, but this depends on context. If you want a script to do a thing once in a relatively safe environment, and you find the method effective, go for it. If you're being paid as a professional programmer I think there is generally an expectation that you do programming.

      • umpalumpaaa 6 hours ago ago

        Uhhh. I hated JS for years and years until I started to actually look at it.

        If you just follow a few relatively simple rules, JS is actually very nice and "reliable". Those "rules" are also relatively straightforward: let/const over var, === unless you know better, make sure you know about Number.isInteger, isSafeInteger, isObject, etc. (there were a few more rules like this - I fail to recall all of them - it has been a few years since I touched JS) - hope you get the idea.
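
        A tiny sketch of the kind of rules I mean (illustrative only, from memory):

          let total = 0;                  // let/const over var: block scoping, no accidental redeclaration
          const items = [1, 2, 3];
          for (const n of items) total += n;

          console.log(0 == "");                       // true  - loose equality coerces
          console.log(0 === "");                      // false - strict equality does not
          console.log(Number.isInteger(6));           // true
          console.log(Number.isSafeInteger(2 ** 53)); // false - beyond Number.MAX_SAFE_INTEGER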

        Also when I looked at JS I was just blown away by all the things people built on top of it (babel, typescript, flowtype, vue, webpack, etc etc).

        • Buttons840 5 hours ago ago

          That's a pile of tricks, not a mental model though.

          A mental model might be something like "JavaScript has strict and non-strict comparisons", but there are no strict less-than comparisons for example, so remembering to use === instead of == is just a one-off neat tip rather than an application of some more general rule.
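
          A concrete illustration of why the general rule matters more than the tip: the same coercion that == papers over still bites with <, and there is no "strict less-than" to reach for:

            console.log(2 < 10);      // true  - numeric comparison
            console.log("2" < "10");  // false - two strings compare lexicographically
            console.log("2" < 10);    // true  - the string is coerced to a number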

    • rsynnott 7 hours ago ago

      I always wonder how much damage Stackoverflow did to programmer education, actually. There’s a certain type of programmer who will search for what they want to do, paste the first Stackoverflow answer which looks vaguely related, then the second, and so forth. This is particularly visible when interviewing people.

      It is… not a terribly effective approach to programming.

      • noufalibrahim 7 hours ago ago

        I'd qualify that (and the LLM situation) with the level of abstraction.

        It's one thing to have the LLM generate a function call for you when you don't remember all the parameters. That's a low enough abstraction that it serves as a turbo-charged doc lookup. It's also probably okay to use it for a basic setup (toolchain etc. for an ecosystem you're unfamiliar with). But to have it solve entire problems for you, especially when you're learning, is a disaster.

      • exe34 7 hours ago ago

        My workflow with Stack Overflow is to try to get working code that does the minimum of what I'm trying to do, and only try to understand it after it works as I want it to. Otherwise there's an infinite amount of code out there that doesn't work (because of version incompatibility, plain wrong code, etc.) and I ran out of patience long ago. If it doesn't run, I don't want to understand it.

        • faizshah 6 hours ago ago

          This is, in my opinion, the right way to use it. You can use Stack Overflow or ChatGPT to get to "It works!" But don't stop there; stop at "It works, and I know why it works, and I think this is the best way to do it." If you just stop at "It works!" you didn't learn anything and might be unknowingly creating new problems.

        • username135 6 hours ago ago

          My general philosophy as well.

      • underlipton 6 hours ago ago

        Leaning on SO was always the inevitable conclusion, though. "Write once" (however misinterpreted that may be) + age discrimination fearmongering hindering the transfer of knowledge from skilled seniors to juniors + the increasingly brutal competition to secure one's position by producing, producing, producing. With the benefit of the doubt and the willingness to cut/build in slack all dead, of course "learning how to do it right" is a casualty. Something has to give, and if no one's willing to volunteer a sacrifice, the break will happen wherever physically or mechanically convenient.

    • sgustard 7 hours ago ago

      Quite often I'm incorporating a new library into my source. Every new library involves a choice: do I just spend 15 minutes on the Quick Start guide (i.e. "copy-paste"), or a day reading detailed docs, or a week investigating the complete source code? All of those are tradeoffs between understanding and time to market. LLMs are another tool to help navigate that tradeoff, and for me they continue to improve as I get better at asking the right questions.

      • travisgriggs 6 hours ago ago

        Or "do I even need a library really?" These libraries do what I need AND so many other things that I don't need. Am I just bandwagoning. For my very simple purposes, maybe my own "prefix(n)" method is better than a big ol' library.

        Or not.

        All hail the mashup.
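
        (If prefix(n) above means "take the first n items", the whole do-it-yourself "library" is something like this, which is rather the point:)

          // Take the first n items of an array - all the "library" I actually needed.
          const prefix = (arr, n) => arr.slice(0, Math.max(0, n));

          prefix([1, 2, 3, 4, 5], 3); // [1, 2, 3]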

      • LtWorf 7 hours ago ago

        If you spend less than 15 minutes deciding which library to include, and whether to include one at all, you're probably doing it wrong.

        • smikhanov 7 hours ago ago

          No, that person is doing it right. That’s 15 minutes of your life you’ll never get back; no library is worth it.

          • faizshah 6 hours ago ago

            If your goal is “ship it” then you might be right. If your goal is “ship it, and don’t break anything else, and don’t cause any security issues in the future and don’t rot the codebase, and be able to explain why you did it that way and why you didn’t use X” then you’re probably wrong.

    • stonethrowaway 8 hours ago ago

      If engineers are still taught engineering as a discipline then it doesn’t matter what tools they use to achieve their goals.

      If we are calling software developers who don’t understand how things work, and who can get away with not knowing how things work, engineers, then that’s a separate discussion of profession and professionalism we should be having.

      As it stands there’s nothing fundamentally rooted in software developers having to understand why or how things work, which is why people can and do use the tools to get whatever output they’re after.

      I don’t see anything wrong with this. If anyone does, then feel free to change the curriculum so students are graded and tested on knowing how and why things work the way they do.

      The pearl clutching is boring and tiresome. Where required we have people who have to be licensed to perform certain work. And if they fail to perform it at that level their license is taken away. And if anyone wants to do unlicensed work then they are held accountable and will not receive any insurance coverage due to a lack of license. Meaning, they can be criminally held liable. This is why some countries go to the extent of requiring a license to call yourself an engineer at all.

      So where engineering, actual engineering, is required, we already have protocols in place that ensure things aren’t done on a “trust me bro” level.

      But for everyone else, they're not held accountable whatsoever, and there's nothing wrong with using whatever tools you need or want to use, right or wrong. If I want to butt-splice a connector, I'm probably fine. But if I want to wire in a 3-phase breaker on a commercial property, I'm either getting it done by someone licensed, or I'm looking at jail time if things go south. And engineering is no different.

      • RodgerTheGreat 7 hours ago ago

        In many parts of the world, it is illegal to call yourself an "engineer" without both appropriate certification/training and legal accountability for the work one signs off upon, as with lawyers, medical doctors, and so on. It's frankly ridiculous that software "engineers" are permitted the title without the responsibility in the US.

        • djeastm an hour ago ago

          >as with lawyers, medical doctors, and so on. It's frankly ridiculous that software "engineers" are permitted the title without the responsibility in the US.

          It's because 1) most of us don't work on things that can get people jailed or killed and 2) the US leans towards not regulating language so much.

          But if it makes you more comfortable, just think of the term "software engineer" as tongue-in-cheek, like some people call janitors "sanitation engineers"

        • stonethrowaway 6 hours ago ago

          Yet my comment keeps getting upvoted and downvoted. I guess I’m either saying something controversial, which I don’t think I am since I am stating the obvious, or potentially the anti-AI crowd doesn’t like my tone. I’m not pro or against AI (I don’t have a dog in this race). Everything at your disposal is potentially a tool to use how you see fit, whether it be AI or a screwdriver.

      • faizshah 6 hours ago ago

        If your goal is just to get something working then go right ahead. But if your goal is to be learning and improving your process and not introducing any new issues and not introducing a new threat etc. then you’re better off not just stopping at “it works” but also figuring out why it works and if this is the right way to make it work.

        The idea that wanting to become better at using something is pearl clutching is frankly why everything has become so mediocre.

    • baxtr 7 hours ago ago

      I wonder if this is an elitist argument.

      AI empowers normal people to start building stuff. Of course it won’t be as elegant and it will be bad for learning. However these people would have never learned anything about coding in the first place.

      Are we senior dev people a bit like carriage riders that complain about anyone being allowed to drive a car?

      • UncleMeat 7 hours ago ago

        My spouse is a university professor. A lot of her students cheat using AI. I am sure that they could be using AI as a learning mechanism, but they observably aren't. Now, the motivations for using AI to pass a class are different, but I think it is important to recognize that there is using AI to build something and learn, and there is using AI to just build something.

        Engineering is also the process of development and maintenance over time. While an AI tool might help you build something that functions, that's just the first step.

        I am sure that there are people who leverage AI in such a way that they build a thing and also ask it a lot of questions about why it is built in a certain way, and seek to internalize that. I'd wager that this is a small minority.

        • KoolKat23 6 hours ago ago

          Back in school it was, on occasion, considered cheating to use a calculator, the purpose being to encourage learning. It would be absurd in the work environment to ban the use of calculators; it's your responsibility as an employee to use them correctly. As you say, that's just the first step.

          • UncleMeat 5 hours ago ago

            I'm sure at some point universities will figure out how to integrate AI into pedagogy in a way that works, other than a blanket ban. It also doesn't surprise me that until people figure out effective strategies, they say "no ChatGPT on your homework."

      • faizshah 7 hours ago ago

        It has nothing to do with your level of knowledge or experience as a programmer. It has to do with how you learn: https://www.hup.harvard.edu/books/9780674729018

        To learn effectively you need to challenge your knowledge regularly, elaborate on that knowledge and regularly practice retrieval.

        Building things solely relying on AI is not effective for learning (if that is your goal) because you aren’t challenging your own knowledge/mental model, retrieving prior knowledge or elaborating on your existing knowledge.

      • senko 7 hours ago ago

        The problem with using the current crop of LLMs for coding, if you're not a developer, is that they're leaky abstractions. If something goes wrong (as it usually will in software development), you'll need to understand the underlying tech.

        In contrast, if you're a frontend developer, you don't need to know C++ even though browsers are implemented in it. If you're a C++ developer, you don't need to know assembly (unless you're working on JIT).

        I am convinced AI tools for software development will improve to the point that non-devs will be able to build many apps now requiring professional developers[0]. It's just not there yet.

        [0] We already had that. I've seen a lot of in-house apps for small businesses built using VBA/Excel/Access in Windows (and HyperCard etc on Mac). They've lost that power with the web, but it's clearly possible.

      • lovethevoid 7 hours ago ago

        I'm a huge fan of drivers with no experience or knowledge of a car getting on the highway. After all, look at how empowered they are!

      • lawn 7 hours ago ago

        Maybe the senior developers are just jaded from having to maintain code that nobody, not even its authors, knows how it's supposed to work?

        • jcgrillo 7 hours ago ago

          I've gotten a lot of mileage in my career by following this procedure:

          1. Talk to people, read docs, skim the code. The first objective is to find out what we want the system to do.

          2. Once we've reverse-engineered a sufficiently detailed specification, deeply analyze the code and find out how well (or more often poorly) it actually meets our goals.

          3. Do whatever it takes to make the code line up with the specification. As simply as possible, but no simpler.

          This recipe gets you to a place where the codebase is clean, there are fewer lines of code (and therefore fewer bugs, better development velocity, often better runtime performance). It's hard work but there is a ton of value to be gained from understanding the problem from first principles.

          EDIT: You may think the engineering culture of your organization doesn't support this kind of work. That may be true, in which case it's incumbent upon you to change the culture. You can attempt this by using the above procedure to find a really nasty bug and kill it loudly and publicly. If this results in a bunch of pushback then your org is beyond repair and you should go work somewhere else.

  • tekchip 8 hours ago ago

    While I don't disagree with and understand the author's concern, the bottom line is that the author, and others of the same mind, will have to face facts. LLMs are a genie that isn't going back in that bottle. Humans have LLMs and will use them. The teaching angle needs to change to acknowledge this. "You need to learn long hand math because you won't just have a calculator in your pocket." Whoopsie! Everyone has a smart phone. Now I'm going back to school for my degree and classes are taught expecting calculators and even encouraging the use of various math and graphing websites.

    By all means urge folks to learn the traditional, arguably better, way, but also teach them to use the tools available well and safely. The tools aren't going away and they will continue to improve. Endeavour to turn out coders who use the tools well, producing valuable, well-written code at 2x, 5x, 8x, 20x the rate of those today.

    • BoiledCabbage 7 hours ago ago

      > You need to learn long hand math because you won't just have a calculator in your pocket." Whoopsie! Everyone has a smart phone.

      I hear this so often, that I have to reply. It's a bad argument. You do need to learn longhand math - and be comfortable with arithmetic. The reason given was incorrect (and a bit flippant), but you actually do need to learn it.

      Anyone in any engineering or STEM-based field needs to be able to estimate and ballpark numbers mentally. It's part of reasoning with numbers. Usually that means mentally doing a version of the arithmetic on rounded versions of the numbers (e.g., a $4,230 expense split 12 ways is roughly 4,200 / 12 = 350 a month).

      Not being comfortable doing math means not being able to reason with numbers, which impacts everyday things like budgeting and home finances. Have a conversation with someone who isn't comfortable with math and see how much they struggle with intuition for even easy things.

      The reason to know those concepts is because basic math intuition is an essential skill.

      • lannisterstark 4 hours ago ago

        > It's a bad argument. You do need to learn longhand math - and be comfortable with arithmetic. The reason given was incorrect (and a bit flippant), but you actually do need to learn it.

        But... this applies to engineering and/or webdev too. You can't just copy-paste a solution limited to 4096 output tokens or whatever and expect it to work in the huge system you have at your job, which the LLM has zero context on.

        Smaller problems, sure, but they're also YMMV. And honestly if I can solve smaller irritating problems using LLMs so I can shift my focus to more annoying, larger tasks, why not?

        What I am saying is that you also do need to know fundamentals of webdev to use LLMs to do webdev effectively.

    • jcgrillo 6 hours ago ago

      You still have to manually review and understand every single line of code and your dependencies. To do otherwise is software malpractice. You are responsible for every single thing your computers do in production, so act like it. The argument that developers can all somehow produce 10x or more the lines of code by leaning on an LLM falls over in the face of code review. I'd say at most you'll get 2x, but even that's pushing it. I personally will reject pull requests if I ask the author a question about how something works and they can't answer it. Thankfully, this hasn't happened (yet).

      If you have an engineering culture at your company of rubber-stamp reviews, or no reviews at all, change that culture or go somewhere better.

    • lawn 7 hours ago ago

      > "You need to learn long hand math because you won't just have a calculator in your pocket." Whoopsie! Everyone has a smart phone.

      That's a shitty argument, and it wasn't even true back in the day (cause every engineer had a computer when doing their work).

      The argument is that you won't develop a mental model if you rely on the calculator for everything.

      For example, how do you quickly make an estimate if the result you calculated is reasonable, or if you made an error somewhere? In the real world you can't just lookup the answer, because there isn't one.

      • KoolKat23 6 hours ago ago

        This allows you more time to develop a mental model, perhaps not at a learning stage but at a working stage. The LLM shows you what works and you can optimize it thereafter. It will even give you handy inline commentary (probably better than what a past developer provided on existing code).

  • steve_adams_86 9 hours ago ago

    I’ve come to the same conclusion in regards to my own learning, even after 15 years doing this.

    When I want a quick hint for something I understand the gist of, but don’t know the specifics, I really like AI. It shortens the trip to google, more or less.

    When I want a cursory explanation of some low level concept I want to understand better, I find it helpful to get pushed in various directions by the AI. Again, this is mostly replacing google, though it’s slightly better.

    AI is a great rubber duck at times too. I like being able to bounce ideas around and see code samples in a sort of evolving discussion. Yet AI starts to show its weaknesses here, even as context windows and model quality have evidently ballooned. This is where real value would exist for me, but progress seems slowest.

    When I get an AI to straight up generate code for me I can’t help but be afraid of it. If I knew less I think I’d mostly be excited that working code is materializing out of the ether, but my experience so far has been that this code is not what it appears to be.

    The author’s description of ‘dissonant’ code is very apt. This code never quite fits its purpose or context. It’s always slightly off the mark. Some of it is totally wrong or comes with crazy bugs, missed edge cases, etc.

    Sure, you can fix this, but it feels a bit too much like using the wrong tool for the job and then correcting it after the fact. Worse still is that in the context of learning, you're getting all kinds of false positive signals all the time that X or Y works (the code ran!!), when in reality it's terrible practice, or not actually working for the right reasons, or not doing what you think it does.

    The silver lining of LLMs and education (for me) is that they demonstrated something to me about how I learn and what I need to do to learn better. Ironically, this does not rely on LLMs at all, but almost the opposite.

    • prisenco 7 hours ago ago

      I'm of the same mind, AI is a useful rubber duck. A conversation with a brilliant idiot: Can't take what it says at face value but perfect for getting the gears turning.

      But after a year of autocomplete/code generation, I chucked that out the window. I switched it to a command (ctrl-;) and hardly ever use it.

      Typing is the smallest part of coding but it produces a zen-like state that matures my mental model of what's being built. Skipping it is like weightlifting without stretching.

    • bsder 5 hours ago ago

      > I really like AI. It shortens the trip to google, more or less.

      Is this "AI is good" or "Google is shit" or "Web is shit and Google reflects that"?

      This is kind of an important distinction. Perhaps I'm viewing the past through rose-tinted glasses, but it feels like searching for code stuff was way better back about 2005. If you searched and got a hit, it was something decent as someone took the time to put it on the web. If you didn't get a hit, you either hit something predating the web (hunting for VB6 stuff, for example) or were in very deep (Use the Source, Luke).

      Hmmm, that almost sounds like we're trying to use AI to unwind an Eternal September brought on by StackOverflow and Github. I might buy that.

      The big problem I have is that AI appears to be polluting any remaining signal faster than AI is sorting through the garbage. This is already happening in places like "food recipes" where you can't trust textual web results anymore--you need a secondary channel (either a prep video or a primary source cookbook, for example) to authenticate that the recipe is factually correct.

      My biggest fear is that this has already happened in programming, and we just haven't noticed yet.

      • lannisterstark 4 hours ago ago

        >Is this "AI is good" or "Google is shit" or "Web is shit and Google reflects that"?

        It's an "I can ask it questions based on results, and/or ask it to tweak things, and/or ask it dumb what-if questions, which I can't do on the web" kind of good.

        Honestly, one of the major pluses of LLMs is that they are immediate and interactive.

        >Hmmm, that almost sounds like we're trying to use AI to unwind an Eternal September brought on by StackOverflow and Github. I might buy that.

        I mean, what's the alternative? You're not going back to web of 90s (and despite nostalgia, web of 90s was fairly bad).

        >you can't trust textual web results anymore--you need a secondary channel

        I wonder how good LLMs would be at verifying other LLMs that are trained differently - eg sometimes I switch endpoints in LibreChat midpoint during a problem after I am satisfied with an answer to a problem, and ask a different LLM to verify everything. It's pretty neat at catching tidbits.

  • boredemployee 9 hours ago ago

    Well, I must admit, LLMs made me lose the joy of learning programming and made me realize I like to solve problems.

    There was a time I really liked to go through books and documentation, learn, and run the code, etc., but those days are gone for me. I prefer to enjoy free time and go to the gym now.

    • SoftTalker 9 hours ago ago

      I'm the same, and I think it's a product of getting older and being more and more acutely aware of the passage of time and not wanting to spend time on bullshit things. Nothing to do with LLMs. I still like solving problems in code but I no longer get any joy from learning yet another new language or framework to do the same things we've been doing for the past 30 years, but with a different accent.

    • mewpmewp2 9 hours ago ago

      It is kind of the opposite for me. I do a lot more side projects now, because I enjoy building, and I enjoy using LLMs as a multiplying tool, so I build more in the same amount of time. I think integrating LLMs with your workflow is also problem solving, and an exciting novel way to problem-solve at that. It gets my imagination really running, and it is awesome to be able to exchange ideas back and forth and see things from more perspectives, since an LLM can give me more varied points of view than I alone could have come up with.

      • anonzzzies 8 hours ago ago

        Same here; I am building so much more and faster than ever in my life and it is great. When I was a kid in the early 80s learning about AI, like everyone who mentored me, I thought it would be replacing programmers by 2000; that might still happen but for now the productivity is a blast.

      • VBprogrammer 8 hours ago ago

        LLMs really help me with the blank page problem. Just getting something, even partially working, to built upon can be a huge win.

      • aerhardt 8 hours ago ago

        I am in your camp. LLMs have made everything better for me, both learning and producing.

      • volker48 7 hours ago ago

        Same for me. Many times I would have an idea, but I would think ahead of all the mundane and tedious things I would need to complete to implement it and not even get started. Now I work with the LLM to do those more tedious and mechanical parts and frankly the LLM is generating pretty similar code to what I would have written anyway and if not I just rewrite it. A few times I've even been pleasantly surprised when the LLM took an approach I wouldn't have considered and I actually liked it better.

    • atomic128 8 hours ago ago

      This sentiment, I observe it everywhere. My coworkers and the people I interact with in engineering communities. A process of hollowing out and loss of motivation, a loss of meaning and purpose, as people delegate more and more of their thinking to the LLM.

      Some may ruminate and pontificate and lament the loss of engineering dignity, maybe even the loss of human dignity.

      But some will realize this is an inevitable result of human nature. They accept that the minds of their fellow men will atrophy through disuse, that people will rapidly become dependent and cognitively impaired. A fruitful stance is to make an effort to profit from this downfall, instead of complaining impotently. See https://news.ycombinator.com/item?id=41733311

      There's also an aspect of tragic comedy. You can tell that people are dazzled by the output of the LLM, accepting its correctness because it talks so smart. They have lost the ability to evaluate its correctness, or will never develop said ability.

      Here is an example from yesterday. This is totally nonsensical and incorrect yet the commenter pasted it into the thread to demonstrate the LLM's understanding: https://news.ycombinator.com/item?id=41747089

      Grab your popcorn and enjoy the show. "Nothing stops this train."

      • lgka 8 hours ago ago

        The wrong July 16th answer is hilarious! Half of the examples that are posted here as proof of the brilliance of LLMs are trivially wrong.

      • olddustytrail 7 hours ago ago

        No, they posted it to the thread to illustrate that the problem was in the training set. Which it obviously was.

        I don't know where your interpretation came from. Perhaps you're an LLM and you hallucinated it? :)

      • bongodongobob 8 hours ago ago

        It's called getting older. You guys are so dramatic about the llm stuff lol

        • sibeliuss 7 hours ago ago

          So dramatic... There are so many people who are so psyched on what LLMs have allowed them to achieve, and so many beginners that can't believe they've suddenly got an app, and so many veterans who feel disenchanted, and so on and so forth. I'm quite tired of everyone generalizing LLM reactions based on their own experience!

        • lgka 8 hours ago ago

          No, it is called having one's open source output stolen by billionaires who then pay enough apologists, directly or indirectly, to justify the heist.

          • bongodongobob 7 hours ago ago

            No one stole anything from you. Other than maybe your self esteem.

            • atomic128 6 hours ago ago

              Large language models are used to aggregate and interpolate intellectual property.

              This is performed with no acknowledgement of authorship or lineage, with no attribution or citation.

              In effect, the intellectual property used to train such models becomes anonymous common property.

              The social rewards (e.g., credit, respect) that often motivate open source work are undermined.

            • loqeh 6 hours ago ago

              Anyone who contradicts a (probably paid) apologist must be either old or lacking in self esteem. Well done, your masters will be happy.

              • bongodongobob 6 hours ago ago

                lol you think I'm paid to argue with VC bros on Hackernews?

                "Anyone who contradicts my opinion is paid off"

                I do it for the love of the game.

            • LtWorf 6 hours ago ago

              Self esteem? I mean plagiarism is a compliment. It's also a licence violation and a shame rich capitalists do it to screw everyone else as usual.

              • bongodongobob 6 hours ago ago

                How are rich people screwing you with AI? Which people?

  • yumraj 8 hours ago ago

    I’ve been thinking about this, since LLMs helped me get something done quickly in languages/frameworks that I had no prior experience in.

    But I realized a few things; while they are phenomenally great when starting new projects and small code bases:

    1) one needs to know programming/software engineering in order to use these well. Otherwise, blind copying will hurt and you won't know what's happening when the code doesn't work

    2) production code is a whole different problem that one will need to solve. Copy pasters will not know what they don’t know and need to know in order to have production quality code

    3) Maintenance of code, adding features, etc. is going to become n times harder the more the code is LLM-generated. Even large context windows will start failing, and hell, hallucinations may screw things up without one even realizing

    4) debugging and bug fixing, related to maintenance above, is going to get harder.

    These problems may get solved, but till then:

    1) we’ll start seeing a lot more shitty code

    2) the gap between great engineers and everyone else will become wider

    • KoolKat23 6 hours ago ago

      In my opinion this is unlikely to be a real problem. In one breath people are saying all they're giving you is stack overflow boilerplate and then in the same breath stating it is going to provide some unseen entropic answer.

      The truth of the matter, yes, organisations are likely to see less uniformity in their codebase but issues are likely to be more isolated/less systemic. More code will also be pushed faster. Okay so yes, there is some additional complexity.

      However, as they say, if you can't beat 'em, join 'em. The easiest way to stay on top of this will be to use LLM's to review your existing codebase for inconsistencies, provide overviews and commentary over how it all works, basically simplifying and speeding up working with this additional complexity. The code itself is being abstracted away.

    • ainiriand 8 hours ago ago

      Related discussion we were having now on Mastodon: https://floss.social/@janriemer/113260186319661283

      • yumraj 7 hours ago ago

        I hadn’t even gone that far in my note above, but that is exactly correct.

        We’ll have a resurgence of “edge-cases” and all kinds of security issues.

        LLMs are a phenomenal Stackoverflow replacement and better at creating larger samples than just a small snippet. But, at least at the moment, that’s it.

        • james_marks 7 hours ago ago

          100% on the SO replacement, which is a shame, as I loved and benefited deeply from SO over the years.

          I wonder about the proliferation of edge cases. Probably true, but an optimistic outlook: at least in my own work, LLMs deliver a failing test faster given new information, and the edge gets covered faster.

          • yumraj 5 hours ago ago

            Perhaps.

            I was referring to the above Mastodon thread, which if I understood correctly (I just scanned, didn't get too deep), was referring to ASCII vs Unicode in generated Rust code. And, I was reminded of issues we've come across over the years regarding assumptions around names, addresses, date/time handling and so on to name a few.

            So, my worry is generated code will take the easy way out, create something that will be used, the LLM-user will not even realize the issue since they'll lack deeper experience ... and much later, in production, users will run into the "edge-case" bugs later on. It's a hypothesis at this point, nothing more..

    • tomrod 8 hours ago ago

      A big part of the solution to this will be more, more focused, and more efficient QA.

      Test-driven development can inherently be cycled until correct (that's basically equivalent to what a Generative Adversarial Network does under the hood anyhow).

      I heard a lot of tech shops gutted their QA departments. I view that as a major error on their part, provided QA folks keep current with modern tooling (not only GenAI) and aren't trying to do everything manually.
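
      As a rough sketch of the "cycled until correct" loop (generateCode and runTests here are placeholders for whatever model API and test runner you actually use, not real library calls):

        // Tests act as the fixed spec; generation is retried against them.
        async function generateUntilGreen(spec, maxAttempts = 5) {
          let feedback = "";
          for (let attempt = 0; attempt < maxAttempts; attempt++) {
            const code = await generateCode(spec, feedback); // placeholder: call your model
            const result = await runTests(code);             // placeholder: run the fixed test suite
            if (result.passed) return code;
            feedback = result.failures.join("\n");           // feed failures into the next attempt
          }
          throw new Error("no candidate passed the tests");
        }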

      • yumraj 7 hours ago ago

        Many years ago I was at a very large software company, that everyone has heard of.

        Blackbox QA was entirely gutted, with only some whitebox QA left. Their titles were changed from QA engineer to software engineer. Devs were supposed to do TDD and that's it, and there's a fundamental issue there that people don't even seem to realize.

        Anyway, we digress.

    • falcor84 8 hours ago ago

      > Even large context windows will start failing

      What do you mean by that?

      • yumraj 7 hours ago ago

        If you have a large code base, a software engineer has to look at many files, and step through a big stack to figure out the bugs. Forget about concurrency and multi-threaded scenarios.

        I’m assuming that an LLM will have to ingest that entire code base as part of the prompt to find the problem, refactor the code, add features that span edits across numerous files.

        So, at some point, even the largest context window won’t be sufficient. So what do you do?

        Perhaps a RAG of the entire codebase, I don’t know. Smarter people will have to figure it out.
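
        (Hand-waving, but the usual shape of the "RAG over the codebase" idea is roughly this; splitIntoChunks, embed, cosine and llm are placeholders for whatever chunker, embedding model and prompt plumbing you actually use:)

          // Index the repo once, then retrieve only the chunks relevant to a question.
          async function indexRepo(files) {
            const chunks = files.flatMap(file => splitIntoChunks(file));
            return Promise.all(chunks.map(async c => ({ chunk: c, vec: await embed(c.text) })));
          }

          async function askAboutCodebase(index, question, k = 20) {
            const qVec = await embed(question);
            const relevant = index
              .map(e => ({ ...e, score: cosine(qVec, e.vec) }))
              .sort((a, b) => b.score - a.score)
              .slice(0, k);                                   // keep only the top-k chunks
            return llm(question, relevant.map(e => e.chunk)); // answer grounded in those chunks
          }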

        • falcor84 an hour ago ago

          Some LLMs already have a context window of 1M tokens, which I believe is already more than any human dev, but yes, I agree that it's not enough to look at it statically. Rather a multistep approach utilizing RAG and/or working directly with the language server would be the way to go. This recent post from Aider about using o1 as an architect seems to me like a good move in this direction - https://aider.chat/2024/09/26/architect.html

    • lawn 7 hours ago ago

      And maintenance, with adding features to legacy code and debugging, is much more common (and important) than getting small greenfield projects up and running.

      • yumraj 7 hours ago ago

        Exactly my point.

  • Rocka24 9 hours ago ago

    I strongly disagree. I was able to learn so much about web development by using AI; it streamlines the entire knowledge gathering and dissemination process. By asking for general overviews, then poking into the specifics of why things work the way they do, it's possible to get an extremely functional and practical knowledge of almost any area of programming. For the driven and ambitious hacker, LLMs are practically invaluable for self-learning. I think you're simply dealing with the classic self-inflicted malady of laziness.

    • lovethevoid 8 hours ago ago

      What have you learned about web development using AI that skimming the MDN docs couldn't net you?

      • Rocka24 8 hours ago ago

        Well, the issue isn't about acquiring the knowledge in general. I think so far in my learning journey I've come to realize that "practical learning" is much better than learning in the hope that something will be useful. For instance, almost everyone in the American education system at some point was forced to memorize that one mol of gas occupies 22.4 L at STP, but almost no one will ever use that knowledge again.

        Going through the actual real-world issues of web development with an LLM on the side that you can query for any issue is infinitely more valuable than taking a course in web development, imo, because you actually learn how to DO the things, instead of getting a toolbox where half the items you never use and a quarter you have no idea how to use. I strongly support learning by doing, and I also think the education system should be changed in support of that idea.

        • righthand 8 hours ago ago

          There are plenty of courses, classes, and schooling options of the kind you describe, it's just a matter of cost. An LLM is more useful for studying because it feels interactive; however, a lot of software development in general is applying what you've learned and what you need.

          If you want to spend 10 years growing your career and understanding math with the help of an LLM you will work it out eventually, by gambling on your role at a company and their offering of projects.

          If you want to spend 6 months - 6 years understanding the pieces you need for a professional career at various levels (hence the range), you pay for that kind of education.

        • albedoa 7 hours ago ago

          So...are you able to articulate what you have learned about web development through LLMs that skimming MDN wouldn't net you?

          Your "strong" disagreement and claim that you were able to learn so much about web development by using AI should be able to withstand simple and obvious followup questions such as "like what?".

          • briandear 5 hours ago ago

            “Tell me how MVC works” is far better than reading MDN documents. “Show me how to build a basic command line application.” “Tell me about polymorphism and give me some examples on how to use it.” “Teach me the basics on how to build an authentication system.” An LLM is great at that.

            Reading MDN docs doesn't put any context around what you're reading. And reading all the docs doesn't teach you any more than reading a dictionary teaches you how to write.

            Docs are a reference, they really aren’t a place to start when you don’t know what you’re reading. Besides it’s boring.

            You don’t learn Russian by reading Tolstoy. You read Tolstoy once you have some idea of what the words mean.

            • albedoa 2 hours ago ago

              Everyone understands that, Brian! If I told you that "I was able to learn so much about Russian by using AI", would you expect that I SHOULD or SHOULD NOT be able to tell you something about what I've learned without writing multiple paragraphs about something else entirely? We are now four levels deep from the actual question.

        • lovethevoid 7 hours ago ago

          MDN is like the gold standard in free practical real world application learning lol

          https://developer.mozilla.org/en-US/docs/Learn

      • sibeliuss 6 hours ago ago

        Who wants to read all of those docs that they know nothing about?

        Better: use an LLM, get something working, realize it doesn't work _exactly_ as you need it to, then refer to docs. Beginners will need to learn this, however. They still need to learn to think. But practical application and a true desire to build will get them to where they need to be.

      • briandear 5 hours ago ago

        Skimming docs is like reading the dictionary. AI is more like a conversation.

    • jay_kyburz 7 hours ago ago

      When I ask AI questions about things I know very little, I seem to get quite good results. When I ask it questions about things I know a lot about, I get quite bad answers.

  • lofaszvanitt 9 hours ago ago

    When a person is using LLMs for work and the result is abysmal, that person must go. So easy. LLMs will make people dumber in the long term, because the machine thinks instead of them and they will readily accept the result it gives if it works. This will have horrifying results in 1-2 generations. Just like social media killed people's attention span.

    But of course we don't need to regulate this space. Just let it go, all in wild west baby.

    • fhd2 8 hours ago ago

      It's the curse of our industry that we have such long feedback cycles. In the first year or so of a new system, bad code is about as productive as good code, and often faster or at least cheaper to produce.

      Now a few more years down the line, you might find yourself in a mess and productivity grinds to a halt. Most of the decision makers who caused this situation are typically not around anymore at that point.

    • thatcat 9 hours ago ago

      California passed regulations based on model size.

      https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml...

      • 1propionyl 8 hours ago ago

        It was not signed into law, it was vetoed by the governor.

        https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Ve...

        • migro23 7 hours ago ago

          Thanks for the link. Reading the rationale for not signing the legislation, the governor wrote:

          > "By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good".

            This doesn't make much sense to me. Smaller models might be more dangerous, so let's not place safeguards on the larger models because they advance the public good? Some pretty big assumptions are made here; does this make sense to anyone else?

          • LtWorf 6 hours ago ago

            It makes sense to think he got bribed.

            • migro23 6 hours ago ago

              If true, that's pretty sad.

  • wkirby 8 hours ago ago

    The reason I am a software engineer — why it keeps me coming back every week — is the satisfying click when something I didn't understand becomes obvious. I've talked to a lot of engineers over the last 15 years of doing this, and most of them possess some version of the same compulsion. What makes good engineers tick is, imo, a tenacity and a knack for solving puzzles. LLMs are useful when they let you get to the meat of the problem faster, but as the article says, they're a hindrance when they are relied on to solve the problem. Knowing the difference is hard; a heuristic I work on with my team is "use an LLM if you already know the code you want to write." If you don't already know the right answer, you won't know if the LLM is giving you garbage.

  • xyst 8 hours ago ago

    Anybody remember the days of Macromedia? I think it was Dreamweaver that spit out WYSIWYG trash from people who didn't know better.

    For a period of time there was a segment of development cleaning up this slop or just redoing it entirely.

    The AI-generated slop reminds me of that era.

  • xnx 9 hours ago ago

    "Modern" web development is so convoluted I'm happy to have a tool to help me sort through the BS and make something useful. In the near future (once the thrash of fad frameworks and almost-databases has passed) there may be a sane tech stack worth knowing.

    • lolinder 9 hours ago ago

      This exact comment (with subtle phrasing variations) shows up in every article that includes "web" in the title, but I feel like I'm living in an alternate universe from those who write comments like these. Either that or the comments got stuck in the tubes for a decade and are just now making it out.

      My experience is that React is pretty much standard these days. People create new frameworks still because they're not fully satisfied with the standard, but the frontend churn is basically over for anyone who cares for it to be. The tooling is mature, IDE integration is solid, and the coding patterns are established.

      For databases, Postgres. Just Postgres.

      If you want to live in the churn you always can, and I enjoy following the new frameworks to see what they're doing differently, but if you're living in 2024 and not stuck in 2014 you can also just... not?

      • zelphirkalt 8 hours ago ago

        React, and frameworks built on it, being used mostly for websites where none of that stuff is needed in the first place is part of what's wrong with frontend development.

        • lolinder 8 hours ago ago

          Then write your websites JavaScript-free or with minimal vanilla JS, no frameworks (much less framework churn) needed. That's been possible since the foundation of the web, and is nearly unchanged to this day for backwards compatibility reasons.

          • zelphirkalt 8 hours ago ago

            Yes, of course, you are right. And that is what I would do. And actually what I did do. Recently made a JS-free personal website, still fully responsive and has some foldable content and so on.

            However, people at $job would not listen to me when I said it could be done without jumping on the React hype train. They went ahead with React and a React-based framework to build a single-page app, which was completely unnecessary and now occupies multiple frontend devs full time, instead of simply using a traditional web framework with a templating engine and knowledge of HTML and CSS. So I am no longer in a position to make an as-little-JS-as-possible approach happen. I was a fullstack developer, but I don't want to deal with the madness, so I withdrew from the frontend part.

            See, I don't have a problem with doing this. It is just that people think they need a “modern web framework” and single-page apps and whatnot, when they actually don't: they have very limited interactive widgets on their pages, and their pages are mostly informational in nature. Then comes the router update taking 2 weeks, or a framework update taking 2-3 weeks, or a new TS version being targeted... Frequent work that wouldn't even exist with a simpler approach.

    • grey-area 9 hours ago ago

      You don't have to use 'Modern Frameworks' (aka an ahistorical mish-mash of JavaScript frameworks) to do web development at all. I'm really puzzled as to why people refer to this as modern web development.

      If you're looking for a sane tech stack there are plenty of languages to use which are not javascript and plenty of previous frameworks to look at.

      Very little javascript is needed for a useful and fluid front-end experience and the back end can be whatever you like.

      • zelphirkalt 8 hours ago ago

        Well, I wish more developers had your insight and could make it heard at their jobs. Then the web would be in a better state than it is today.

        • lovethevoid 8 hours ago ago

          Vast majority of the web's downfalls stem from advertising and tracking. Unless you're proposing a way to remove advertising, then the problems will remain no matter what tech the developers opted for.

          • mdhb 8 hours ago ago

            You are conflating two entirely different issues. Both are true, but neither at the expense of the other.

            • lovethevoid 8 hours ago ago

              They aren't entirely different issues at all and are quite tightly interwoven. It doesn't matter how many ms you shave off by using/not using react when your page loads a full screen video ad and has 50MB of trackers to aid in its quest to access personal info.

              • mdhb 6 hours ago ago

                They are very literally different issues.

              • jay_kyburz 7 hours ago ago

                One is a tech problem, the other is a business problem.

                There are no ads on my websites.

      • xnx 8 hours ago ago

        Absolutely true. All technologies that previously existed (e.g. PHP3 + MySQL) still exist. Unfortunately, if you're looking to make use of other people's libraries, it is very difficult to find them for "obsolete" stacks.

    • mplewis 9 hours ago ago

      It’s only been thirty years, but keep waiting. I’m sure that solution is just around the corner for you.

  • dennisy 9 hours ago ago

    I feel this idea extends past just learning; I worry that using LLMs to write code is making us all lazy and unfocused thinkers.

    I personally have banned myself from using any in-editor assistance where you just copy the code directly over. I do still use ChatGPT, but without copy-pasting any code, more along the lines of how I would use search.

    • steve_adams_86 9 hours ago ago

      I do this as well. I have inline suggestions enabled with supermaven (I like the tiny, short, fast suggestions it creates), but otherwise I’m really using LLMs to validate ideas, not actually generate code.

      I find supermaven helps keep me on track because its suggestions are often in line with where I was going, rather than branching off into huge snippets of slightly related boilerplate. That’s extremely distracting.

      • dennisy 9 hours ago ago

        Yes! The other point is that it is also just distracting, while you are thinking through a hard problem, to have code popping up which you inevitably end up reading even if you already know what you planned to write.

        Just had a glance at Supermaven and I'm not sure why it would be better; the site suggests it is a faster Copilot.

        • steve_adams_86 7 hours ago ago

          It’s better for me because the suggestions are much faster and typically more brief and useful. However, I haven’t used copilot for quite a while, so it might be similar these days. I recall it having very verbose, tangential suggestions.

  • orwin 9 hours ago ago

    For people who, like me, mostly do backend/network/system development and are skeptical of how helpful LLMs are (basically a waste of time if you're using them for anything other than rubber ducking, writing test cases, or autocomplete), LLMs can write a working front-end page/component in 10s. Not an especially well-designed one, but "good enough". I find they especially shine at the HTML/CSS parts. They cannot write an FSM on their own, so when I write a page I still write the states, actions and the reducer, but then I can generate the rest and it's really good.
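
    A minimal sketch of what that hand-written part might look like in TypeScript (the state, action, and reducer names are invented for illustration; the surrounding markup and CSS are the part left to the LLM):

        type State = { status: "idle" | "loading" | "loaded" | "error"; items: string[] };

        type Action =
          | { type: "FETCH" }
          | { type: "FETCH_SUCCESS"; items: string[] }
          | { type: "FETCH_FAILURE" };

        // The reducer is the part worth writing by hand: it pins down every state
        // transition, while the surrounding JSX/CSS can be generated.
        function reducer(state: State, action: Action): State {
          switch (action.type) {
            case "FETCH":
              return { ...state, status: "loading" };
            case "FETCH_SUCCESS":
              return { status: "loaded", items: action.items };
            case "FETCH_FAILURE":
              return { ...state, status: "error" };
            default:
              return state;
          }
        }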

    • dopp0 8 hours ago ago

      Which LLM are you using for those frontend use cases? ChatGPT? And do you ask in your prompts for a specific framework such as Tailwind?

  • btbuildem 8 hours ago ago

    I disagree with the premise of the article -- for several reasons. You could argue that an LLM-based assistant is just a bigger footgun, sure. Nothing will replace a teacher who explains the big picture and the context. Nothing will replace learning how to manage, handle and solve problems. But having a tireless, nimble assistant can be a valuable learning tool.

    Web development is full of arbitrary, frustrating nonsense, layered on and on by an endless parade of contributors who insist on reinventing the wheel while making it anything but round. Working with a substantial web codebase can often feel like wading through a utility tunnel flooded with sewage. LLMs are actually a fantastic hot blade that cuts through most of the self-inflicted complexities. Don't learn webpack, why would you waste time on that. Grunt, gulp, burp? Who cares, it's just another in a long line of abominations careening towards a smouldering trash heap. It's not important to learn how most of that stuff works. Let the AI bot churn through that nonsense.

    If you don't have a grasp on the basics, using an LLM as your primary coding tool will quickly leave you with a tangle of incomprehensible, incoherent code. Even with solid foundations and experience, it's very easy to go just a little too far into the generative fairytale.

    But writing code is just a small part of software development. While reading code doesn't seem to get talked about as much, it's the bread and butter of any non-solo project. It's also a very good way to learn -- look at how others have solved a problem. Chances that you're the first person trying to do X are infinitesimally small, especially as a beginner. Here, LLMs can be quite valuable to a beginner. Having a tool that can explain what a piece of terse code does, or why things are a certain way -- I would've loved to have that when I was learning the trade.

  • aatarax 6 hours ago ago

    This section sums it up, and I agree with the author here:

    > LLMs are useful if you already have a good mental model and understanding of a subject. However, I believe that they are destructive when learning something from 0 to 1.

    Super useful if you have code in mind and you can get an LLM to generate that code (eg, turning a 10 minute task into a 1 minute task).

    Somewhat useful if you have a rough idea in mind, but need help with certain syntax and/or APIs (eg, you are an experienced python dev but are writing some ruby code).

    Useful for researching a topic.

    Useless for generating code where you have no idea if the generated code is good or correct.

  • yhoots an hour ago ago

    Sounds more like this happened because the instructors failed to tell the students not to just ChatGPT all the answers, or the students didn't listen.

    This is somewhat analogous to learning arithmetic but just using a calculator to get the answers. But coding is more complicated, at least in terms of the sheer number of concepts.

    Yet we don't ban calculators from the classroom; we just tell students to use them mindfully. The same should apply to LLMs.

    I wish I'd had LLMs when I was learning to code. But you do need the desire to understand how something really works. And the pain of debugging something trivial for hours does help retention :).

    • obscuretone an hour ago ago

      I'm old enough to remember calculators being banned from the classroom because "you won't always have one in your pocket"

  • gwbas1c 8 hours ago ago

    All the mistakes Ben describes smell like typical noob / incompetent programmer mistakes.

    All the LLM is doing is helping people make the same mistakes... faster.

    I really doubt that the LLM is the root cause of the mistake, because (pre LLM) I've come across a lot of similar mistakes. The LLM doesn't magically understand the problem; instead a noob / incompetent programmer misapplies the wrong solution.

    • mdhb 8 hours ago ago

      The examples he gives were explicitly called out as mistakes you wouldn't normally make as a beginner because they are so esoteric, and I don't disagree with him at all on that one.

      • gwbas1c 8 hours ago ago

        > A page written in HTML and vanilla JavaScript, loaded from the public/ directory, completely outside of the Next.js + React system.

        I once had a newcomer open up a PR that completely bypassed the dependency injection system.

        > Vanilla JavaScript loaded in via filesystem APIs and executed via dangerouslySetInnerHTML

        I wish I had more context on this one, it looks like someone is trying to bypass React. (Edit) Perhaps they learned HTML, wanted to set the HTML on an element, and React was getting in the way?

        > API calls from one server-side API endpoint to another public API endpoint on localhost:3000 (instead of just importing a function and calling it directly)

        I once inherited C# code that, instead of PInvoking to call a C library, pulled in IronPython and then used the Python wrapper for the C library.
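
        For the Next.js case specifically, a rough sketch of that anti-pattern next to the direct call (the file path and helper names here are hypothetical, not taken from the article):

            // app/api/report/route.ts (hypothetical Next.js route handler)
            import { getUsers } from "@/lib/users"; // hypothetical shared helper

            export async function GET() {
              // The anti-pattern: the server fetches its own public endpoint over HTTP.
              // const res = await fetch("http://localhost:3000/api/users");
              // const users = await res.json();

              // The direct approach: call the shared function in-process.
              const users = await getUsers();
              return Response.json({ count: users.length });
            }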

  • weitendorf 6 hours ago ago

    I can’t help but think part of the problem is that web development is also an impediment to learning web development.

    IME there is a lot more arcana and trivia necessary to write frontend/web applications than most other kinds of software, mostly because it’s both regular programming and HTML/CSS/browser APIs. While you can build a generalized intuition for programming, the only way to master the rest of the content is through sheer exposure - mostly through tons of googling, reading SO, web documentation, and trial and error getting it to do the thing. If you’re lucky you might have a more experienced mentor to help you. And yes, there are trivia and arcana needed to be productive in any programming domain, but you can drop a freshly minted CS undergrad into a backend engineering role and expect them to be productive much faster than with frontend (perhaps partly why frontend tends to have a higher proportion of developers with non-CS backgrounds).

    It doesn’t help that JavaScript and browsers are typically “fail and continue”, nor that there may be several HTML/CSS/browser features all capable of implementing the same behavior but with caveats and differences that are difficult to unearth even from reading the documentation, such as varying support across browsers or bad interactions with other behavior.

    LLMs are super helpful for dealing with the arcana. I've recently been writing a decent amount of frontend and UI code after spending several years doing backend/systems/infra - I am so much more productive with LLMs than without, especially when it comes to HTML and CSS. I kind of don't care that I'll never know the theory behind "the right way to center a div" - as long as the LLM is good enough at doing it for me, why does it matter? And if it isn't, I'll begrudgingly go learn it. It's like caring that people don't know the trick for checking "is a number even" in assembly.
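
    For what it's worth, the usual answer is a few lines of flexbox. A minimal sketch as a React component, since that's the stack under discussion (CSS grid with place-items: center is an equally common answer):

        import type { ReactNode } from "react";

        // Flexbox centering, written as an inline style on a wrapper element.
        export function Centered({ children }: { children: ReactNode }) {
          return (
            <div style={{ display: "flex", justifyContent: "center", alignItems: "center", minHeight: "100vh" }}>
              {children}
            </div>
          );
        }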

  • BinaryMachine 9 hours ago ago

    Thank you for this post.

    I use LLMs sometimes to understand a step-by-step mathematical process (this can be hard to search for on Google). I believe getting a broad idea by asking someone is the quickest way to understand any sort of business logic related to the project.

    I enjoyed your examples, and maybe there should be a dedicated site just for examples of web-related code where an LLM generated the logic. The web changes constantly, and I wonder how these LLMs will keep up with the specs, specific browsers, frameworks, etc.

  • cladopa 7 hours ago ago

    I disagree. I am a programmer and entrepreneur with an engineering education. I know lots of languages very well (C, C++, Scheme, Python) and founded my own tech company, so managing it takes up a big chunk of my time.

    I always wanted to program (and deeply understand) the web and could not. I bought books and videos, I went to courses with real people, but I could not progress. I had limited time and there were so many different things to learn at once, like CSS and JS and HTML and infinite frameworks.

    Thanks to ChatGPT and Claude I have understood web development, deeply. You can ask both general and deep questions, and they help you like no teacher could (at least none of the teachers I had access to).

    Something I have done is create my own servers to understand what happens under the hood. No jQuery teacher could help with that. But ChatGPT could.

    AI is a great tool if you know how to use it.

  • userbinator 8 hours ago ago

    AI is an impediment to learning.

    Also ask yourself this the next time you feel compelled to just take an AI solution: what value are you providing, if anyone can simply ask the AI for the solution? The less your skills matter, the more easily you'll be replaced.

    • jwrallie 4 hours ago ago

      Sometimes asking the right question is more important than getting the right answer. That, and verifying that the answer satisfies the question, is what you are providing.

      If you delegate a task to people working under you, would it be that different? They can also learn and replace you. Maybe even with more productivity since they may be willing to use AI.

  • xp84 6 hours ago ago

    > API calls from one server-side API endpoint to another public API endpoint on localhost:3000 (instead of just importing a function and calling it directly).

    > These don’t seem to me like classic beginner mistakes — these are fundamental misunderstandings of the tools

    This all sounds on par with the junior-level developers that I feel AI has pretty quickly replaced.

    I still feel sad though :( How are people meant to get good experience in this field now?

    • inopinatus 6 hours ago ago

      Experience is one thing. Feedback is another. You can't mentor a language model.

  • csallen 8 hours ago ago

    AI is an impediment to learning high-level programming languages. High-level programming languages are an impediment to learning assembly. Assembly is an impediment to learning machine code. Machine code is an impediment to learning binary.

    • gizmo686 7 hours ago ago

      The difference is that all of those other technologies have a rock solid abstraction layer. In my entire career, I have never encountered an assembler error [0], and have encountered 1 compiler error (excluding a niche compiler that I was actively developing). Current generative AI technology is fundamentally not that.

      [0] which isn't that impressive, as I've done very little assembly.

    • lovethevoid 8 hours ago ago

      Who needs to learn how to read anyways, isn't everything just audiobooks now amiright?

  • jt2190 7 hours ago ago

    > Use of LLMs hinders learning of web development.

    I’m sure this is true today, but over time I think this will become less true.

    Additionally, LLMs will significantly reduce the need for individual humans to use a web browser to view advertisement-infested web pages or bespoke web apps that are difficult to learn and use. I expect the commercial demand for web devs is going to slowly decline for these college-aged learners as the internet transitions, so maybe it’s ok if they don’t become experts in web development.

  • Krei-se 9 hours ago ago

    I like AI to help me fix bugs and look up errors, but I usually architect everything on my own, and I'm glad I can use it for everything I would've handed off to some coworker who can do the lookups and work on a view or something that has no connection back to the base system architecture.

    So he's not wrong, you still have to ask the right questions, but with later models that think about what they do, this could become a non-issue sooner than some of those breathing in relief might think.

    We are bound to a maximum of around 8 working-memory units in our brain; a machine is not. Once AI builds a structure graph like Wikidata next to the attention vectors, we are so done!

  • infinite-hugs 8 hours ago ago

    Certainly agree that copy-pasting isn't a replacement for teaching, but I can say I've had success learning coding basics by just asking Claude or GPT to explain the code output line by line.

  • synack 9 hours ago ago

    I learned web development in the early '00s with the Webmonkey tutorials, which were easy, accessible, and fun. I don't know what the modern equivalent is, but I wish more people could have that experience.

    https://web.archive.org/web/20080405212335/http://webmonkey....

  • heisenbit 7 hours ago ago

    It's not just web development learning that is affected. Students are handing in homework generated with AI in all kinds of courses. The problem, of course, is that part of learning depends on spaced repetition (ask any AI how it learned ;-) ), so skipping that part - all across the board - is already having an impact.

  • trzy 6 hours ago ago

    That’s ok. Having to learn web development is an impediment to the flourishing of the human spirit.

  • Buttons840 8 hours ago ago

    Does anyone else feel that web technologies are the least worthy of mastery?

    I mean, a lot of effort has gone into making poorly formed HTML work, and JavaScript has some really odd quirks that will never be fixed because of backwards compatibility, and every computer runs a slightly different browser. True mastery of such a system sounds like a nightmare. Truly understanding different browsers, and CSS, and raw DOM APIs - none of this feels worthy of my time. I've learned Haskell even though I'll never use it because there are useful universal ideas in Haskell I can use elsewhere. The web stack is a pile of confusion; there's no great insight that follows from learning how JavaScript's if-statements work, just more confusion.
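
    A small sample of the kind of quirks being alluded to, all preserved for backwards compatibility (runnable as-is in a browser console or a TypeScript file):

        console.log(typeof null);        // "object"  (a bug from the first JavaScript engine, kept forever)
        console.log(typeof NaN);         // "number"
        console.log(NaN === NaN);        // false
        console.log(0.1 + 0.2 === 0.3);  // false (0.1 + 0.2 is 0.30000000000000004)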

    If there was ever a place where I would just blindly use whatever slop a LLM produces, it would be the web.

  • fimdomeio 7 hours ago ago

    I'm the person who was copy-pasting school work in the '90s for things I wasn't interested in. I'm also the person who spent years learning things I was passionate about without an end goal in mind. The issue here is not AI, it's motivation.

  • calibas 6 hours ago ago

    Having someone else do something for you is an impediment to learning anything.

  • elicksaur 9 hours ago ago

    If it’s true for the beginner level, then it’s true for every level, since we’re always learning something.

  • travisgriggs 6 hours ago ago

    You had me at "AI is an impediment to learning..."

    I use GPT all the time. But I do very little learning from it. GPT is like having an autistic 4-year-old with a vast memory as your sidekick. It can be like having a super power when asked the right questions. But it lacks experience. What GPT does is allow you to get from some point As to other point Bs faster. I work in quite a few languages. I like that when I haven't done Python for a few days, I can ask "what is the idiomatic way to flatten nested collections in Python again?". I like that I can use it to help me prototype a quick idea. But I never really trust it. And I don't ship that code until I've learned enough to vouch for it myself or can ask a real expert about what I've done.

    But for young programmers, who feel the pressure to produce faster than they otherwise could, GPT is a drug. It optimizes for getting results fast. And since there is very little accountability in software development, who cares? It's a short-term gain in productivity over a long-term gain in learning.

    I view the rise of GPT as an indictment of how shitty the web has become, how sad the state of documentation is, and what a massive sprint of layering crappy complicated software on top of crappy complicated software has wrought. Old timers mutter "it was not always so." Software efforts used to have trained technical writers to write documentation. "Less is more" used to be a goal of good engineering. AI tools will not close the gap in having well-written, concise documentation. They will not simplify software so that the mental model needed to understand it is more approachable. But they do give us a hazy approximation of what the current internet content has to offer.

    (I mean no offense to those who have autism in my comment above, I have a nephew with severe autism, I love him dearly, but we do adjust how we interact with him)

  • manx 8 hours ago ago

    Humanity was only able to produce one generation who knows how computers work.

  • tetha 8 hours ago ago

    I very much agree with this.

    If I have a structured code base and I understand the patterns and the errors to look out for, something like Copilot is useful for banging out code faster. Maybe the frameworks suck, or the language could be better and require less code, but eh. A million dollars would be nice to have too.

    But I do notice that colleagues use it to get stuff done without understanding the concepts. Or in my own projects where I'm trying to learn things, Copilot just generates code all over the place that I don't understand. And that limits my ability to actually work with that engine or code base. Yes, struggling through it takes longer, but it ends in a deeper understanding.

    In such situations, I turn off the code generator and, at most, use the LLM as a rubber duck. For example, I'm looking at different ways to implement something in a framework and A, B and C all seem reasonable. Maybe B looks like a dead end, and C seems like overkill. This is where an LLM can offer decent additional input, on top of asking knowledgeable people in that field, or other good devs.

  • ellyagg 8 hours ago ago

    Or is learning web development an impediment to learning AI?

  • monacobolid 8 hours ago ago

    Web development is an impediment to learning web development.

  • kgeist 7 hours ago ago

    >API calls from one server-side API endpoint to another public API endpoint on localhost:3000 (instead of just importing a function and calling it directly).

    >LLMs will obediently provide the solutions you ask for. If you’re missing fundamental understanding, you won’t be able to spot when your questions have gone off the rails.

    This made me think: most of the time, when we write code, we have no idea (and don't really care) what kind of assembly the compiler will generate. If a compiler expert looked at the generated assembly, they’d probably say something similar: "They have no idea what they’re doing. The generated assembly shows signs of a fundamental misunderstanding of the underlying hardware," etc. I'm sure most compiled code could be restructured or optimized in a much better, more "professional" way and looks like a total mess to an assembly expert—but no one has really cared for at least two decades now.

    At the end of the day, as long as your code does what you intend and performs well, does it really matter what it compiles to under the hood?

    Maybe this is just another paradigm shift (forgive me for using that word) where we start seeing high-level languages as just another compiler backend—except this time, the LLM is the compiler, and natural human language is the programming language.

    • jcgrillo 6 hours ago ago

      The problem with this analogy is that compilers behave deterministically, in that they'll reliably generate the same lower level output for a given source input. Therefore, the program binaries (or bytecode) they generate are debuggable. You can't treat LLM prompt inputs like source code input to a compiler, because the output has no deterministic causal relationship to the input.

      Also, you may not care now what assembly instructions your function gets compiled down to, but someday you might care a great deal, if for example you need to optimize some inner loop with SIMD parallelism. To do these things you need to be able to reliably control the output using source code. LLMs cannot do this.

      • kgeist 4 hours ago ago

        LLMs can be made more deterministic if you decrease the temperature parameter and use a fixed seed. Outputs can be controlled with test suites (i.e., checking that they do not change behavior or introduce performance regressions). For me as a team lead, a human programmer is already a very non-deterministic agent :) Give a non-trivial task to 10 human programmers and they will all solve it differently.
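
        A sketch of what that looks like, assuming the OpenAI Node SDK (the seed option is documented as best-effort, so this reduces variance rather than guaranteeing identical output):

            import OpenAI from "openai";

            const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

            const completion = await client.chat.completions.create({
              model: "gpt-4o-mini",
              temperature: 0, // near-greedy decoding: far less variation between runs
              seed: 42,       // best-effort reproducibility across identical requests
              messages: [{ role: "user", content: "Write a TypeScript function that slugifies a string." }],
            });

            console.log(completion.choices[0].message.content);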

        Lack of debuggability is a good argument. Maybe it's only a problem if you want a human to debug the generated code? How about letting an LLM iteratively run the code and figure out where it goes wrong by itself (o1 style)?

        • jcgrillo 4 hours ago ago

          > Maybe it's only a problem if you want a human to debug the generated code?

          What will you tell your customers when you're suffering some performance regression and, e.g., your Kafka lag is growing without bound? "I'm sorry, the LLM seems to be unable to figure out how to fix the latest performance regression"? You can't just absolve yourself of responsibility like that. You, the human, are responsible for every single thing the computer does in production, and if you absolve yourself of ownership by leaning on an LLM you risk catastrophic helplessness. So you'd better be confident the LLM can debug every issue that will ever come up, otherwise your decision to use the LLM could come back at you really hard.

          • kgeist 3 hours ago ago

            >You can't just absolve yourself of responsibility like that. You, the human, are responsible for every single thing the computer does in production, and if you absolve yourself of ownership by leaning on an LLM you end up risking catastrophic helplessness

            >So you'd better be confident the LLM can debug every issue that will ever come up otherwise your decision to use the LLM could come back at you really hard.

            Most of our programmers are PHP devs. They don't know any C. Once, we hit a bug in the PHP runtime which sporadically crashed our entire application. None of the PHP devs were able to fix the bug because they had no experience debugging C code, let alone the PHP runtime specifically. Fortunately, I had experience with C so I was able to research PHP's source code, and trace the crashes to a memory corruption bug in PHP which only surfaced when a very specific set of options was enabled and only under a high production load (so we did not see it during testing). We reverted the changes and the bug disappeared.

            What would happen if there was no one to investigate and find the root cause of the bug? Without knowing the cause, they'd probably first try to revert the changes ASAP and that would already solve the problem for the customers. The situation is pretty similar to what you're describing: there's a class of problems which requires knowing what happens "under the hood" at a lower level, and many shops, especially, say, in webdev, don't have the luxury of having engineers which know all ins and outs of the entire system. So this situation can happen any time without any LLMs involved: hardware failures, a kernel bug, a runtime bug -- they all can catch you unprepared.

            My point is, the risk is definitely there ("I have no idea what's happening and how to fix it") but it's not something novel and can happen without LLMs, and people usually find workarounds. As for debuggability, although LLMs can produce pretty bad code that is harder to debug, I think it's still debuggable by a human, in case of a rare event when even a sufficiently smart LLM cannot debug the problem. The code which, say, ChatGPT generates, is pretty readable and understandable.

            • jcgrillo 3 hours ago ago

              > there's a class of problems which requires knowing what happens "under the hood" at a lower level, and many shops, especially, say, in webdev, don't have the luxury of having engineers which know all ins and outs of the entire system.

              I think this passive framing of the problem--that this is some "luxury"--papers over something important, which bears repeating:

              If you advertise and provide some service, you own its production behavior including uptime, correctness, and performance. Failure to maintain these is really bad and if negligence contributes to these failures it's malpractice. Negligence includes failing to maintain and train staff properly.

              > What would happen if there was no one to investigate and find the root cause of the bug?

              I don't see this as a valid excuse, ever. To end up in such a situation is a catastrophic engineering disaster.

              • kgeist 2 hours ago ago

                >Negligence includes failing to maintain and train staff properly.

                >To end up in such a situation is a catastrophic engineering disaster.

                That was a novel bug in the PHP runtime which manifested only in very specific PHP configurations and under a very specific load. Do you recommend hiring a PHP runtime expert just in case it repeats again? Earlier this year we also ran into a rare Linux kernel bug. Do we need to hire a Linux kernel expert, just in case? Or teach PHP programmers how to debug kernel drivers? This kind of "never seen before" stuff happens quite often under high load (even though we do load testing).

                What really matters, I think, is how the entire delivery process/pipeline is designed: whether we have tests, QA, monitoring, if it's easy to revert a bad release, if we have on call engineers, tech support, backups, replicas etc. It's not realistic to have experts for every possible problem in the stack, and it's not possible to always have bug-free software; what's more important is if our engineering practices allow us to quickly recover from problems which were never seen before. And in my analogy, if we have an LLM which suddenly produces unstable code (although it passed all QA checks during testing) and no one immediately knows how to fix it, it's no different from running into a kernel, runtime or hardware bug, where the chance of anyone immediately knowing how to fix the root cause is close to zero, too. You already must have processes in place which allow you to recover from such unexpected breaking bugs quickly, with LLMs or without. Sure if the LLM crashes your production server every single day, then it's not a very useful LLM. I hope future coding LLMs will continue to improve.

                • jcgrillo an hour ago ago

                  > What really matters, I think, is how the entire delivery process/pipeline is designed: whether we have tests, QA, monitoring, if it's easy to revert a bad release, if we have on call engineers, tech support, backups, replicas etc.

                  Yeah I agree with this. Mitigating some production issue should not require a deep dive engineering effort, it should be routine. I just worry that LLM-assistance seems like it's going to turbocharge technical debt accrual and that freaks me out--the prospect of defaulting on technical debt is nightmare fuel.

    • jrflowers 7 hours ago ago

      > does it really matter what it compiles to under the hood?

      The example you quoted could trigger a DDoS if a page using that code got popular.

      • kgeist 7 hours ago ago

        I'm not claiming the code is perfect; early compilers that generated assembly often produced inefficient code as well. I hope LLMs' coding abilities will improve over time. For now, I'm not ready myself to use LLMs beyond basic prototyping.

  • menzoic 9 hours ago ago

    Learning how to use ̶C̶a̶l̶c̶u̶l̶a̶t̶o̶r̶s̶ LLMS is probably the skill we should be focusing on.

  • MicolashKyoka 7 hours ago ago

    Sure, let's hear what the "head of engineering" of an academic club with "9-12" intern-level devs, who has barely 2 years of experience as a dev himself, thinks about the industry. I mean, it's fine to have an opinion, and I'm not particularly hating on the guy, but why is it given any credence and making the front page? Are people this afraid?

    LLMs are a tool; if you can't make them work for you or learn from using them, sorry, but it's just a skill/motivation issue. If the interns are making dumb mistakes, then you need to guide them better, chop the task up into smaller segments, and contextualize it for them as needed.

  • camillomiller 9 hours ago ago

    > For context, almost all of our developers are learning web development (TypeScript, React, etc) from scratch, and have little prior experience with programming.

    To be fair, having non-programmers learn web development like that is even more problematic than using LLMs. What about teaching actual web development, like HTML + CSS + JS, in order to give them the fundamentals to control LLMs in the future?

  • seydor 9 hours ago ago

    I don't think the thing called 'modern web development' is defensible anyway

  • FpUser 6 hours ago ago

    Don't we already have enough self-certified prophets telling everyone how to do things "properly"? Nobody pushes you to use LLMs. As for us - we'll figure out what works to our benefit.

  • jMyles 8 hours ago ago

    I've been engineering (mostly backend but lots of full stack too) web technologies for almost two decades. Not the world's greatest sage maybe, but I have some solid contributions to open source web frameworks, have worked on projects of all different scales from startups to enterprise media outfits, etc.

    And I have this to say: any impediment to learning web development is probably a good thing insofar as the most difficult stumbling block isn't the learning at all, but the unlearning. The web (and its tangential technologies) are not only ever-changing, but ever-accelerating in their rate of change. Anything that helps us rely less on what we've learned in the past, and more on what we learn right in the moment of implementation, is a boon to great engineering.

    Every one of the greatest engineers I've worked with doesn't actually know how to do anything until they're about to do it, and they have the fitness to forget what they've learned immediately so that they have to look at the docs again next time.

    LLMs are lubricating that process, and it's wonderful.

  • meiraleal 9 hours ago ago

    Code School employee says: AI is an impediment to learning web development

  • blackeyeblitzar 9 hours ago ago

    Almost every student I know now cheats on assignments using ChatGPT. It’s sad.

    • synack 9 hours ago ago

      If all the students are using ChatGPT to do the assignments and the TA is using ChatGPT to grade them, maybe it's not cheating, maybe that's just how things are now.

      It's like using a calculator for your math homework. You still need to understand the concepts, but the details can be offloaded to a machine. I think the difference is that the calculator is always correct, whereas ChatGPT... not so much.

      • grey-area 9 hours ago ago

        Yes, and that's why it's nothing like using a calculator. If the LLM had a concept of right and wrong, or knew when it was wrong, that would be entirely different.

        As it is, you're getting a smeared average of every bit of similar code it was exposed to: likely wrong, inefficient, and certainly not a good tool for learning at present. Hopefully they'll improve somehow.

      • Rocka24 8 hours ago ago

        We are now in a world where the common layman can get their hands on a GPT (one predicted to soon be equivalent to a PhD in intelligence), instead of only the person scrolling Hugging Face and churning out their own custom-built models.

        I think in the future it'll be pretty interesting to see how this changes regular blue-collar or secretarial work. Will the next wave of startups be just fresh grads looking for B2B ideas that eliminate the common person?

      • bigstrat2003 8 hours ago ago

        It is both cheating and also the way things are now. Also, your calculator example is odd, because when you're learning the math that calculators can do, using a calculator is cheating. Nobody would say we should let third graders bring a calculator instead of learning to do arithmetic; it defeats the purpose of the learning.

  • cush 9 hours ago ago

    I find it particularly ironic when someone who goes to a top university with $70k/yr tuition attempts to gatekeep how learning should be. LLMs are just another tool to use. They're ubiquitously accessible to everyone and are an absolute game-changer for learning.

    Folks in an academic setting particularly will sneer at those who don't build everything from first principles. Go back 20 years, and the same article would read "IDEs are an impediment to learning web development"

    • wsintra2022 9 hours ago ago

      Hmm, not so sure. If you don't know or understand the web development fundamentals, having a friend who just writes the code for you, and who sometimes makes up wrong code and presents it as the right code, can definitely be a hindrance to learning rather than a help.

    • o11c 6 hours ago ago

      The problem is that the AI always gives wrong answers that are indistinguishable from correct answers if you don't already know what you're doing.

      And if you know what you're doing, you don't need the AI.

  • j45 8 hours ago ago

    This article feels out to lunch.

    If you use AI to teach you HTML and programming concepts first, and then to support you in using them, that is learning.

    Having AI generate an answer and then not have it satisfy you usually means the prompt could use improvement. In that case, the prompter (and perhaps the author) may not know the subject well enough.

  • wslh 9 hours ago ago

    As Python is an impediment to learning assembler?