Choosing learning over autopilot

(anniecherkaev.com)

57 points | by evakhoury 2 days ago

38 comments

  • joe_mamba 2 days ago

    From the author:

    >ai-generated code is throw-away code

    Mate, most code I've ever written across my career has been throwaway code. The only exception is some embedded code that's most likely still on the streets to this day. But most of my desktop and web code has been thrown away by now by my previous employers, or replaced by someone else's throwaway code.

    Most of us aren't building DOOM, the Voyager probe or the Golden Gate Bridge here, epic feats of art and engineering designed to last 30-100+ years. We're just plumbers hacking something together quickly to hold things in place until the music stops playing, and I have no issue offloading that to a clanker if I can, so I can focus on the things I enjoy doing. There's no shame in that, and no pride in it either; I'm just paid to "put the fries in the bag", that's it. Do you think I grew up dreaming about writing GitHub Actions yaml files for a living?

    Oh and BTW, code being throwaway is the main reason demand and pay for web SW engineers have been so high. In industries where code is one-and-done, pay tends to scale down accordingly, since a customer is more than happy to keep using your C app on a Windows XP machine down in the warehouse instead of paying you to keep rewriting it every year in a fancier framework in the cloud.

    • m463 2 days ago

      It's kind of amazing that the really mainstream jobs create and pitch throwaway code, while a few key niche jobs, with little demand, can really create enduring products.

      Kind of like designing a better social media interface probably pays 100x what a toilet designer would be paid, but a better toilet would benefit the world 1000x.

      • esafak a day ago

        The difference between economic value and social value.

        • joe_mamba a day ago

          Which is why I dislike GDP being thrown around in discussions as the ultimate dick-measuring metric. High economic value activities don't necessarily translate or trickle down into high social value environments.

          For example, I went to visit SF as a young lad and was expecting to be blown away given the immense wealth that area generates, but I was severely disappointed with what I saw on the street. I used to think my home area of Eastern Europe was kind of a shithole, but SF beats it hands down. Like, there are dozens of places on this planet that are way nicer to live in than SF despite being way poorer by comparison.

          • m463 a day ago

            > kind of a shithole

            literally supporting the toilet designer argument.

            tangentially, japanese toilets are quite amazing.

    • vjerancrnjak 2 days ago

      The RAG / LLM pipeline industry just continues in the same fashion, throwing on even more glue: insanely slow and expensive, but it works because companies somehow have money to waste, perpetually. Not that much different from the whole Apache stack or similar gluey, expensive and slow software.

      There is similar mindless glue in all tech stacks. LLMs are trained on it, and successfully do more of it.

      Even AI companies just wastefully do massive experiments with suboptimal data and compute bandwidth.

      • dgxyz 2 days ago

        Yeah this is what kills me. Most of the problems we solve are pretty simple. We just made the stacks really painful and now LLMs look sensible because they are trained to reproduce that same old crap mindlessly.

        What the hell are we really doing?

        What looked sensible to me was designing a table, form and report in Microsoft Access in 30 minutes, without requiring 5 engineers, writing 50k lines of React and fucking around with kubernetes and microservices to get there.

        LLMs just paste over the pile of shit we build on.

        • spion 2 days ago

          cold take speculation: the architecture astronautics of the Java era probably destroyed a lot of the desire for better abstractions, for thinking over copy-pasting, and for minimalism and open standards

          hot take speculation: we base a lot of our work on open source software and libraries, but a lot of that software is cheaply made, or made for the needs of a company that happens to open-source it. the pull of the low-quality "standardized" open source foundations is preventing further progress.

        • califool 21 hours ago

          “LLMs just paste over the pile of shit we build on.” This is the perfect description. Nice job.

    • Hamuko 2 days ago

      I feel like a lot of code is pretty sticky, actually. I spend two weeks working on a feature and most likely that code will live for a period measured in years. Even the deprecation period for a piece of software might be measured in years.

  • andai a day ago

    Recently after a month of heavily AI assisted programming, I spent a few days programming manually.

    The most striking thing to me was how frustrating it was.

    "Oh my god, I've melted my brain" I thought. But I persisted in my frustration -- basically continuous frustration, for seven hours straight -- and was able to complete the task.

    Then I remembered, actually, it was always like that. If I had attempted this task in 2019 (or a similar task, in terms of difficulty and novelty), it would have been the same thing. In fact I have many memories of such experiences. It is only that these days I am not used to enduring the discomfort of frustration for so long, without reaching for the "magically fix everything (probably)" button.

    If I had to guess, I'd say that is the main "skill" being lost. There's a quote like that, I think.

    Genius ... means transcendent capacity of taking trouble, first of all. —Thomas Carlyle

    • Harsha_Jaya_13 6 hours ago

      Yeah, I accept that, because most of the new generation are basically trained to copy and paste, and that is where the capability for critical thinking gets lost. Speaking as a new dev, I'd accept your words: it feels cool to have AI complete our work, but when I come back afterwards and look at what I really did, it feels like, am I actually unproductive? Not exactly, but a bit, yes. When we're good at driving the car, help from the autopilot is great; but when we don't actually know how to drive, we're under the control of the autopilot disguised as automation, which most people won't admit. What I wanted to say is that we must have some knowledge of what we're trying to produce, because without any prior knowledge of what we're trying to do, we're just using the autopilot instead of our own brain for brainstorming ideas. Even if we take the help of the AI to convert those ideas into execution, at the very least we should learn what we've been going through.

    • r0x0r007 a day ago

      'If I had to guess, I'd say that is the main "skill" being lost' (to endure frustration).

      I think this might be true for you, but less experienced and new developers won't actually get to that stage, because their 'learning' is basically prompting and they have nothing to forget or remember. And that might be the bigger issue.

  • dandano a day ago

    Lately I have had the cursed vision as I'm building a new IoT product. I have to learn _so_ much, so I have stopped using claude code. I find having it directly alter my code too hands-off.

    Instead I still use claude in the browser, mainly for high level thinking/architecture > generating small chunks of code > copy-pasta-ing it over. I always make sure I'm reading the relevant library/code docs as well and asking claude to clarify anything I'm unsure of. This is akin to when I started development using stackoverflow, just 10x more productive. And I still feel like I'm learning along the way.

    • JP44 a day ago

      I wouldn't call that cursed, but useful tooling usage. I had the same scenario where I wanted to work on a tool for a project written in Go, of which I know next to nothing. Claude code was able to spit out hundreds of lines of code that worked and that I (almost) understood -- I could explain what was happening where and why, but I had no chance of debugging or extending it on my own.

      I've limited myself to only using Claude's webchat, doing almost exactly what you've mentioned except creating snippets: it only gets to explain or debug code I enter. I prompt it to link relevant sources for solutions I seek. Plus it assists me in subdividing, prioritising and planning my project in chunks so I don't get lost.

      It has saved me a lot of time this way, while I still enjoy working on a project.

      • dandano a day ago

        Interesting how you write the code first and then put it into claude. What's the reason there? I guess where I find the most benefit is in not writing out the syntax; even though I could, I just can't be bothered. I often start with the snippet, then refactor to the style of code I like. For code I don't know that well, like c++, I like to get a snippet so I can then research the functions it uses and go from there.

        • JP44 a day ago

          Mostly because I learn best by doing, reiterating and then expanding, especially with programming. Essentially, building a form of context or mindmap, if you will.

          When I was testing Typescript/React I followed the docs and some guides and got thrown in the deep end. I could follow and understand the steps but not reproduce or adapt them, because the (or my) scope was limited. Also, libraries; so many libraries used...

          So I start with a HelloWorld and expand it step by step, going back and forth, using forums/blogs to see available functions or similar oss projects for what I want to do, then use the docs to read about the functions used.

          Kagi already saved me a lot of time by reducing spam posts, with language bangs, etc. With Claude I either give it a snippet that I cannot translate or am stuck on, like you do, or I'll prompt something like: 'describe steps used to get from input=.. to output=.. in go, this/that needs to be done/transformed, do not output actual code'.

          I guess the main thing is that I want to be engaged in my personal/hobby projects and think about the problem and solution, not just copy/paste, because that takes the fun away (in the case of work, if it makes me more productive I'll take it; I just need to remember I'm the one who is responsible). It's like buying a pre-assembled puzzle.

  • poulpy123 a day ago

    I'm using AI for 2 things: as a very good autocompleter, and as a partial replacement for a now crap google search.

    I regularly try agent mode (recently Google's Antigravity), but there are two issues that always come back. The first one is technical: while I'm always amazed for the first 10-15 minutes, after a while the agent gets stuck, and I end up spending more time trying to make it do the job properly than looking at the code directly and making the changes myself. The second one is practical: I don't like not knowing and understanding what the LLM did, so I have to spend a lot of time trying to understand the code.

  • furyofantares a day ago

    The post is clearly very heavily glued together/formatted (and more) by an LLM, but it's sort of fascinating how bits and spurts of the author's lowercase style made it through unscathed.

  • pizzafeelsright 2 days ago

    How many people could, from scratch, build a ballpoint pen?

    Do we have to understand the 100 years of history behind the tool, or just have the ability to use it? Some level of repair knowledge is great. Knowing the spring vs the ink level is also helpful.

    • pizzafeelsright a day ago

      Following up - I am most excited about using computers because the barriers from intent to product are being dropped. At this point my children can 'code' software without knowing anything other than intent. Reality is being made manifest. Building physics into a game used to take a decade of experience, but today we can say "allow for collision between vehicles".

      If you have ever gone running, the ability to coordinate four limbs, maintain balance, assert trajectory, negotiate uneven terrain, and modify velocity and speed at will is completely unknown to 99.9% of mortals who ever lived, and yet it is possible because 'biological black box hand wave'.

  • belval a day ago

    I get where the author is coming from, but (I promise from an intellectually honest place) does it really matter?

    Modeling software in general greatly reduced the ability of engineers to compute 3rd, 4th and 5th order derivatives by hand when working on projects, and also broke their ability to create technical drawings by hand. Both of those were arguably proof of a master engineer in their field, yet today they would be mostly irrelevant when hiring.

    Are they lesser engineers for it? Or was it never really about derivatives and drawings, and all about building bridges, engines, software that works?

    • esafak a day ago

      I can't believe I took a mandatory technical drawing class.

    • mkoubaa a day ago

      Are you arguing that we are no worse at building bridges than we were 100 years ago?

  • amelius 2 days ago

    > What scares me most is an existential fear that I won’t learn anything if I work in the “lazy” way.

    You're basically becoming a manager. If you're wondering what AI will turn you into, just think of that manager.

    • epolanski a day ago

      Imho this AI "revolution" will be the death of non-technical middle management first.

      Engineers who practice engineering (as in thinking about the pros and cons, impact, cost) will simply get to work more closely with relevant stakeholders in smaller teams, and the role of the project manager will start to be seen more as a barrier than a facilitator.

  • jrm4 2 days ago

    I respect this choice, but also I feel like one might need to respect that it may end up not being particularly "externally" valuable.

    Which is to say, if it's a thing you love spending your time on and it tickles your brain in that way, go for it, whatever it is.

    But (and these are still first takeaways) if the goal is "making good and useful software," today one has to be at least open to the possibility that "not using AI" will be like an accountant not using a calculator.

    • RealityVoid a day ago

      While I tend to agree, I think it's super easy to think you are using AI and being productive, and then hit a brick wall once things start failing because the system is not internally coherent.

      • saulpw a day ago

        Yeah, it's more like an accountant throwing away this "double-entry" system in favor of a single-entry spreadsheet that any Jimbob or Maryanne can use.

  • beej71 a day ago

    Seems like a decent balance to me. They note that there's no substitute for experiential learning. The harder you work, the more you get out of it. But there's a balance to be struck there with time spent.

    What I do worry about is that all senior developers got that experiential education working hard, and they're applying it to their AI usage. How are juniors going to get that same education?

    • epolanski a day ago

      This is also what I often wonder.

      Imho, AI is a multiplier and it compounds more as seniority grows and you know how to leverage it as a tool.

      But in the case of juniors, what does it compound exactly?

      Sure, I see juniors being more independent and productive. But I also see them getting stuck with little growth. A few years ago, in a year they would've grown tremendously, at least on the technical side; what do they get better at now? Gluing APIs together via prompting while never even getting intimate with the coding aspect?

  • ofalkaed 2 days ago

    The missing step seems to be identifying what is worth learning and what your goals are. Will learning X actually benefit you? We already do this with libraries: they save us a great deal of time partly by freeing us from having to learn everything required to implement them ourselves, and we use them despite those libraries often being less than ideal for the task.

  • spion 2 days ago

    Has anyone measured whether doing things with AI leads to any learning? One way to do this is to measure whether subsequent related tasks show improvements in time-to-functional-results, with and without AI, as a % improvement. Additionally, two more datapoints can be taken: with-ai -> without-ai, and without-ai -> with-ai.
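
    A rough sketch of that measurement in code, just to make the comparison concrete (everything below is hypothetical: the condition labels, the minutes, the record layout): record time-to-functional-result for each person on a first and a second related task, tag which ordering of conditions they got, and compare the median % improvement per ordering.

      # Hypothetical sketch of the comparison described above.
      # Each record: (condition ordering, minutes to a functional result
      # on task 1, minutes on a subsequent related task 2).
      from collections import defaultdict
      from statistics import median

      records = [
          ("with-ai -> with-ai", 90, 70),
          ("without-ai -> without-ai", 120, 80),
          ("with-ai -> without-ai", 85, 110),
          ("without-ai -> with-ai", 130, 60),
          # ... more participants
      ]

      improvement = defaultdict(list)
      for ordering, t1, t2 in records:
          # Percent faster (or slower, if negative) on the second task.
          improvement[ordering].append(100.0 * (t1 - t2) / t1)

      for ordering, values in improvement.items():
          print(f"{ordering}: median improvement {median(values):.1f}% (n={len(values)})")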

    • somethingsome a day ago

      I'm only one data point, but some years ago I spent a whole year working through a mathematical book above my level at the time. It was painful and I only grasped parts of it.

      I did the same book again this year, this time spending a lot of time questioning an LLM about concepts I couldn't grasp, copy-pasting sections of the book and asking it to rewrite them for my understanding, asking for quick visualization scripts for concepts, asking it to give me corrected examples, concrete examples, to link several chapters together, etc.

      It was still painful, but in 2 months (~8h-10h a day) I covered the book in much more detail than I ever could years ago.

      Of course, I still had some memory of the content from back then, and I'm better prepared now, as I have studied other things in the meantime. Also, the model sometimes gives bad explanations and bad links, so you must stay really critical of the output (same goes for the plotting code).

      But I missed a lot of deep insights years ago, and now, everything is perfectly clear in two months.

      The ability to create instant plots for concepts I was trying to learn was invaluable, then asking the model to tweak the plot, change the data, use some other method, compare methods, etc.

      Note: for every part, once I finally grasped it, I rewrote it in my own notes and style, and often asked the model to critique my notes and improve them a bit. But all the concepts I wrote down, I truly understand deeply.

      Of course, this is not coding, but for learning at least, LLMs were extremely helpful for me.

      From these experiments I would say at least a 6x speedup.

    • Havoc a day ago

      I learned a fair bit about architectural choices while vibecoding because if you don’t spec out how things should work it goes off the rails fast.

      Haven’t found a good way to learn programming language basics via AI though

    • epolanski a day ago

      Honestly I feel I have never learned as much as I do now.

      LLMs remove quite a lot of fatigue from my job. I am a consultant/freelancer, but even as an employee, large parts of my job were not writing the code, but taking notes and jumping from file to file to connect the dots. Or trying to figure out the business logic of some odd feature. Or the endless googling for answers lying deep inside some github issue, or figuring out some advanced regex or unix tool pattern. Or writing plans around the business logic and implementation changes.

      LLMs removed the need for most of that, which means I'm less fatigued when it comes to reading code and focusing on architectural and product stuff. I can experiment more, and I have the mental strength to do some leetcode/codewars exercises where, incidentally, I'll also learn stuff by comparing my solution to others' and then apply that back to my code. I am less bored and fatigued by the details, so I can spend more time focusing on the design.

      If I want to learn about some new tool or database, I'm less concerned with the details of setting it up, exploring its features or reading outdated, poorly written docs, when I can clone the entire project into a git subtree and give the source code to the LLM, which can answer me by reading the signatures, implementations and tests.
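
      For the subtree part, a rough sketch of what I mean (the repo URL and the vendor/ prefix are made up for the example), written as Python driving git:

        # Hypothetical example: pull a tool's full source into ./vendor/sometool
        # with `git subtree`, so its code can be read alongside the project's own.
        import subprocess

        subprocess.run(
            ["git", "subtree", "add", "--prefix", "vendor/sometool",
             "https://github.com/example/sometool.git", "main", "--squash"],
            check=True,
        )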

      Honestly, LLMs remove so much mental fatigue that I've been learning a lot more than I've ever done. Yet naysayers conflate LLMs as a tool with some lovable crap vibecoding; I don't get it.

  • soundworlds a day ago

    I do AI trainings, and the framework I try to teach is "Using AI as a Learning Accelerator, not a Learning Replacement"
