AI won't replace human devs anytime soon

(twitter.com)

106 points | by skeptrune 19 hours ago

166 comments

  • 123yawaworht456 17 hours ago

    95% of real dev work is digging through piles of brownfield spaghetti to fix a bug or implement a feature. human-provided descriptions of the bug to fix or the feature to implement are often so vague that even a human developer can barely comprehend them. also more often than not, a visual link is required between the developer and the product being fixed/improved.

    hypothetical AI will replace humans, sure. LLMs probably won't. original GPT4 was the best coding LLM I dealt with, and it was 1.5+ years ago. the context size was pitiful, yes, but it was a lot smarter than the current crop of big models.

    • lokar 15 hours ago

      Even just the ability to reproduce a non-trivial bug, and then work out what general area of the codebase it might be in, seems out of reach for an LLM

      • sureIy 14 hours ago

        The instructions seem to have to be so specific and repeated that you need a full-time person to review and retry.

        My last struggle was creating an SVG arrow pointing to a specific corner. Both Claude and CGPT failed with many attempts.
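
        For reference, this kind of SVG can be hand-built in a few lines. A minimal sketch (the canvas size, coordinates, and marker id are illustrative assumptions, not what either model was prompted with):

```python
# Sketch: build a minimal SVG arrow pointing at the top-right corner
# of a 100x100 canvas. A line runs from the center toward (95, 5), and
# an arrowhead marker is attached to its end.

def arrow_svg(x2: float = 95, y2: float = 5) -> str:
    """Return an SVG document with a line ending in an arrowhead at (x2, y2)."""
    return f"""<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <defs>
    <marker id="head" markerWidth="6" markerHeight="6"
            refX="5" refY="3" orient="auto">
      <path d="M0,0 L6,3 L0,6 z"/>
    </marker>
  </defs>
  <line x1="50" y1="50" x2="{x2}" y2="{y2}"
        stroke="black" stroke-width="2" marker-end="url(#head)"/>
</svg>"""

print(arrow_svg())
```

        Dropping the returned string into an .svg file shows a line from the center with the arrowhead at the chosen corner; `orient="auto"` keeps the head aligned with the line regardless of which corner is targeted.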

        • fragmede 11 hours ago

          the skill is in having it expose enough of the details that you only need to do a bit of messing with the output by hand and then you can be done with the ticket. there are no points for getting the LLM to spit out perfect code for you; the point is to get working code checked into git.

          • sureIy 8 hours ago

            This thread is about LLMs replacing people, so yes, you're confirming that you do need a skilled person to get working code out of it.

      • skeptrune 15 hours ago

        Yes! I think I'm going to make LLM bug reproduction my primary response to AI developer maximalists.

        • beeflet 14 hours ago

          Maybe the solution isn't to train LLMs on human-written code, but to train some sort of text-to-AST generator on descriptions of programs and then the output of programs. That way it doesn't learn to include human-inspired bugs at least.

          • mnky9800n 10 hours ago

            I always wondered why bother having an AI generate human-readable code. Isn't it a machine? Why not have it generate machine code? Of course, it wouldn't be readable, but would it be performant? Maybe.

            I guess my point is that if an AI is doing computing, why would its doing computing resemble a person doing it? It's much closer to the machine than me. Why not take advantage of that?

            • addandsubtract 5 hours ago

              Because then you'd have no control over what the AI/code does. If you blindly want to trust the AI, you might as well just make the AI the black box between your input and output.

            • delapid 6 hours ago

              Isn’t that kind of what neural networks are in a way? Large blobs of computation we can’t really parse fully

      • beeflet 14 hours ago

        This actually gives me a funny idea, IDK if it would actually work.

        Something interesting you can do with LLMs is constrain their output. What if you just write a test case to encapsulate the bug, then filter for the most "sane" LLM output that results in the code compiling and the test case passing?

        It doesn't seem like much, but it's like using more computing power to filter out all the "obviously wrong" solutions. Doing this in practice might require you to make too many test cases, though.
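
        That filtering loop can be sketched in a few lines of Python. The "LLM samples" here are a hard-coded list of candidate patches, and the bug-encapsulating test is a made-up clamp function that must respect its upper bound; in practice the candidates would come from a model:

```python
# Sketch of the idea: sample many candidate patches, then keep only
# those that compile and pass the test case that encodes the bug.
# The "LLM samples" are a hard-coded list for illustration.

candidates = [
    "def clamp(x, lo, hi): return max(x, lo)",           # wrong: ignores hi
    "def clamp(x, lo, hi): return min(max(x, lo), hi)",  # correct
    "def clamp(x, lo, hi): return min(x, lo, hi",        # doesn't even parse
]

def passes_bug_test(namespace) -> bool:
    """The test case encapsulating the bug: clamp must respect the upper bound."""
    clamp = namespace.get("clamp")
    try:
        return clamp(15, 0, 10) == 10 and clamp(-5, 0, 10) == 0
    except Exception:
        return False

def filter_candidates(sources):
    survivors = []
    for src in sources:
        try:
            code = compile(src, "<candidate>", "exec")  # rejects syntax errors
        except SyntaxError:
            continue
        ns = {}
        exec(code, ns)
        if passes_bug_test(ns):
            survivors.append(src)
    return survivors

print(filter_candidates(candidates))  # only the correct patch survives
```

        Real use would sandbox the exec step and run a whole test suite, but the shape is the same: spend compute on sampling, then let the tests do the filtering.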

    • scruple 4 hours ago

      And that's to say nothing of the fact that the programs we build exist in an ecosystem of other programs, and sometimes you're lucky if you even have a copy of the source available... And that's without mentioning that it's all networked and you're dependent on other systems all the way down. So much of the work I do today is black box debugging where I can only make broad assumptions about the behavior of systems that are distant to my own based on their past performance, based on some notes someone took from a meeting 6 months ago, based on what the engineers on that team believe their systems are doing, etc...

      A lot of times at work, support teams modify their systems and don't bother communicating what they're doing, they seem to rely on scream tests to figure out who is even using their services and for what purpose. It's a total fucking mess and I laugh whenever I hear someone suggest that "AI" is going to replace what I do when some massive % of what I do has fucking nothing to do with code at all.

    • moi2388 12 hours ago

      I hate to break it to you, but that’s exactly what ChatGPT o1 excels at.

      I swear to god, it’s better than our junior devs fresh out of college.

      Like, I let the LLM solve a PBI and it’s just faster with better code.

      Actual devs will still have a job, but the low-hanging "write some code" guys? They're done. And a lot of companies had a bunch of them..

      • n_ary 11 hours ago

        I hate to break it to you, but o1 is far off from anything you describe. Yesterday, I was testing some stuff and told it in a very precise prompt to write a leaky-bucket rate-limiting middleware, and it just literally named a var “bucket” with a timeout and then implemented something that will wait for the next function to time out!

        I then said “heh” and asked it to implement a circuit breaker pattern using axios, and it again implemented a timeout and added it to axios options!

        Could be I’m dumb or o1 was hallucinating on me intentionally, but the thing is very far away from a Jr dev fresh out of college.
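
        For reference, the rate-limiting pattern n_ary describes is usually called a leaky bucket, and it fits in a couple dozen lines with no framework at all; the class and parameter names here are illustrative:

```python
import time

class LeakyBucket:
    """Leaky-bucket rate limiter: the bucket drains at `rate` units per
    second and holds at most `capacity` units; each request adds one unit
    and is rejected if the bucket would overflow."""

    def __init__(self, rate: float, capacity: float, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.level = 0.0
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Drain whatever has leaked out since the last request.
        self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level + 1 <= self.capacity:
            self.level += 1
            return True
        return False  # bucket full: rate-limit this request

limiter = LeakyBucket(rate=5.0, capacity=10)
print(limiter.allow())  # True: first request fits in an empty bucket
```

        A middleware would call allow() once per incoming request and respond with HTTP 429 whenever it returns False; note there is no timeout anywhere, which is exactly what the model reportedly got wrong.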

        • wruza 11 hours ago

          > very far away from a Jr dev fresh out of college

          Feels like your Jr position claims 3 YoE in hardcore backend. Last time I saw them, they discussed == vs === and “in” vs “of”. Wouldn’t hold my breath asking them to implement a rate limiter mw or any sort of debouncer. Unless humanity boosted their average intelligence very recently, ofc, in which case you’re an async networking specialist fresh outta school.

          • n_ary 9 hours ago

            I think my “very far away” was misunderstood. What I meant is that a junior dev will hopefully go and do some research and try to find some reliable info before trying to implement something, while o1 just spits out statistical matches.

            o1 is very far away from being a Jr dev replacement and is actually only slightly more useful (if I ignore the costs entirely) than the existing intelligent autocomplete features.

            • moi2388 3 hours ago

              Depends on the jr dev. You have the actual devs who develop solutions and the jrs who grow into these roles. But I’ve been in plenty a company with just a bunch of code monkeys as well, to write unit tests, boilerplate and center divs.

              They are definitely being replaced by ChatGPT in our company. Think of it what you will, but several teams write all their unit tests with ChatGPT now instead..

        • oefnak 11 hours ago

          I don't even understand half those words. And I'm human btw.

          • n_ary 9 hours ago

            It is fine if those terms are not relevant to you. We learn when we need to. My point was that a Jr dev will do some research and then try to implement a solution, while o1, praised as an immediate replacement for Jr devs, is not even close and only shows as much correct implementation as its training data.

        • j45 8 hours ago

          It's interesting how many folks assume everyone is using LLMs the same way they are, and so expect comparable results (or not).

      • floppiplopp 11 hours ago

        I've seen a whole gamut of junior devs and their abilities. Your junior devs are just average beginners. They still learn to get better, to become real devs. And we need real developers to deal with the stuff even o1 stitches together, because you actually need to understand problems and code in order to solve problems. Even something like o1 is just a statistical model that outputs the most likely result based on an input, and I've seen the incoherent garbage it babbles out when it comes to code. But the main issue will be that blind reliance on these hallucinating models that create 'just good enough' output will hinder people from learning and becoming better. LLMs are the end of ideas. Just the calcified present. And that's a problem.

      • raxxorraxor 9 hours ago

        I would need to ask ChatGPT what a PBI is, but other than that, what do you expect? You don't learn coding in college; companies and devs need to train freshmen.

        LLMs still cannot solve most logic problems. They generate language. It can be a programming language but obviously the scope is limited.

      • yawnxyz 11 hours ago

        I had to spend six hours today untangling an o1/sonnet mess and ended up rewriting a huge part of it to be remotely readable

        what a waste of a nice sunday

    • refurb 8 hours ago

      It would be interesting if AI could be used to isolate bugs.

      Have it run through all the branches of spaghetti code and isolate the problematic sections, along with proposed changes.

    • 015a 17 hours ago

      > original GPT4 was the best coding LLM I dealt with, and it was 1.5+ years ago. the context size was pitiful, yes, but it was a lot smarter than the current crop of big models.

      Anytime someone says "this is the worst it'll ever be" I can't help but think "oh sweet summer child"; and then I ask them, what's your favorite operating system you've ever used?

      The answer heavily depends on the person's age. Windows 10 or 7 dominates the conversation, and for slightly older folks, XP. "MacOS Snow Leopard" gets said a lot, that was a beloved MacOS release because they very famously said "no new features we're just fixing the old ones" and they actually did it.

      No one ever says "MacOS Sequoia" or "Windows 11". Man, the discourse surrounding the new iOS 18 Photos app is reaching a fever pitch.

      Sorry for the weird music on this, but it's the first clip I could find of even, gasp, Elon Musk, stating this very real fact [1]: technology does not automatically improve. In fact, it seems like software especially loves to get worse, and it takes a focused effort, from talented individuals, as far removed from money and business and agilescrum as possible to keep things operationally stable.

      But, sure, definitely the new metaversecryptoai bubblescam will escape that and be built to the highest possible quality. There definitely won't be investors and men in suits in every room saying "we're losing billions; is there a knob we can turn to reduce electricity usage?" OpenAI is definitely raising billions to fund the R&D of the next level of intelligence, and certainly not because without that capital they'd be bankrupt in three months.

      [1] https://www.youtube.com/watch?v=aDC6kWC8y2Y

      • benterix 11 hours ago

        > it takes a focused effort, from talented individuals

        You see, that's exactly why I don't have high hopes for OpenAI. They came up with a state-of-the-art GPT-4 that was slow and expensive to maintain, so they came up with this o1 that is basically a chain of cheap 4os, something that people used to do from the moment LLMs appeared. And they started calling GPT-4 a "legacy model".

        In the meantime, Anthropic is steadily raising the bar with each iteration. But even there the efforts have plateaued, and what we have still has the inherent problem of LLMs: the ability to give completely wrong answers with amazing self-confidence.

      • shepherdjerred 17 hours ago

        I would pick Windows 11 (LTSC) over any prior version of Windows, and I would pick macOS Sequoia over any prior version of macOS.

        • 015a 16 hours ago

          [flagged]

          • shepherdjerred 16 hours ago

            It's uncharitable of you to consider my opinion contrarian rather than genuine preference.

            Alternatively you could have chosen to be curious about my response that deviates from your understanding.

            I've used Windows 97 onwards, macOS around Mavericks, and various desktop Linux distros from around 2013.

            I would have told you that I like Windows 11 (LTSC) because it is much more visually pleasing than 7/8/10. The OS works reliably and I overall enjoy using it much more than the Linux distros I tried out before the LTSC build was made available.

            I like macOS Sequoia (and iOS 18) because Siri has gotten substantially better and I'm excited about Apple Intelligence. iOS 18 adds RCS which has been great and I actually do like Photos quite a bit, but I only moved to iCloud Photos while I was on the beta, so I can't compare to the old Photos app.

            • andkenneth 13 hours ago

              I'm the same. I've basically upgraded on release or shortly thereafter to every version of Windows since Windows XP, and the only one that actually felt bad, and that I rolled back, was Vista. Windows 8 was a tad wacky, but since Windows 10 things have been pretty dang stable, and I'll remind you that Windows 10 is 9 years old. Windows has been pretty consistent for a long time at this point.

              IMO a lot of complaints about new OS versions are just a plain psychological aversion to UI changes. I always try and give them a go with an open mind, and most of the time it's honestly just fine if not actually a bit better in some way.

            • Retric 15 hours ago

              That built in advertising is definitely aesthetically pleasing. It just makes the whole UI pop.

              • shepherdjerred 15 hours ago

                This is another unnecessarily snarky reply.

                I’ve specifically mentioned the LTSC build when mentioning Windows 11, which does not have the normal bloat.

                The consumer version of Windows 11 is terrible IMO, but that’s more a business decision than a technical one.

                • Retric 13 hours ago

                  The LTSC build isn’t the totality of Windows 11, it’s even missing major features.

                  You can defend advertising as ‘a business decision‘ but this is an actual product they are shipping as Windows 11. LTSC is also compromised due to business decisions, so there’s no clean option.

                  • shepherdjerred 4 hours ago

                    What is Windows 11 LTSC missing? I use my Windows PC mainly for gaming & light web browsing/studying/coding.

                    I did have to manually install some Windows components I needed, e.g. the Windows store, but other than that I haven't had a single issue in the last few months I've been on it.

                    I'm not going to defend the normal version of Windows 11 (it sucks), but, again, the LTSC build is excellent.

                • bruce511 14 hours ago

                  It's also worth pointing out that advertising in Windows is localized (I suspect mostly in the US).

                  I am outside the US, I'm running Windows 11, and I see no ads.

                • wruza 11 hours ago

                  Isn’t LTSC pretty yarr to get?

                  • shepherdjerred 4 hours ago

                    Yes. I would pay any reasonable amount, but Microsoft doesn't make it available to consumers, so I don't feel bad about it. I found it here: https://massgrave.dev/windows_ltsc_links

                    They include instructions for verifying the ISO integrity. Activation uses their tool but I believe the method is straightforward to do on your own.

          • _annum 15 hours ago

            Despite minor grievances (primarily UI related, and likely inconsequential to others), I also prefer Sequoia over previous iterations. The future of mac/iOS is visible on the horizon and I expect it to roll out in a steady series of well-considered upgrades.

            I was a die-hard Mojave holdout until two months ago; if anything, preferring Snow Leopard or iOS 6 or whatever (review screenshots) seems to be the more contrarian take when considering contemporary workflows, device interoperability, and aesthetic cohesiveness. It's like pining for a Powerbook G4 or iMac G3 – nostalgic curiosities, but personally, I'm glad we've moved on.

            Windows though... take me back to '98!

            • wruza 10 hours ago

              Interop and workflows could be added without rehashing the “aesthetics”. Skeuo and metal surfaces were peak design; please don’t even start me on this “flat” nonsense again. It’s complete crap, proven countless times by non-computer people (the elderly) trying to interact with it and failing.

      • audunw 11 hours ago

        My favourite version of Windows was Windows 2000. It was stable, visually consistent and fairly simple.

        But Windows 11 is a massive improvement over Windows 2000 in many, many areas. Just because Windows 2000 was my favourite doesn’t mean I think it’s the best. It doesn’t even have built-in support for WiFi.

        I think they’re even moving towards something that could be my new favourite Windows. Windows 11 is the first Windows with a visual style that I actually find appealing, and they seem to be making some serious efforts to make it all visually consistent too. The new Settings app has its flaws, but if they just finish the work of supporting all the features of the old Control Panel and finally have a unified place to change settings, I’d say it’ll be all-in-all better than Win2k.

        The GPU driver system and WSL2 are leaps and bounds beyond what Win2k could do. The built in support for virtual desktops and window arrangement is something I would never want to be without.

        Yeah, on the Mac side I remember Snow Leopard fondly. That doesn’t mean I think it’s the best. On Mac I see even less reason to think older versions of MacOS is better in any way. They’ve stayed visually pleasing and consistent, haven’t made any huge UI changes similar to Windows’s attempts at making a better start menu, and have added many genuinely useful features over the years.

  • kgilpin 17 hours ago

    I think people / the market have gotten a little too excited about something AI is actually pretty bad at - making changes to existing code (which is, after all, most of the code).

    AI software devs don’t understand requirements well, and they don’t write code that conforms to established architecture practices. They will happily create redundant functions and routes and tables in order to deliver “working code”.

    So AI coding is bunk? No, it’s just that the primary value lies elsewhere than code generation. You can stuff A LOT of context into an LLM and ask it to explain how the system works. You can ask it to create a design that emulates existing patterns. You can feed it a diff of code and ask it to look for common problems and anti-patterns. You can ask it to create custom diagrams, documentation and descriptions, and it will do so quickly and accurately.

    These are all use cases that assist with coding, yet don’t involve actually writing code. They make developers more knowledgeable and they assist with decision making and situational awareness. They reduce tedium and drudgery without turning developers into mindless “clickers of the Tab key”.
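
    The diff-review use case above can be wired up in a few lines. A sketch assuming the OpenAI Python client; the model name, prompt wording, and function names are illustrative, and only the prompt-building step is exercised here (the network call is kept in a separate, optional function):

```python
# Sketch: review a diff with an LLM instead of generating code with it.
# Building the prompt is the testable part; the API call (OpenAI Python
# client, model name assumed) is defined but not executed here.

REVIEW_INSTRUCTIONS = (
    "You are reviewing a code change. Point out common problems and "
    "anti-patterns: redundant functions, duplicated routes or tables, "
    "and deviations from the surrounding architecture."
)

def build_review_messages(diff: str) -> list[dict]:
    """Assemble a chat-completion message list for reviewing one diff."""
    return [
        {"role": "system", "content": REVIEW_INSTRUCTIONS},
        {"role": "user", "content": f"Review this diff:\n\n{diff}"},
    ]

def review_diff(diff: str) -> str:
    """Send the diff to a model and return its review (requires network)."""
    from openai import OpenAI  # pip install openai
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=build_review_messages(diff),
    )
    return resp.choices[0].message.content

messages = build_review_messages("+def get_users_v2(): ...")
print(messages[0]["role"], "->", messages[1]["role"])
```

    The same shape works for the other use cases in the comment: swap the system instructions for "explain how this system works" or "draw a Mermaid diagram of these modules" and feed the relevant context instead of a diff.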

    • rohansood15 13 hours ago

      Could not agree more. Plus, even for all of those tasks it takes a couple of iterations if not more.

    • j45 8 hours ago

      I'm finding different tools can do this with different degrees of success.

      Aider, for example, has been pretty decent at finding where to do what, as long as it's guided there in a certain way.

      Still, it's far from perfect, but in more and more cases, doable.

  • mattfrommars 15 hours ago

    Neal Wu recently posted a video of him attending a Meta Hack coding challenge. This time around, Meta had two leaderboards, one for all-human entrants and the other, I believe, for AI-assisted ones. Shockingly, the human leaderboard was significantly faster than the leaderboard for folks who used AI.

    We have a long way to go before AI surpasses humans.

    • discordance 15 hours ago

      A competition like that is for the top 1% and very difficult problems. For most people and most problems, AI is very helpful and speeds up the process.

      As an example, I made a browser extension in 5 minutes this weekend using chatgpt by telling it what I wanted. I didn't know how to make a browser extension before that, so you could say AI surpassed this human quite easily.

      • jppope 14 hours ago

        As expected, AI is great for stuff that's been done a million times. AI will slow you down for anything even slightly novel.

        • 93po 14 hours ago

          for now

          • hatefulmoron 13 hours ago

            How long until the statement is false? You seem to have something in mind.

          • tovej 9 hours ago

            for all of time, as long as machine learning = AI

            If it's not in the dataset, the AI won't handle it correctly (unless it's trivial and a linear model is good enough, but then why even use AI)

      • beeflet 14 hours ago

        Yeah I guess it helps for applications where you otherwise need to read a ton of documentation to get started. For web stuff it helps me find useful APIs sometimes, but other times it wastes my time with deceptively wrong answers.

        The task of programming or engineering is generally one where you spend 80% of the time solving bugs in 20% of the code. LLMs are good for suggesting possible solutions based on existing things, but not for verifying solutions or doing any sort of real problem solving (they generally waste more time than they save because they give plausible but wrong answers).

        If you just want to reproduce some boilerplate or example code it is great though, because documentation tends to lack good examples, and LLMs have a ton of real code stored in their latent space. Sometimes I even just use it as a thesaurus, if I have something on the tip of my tongue.

      • krater23 14 hours ago

        So AI is the next way to give people who don't know what they're doing new tools to generate and release shit, where they don't know what it does, to customers who don't know how much shit they got. I'm sure this will bring mankind further...

        • Ozzie_osman 14 hours ago

          People said similar stuff about the printing press.

    • rsynnott 8 hours ago

      > Shockingly

      Yeah, see I don't think that's at all shocking.

  • kaushikc 17 hours ago

    I don't care about writing code; all I care about is that my (non-tech) work gets done. I code out of necessity, to save time and reduce labour and errors. LLMs have given a non-programmer like myself superpowers: all I have to do is ask the right questions and I am directed to a somewhat reasonable place to look for a solution, and to create one. I can create CRUD apps, use APIs and various stacks from multiple programming languages while referencing documentation, and build pretty much what I desire. That would have cost me a fortune and a lot of time from professional coders, and some of the work I could not give to anyone else, to maintain confidentiality.

    • bruce511 14 hours ago

      I've spent my career working in and on a low-code tool. For 30 years we've been doing what you describe; allowing non-coders to generate working applications that solve business problems.

      It's similar to the way that Excel exposed non programmers to data manipulation.

      Lots of people had very successful careers building software this way. They created a valuable product with mostly just domain knowledge and not programming knowledge.

      But there are boundaries to this approach. Because they lack the fundamentals, they're (mostly) unable to understand what something is doing or how something works. So there's a skilled group providing assistance (paid) whenever they hit a boundary.

      AI won't make programmers go away. But it will both expand the reach of "writing programs" to more people, and simultaneously replace "unskilled" programmers.

      As more of Joe Public writes code, more programmers will ultimately be needed to support those Joes. And in some cases, just like with spreadsheets, the help might be "we need to throw all this away and build it correctly."

    • hyperG 7 hours ago

      I am in the same spot as a non-software engineer.

      If you are using multiple programming languages though you really can't say you are a non-programmer.

      I just got up and running with Julia in like two weeks when I didn't even know what Julia was 3 weeks ago.

      On the other hand, I have the digital audio workstation Reaper installed. It is one thing for LLMs to write CRUD apps and data science scripts, but to write a digital audio workstation (something I have 25 years of experience with as a user)? As a non-software engineer I wouldn't even know how to begin. Even if I could compile the code to play an audio file with a volume knob, the gulf is so huge. I wouldn't be able to add more than a few features before it became totally unmanageable, since I don't know what I am doing.

      Much of what makes LLMs so impressive is that we have such low expectations of modern software. They have broken the market for demo CRUD apps. Beyond that, though, I wouldn't trust anything Cursor and I make.

    • mejutoco 9 hours ago

      It sounds like you care about writing code and are learning to code. And that is a good thing.

    • alphabettsy 15 hours ago

      How is it different for you than using StackOverflow and GitHub?

      • tourmalinetaco 14 hours ago

        LLM + search is far more powerful and intuitive than searching alone. I can actually explain what I want, what problems I have, and what I don’t want, and it will (kinda) follow the directions laid out. It doesn’t always work, but it also hasn’t been too difficult to troubleshoot. It absolutely cannot fix itself though, so if you can’t find the problem then you’re very much stuck. All in all, it’s different because it lets me delegate writing the code while I focus on designing the program and problem solving, which I enjoy more anyway.

        It doesn’t make non-programmers programmers, and it doesn’t make junior devs into senior devs, but at least for more basic problems/well-defined projects it can help turn a night of typing into a night of development.

  • Aeolun 14 hours ago

    > I'm a developer myself (2k+ commits 3yrs in a row)

    What a completely weird thing to identify yourself by number of commits. I can make 5k+ commits in 3 days. Does that make me a better developer?

    • lispisok 12 hours ago

      I could see it being a metric like lines of code written. The days I make more commits are usually days I'm coding poorly and having to fix my bugs and missed edge cases. Fewer, well-thought-out, tested changes are better.

    • purple-leafy 12 hours ago

      Yeah, and a low number of commits too, if they're trying to brag. I did a PR the other day that was ~50 commits, and it only took a few hours.

      At that speed, working 40 days I'm already blasting past this guy's number of commits every year lol

      Stupid metric, stupid post

  • yashvg 14 hours ago

    While I understand the desire to reassure developers, I think this perspective seriously underestimates the pace of progress in AI. Just 3-4 years ago, the idea of AI writing any functional code seemed far-fetched. Now they can handle many coding tasks competently.

    The author lists specific tasks LLMs can't do today. But there's no fundamental reason they won't be able to in the near future. Domain expertise, understanding downstream effects, configuring CI pipelines - these are all learnable patterns. As models get larger, are trained on more diverse datasets, size of context window increases, and new architectures emerge, these capabilities will come online rapidly. The jump from GPT-3 to GPT-4 was substantial, and we should expect continued leaps.

    This doesn't mean human developers will become obsolete overnight. But it does mean the nature of software development work is likely to change significantly. Lower-level coding tasks may be increasingly automated, shifting focus to higher-level design, architecture, and problem framing.

    Rather than dismissing the potential impact, we should be preparing for a world where AI significantly augments or even replaces many current development tasks. This might involve focusing more on skills that complement AI capabilities or exploring new areas where human creativity and insight remain critical.

    • lurking_swe 11 hours ago

      Current AI is already stressing the power grid, and much of that grid will need to be redeveloped and improved just to keep pushing the limits of LLMs. Power is the limiting factor in scaling here, so I'm rather unconvinced by your hypothesis. The improvements of the last 2-3 years are in no way indicative of the next 2-3 years.

      I agree with your sentiment, by the way; developers should find ways to use LLMs to improve their development process. But the drama is getting old.

    • Aeolun 14 hours ago

      Maybe we should instead do those things when that time actually comes. Premature optimization and all that.

  • tocs3 17 hours ago

    I am curious to see how all this plays out. I am not a developer but have played some with ChatGPT to write some simple Python. It has done an OK job. When it works it is sometimes easier than other cut-and-paste methods. That is a long way from replacing those who know what they are doing (even at a junior level).

    On the other hand I remember hearing the old electrical engineers saying "you do not need a microcontroller for that. Just use a resistor and capacitor, or a 555".

    As an aside, is my Firefox spell check worse? Are they getting ready to replace it with an LLM?

  • owenpalmer 11 hours ago

    > LLMs cannot update the SDK code to V2 response types in the first place

    Why not? Isn't it just a matter of adding the new V2 response type to the context window and asking it to update those types in another file?

    > LLMs cannot successfully configure a CI action such that I won't have to publish myself

    Why can't it do this? I'd like to see your prompt.

    > LLMs do not understand the downstream effect of fixing the problem will be updating yournextstore's code

    I suppose I don't understand the specifics enough to analyze why this isn't doable with an LLM. However, it seems like the theme is that code almost always relies on other code, and if one part breaks, it's hard to get it agreeing again. To solve this, you could have the LLM agent routinely add changelogs to the context window when making maintenance changes.

    If you can't get an LLM to do something, you probably aren't thinking creatively enough.

    • leshenka 6 hours ago

      No amount of "please don't use features that do not exist" will stop LLM from making stuff up

  • ristos 9 hours ago

    I feel like the hype around LLMs replacing dev work is similar to what happened years ago when they were saying the same thing about scripting languages and WYSIWYG editors like Dreamweaver...

    What ended up happening in practice is just that time was saved on grunt work, and really engineers just ended up working on composing and debugging higher level components, optimization, or low level work.

  • noobermin 13 hours ago ago

    All I'll say is, if you spent the past few years chiming in about how this will change everything, only to join the rest of us skeptics in the last year or so, you don't get to turn around and pretend you were right all along about the limitations, that AGI was never going to happen with LLMs alone, etc, etc. I just wish some of you, who, let's be real, just jumped off the crypto bandwagon a few years ago, would do some introspection first.

    Btw, if you do agree with OP about development, extend that to art, writing, etc. The same kind of deep domain specific knowledge is required in almost all things that humans do.

  • segmondy 13 hours ago ago

    Well, it's not a matter of if AI will replace human devs, it will.

    The question is WHEN and HOW many devs?

    Not all devs will be replaced, but how many will be, and when will it start?

    If you take your AI and your prompt it to build an app, are you not a dev? I believe the same way we have redefined AI and AGI numerous times, we will redefine what it means to be a developer.

  • NBJack 18 hours ago ago

    Honestly, this opinion is borderline tinfoil hat territory, but I'd like to point out that between the sudden market expectation that fired hundreds of thousands of developers across multiple companies (in a very close time period!) and constant "LLMs that replace humans are just around the corner" marketing drivel, it almost appears as if The Market (tm) wants to really just drive down the costs of developers more than anything else.

    Perhaps it's all a completely legit market turn, and I'm the one in the dark. But I continue to be pessimistic that LLMs are truly going to change anything for the better.

    • Ekaros 10 hours ago ago

      I always question whether the firings were due to some LLM or to changing market conditions, namely much higher rates. Higher rates mean investments don't need to chase new products, and companies funding products with debt find it a lot more expensive. Thus there is less demand for new products, and less demand for the people who make them.

      It could very well be a period where just less software gets produced, and the software that does get built needs to make returns at least in the medium term.

    • majormajor 17 hours ago ago

      2020 bubble hiring was weird enough that I mostly throw out both the 2020 hiring and the 2022 layoffs and just compare things to 2018/2019. That ramp-up in the 2010s was called a bubble for years even before Covid but has never popped yet. The expected product bar just keeps getting higher instead.

      • mlinhares 16 hours ago ago

        I don't think it was just 2020, when you started to see people being hired out of bootcamps with 0 experience whatsoever to entry level jobs it was visible there was something weird with the market. That has mostly reverted and I haven't seen a bootcamp ad in a while.

        • Capricorn2481 15 hours ago ago

          I interview a lot of candidates and there is a much higher ratio of qualified candidates from bootcamps than from college, assuming they're not going to the trash ones like Trilogy Education Services (which, believe me, is worse than nothing). That's all anecdotal, but please read the rest as me explaining why that's happened to us.

          The candidates from college, so often, come into the interview completely unprepared for the job. We have hired some bootcamp grads and college grads, but the only candidates we've ever fired for inadequate work were college grads (and this is, of course, a high bar at our company. We do our best to train and foster growth and hate firing anyone, but sometimes people are just not ready for the job).

          I have some experience with this first-hand. I did half my degree in college before trying a bootcamp. What they taught me was night and day more practical than what I was learning in college. I think college CS programs are paced poorly and lead to students forgetting most of what they learned by the time they graduate. Bootcamp students have been preparing for the job 40-60 hours a week for months, and are ready to start as soon as they join. I don't get all the hate for them. I think it speaks to some kneejerk gatekeeping that isn't based in anything concrete.

          But of course, there are lots of other reasons to go to college than raw practical experience.

          • mlinhares 3 hours ago ago

            Haven't had the same experience here in the US or back when I was in Brazil, though I might be biased: all the college grads I interacted with had been interns (especially in Brazil, where you'd usually intern for at least half a year), so they all had some experience coding and were able to get the job done with proper guidance.

            For bootcamp graduates it was very hit and miss and they usually lacked most of the basic computer science knowledge in data structures, algorithms and higher level concepts like design patterns and object oriented design. So the ramp up ended up being much longer. I think you can do something in six months but it's not the same as college, had a great experience when I was there and wouldn't trade it for "faster" job placement at all.

    • onlyrealcuzzo 18 hours ago ago

      There's already a type of search that is much better now than 3 years ago.

      • skeptrune 18 hours ago ago

        I made an HN search engine which I think is better than the existing - https://hn.trieve.ai and I still agree with the above comment.

        Tinfoil hat vibes all around for me too with the LLM's taking Dev jobs.

      • makeitdouble 17 hours ago ago

        For people not actively dipping their toes in the field, which type of search do you have in mind ?

        • onlyrealcuzzo 4 hours ago ago

          Predictive search:

          i.e. What will the greenhouse gas emissions of China be in 2032?

    • JTyQZSnP3cQGa8B 15 hours ago ago

      It’s not your tinfoil hat IMHO. I see a few managers and non-technical people (IRL and more on LinkedIn) who “study” prompting with the idea that they can replace people with LLMs and cut down costs.

      LLMs can be useful in very specific tasks but these people don’t care about it as they don’t have the technical knowledge to see how and why. All they care about is more money for them.

      It won’t be limited to developers though if that happens, and in the darkest scenario UBI will have to happen, as you can’t have half the population unemployed and roaming the streets.

    • qntmfred 17 hours ago ago

      > The Market (tm) wants to really just drive down the costs of developers more than anything else.

      well yeah. this wasn't going to last forever.

      in the US especially. Salaries here were bonkers compared to the rest of the global developer workforce. Gotta remember, remote work wasn't common and barely possible at scale not that long ago. COVID put an end to that, and with it the expectation that management hire developers in any particular local geography. Despite all the RTO drama in some organizations, the majority of the economy was always going to cash in on the advantages of hiring developers in other countries, and squeezing even more productivity out of them with AI tooling.

      Downward pressure has been a long time coming. Hope y'all enjoyed the ride.

  • aussieguy1234 17 hours ago ago

    By the time AI can replace Human Devs, it'll also be able to replace almost all jobs that involve working on a computer.

    That will trigger mass unemployment and our current economic system could collapse and need replacing with a new system, hopefully one where the profits of AI are shared around more equally.

    As far as the jobs that could be automated away soon go, I'd say that being a developer is one of the least likely to be automated away in the short term and it may be one of the last to be automated.

    • intelVISA 14 hours ago ago

      I prefer the Alternate Reality where businesses are now finally run as they should be: managers describing issues using technical terms that are fed into a machine to produce desired outputs.

      Not to be confused with legacy arts like coding, or programming, which required Very Important project management frameworks... I guess we don't need PMs now either... or anyon-

  • Ferret7446 17 hours ago ago

    Senior devs, no. Junior devs? Uh...

    I've been seeing the same thing Steve Yegge has with AI. It could very well replace all of the grunt work in software engineering.

    • mikeocool 17 hours ago ago

      People keep saying this — but if I had to give a junior the level of direction I give an AI bot every day, beyond their first few weeks, that person would be on a PIP pretty quickly.

      Maybe I’m just fortunate, but most places I’ve worked the difference between jr and senior has been the scope they owned and the amount of time spent architecting/reviewing/mentoring vs heads down coding. Seniors are not just handing off grunt work to juniors.

    • lokar 15 hours ago ago

      The grunt work, not the hard work.

      I do the hard work of software Eng away from my computer, trying to think clearly about the problem(s).

    • mattlutze 17 hours ago ago

      If teams are hiring Junior developers to give them "grunt work," they're doing something wrong to begin with.

      • jerf 17 hours ago ago

        That probably felt very nice to say, but it's wrong. Everyone starts somewhere. Nobody can start out architecting the login infrastructure for the 250 applications at a major company, all of which already have their own login system and local considerations. Everybody starts out on some simple work. They have to. It's all they can do on day one. It's all I could do. It's all anyone can do.

        I'm fairly worried that AI as it is developing now is going to be the most effective ladder pull in the world. While a new dev is grappling with "grunt work", they're learning. I'm working with them, explaining why this was done and why that was done, and why, yeah, that's busted, but actually not for the reason you think, and they're learning a lot more than just how to accomplish the specific task nominally laid out in the specific ticket they are working on. If they do the work by just asking an LLM for the answer and they actually get it, they may finish the task today... but what of their progress in five years? Ten? I'm very concerned that juniors leaning on AI will discover that in five years, the easy stuff is still easy, but the hard stuff is still as incomprehensible as it was five years ago.

        • imtringued 12 hours ago ago

          "All of which have their own login system"

          You do realize that SAML has been around for a long time? If anything what you want is an executive to tell everyone to use SAML rather than a "senior architect".

          • jerf 5 hours ago ago

            If you think you can walk up to 250 legacy systems and "just" do anything, you are not an experienced developer.

            An executive can certainly issue the mandate but the project to make it happen is going to be a very detailed one.

            It's a good project to do. There shouldn't be 250 ways of doing authentication. Authorization is rather difficult to just wave a magic wand and harmonize, but authentication shouldn't be a cookie here, and a JWT token there, and a microservice with its own tokens that also integrate with some vital system over there, and Basic Auth with LDAP creds over there, and so on.

            But the project is going to be a lot more than just standing in a room and shouting "HEY EVERYONE USE SAML, ok, cool, project is now spec'd, timelined, prioritized, and staffed problem solved".

    • fzeroracer 11 hours ago ago

      Juniors are far more valuable to your organization than just grunt work. They offer fresh, 'untainted' perspectives. They might know about new technology, or understand how customers interact with your system better than the senior engineer who has been behind the desk for so long that all they see is the bugs and issues. AI cannot do anything like this; it can only affirm what you ask of it, in the ways you ask it.

      Of course many organizations don't like the whole 'training' people thing anymore so it's rough for junior devs out there. Good for my career since I can demand a premium since the pool of experienced engineers will only go down over time, but ultimately bad for software as a whole.

  • dbetteridge 17 hours ago ago

    I personally like the idea of a "companion AI", it comes up a lot in video games and sci-fi stories (funnily enough ones where capitalism has been taken out back and shot.)

    But the concept of a second brain that you can use to rubber duck concepts, decipher documentation and save your wrists from carpal tunnel when dealing with boilerplate is very useful.

    With that all said, as a senior dev my day is only about 50-75% coding (on a good day) and the rest is often meetings/planning boards/bug triage/helping juniors and just plain translating product requirements into meaningful pieces of work for Devs, which I've not had much luck using an LLM to replace.

  • WheelsAtLarge 19 hours ago ago

    I agree but it's important to know that AI is a tool that devs need to understand and incorporate into their daily work life.

    • al_borland 18 hours ago ago

      Is a statement like this any different than someone saying developers needed modern IDEs 20 years ago? Yet we still have people in the industry using vim or emacs, and are successful.

      I will occasionally ask Copilot for something when I’m in a hurry, and there is some minor thing I don’t care about that someone is telling me needs to get done. The other 99% of the time, I’m doing it myself. Any time I reach for AI instead of doing it myself, I’m learning less, growing less, and understanding my code less. Why would anyone want this?

      • marcosdumay 17 hours ago ago

        Well, it's actually different.

        The impact of the "modern" IDEs of 20 years ago was distinctly measurable, and sensibly perceptible by anyone that tried them.

        At the same time, it's not even clear if the gain from LLMs is even positive.

        But if your claim was that neither was relevant enough to change the overall noisy productivity of developers, than yeah. At least not positively.

        • beeflet 14 hours ago ago

          modern IDEs are nice on operating systems like windows because they bundle the compiler and text editor and debugger together, and installing and configuring anything on windows is a pain of wizards and menus, especially if you want those programs to talk to each other correctly. I've had jobs where we have spent entire days configuring and installing visual studio and related software and activating all of the licenses and connecting all the databases.

          But on linux you can just figure it out once and run something like "sudo apt install compiler-version text-editor-version debugger-version && cp -r dotfiles ~/.config"

          • marcosdumay 6 hours ago ago

            > modern IDEs are nice on operating systems like windows

            Nah, the turn of the century IDEs were better enough that almost all the text editors adopted the relevant feature.

            We mostly do not have pure text editors like there were back then. There's nano and Notepad++, but among the traditional holdouts, vi became an IDE, and emacs, which was already an IDE, turned into a multi-language one too.

            That said, those features were compelling enough for every editor to adopt, but didn't change the picture enough so that people had no choice but to use them. It is still perfectly viable to not use the IDE features of your favorite editor.

      • lz400 18 hours ago ago

        I think it's more like the move to cloud. It didn't make developers obsolete but it DID make some types of jobs semi-obsolete like sysadmins and dbas. It also created new jobs like DevOps.

        • dboreham 18 hours ago ago

          People are still being sysadmins and dbas, with new titles. It's just the MBAs that think they're gone.

          • majormajor 17 hours ago ago

            Now you get to pay a team of system engineers AND you get to pay Amazon crazy monthly bills!

            But you get multiple availability zones and regions with much lower capital cost! (But do you need them?)

        • yjftsjthsd-h 16 hours ago ago

          > but it DID make some types of jobs semi-obsolete like sysadmins and dbas. It also created new jobs like DevOps.

          Er, that's the same thing. The only change is that HR thinks my position is labeled "devops engineer" now instead of "sysadmin", but it's the same job.

        • gregjor 17 hours ago ago

          No. Moving to cloud computing only obsoleted managing physical hardware. System admins and DBAs are just as necessary as before. In my experience even more necessary, because managers and programmers think they don’t need sysadmins or DBAs until they have messed things up so badly they have to call in an expert.

      • greenhat76 17 hours ago ago

        I still use vim; it has all the features "modern IDEs" have. It just isn't a GUI.

      • 18 hours ago ago
        [deleted]
      • bdangubic 18 hours ago ago

        once you realize you are 30-40-50..% slower than your peers you will not only want it but need it to keep being employed.

        saying “i am learning less, growing less…” is just so amazingly shortsighted, like farmers when the first tractors arrived going “fuck this, imma walk this field in 100 degree weather - doing it myself” (sound familiar?).

        • grayhatter 18 hours ago ago

          I reject the assertion that it's slower.

          But even if I didn't, I'd still choose the team that delivers 1 month later with high quality code over the team that cranks out dozens of half-broken features they have no idea how to support, then mandates I spend the same amount on a second team barely able to maintain the first team's mistakes. I know I'm just making the "buy once, cry once" argument, and I understand that's "not how business works", but I'm an engineer, not a manager trading company interest for a promo.

          • bdangubic 18 hours ago ago

            sounds fun but your days are numbered, much like the farmers' I mentioned in the previous comment were when the first tractors rolled into town…

            • aspenmayer 16 hours ago ago

              To extend the analogy, I'm reminded of the pets vs cattle debate in the context of managing computer systems, and how that distinction continues to shake out. I suspect that as personal computing moves toward cloud computing, following businesses, we will see tension between local-first AI solutions and cloud AI products, seeing as the same megacorps that build personal-computer pets and pet products are building cloud-computing AI cattle products: AI that offers the benefits of pets with some of the upsides of cattle as well.

              Local compute vs cloud compute is not the same distinction as pets vs cattle, but it involves many of the same market participants, and it is interesting that both Microsoft and Apple are moving to bundle AI capabilities with other cloud products as free value-add subscription features. These features entice users into the vendor walled gardens, nudging them subtly away from local-first pets.

              https://cloudscaling.com/blog/cloud-computing/the-history-of...

        • bigstrat2003 15 hours ago ago

          Having a tool which gives you wrong results randomly, so you need to double check everything it does, doesn't speed you up. It makes you go slower.

          • WheelsAtLarge 15 hours ago ago

            That's where experience comes in. I can check someone's work way faster than I can write it. AI's right now are like junior assistants who need to be supervised. They are wrong sometimes but they can help you as long as you know their limits.

            • al_borland 8 hours ago ago

              How does someone gain experience if they are using AI from the start? They never develop the skills by putting in the reps.

              I also find it much harder to go through someone else’s code than to work with my own. If I’m just glancing at it, it seems to technically work, and it needs a rubber stamp… sure, that’s easy. If it doesn’t work, especially if it’s a logic issue rather than a hard error, that takes time to read and learn the context of everything that’s going on. Will someone in school today even have the skills to do that if they only ever use AI?

            • krater23 14 hours ago ago

              Sorry guy, but I don't believe that. You can check for obvious mistakes faster. You won't catch anything but the obvious mistakes unless you analyze the code for at least twice as long as it took to write.

        • gregjor 17 hours ago ago

          Poor analogy. The farmer doesn’t learn new skills or get better at walking every day.

          When automation actually does equivalent work in less time or at lower cost it will replace human labor. Right now LLMs cannot deliver equivalent work compared to an experienced and skilled human programmer. If you experience an LLM able to do your job that says more about your skills than the LLM.

          • bdangubic 17 hours ago ago

            you are missing the point: the LLM won’t replace you, the person using an LLM will (quoting Jensen Huang :) )

            • gregjor 17 hours ago ago

              I have lived through several waves of technology and change that would supposedly make me obsolete as a programmer and system admin. Over forty years in the business now, I have learned to take those predictions with a lot of skepticism.

              If LLMs get close to doing any part of my work I will incorporate them, like I have previous “threats” like cloud computing. I won’t panic too soon or give up because I have enough experience and expertise to know better.

              I can't predict what might happen, but right now LLMs don't do much of value in the domains I work in. They seem mainly to exist as excuses for stock pumping and managers cutting their labor costs to the bone. The people who don't get laid off cling to their jobs out of fear. The threat to (mostly junior) programmers comes from turning our work into a commodity, not from automation itself.

              Maybe your analogy does apply. A person who can drive a tractor will indeed outperform a farmer in one important but limited way: covering more area in a day than a farmer on foot. But the person who can only drive a tractor likely won't have all of the skills and expertise of a farmer who knows the land, soil, crops, weather, etc. from direct experience. And the person who can only drive a tractor -- but cannot actually grow anything or manage a farm -- becomes more interchangeable and replaceable. Tractor driving already gets replaced by robots, but knowing how to run a farm does not. Likewise a good programmer offers a lot more value than just the code they write; in fact code itself has no value without the context of a business domain and a theory [1] of the system.

              By the way, I'm one of those old guys who still use vim and the command line, as I have since the late 1970s, and will continue to do so. I can use IDEs and GUIs if I have to, I didn't get left behind, but those don't improve my efficiency or quality. They can slow me down and distract me with futzing with tools rather than getting real work done, and they don't work in all of the environments I need them to. I haven't had any problem keeping up with the IDE users.

              [1] https://pages.cs.wisc.edu/~remzi/Naur.pdf

              • bdangubic 16 hours ago ago

                old as well, in the industry since 1996… I get what you are saying but I can honestly tell you I have spent the last 6-ish months analyzing these tools, and comparing vim vs. an IDE to someone (not) using LLMs is not even remotely a reasonable comparison…

                given your age/time in the industry you may be able to coast to retirement and finish your career saying “vim got me through it all…” - someone with 10 years in will never be able to experience that

                • gregjor 14 hours ago ago

                  I have played with LLMs (and many IDEs). I won't say they suck, but they don't give me any huge boost either. Other programmers may get more mileage out of them, but so far no one I have worked with reports any difference that seems more significant than novelty. We all feel more "productive" when we challenge ourselves to learn a new language, tool, programming technique, platform, etc. When that wears off we have to find the next shiny thing. LLMs have the added feature of FOMO and actual fear of unemployment that, for example, VSCode did not have.

                  All tools take time to master, and then once mastered they can maintain an advantage over competitors that seem objectively better in some ways. I used VSCode for a year and ended up going back to vim. Nothing really wrong with VSCode but it wasn't a 10X or even 2X improvement for me. But I have internalized vim and the Unix/Linux command line tools to a degree that -- as you point out -- younger programmers don't have the runway to catch up with.

                  As a freelancer (and for a long time a f/t employee in various companies) I gradually figured out that the best value I can offer (and the best way to stay employed) is not how fast I can produce code, but how well I can understand the business domain and then tease out and translate requirements into software solutions. Optimizing for producing more or "better" code (however we measure that) I think sends a lot of programmers down the wrong path, one that leads to over-specialization, commodification of skills, and eventual replacement by outsourcing or AI. I think it will take a lot longer for LLMs to do the real work of software development, which has about as much to do with producing code as word processing has to do with writing an interesting book.

            • akkad33 13 hours ago ago

              Yes, and using an LLM does not require skill, whereas using a tractor required learning a new skill. So it's a poor analogy.

        • Espressosaurus 17 hours ago ago

          Wake me when AI is useful for more than novices in a field, in a small codebase, which is where I've seen the people most bullish on LLMs improving their productivity.

        • smrq 17 hours ago ago

          Sorry, but I'll be concerned if and only if my peers using AI start catching up to those who aren't.

          • bdangubic 17 hours ago ago

            those that aren’t will be unemployed within a year or may hide for a bit longer in companies with incompetent management/ownership

            • mlinhares 17 hours ago ago

              this person is definitely trying to sell a LLM-based coding solution somewhere HAHAHA.

              • bdangubic 16 hours ago ago

                nope, 30-ish years in the industry, too old to try to sell you anything but may make you think about the future or what you do

                • rsynnott 8 hours ago ago

                  Did you make the same predictions about dBASE/Paradox, MS Access, 4GLs, that weird Sun thing where you made Java applets by dragging and dropping logic diagrams, etc, at the time? What makes LLMs different from the last ten times the industry tried to convince us that programming would be obsolete any day now?

                • krater23 14 hours ago ago

                  But not as a developer, right?

                  • bdangubic 5 hours ago ago

                    exclusively as a developer...

            • rsynnott 8 hours ago ago

              Oh, ffs. Look, these things just aren't much good. Maybe if you're writing CRUD applications, sure, whatever, who knows.

    • gerdesj 17 hours ago ago

      Your comment is rather prescriptive without any working. Why do I need to ... ?

      So far, I have managed to work out how to spot a hallucinating ChatGPT derived "organic blog post" but it is bloody annoying.

      Back in the day, Linux related queries would generally end up in Gentoo, Arch, Ubuntu, Mint (et al) forums or wikis, or perhaps Reddit and co; sometimes they would end up in TLDP by accident. Now we have a plethora of wankery "blogs" that clog up the search returns. Mostly very pretty, mostly following the same old pattern, and mostly correctish yet wrong at some crucial point.

      I have to deal with a lot of pretty complicated and quite niche stuff, for example HA Proxy, Apache Guacamole, on Linux which itself (despite running mostly everything on our lovely planet) is also considered niche.

      Windows related queries normally end up with a post on an MS site with a response suggesting that "SFC /SCANNOW" will fix everything from stiffness in the joints to cancer. That has been the way since around the late noughties, thanks to a rather lax hiring policy becoming the norm in a large part of the world, using internet points as a score for hiring. That is understandable; humans being lazy and abrogating responsibility is not a new thing. Now we have multi-zillion-<currency> things happening that claim intelligence and look suspiciously tulip flavoured.

      Oh dear!

    • grayhatter 18 hours ago ago

      funny, I recently switched back to vim, and delayed enabling many IDE features such that I don't even use tab complete anymore. I'm actually enjoying writing code again. It's easier to get into the flow state, and generally I find the code I do write to be of higher quality. While the last bit might be completely subjective, and likely a bit of not just sample but also confirmation bias, I have no interest in any flavor of code complete or other advanced IDE features. For me, it seems to be a net loss.

      I understand how generative AI works better than most of the SWEs I work with, but I have absolutely no desire to incorporate it into my workflow. I like that I understand how the code I wrote works, but even if I didn't, I wouldn't trade this rediscovered enjoyment for anything, including the "promised" career velocity.

      • dboreham 18 hours ago ago

        vim would be a luxury for the likes of me. Still using ed here...

    • skeptrune 19 hours ago ago

      100%, but my mental exhaustion grows every time I see a post claiming Claude or some other LLM can build your whole company

    • bigstrat2003 15 hours ago ago

      Only if it provides value, which at this point I wouldn't agree it does.

    • the_gorilla 18 hours ago ago

      It's important for low skill developers. I never understood why everyone was afraid of the shit chatgpt output, until I finally realized it actually surpassed many "software engineers" in skill. Sobering!

    • add-sub-mul-div 18 hours ago ago

      Not for writing code, no. If you're experienced enough it's going to slow you down.

    • mattl 18 hours ago ago

      Nobody needs to incorporate AI into their daily work life.

    • rsynnott 8 hours ago ago

      > need to understand

      To some extent.

      > and incorporate into their daily work life.

      No, not necessarily. Frankly it's of no use, or is actively counterproductive, in many fields.

    • fzeroracer 11 hours ago ago

      Honestly, when I see stuff like this I laugh all the way to the bank knowing how many developers and teams are opening themselves up to massive security holes and/or bugs by trying to incorporate LLMs into their toolsets.

      People are just willingly leaving massive landmines across their codebase waiting to blow their feet off and don't even have the experience to know when the code generated by the AI is bad or not.

  • faangguyindia 18 hours ago ago

    80% of coders just make CRUD apps.

    Those working in specialized fields will hold on for much longer.

    But CRUD apps, apps that just glue a bunch of APIs to a backend and a database, will go away.

    Most apps do not need much scaling, so highly specialized scaling masters aren't really needed.

    Currently using zed editor, I am blown away by its AI integration.

    Though completion via FIM (fill-in-the-middle) through custom LLMs is lacking.

    And there are other problems, like lacking git integration; I prefer vscode for that!
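    (For readers unfamiliar with FIM: it is the prompt format behind inline completion, where the model fills in code between a prefix and a suffix instead of only continuing left-to-right. A minimal sketch, assuming the common `<PRE>`/`<SUF>`/`<MID>` sentinel convention; the exact sentinel tokens vary by model and are an assumption here:)

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt: the model is asked to
    generate the code that belongs between prefix and suffix."""
    # Sentinel tokens follow one common convention; real editors look
    # them up from the model's tokenizer config rather than hardcoding.
    return f"<PRE>{prefix}<SUF>{suffix}<MID>"

# The cursor sits after "return "; everything after it is the suffix.
prompt = build_fim_prompt(
    "def add(a, b):\n    return ",
    "\n\nprint(add(1, 2))",
)
```

    The point is that FIM needs model-specific prompt plumbing, which is why generic chat-style integrations with custom LLMs often leave it out.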

    • 015a 17 hours ago ago

      > Those working in specialized fields will hold on for much longer.

      The thing about AI up to this point is that it has replaced, well, very few jobs, but what jobs it seems to be most capable of replacing are extremely counter-intuitive. Eight years ago, everyone thought it would be: self-driving, machines, hard labor, etc. Turns out that stuff is really hard, and the first industries to fall were actually the more creative ones like writing and art.

      If you have an intuition for how the next ten years will look, I'd implore you to be open to the reality that there is no way you could have predicted the world-state of 2024 from the perspective of 2019; and that's only 5 years.

      Here's the counter-intuitive take that I believe to be true: Specialized coding might fall faster than generalist CRUD apps, if either falls at all. The value in specialized engineering is biased a lot heavier toward knowledge of the specialized thing you know about. Versus, the problems generalist CRUD coders deal with every day aren't actually technical or coding problems; they're business problems, coordination, resource allocation, and politics. AI has demonstrated itself as being pretty good at knowing things, even highly specialized things; it has not demonstrated itself as being very good at taking responsibility for its actions.

      • charlieyu1 14 hours ago ago

        We just don’t have a massive breakthrough in robotics at a level comparable to LLMs. Which is kinda weird, because there would be even higher demand for them than for language models.

      • twelve40 16 hours ago ago

        > the first industries to fall were actually the more creative ones like writing and art

        have writing and art really fallen? aside from stupid SEO drivel writing, that is

      • krater23 14 hours ago ago

        Both are technical problems with technical solutions that have to fit 100%. You can't say this for art or writing. It's more mushy, and anything can fit there.

    • swagasaurus-rex 17 hours ago ago

      CRUD apps can unexpectedly become surprisingly complicated. Take the humble to-do list, for example.

      So you can create, edit, even delete to-dos, and it’s even hosted online.

      * Once a user has a lot of to-dos, they might want to organize them. You could organize them hierarchically, or with tags, or represented as a graph.

      * What if somebody makes a new to-do or an edit while their cell phone is out of service?

      * Are multiple users supported? Can users collaborate on a single to-do? Are entire organizations supported?

      * What happens if somebody accidentally deletes a to-do? Can they ever get it back?

      * What about concurrent edits to the same to-do?

      * Can just anybody make to-dos? What if somebody writes a bot to make millions of to-dos on your website?

      * If I load up my to-do list when my phone has no service, do I just get a loading spinner?

      The perfect to-do app might actually take an expert a year or more to write. Then it still has to generate a profit somehow.

      • greenhat76 17 hours ago ago

        I work for a company that builds apps for companies, most of our apps start as "a simple crud app" and turn into massively complex projects. I feel as if the business community overestimates the ability of AI code generation. It'll get better and better, but I don't see it getting good enough not to have a developer cleaning up its mess behind the scenes.

        • 93po 14 hours ago ago

          i've worked as a software person for 15+ years, largely in an agency setting, and literally the first "complex" project that came to mind for me was a company that makes custom doors. just doors. they used an excel spreadsheet that was 200+ megabytes and 40+ sheets of extensive calculations, used both for estimating prices and for the extensive component sourcing and a billion other parameters, and in their 100+ employee company there were maybe one or two people who fully understood this shithole of a spreadsheet. as an agency, half a million dollars of the work was understanding this stupid spreadsheet and documenting what it looked like as real software, and $150k was the actual development. AI would have zero chance of doing this work any time soon

    • majormajor 17 hours ago ago

      CRUD isn't the complexity with CRUD apps. Working with the rest of the business to make sure the processes get built out in ways that make sense and are actually worthwhile is.

      So maybe one dev can handle writing five or ten times as many CRUD services as before. The idea that the biz person is gonna get rid of those devs entirely and babysit the coding agents/deploys/etc themselves strikes me as wildly unrealistic, though. Nor is it realistic that SAAS as a market will just dry up because individuals will just have their own coder agents that do everything for them.

      Question is if there's going to be an "enough is enough" point feature/change-wise. But I can't think of the last time I worked at a company where the product team and senior management threw up their hands and said "we did it, we finally cleared the entire product development backlog" as opposed to generating ideas at a rate many multiples of the rate of implementation.

      The baseline "here's what you can get with a shoestring team" level of output will get higher. But it will also get higher for your competitors. So expectations will just be much higher. So you just gotta move that much faster now.

    • kordlessagain 18 hours ago ago

      So an agentic system can handle git. Would you be willing to use that to manage those processes for you? What about Docker?

    • selcuka 18 hours ago ago

      > But CRUD apps, apps requiring gluing bunch of APIs with backend and database will go away.

      They were already going away with the improvements in no-code tools, Zapier, scaffolding frameworks etc. LLMs are just another automated tool in this context.

  • zubairq 14 hours ago ago

    I “wish” AI would replace me so that I could do other stuff than coding

    • 93po 14 hours ago ago

      I've loved the past two weeks of working on a project where Cursor is doing 98% of the coding. I have a sweet project moving at 20x the speed I could normally do it.

      • zubairq 13 hours ago ago

        Thanks. I've tried Cursor, but found it's probably more useful on smaller codebases and for simple code changes. Can you share the github repo that you use it on, or give me an indication of the codebase size that you use it on?

        • 93po 3 hours ago ago

          It's a React project with less than a hundred files made by me/claude. Not really sure of lines of code. It's not overly complex but somewhat novel - it's recreating a game interface and engine and not really a traditional web app. Don't really want to associate my github with my HN username because I used to be a bit more adversarial when posting anonymously on the internet and I'm not super proud of that and don't necessarily want it tied to my real name :)

  • ookblah 10 hours ago ago

    currently AI focus is very narrow. I treat it like a junior dev, where it can repeat structure it's given and I then basically "approve" it, or I use it like a super robust autocomplete that knows what I'm writing next like 80% of the time. it's also good for bouncing ideas around to see if someone else has done something similar.

    there is no way in hell I would tell it to "generate X feature" or "fix X bug" wholesale. half the time it is just dead wrong in its approach, and it's more time consuming to grok what it did exactly and then either 1) suggest an alternate approach or 2) fix what it did wrong.

    like i get it, in the end if it generates the right code and it works, then most people will call it a day. but longevity is in being able to understand and maintain your codebase, and at this point i can't trust it to do that. in some sense it reminds me of those wysiwyg site builders where sure, your output might be fine for 90% of cases like a simple landing page, but the code underneath is a big pile of shit. good luck if you need to break out of that box at any point.

    i guess all to say AI will start replacing the low-level static-site-facing stuff. for instance, if you're selling cookie-cutter WP themes i'd be worried at some point. for everything else it's a non-factor right now.

  • benreesman 16 hours ago ago

    LLM coding stuff is interesting. Sonnet is a class apart and I think I’ve upgraded it to unqualified “useful” on certain tasks.

    But good luck trying to measure or even speculate about the impact that has on the market for software people.

    The big picture is geopolitics (which is crazy right now), and then monetary policy (which has been all over the place for years and years), then politics and fiscal policy implications (which is crazy as hell right now), then this recent trend towards just brutally bare-fanged go-for-broke class warfare Thiel shit where FAANG is “tightening its belt” and smashing EPS simultaneously (boomeranging people for less RSUs and shit) with products that are now basically loathed but sticky like a stalker.

    And all the quasi-consensus benchmarks of who’s even good are shot to hell: YC is funding Pear, Google can’t ship an LLM anyone wants, Amazon is just dark patterns and price fixing, the legendary shops just aren’t good anymore. Some of the best hackers I’ve met in decades are under-appreciated if they even have work, and “culture fit” (sociopath) knuckleheads are turning down windfalls.

    It’s a fucking casino in 2024 and AI coding is a blip on this scale.

    • benreesman 10 hours ago ago

      @dang what is with sibling getting flagged dead for saying he agrees with (my) parent.

      We’re both right but I understand that moderation has constraints that comments don’t.

      Please either un-flag sibling or flag me.

    • benreesman 9 hours ago ago

      @dang I hate to push the point but I’m a real stickler for the vulnerable getting signal boosted rather than shouted down.

      Maybe it is just legit flags that bury everything from Garry Tan tweeting “die motherfucker die” to elected officials, or Mike Seibel defrauding Autodesk with claims that SocialCam had novel technology before vesting in Thailand and ignoring emails for a year, or Emmett cashing out just in time for Twitch to fly into the side of a mountain, or for Altman to have been rescued from his abject incompetence via pg and Conway rigging up Green Dot. It goes on and on, and that’s for people who dismiss out of hand the allegations of sexual assault.

      I’ve not only lived in the same building as Drew Houston, I’ve held his head when he was vomiting up mystery pills in Tahoe: and there is no evidence I’ve ever seen that any of these people are any good at anything other than “becoming powerful” to quote Paul.

      I’ve just barely scratched the surface of the perversity, criminality, and sociopathy of the Culture Fit in 2024.

      You’re one of the good guys, and I don’t envy the head that wears the crown on being both scrupulously fair and painting YC in a good light.

      I know that it’s a small moderation team and that 3 humans don’t have the speed and clarity of an Azure IPv4 block.

      But if you tell me that 3-5 is still the right number of people who can flag something dead while the community has grown by 7 orders of magnitude, no sir.

      Flags are OP as fuck because you can count on the lie that comp.lang.lisp being hostile in bytes was worse than the contemporary fact that the Valley ends people’s livelihoods and smiles the whole time.

    • throwaway-snek 13 hours ago ago

      [dead]

  • hindsightbias 17 hours ago ago

    Given that all these news articles are AI generated, I think they're shouting at clouds now. The AI knows how to generate clicks better than you do.

  • kkfx 10 hours ago ago

    Anyone who has ever tried to generate code via LLMs knows that. The main hype point, though, is selling something that can't work, to hide a very simple phenomenon: after DECADES of technical possibility and opportunity, IT automation has finally, timidly become a bit widespread. People have discovered they do not need to physically go to a bank to operate with it, since web banking has been here for decades; they discover they do not need an insurance broker for 99% of insurance, choosing an online offer-comparison portal instead; they do not even need an office to do office work, since without paper we can WFH.

    This obviously means many jobs are not needed anymore: much of ETL, office front-line work, etc. But if you tell anyone this is not new, that it's decades-old stuff, you are clearly a bad manager, because you could have made things much better much earlier; or you simply state that most people are ignorant of IT, which is a dramatic problem for society, because if we knew IT we could create new jobs, and if not we can only eliminate many, and so on. So: "hey, it's AI!" [and obviously that's new, eh]...

    Aside from that, the office use of LLMs could lead to interesting nightmares in infosec terms, so brace yourself...

  • 7 hours ago ago
    [deleted]
  • jti107 17 hours ago ago

    our company uses LLMs extensively for coding, and it was something optional that the company paid for. we got training on how to use it and were told don't blame the LLM for shitty code; you're responsible for the work you push to the branch. most people ended up using it... our superstar dev uses it and he thinks he is way more productive with it.