State of AI-assisted software development

(blog.google)

91 points | by meetpateltech a day ago

71 comments

  • kemayo a day ago

    I'm curious what their sample set for this survey was, because "90% of software developers use AI, at a median of 2 hours a day" is more than I'd have expected.

    (But maybe I'm out of touch!)

    • karakot a day ago

      Well, just assume it's an IDE with 'smarter' autosuggest.

      • kemayo a day ago

        That's fair -- the vibe of the post was making me think of the more "Claude, write a function to do X" style of development, but a bunch of people answering the survey with "oh yeah, Xcode added that new autocomplete, didn't it?" would do a lot to get us to that kind of number.

        • philipwhiuk a day ago

          I’ve always assumed that’s the point of these things. Ask a broad question that will allow you to write a puffy blogpost that backs your conclusion and then write it in a way that pushes your tools.

          The amount of free training coming out on AI shows just how keen they are to push adoption to meet their targets.

          Eventually this training will no longer be free as they pivot to profit.

  • dbs a day ago

    No need for evidence of net benefits to get mass adoption. We have mass adoption of digital touchpads in cars despite evidence that they are not safe. We have widespread adoption of open-plan offices despite evidence that they do not increase productivity.

  • xgbi a day ago

    Rant mode on.

    For the second time this week, I spent 45 minutes this morning reviewing a merge request where the author had no idea what he did, didn't test, and let the LLM hallucinate a very bad solution to a simple problem.

    He just had to read the previous commit, which introduced the bug, and think about it for a minute.

    We are creating young people who have a very limited attention span, have no incentive to think about things, and have very pleasing metrics on the DORA scale. When asked what their code is doing, they just don't know. They can't even explain the choices they made.

    Honestly I think AI is just a very, very sharp knife. We're going to regret this just like we came to regret the mass offshoring of the 2000s.

    • JLO64 a day ago

      I'm not surprised to see reports like this for open source projects where the bar for contributing is relatively low, but am surprised to see it in the workplace. You'd imagine that devs like that would be filtered out via the hiring process...

      I'm a coding tutor, and the most frustrating part of my job is when my students use LLM-generated code. They have no clue what the code does (or even what libraries they're using) and just care about the pretty output. When I tried asking one student questions about the code, he responded verbatim "I dunno" and continued prompting ChatGPT (I ditched that student afterward). Something like Warp, where the expectation is to not even interact with the terminal, is equally bad as far as I'm concerned, since students won't have any incentive to understand what's under the hood of their GUIs.

      To be clear, I don't mind people using LLMs to code (I use them to code my SaaS project), but what I do mind is them not even trying to understand wtf is on their screen. This new breed of vibe coders is going to be close to useless in real-world programming jobs, which, combined with the push targeting kids with "coding is the future", is going to result in a bunch of below-mediocre devs both flooding the market and struggling to find employment.

      • xgbi a day ago

        Same, I use LLMs to figure out the correct options to pass to the Azure or AWS CLIs, or for other low-key things. I still code on my own.

        But our management has drunk the Kool-Aid and now obliges everybody to use Copilot or other LLM assistants.

      • saulpw 21 hours ago

        > You'd imagine that devs like that would be filtered out via the hiring process...

        ...except when the C-suite is pressuring the entire org to use AI tools. Then these people are blessed as the next generation of coders.

    • rhubarbtree a day ago

      Yes, we created them with social media. Lots of people on this site did that by working for the social media companies.

      AI usage like that is a symptom not the problem.

    • driverdan a day ago

      > We are creating young people that have a very limited attention span

      This isn't about age. I'm in my 40s and my attention span seems to have gotten worse. I don't use much social media anymore either. I see it in other people too, regardless of age.

      • saulpw 21 hours ago

        Same. What do you think it's about? Future shock? Smartphone use (separate from social media)? Singularity overwhelm? Long Covid?

    • Archelaos a day ago

      Why did you spend 45 minutes reviewing it instead of rejecting it outright? (Honest question.)

      • xgbi a day ago

        Because the codebase wasn't originally in my scope and I had to review it urgently due to a regression in production. I took the time to understand the issue at hand and why the code had to change.

        To be clear, the guy moved a Docker image back from running as a non-root user (1000) to reusing a root user and `exec su`-ing into the user after doing some root things in the entrypoint. The only issue is that, looking at the previous commit, you could see that the K8s deployment using this image had wrongly changed the userId to 1000 instead of 1001.

        But since the coding guy didn't take even a cursory look at why something that had been working stopped working, he asked the LLM "I need to change the owner of some files so that they are 1001", and the LLM happily obliged in the most convoluted way possible (about 100 lines of changes).

        The actual fix I suggested was:

            securityContext:
          -   runAsUser: 1000
          +   runAsUser: 1001

        • Archelaos 19 hours ago

          Thank you for your explanation. I wondered what might motivate someone to devote so much time to something like this. An emergency due to a regression in production is, of course, a valid reason. And also thank you for sharing the details. It brought a sarcastic smile to my face.

      • GuardianCaveman a day ago

        He didn't read it first either, apparently.

    • signatoremo a day ago

      Your rant is misplaced. It should be aimed at hiring (candidate screening), at training (getting junior developers ready for their jobs), at engineering (code review and testing), and so on.

      If anything, AI helps expose shortcomings of companies. The strong ones will fix them. The weak ones will languish.

      • jdiff 15 hours ago

        Assuming you're right, I don't believe the effect will be at all dramatic. The vast majority of businesses are not in breakneck, life-or-death, do-or-die competition. The vast majority of businesses do quite a lot of languishing in a variety of areas, and yet they keep their clients and customers and even continue to grow despite not just languishing, but solid leaps backwards and even direct shots to the foot.

        How do you propose that AI will do what you suggest, exposing the shortcomings of companies? Right now, where it's being implemented, it's largely dictated from above, with little but FOMO driving it and no cohesive direction to guide its use.

    • SamuelAdams a day ago

      > We are creating young people who have a very limited attention span, have no incentive to think about things, and have very pleasing metrics on the DORA scale. When asked what their code is doing, they just don't know. They can't even explain the choices they made.

      This has nothing to do with AI, and everything to do with a bad hire. If the developer is that bad with code, how did they get hired in the first place? If AI is making them lazier, and they refuse to improve, maybe they ought to be replaced by a better developer?

    • dawnerd a day ago

      I've just started immediately rejecting AI pull requests. I don't have time for that.

      There's going to be a massive opportunity for agencies that are skilled enough to come in and fix all of this nonsense when companies realize what they've invested in.

      • kemayo a day ago

        Almost worse are AI bug reports. I've gotten a few of them on GitHub projects, where someone clearly pasted an error message into ChatGPT and asked it to write a bug report... and they're incoherent.

        • fluoridation a day ago

          Some are using them to hunt bug bounties too. The curl developer has complained about dealing with a deluge of bullshit reports that contain no substance. I watched a video the other day that demonstrated an example: a report of a buffer overflow. TL;DR: code was generated by some means that included the libcurl header and called strlen() on a buffer with no null terminator, and that's all it did. It triggered ASan, and a report was generated from that, talking about how a remote website could overflow a buffer in the client's cookies using a crafted response. Mind you, the code didn't even call into libcurl once.
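
          A minimal sketch of what such a bogus "PoC" amounts to, reconstructed from the description above rather than taken from the actual report (the cookie value is made up):

              /* Includes the libcurl header but never calls into libcurl. */
              #include <curl/curl.h>
              #include <string.h>

              int main(void) {
                  /* 8 bytes, fully used, no terminating '\0' */
                  char cookie[8] = {'s','e','s','s','i','o','n','='};
                  /* strlen() reads past the end of the buffer; ASan flags an
                     overflow here, with no libcurl call and no network
                     response involved. */
                  return (int)strlen(cookie);
              }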

    • akomtu a day ago

      When Neuralink becomes usable, the same hordes of people will rush to install the AI plugin so it can relieve their brains from putting in any effort. The rest will be given a difficult choice: do the same, or become unemployable in the new AI economy.

      • bluefirebrand a day ago

        I can't wait until people are writing malware that targets neuralink users with brain death

        Cyberpunk future here we come baby

    • tmaly a day ago

      There is a temptation to fight AI slop with AI slop.

  • wiz21c a day ago

    > This indicates that AI outputs are perceived as useful and valuable by many of this year’s survey respondents, despite a lack of complete trust in them.

    Or the respondents have a hard time admitting AI can replace them :-)

    I'm a bit cynical, but sometimes when I use Claude it is downright frightening how good it is. Having coded for a lot of years, I'm sometimes a bit scared that my craft can, at times, be so easily replaced... Sure, it's not building all my code, it fails, etc., but it's a bit disturbing to see that something you have trained at for a very long time can be done by a machine... Maybe I'm just feeling a glimpse of what others felt during the industrial revolution :-)

    • pluc a day ago

      Straight code writing has never been the problem; it's the understanding of said code that is. When you rely on AI, and AI creates something, it might increase productivity immediately, but once you need to debug something that uses that piece of code, it will nullify that gain, as you have no idea where to look. That's just one aspect of this false equivalency.

    • polotics a day ago

      Well, when I use a power screwdriver I am always impressed by how much more quickly I can finish easy tasks too. I've also occasionally busted a screw or three that I then had to drill out...

    • cogman10 a day ago

      So long as you view AI as a sometimes-competent liar, it can be useful.

      I've found AI is pretty good at dumb boilerplate stuff. I was able to whip out prototypes, client interfaces, tests, etc. pretty fast with AI.

      However, when I've asked AI to "identify performance problems or bugs in this code", I find it'll just make up nonsense, particularly if there aren't problems with the code.

      And it makes sense that this is the case. AI has been trained on a mountain of boilerplate and a thimble of performance and bug optimizations.

      • fluoridation a day ago

        >AI has been trained on a mountain of boilerplate and a thimble of performance and bug optimizations.

        That's not exactly it, I think. If you look through a repository's entire history, the deltas for the bug fixes and optimizations will be there. However, even a human who's not intimately familiar with the code and the problem will have a hard time understanding why a change fixes a bug, even if they understand the bug conceptually. That's because source code encodes neither developer intent, nor specification, nor real design goals. Which was the cause of the bug?

        * A developer who understood the problem and its solution, but made a typo or a similar miscommunication between brain and fingers.

        * A developer who understood the problem but failed to implement the algorithm that solves it.

        * An algorithm was used that doesn't solve the problem.

        * The algorithm solves the problem as specified, but the specification is misaligned with the expectations of the users.

        * Everything used to be correct, but an environment change made it so the correct solution stopped being correct.

        In an ideal world, all of this information could be somehow encoded in the history. In reality this is a huge amount of information that would take a lot of effort to condense. It's not that it wouldn't have value even for real humans, it's just that it would be such a deluge of information that it would be incomprehensible.

    • hu3 a day ago

      I also find it great for prompts like:

      "this function should do X, spot inconsistencies, potential issues and bugs"

      It's eye opening sometimes.

    • zwieback a day ago

      I find AI coding assistants useful when I'm using a new library or language feature I'm not super familiar with.

      When I have AI generate code using features I'm very familiar with I can see that it's okay but not premium code.

      So it makes sense that I feel more productive but also a little skeptical.

    • bopbopbop7 a day ago

      Or you aren’t as good as you think you are :-)

      Almost every person I've worked with who is impressed by AI-generated code has been a low performer who can't spot the simplest bugs in the code. Usually the same developers that blindly copy-pasted from Stack Overflow before.

    • apt-apt-apt-apt a day ago

      When I see the fabulous images generated by AI, I can't help but wonder how artists feel.

      Anyone got a pulse on what the art community thinks?

      • fluoridation a day ago

        Generally speaking, they don't like their public posts being scraped to train AIs, and they don't like accounts that post AI output without disclosing it.

    • surgical_fire a day ago

      In a report from Google, which is heavily invested in AI becoming the future, I actually expect the respondents to sound more positive about AI than they actually are.

      Much like how, in person, I pretend to think AI is much more powerful and inevitable than I actually believe it is. Professionally it makes very little sense to be truthful. Sincerity won't pay the bills.

      • bluefirebrand a day ago

        Everyone lying to their bosses about how useful AI is has placed us all in a prisoner's dilemma where we all have to lie or get replaced.

        If only people could be genuinely critical without worrying they will be fired.

        • surgical_fire a day ago

          I agree. I also don't make the rules.

          And to be honest, I don't really care. It is a very comfortable position to be in. Allow me to explain:

          I genuinely believe AI poses no threat to my employment. The only medium-term threat I identify is the very likely economic slowdown in the coming years.

          Meanwhile, I am happy to do this silly dance while companies waste money and resources on what I see as a dead-end, wasteful technology.

          I am not here to make anything better.

    • bitwize a day ago

      We may see a return to the days when businesses relied on systems analysts, not programmers, to design their information systems—except now, the programming work will be left to the machines.

  • riffic a day ago

    DORA stands for "DevOps Research and Assessment" in case anyone was curious.

    https://en.wikipedia.org/wiki/DevOps_Research_and_Assessment

    • mormegil a day ago

      I was confused, since DORA is also the EU Digital Operational Resilience Act.

      • riffic a day ago

        That's why it's always worth expanding acronyms in my opinion.

  • cloverich a day ago

    It's puzzling to me that people are still debating productivity after it's been good enough to quantify for a while now.

    My (merged) PR rate is up about 3x since I started using Claude Code over the course of a few months. I correspondingly feel more productive, and that I have a good grasp of what it can and cannot do. I definitely see some people use it wrong. I also see it fail on some tasks I'd expect it to succeed at, such as abstracting a singleton in an iOS app I am tinkering with, which suggests it's not merely operator error but also that its skill is uneven depending on task, ecosystem, and language.

    I am curious, for those who use it regularly: have you measured your actual commit rates? That's of course still not the same as measuring long-term valuable output, but we're still a ways off from being able to determine that, IMHO.

    • surgical_fire a day ago

      Measuring commit rates is a bad metric. It varies depending on the scope and complexity of what I am doing, and the size of individual commits.

      I can dramatically increase my number of commits by breaking them up into very small chunks.

      Typically, when I am using AI, I tend to reduce the scope of a commit a lot, to make it more focused and easier to handle.

  • pluc a day ago

    Every study I've read says nobody is seeing productivity gains from AI use. Here's an AI vendor saying the opposite. Funny.

    • Pannoniae a day ago

      There are a few explanations for this, and they're not necessarily contradictory.

      1. AI doesn't improve productivity and people just have cognitive biases. (logical, but I also don't think it's true from what I know...)

      2. AI does improve productivity, but only if you find your own workflow and what tasks it's good for, and many companies try to shoehorn it into things which just don't work for it.

      3. AI does improve productivity, but people aren't incentivised to improve their productivity because they don't see returns from it. Hence, they just use it to work less and have the same output.

      4. The previous one but instead of working less, they work at a more leisurely pace.

      5. AI doesn't improve productivity; people just feel more productive because using it requires less cognitive effort than actually doing the task.

      Any of these is plausible, yet they have massively different underlying explanations... studies don't really show why that's the case. I personally think it's mostly 2 and 3, but it could really be any of these.

      • welshwelsh a day ago

        I think it's 5.

        I was very impressed when I first started using AI tools. Felt like I could get so much more done.

        A couple of embarrassing production incidents later, I no longer feel that way. I always tell myself that I will check the AI's output carefully, but then end up making mistakes that wouldn't have happened if I wrote the code myself.

        • enobrev a day ago

          This is what slows me down most. The initial implementation of a well-defined task is almost always quite fast. But then it's a balance of either...

          * Checking it closely myself, which sometimes takes just as long as it would have taken me to implement it in the first place, with just about as much cognitive load, since I now have to understand something I didn't write

          * OR automating the checking by pouring on more AI, and that takes just as long or longer than it would have taken me to check it closely myself. Especially in cases where suddenly 1/3 of automated tests are failing and it either needs to find the underlying system it broke or iterate through all the tests and fix them.

          Doing this iteratively has made the overall process for an app I'm trying to implement 100% using LLMs take at least 3x longer than if I had built it myself. That said, it's unclear I would have kept building this app without these tools. The process has kept me in the game, so there's definitely some value there that offsets the longer implementation time.

      • ACCount37 a day ago

        "People use AI to do the same tasks with less effort" maps onto what we've seen with other types of workplace automation - like Excel formulas or VBA scripts.

        Why report to your boss that you managed to get a script to do 80% of your work, when you can just use that script quietly, and get 100% of your wage with 20% of the effort?

        • jdiff 15 hours ago

          That aligns well with past ideas, but it doesn't align with the studies that have been performed, where there aren't any of the conflicting priorities you mention.

      • DenisM a day ago

        6. It’s now easier to get something off the ground but structural debt accumulates invisibly. The inevitable cleanup operation happens outside of the initial assessed productivity window. If you expand the window across time and team boundaries the measured productivity reverts to the mean.

        This option is insidious in that not only are the people initially asked about the effect oblivious to it, it is also very beneficial for them to deny the outcome altogether. Individual integrity may or may not overcome this.

      • thinkmassive a day ago

        What's the difference between 1 & 5?

        I've personally witnessed every one of these, but those two seem like different ways to say the same thing. I would fully agree if one of them specified a negative impact to productivity, and the other was net neutral but artificially felt like a gain.

      • rsynnott a day ago

        (1) seems very plausible, if only because that is what happens with ~everything which promises to improve productivity. People are really bad at self-evaluating how productive they are, and productivity is really pretty hard to externally measure.

      • mlinhares a day ago

        Why not all? I've seen them all play out. There are also the people downstream of AI slop who feel less productive because now they have to clean up the shit other people produced.

        • Pannoniae a day ago

          You're right, it kinda depends on the situation itself! And the downstream effects. Although I'd argue that the one you're talking about isn't really caused by AI itself; that's squarely an "I can't say no to the slop because they'll take my head off" problem. In healthy places, you would just say "hell no, I'm not merging slop", just as you have previously said "no, I'm not merging shit copy-pasted from Stack Overflow".

      • pydry a day ago

        >1. AI doesn't improve productivity and people just have cognitive biases. (logical, but I also don't think it's true from what I know...)

        It is, from what I've seen. It has the same visible effect on devs as a slot machine giving out coins when it spits out something correct. Their faces light up with delight when it finally nails something.

        This would explain the study that showed a 20% decline in actual productivity where people "felt" 20% more productive.

      • fritzo a day ago

        2, 3, 4. While my agent refactors code, I do housework: fold laundry, wash dishes, stack firewood, prep food, paint the deck. I love this new life of offering occasional advice, then walking around and using my hands.

      • HardCodedBias a day ago

        (3) and (4) are likely true.

        In theory competition is supposed to address this.

        However, our evaluation processes generally occur on human and predictable timelines, which is quite slow compared to this impulse function.

        There was a theory that inter firm competition could speed this clock up, but that doesn't seem plausible currently.

        Almost certainly AI will be used, extensively, for reviews going forward. Perhaps that will accelerate the clock rate.

    • azdle a day ago

      It's not even claiming that. It's only claiming that people who responded to the survey feel more productive. (Unless you assume that people taking this survey have an objective measure for their own productivity.)

      > Significant productivity gains: Over 80% of respondents indicate that AI has enhanced their productivity.

      _Feeling_ more productive is in line with the one proper study I've seen.

      • thebigspacefuck a day ago

        The METR study showed that even though people felt more productive, they weren't: https://arxiv.org/abs/2507.09089

        • knes a day ago

          The METR study is a joke. It surveyed only 16 devs, in the era of Sonnet 3.5.

          Can we stop citing this study?

          I'm not saying the DORA study is more accurate, but at least it surveyed 5000 developers, globally and more recently (between June 13 and July 21, 2025), which means using the most recent SOTA models.

          • rsynnott a day ago

            > I'm not saying the DORA study is more accurate, but at least it surveyed 5000 developers, globally and more recently

            It's asking a completely different question; it is a survey of peoples' _perceptions of their own productivity_. That's basically useless; people are notoriously bad at self-evaluating things like that.

          • capnrefsmmat a day ago

            It didn't "survey" devs. It paid them to complete real tasks while they were randomly assigned to use AI or not, and measured the actual time taken to complete the tasks vs. just the perception. It is much higher quality evidence than a convenience sample of developers who just report their perceptions.

          • bopbopbop7 a day ago

            Yeah, cite the study funded by a company that has invested billions into AI instead; that will surely be unbiased and accurate.

      • Foobar8568 a day ago

        Well, I feel, and am, more productive. On coding activities, though, I am not convinced: it has basically replaced SO and Google, but at the end of the day I always need and want to check reference material that I may or may not have known existed. Plenty of times, Google couldn't even find it.

        So in my case, yes, but not on the activities these sellers usually claim.

    • rsynnott a day ago

      This seems to be a poll of _users_. "Do people think it has improved _their_ productivity?" is a very different question to "Has it empirically improved aggregate productivity of a team/company/industry." People think _all_ sorts of snake oil improve their productivity; you can't trust people to self-report on things like this.

  • philipwhiuk a day ago

    What the heck is that "DORA AI Capabilities Model" diagram trying to show?

  • righthand a day ago

    > AI adoption among software development professionals has surged to 90%

    I am proudly part of the 10%!

  • dionian a day ago

    So the whole thing is about AI?

    • jdiff 15 hours ago

      ...The article titled "How are developers using AI?" tucked behind a link labeled "State of AI-assisted software development"?

      Yes, it's about AI. I'm interested to know what you were expecting. Was it titled or labeled differently 11 hours ago?

  • Fokamul a day ago

    2026, year of cybersecurity. Baby, let's goo :D