> who's responsible when that clone has a bug that causes someone to make a bad trade? Who understands the edge cases? Who can debug it when it breaks in production at 3 AM?
"A computer cannot be held accountable. Therefore a computer must never make a business decision." —IBM document from 1970s
Unless not making a decision would, "through inaction, allow a human being to come to harm". — Asimov, "Runaround", 1942.
The slope between insignificant and significant actions is so enormously long and shallow that it isn't going to impede machine decision-making unless some widely accepted red line is defined and institutionalized. Quickly.
If we can't agree that super-scaled predatory business models are unacceptable (unpermissioned or dark-pattern-"permissioned" surveillance, corporate sharing or selling of our information, algorithmic feed and ad manipulation based on that surveillance or on other conflicts of interest, knowledge appropriation without permission or compensation, predatory financial practices, and so on), and can't apply oversight with practical means of making violations reliably and deeply unprofitable on a risk-adjusted basis, or criminally prosecuted, then the decision-making of machines isn't going to be impeded even when it is obviously causing great but not-yet-illegal harm.
After all, the umbrella problem is scalable harm with unchecked incentives. Ethics and accountability overall, not machines in particular.
Scaling of harm (even when the negative externalities from individual incidents seem small) has to be the red line; that is, the unethical behavior to target.
As a community, I think most of us are aware that the big automated bureaucracies that make up tech giant aggregators' "customer service" are already making life changing decisions, too often capriciously, and often with little recourse for those unfairly harmed.
I have personally been afflicted by that problem.
We are going to need both effective brakes, and reverse gear, to prevent this being an uncontrolled descent.
(Not being cynical. But if something is to be done, we need to address the actual scale and state of the problem. There isn't time left in human history for more slow, incremental whack-a-mole efforts, or unrewarded attempts at corporate shaming. Those have failed us.)
In the hyper-scaled world, ethics mean nothing if not backed up by economics.
Why would I want to take advice about keeping humans in the loop from someone who let an LLM write 90% of their blog post?
I don't like reading AI text because I feel each word matters a lot less; still, the message the author is conveying can be preserved. I read an article like this for the quality of the message, not the craftsmanship of the medium.
This is the new world we live in. Writers use AI to balloon a two-paragraph thought into a full article; readers then use AI to compress the article back into something akin to two easily digestible paragraphs. Everyone is happy. Example:
Key points from "The Human in the Loop":
- The author pushes back on the idea that AI has made software developers obsolete, arguing instead that it has shifted where human effort matters.
- AI is increasingly good at producing code quickly, but that doesn’t remove the need for human oversight—especially for correctness, security, edge cases, and architectural fit.
- The “human in the loop” is not a temporary bottleneck but the accountable party who must understand, review, and take responsibility for what ships.
- Senior engineers’ most valuable skill has always been judgment, not typing speed—and AI makes that judgment even more critical.
- The author warns against blaming AI for bugs or bad outcomes; responsibility still lies with the human who approved the result.
- Software practices, team structures, and workflows need to evolve to emphasize review, verification, and intent over raw code production.
If the author didn't have the good taste and decency to edit the painfully obvious generated text, I just assume the message is low quality.
On what basis did you make this judgement? I found the article to be reasonable and not excessively padded.
But here's the thing. The LLM house writing style isn't just annoying, it's become unreadable through repeated exposure. This really gets to the heart of why human minds are starting to slide off it.
Not trying to be rude but your very short reply is hard to understand. "Unreadable", "starting to slide off", I honestly don't know what you're saying here.
Pretty sure they are mocking LLM output by making their own comment look as if it came from an LLM. It's sarcasm.
Other people might point to more specific tells, but instead I'll reference https://zanlib.dev/blog/reliable-signals-of-honest-intent/, which says that you can tell mainly because of the subconscious uncanny valley effect, and then you start noticing the tells afterwards.
Here, there's a handful of specific phrases or patterns, but mostly it's just that the writing feels very AI-written (or at least AI-edited). It's all just slightly too perfect, like someone trying to write the perfect LinkedIn post but being slightly too good at it? It's purely gut feeling, but I don't think that means it's wrong (although equally it doesn't mean it's proven beyond reasonable doubt either, so I'm not going to start any witch hunts about it).
The human pressed the red button. :)
> Mike asks: "If an idiot like me can clone a [Bloomberg terminal] that costs $30k per month in two hours, what even is software development?"
So that’s the baseline intellectual rigor we’re dealing with here.
These posts claiming "we will review the output," and that software engineers will still need to apply their expertise and wisdom to generated outputs, never seem to think this all the way through. Those who write such articles might indeed have enough experience and deep knowledge to evaluate AI outputs. But what of subsequent generations of engineers? What about the forthcoming wave of people who may never attain the required deep knowledge, because they've been dependent on these generation tools throughout their own education?
The structures of our culture, combined with what generative AI necessarily is, mean that expertise will fade generationally. I don't see a way around that, and I see almost no discussion of how to ameliorate the issue.
The solution is to find ways of using these tools that save us huge amounts of time but still force us to think and document our decisions. Then, teach those methods in school.
Self-directed, individual use of LLMs for generating code is not the way forward for industrial software production.
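To make "forces us to think and document our decisions" concrete, here's one purely hypothetical sketch (the script name, the "Decision:" convention, and the hook wiring are all made up for illustration, not something anyone above proposed): have version control refuse changes that arrive without a human-written rationale.

```ts
// check-decision.ts -- hypothetical commit-msg hook. Wire it up via a shell
// wrapper or a tool like husky that runs `node check-decision.js "$1"`.
// It rejects any commit whose message lacks a human-written "Decision:" line,
// so generated code can land quickly, but never without a recorded rationale.
import { readFileSync } from 'node:fs';

const msgFile = process.argv[2]; // git passes the commit-message file path to the hook
if (!msgFile) {
  console.error('usage: node check-decision.js <commit-msg-file>');
  process.exit(1);
}

const msg = readFileSync(msgFile, 'utf8');

// Assumed team convention: every commit carries a "Decision:" line explaining
// why this approach was chosen and what alternatives were considered.
if (!/^Decision:\s*\S+/m.test(msg)) {
  console.error('Commit rejected: add a "Decision:" line explaining why, not just what.');
  process.exit(1);
}
```

The point isn't this particular mechanism; it's that the documentation step is enforced by the workflow rather than left to individual discipline.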
Personally, I'm not as worried about this as an issue going forward.
When you look at technical people who grew up with the imperfect user interfaces and computers of the '80s, '90s, and '00s, before the rise of smartphones and tablets, you see people with a naturally acquired knack for troubleshooting and for organically building an understanding of computers, despite (in most cases) never being grounded in the low-level mathematical underpinnings of computer science.
IMO, the imperfections of modern AI are likely going to lead to a new generation of troubleshooters who will organically be forced to accumulate real understanding from a top-down perspective in much the same vein. It's just going to cost us all an absurd amount of electricity.
Another thing I keep thinking about is that review is harder than writing code. A casual LGTM is suitable for peer review, but applying deep context and checking for logic issues requires more thought. When I write code, I usually learn something about software or the context. "Writing is thinking" in a way that reading isn't.
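To illustrate, here is a made-up snippet (not from the article or any real codebase, and the business rule in the comment is an assumption): it's the kind of thing a casual LGTM sails past, because the code reads cleanly and only domain context tells you the comparison is wrong.

```ts
// Looks reasonable at a glance; a reviewer skimming for syntax will approve it.
function isDiscountActive(expiresAt: Date, now: Date = new Date()): boolean {
  // Assumed business rule: a discount that expires *at* expiresAt must no
  // longer apply at that exact instant. Under that rule this should be `<`,
  // not `<=` -- a defect you only catch by thinking about the rule, not the code.
  return now.getTime() <= expiresAt.getTime();
}
```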
This is why you aren't seeing GenAI used more in law firms. Lawyers can be disbarred over erroneous hallucinations, so they're all extremely cautious about using these tools. Imagine if there were that kind of accountability in our profession.
>software engineers will still need to apply their expertise and wisdom to generated outputs
And in my experience they don't really do that. They trust that it'll be good enough.
I don't understand how this is a new or unique problem. Regardless of when or where (or if!) my coworkers got their degrees, before or after access to AI tools, some of them are intellectually curious. Some do their job well. Some are in over their head & are improving. Some are probably better suited for other lines of work. It's always been an organizational function to identify & retain folks who are willing and able to grow into the experience and knowledge required for the role they currently have and future roles where they may be needed.
Academically, this is a non-factor as well. You still learned your multiplication tables even though calculators existed, right?
Agreed. This is a moral panic because people are learning and adapting in new ways.
Socrates (as Plato recounts in the Phaedrus) blamed writing for intellectual laziness among the youth, compared to the old methods of memorization.
The invention of calculators did not cause society to collapse.
Smart and industrious people will focus energy on economically important problems. That has always been the case.
Everything will work out just fine.
There will always be a human in the loop; the question is at what level. Only a short while ago (within the last couple of months, in my case) that level went from having to work function by function to what these posts describe (still not to the level the "Death of SWE" article claims). It is hard for me to imagine that LLMs can go 1 level higher anytime soon. Progress is not guaranteed. Regardless of whether it improves or not, I think it is best to assume that it won't and to build using that assumption. The shortcomings and failings of the current (new) system are what end up creating the new patterns for work and for the industry. I think that is the more interesting conversation: not how quickly we can ship code, but what this means for organizations, which skills become the most valuable, and what actually rises to the top.
> LLMs can go 1 level higher anytime soon. Progress is not guaranteed.
I tend to agree, but I do think we'll get there in the next 5-10 years.
> When I fix a security vulnerability, I'm not just checking if the tests pass. I'm asking: does this actually close the attack vector?
If you have to ask, then you'd be better off putting that effort into fixing the test coverage.
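One way that effort can look in practice, as a rough sketch with hypothetical names and paths (nothing here comes from the article): encode the attack vector itself as a regression test, so "does this actually close the attack vector?" gets re-asked on every run.

```ts
// upload-path.test.ts -- run with `node --test` (Node 18+); names are illustrative.
import { test } from 'node:test';
import assert from 'node:assert/strict';
import path from 'node:path';

// Assumed application helper: resolve a user-supplied filename inside a fixed
// uploads directory and reject anything that would escape it.
function resolveUpload(baseDir: string, userPath: string): string {
  const root = path.resolve(baseDir);
  const resolved = path.resolve(root, userPath);
  if (resolved !== root && !resolved.startsWith(root + path.sep)) {
    throw new Error('path traversal rejected');
  }
  return resolved;
}

test('known path traversal vectors stay closed', () => {
  for (const attack of ['../etc/passwd', 'a/../../b', '/etc/passwd']) {
    assert.throws(() => resolveUpload('/srv/uploads', attack));
  }
});

test('ordinary filenames still resolve', () => {
  assert.equal(resolveUpload('/srv/uploads', 'report.pdf'), '/srv/uploads/report.pdf');
});
```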
AI derived piece arguing with another AI derived piece about AI. It's slop all the way down.
> My worry isn't that software development is dying. It's that we'll build a culture where "I didn't review it, the AI wrote it" becomes an acceptable excuse.
I try to review 100% of my dependencies. My criticism of the npm ecosystem is they say "I didn't review it, someone else wrote it" and everyone thinks that is an acceptable excuse.
What is the Bloomberg terminal thing? Did someone vibecode a competitor?