I spent some time thinking through what "building software" actually means in the age of AI: what's genuinely changing, what isn't, and why the discourse keeps getting stuck on the wrong question. The article summarizes my thoughts.
AI making programming accessible to more people is a genuine good. The apps are real, they run, and the excitement is deserved. But everyone is asking "will AI replace software engineers?" when the more interesting question is hiding in plain sight: what happens when there is more software than people who know how to operationalize it?
I tried to think this through honestly rather than just react to the hype. Curious what people here think.
I think this is slightly romanticizing the idea that humans “hold the territory” in their heads.
In most real systems no single engineer actually understands the full territory either. People rely on partial mental models, docs, logs, and tribal knowledge. In that sense, LLMs operating on maps might not be that different from how teams already work.
That's a fair point, and I'd actually agree with the premise. I work in an environment where the scale makes it impossible for any one person to understand the whole picture, so it's true no single engineer holds the full territory.
But I think the distinction isn't about completeness of knowledge. It's about the feedback loop. Engineers hold partial mental models, but those models are constantly being corrected by reality. You get paged at 3am, you see traffic behave in ways the docs don't describe, you debug something and discover the system doesn't work the way anyone thought it did. Tribal knowledge is actually a good example of this. It exists precisely because someone experienced something that was never captured anywhere. LLMs can't acquire that because they don't experience the system IMO.
But I'm not sure it's entirely inaccessible to models either. If you feed them enough signals (logs, incidents, metrics, past debugging threads), they might approximate that feedback indirectly. Not the same as being paged at 3am, but maybe closer than we assume.
But your distinction is really good. The feedback loop is probably the key difference.
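The "feed them signals" idea above can be made a bit more concrete. Here's a minimal sketch of what approximating that feedback indirectly might look like: collect the artifacts an on-call engineer would actually see and fold them into a model's context. Everything here (the `Signal` type, `build_feedback_context`) is a hypothetical illustration, not a real API; a production pipeline would need retrieval and summarization rather than naive concatenation.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    kind: str      # e.g. "log", "incident", "metric", "debug_thread"
    content: str   # the raw text an engineer would have seen

def build_feedback_context(signals, max_chars=2000):
    """Concatenate recent operational signals into a bounded string
    that could be prepended to a model prompt as indirect 'experience'."""
    lines = [f"[{s.kind}] {s.content}" for s in signals]
    # Truncate rather than drop: crude, but keeps the context bounded.
    return "\n".join(lines)[:max_chars]

signals = [
    Signal("incident", "03:12 page: p99 latency 4x baseline on checkout"),
    Signal("log", "retry storm from service-a after deploy 1842"),
    Signal("metric", "queue depth grew 10x before any alert fired"),
]
print(build_feedback_context(signals))
```

The interesting design question isn't the concatenation, it's selection: which incidents and debugging threads are relevant to the current task, which is exactly the tribal-knowledge problem restated.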