Once you have those real-world integrations, no spec is ever complete enough. You can have a whole industry agree on decades-old RFCs and still need "but actually" details diffused through every implementation that actually gets used. There is always more detail in implementations than can be specified by anything less than the code itself, and it's multiplied by the set of all possibly deployed versions, and again by possible configurations.
Fun fact: The most costly incident I ever "caused" was because I fixed a bug in an API to make it match the spec. The affected client escalated and forced me to restore the defect, even though it also affects many other clients.
Working in almost any mature space requires you to be extremely conservative in ways that have always proven difficult to specify.
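To make that concrete, here is a minimal, entirely hypothetical sketch of the kind of divergence described above; the field name, flag, and formats are invented for illustration, not taken from any real API.

    # Hypothetical sketch: the spec says one thing, but the long-deployed
    # behavior is what clients actually parse. All names are invented.

    from datetime import datetime, timezone

    def created_at_field(ts: float, legacy_compat: bool = True):
        """Render the imaginary 'created_at' response field.

        The (imaginary) spec mandates an ISO-8601 string, but the implementation
        shipped for years returning raw epoch seconds, and clients parse it that
        way. Making it match the spec is the breaking change, not the bug.
        """
        if legacy_compat:
            return ts  # the de facto contract: epoch seconds, whatever the spec says
        return datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()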
Read something similar a while back.
Code isn't assembly; code is what it takes to express English sufficiently, in a way that is unambiguous.
You can't translate plain English into unambiguous code. Period. Not even engineers can. The only way this translation happens is that humans are good enough at communicating to get to the point where code can be produced, and then iterated on with enough context that the thing doesn't fall over after a week.
The only thing stopping AI doing this is the right communication models and enough context.
As an engineer by trade, I'm quickly realizing how fucked we are right now.
I give us maybe 2-3 years until the last engineering roles are posted. Maybe longer depending on how many businesses survive the crash while refusing to automate their workflows.
Most of us are going to start transferring skills to more of a prompt programming and coordinator role.
Until those roles go too.
>Most of us are going to start transferring skills to more of a prompt programming and coordinator role.
Human code reviews and in-depth understanding of architecture remain important even for codebases with comprehensive test suites because treating a computer program solely as a black box does not allow one to reason precisely about the program when it encounters general inputs. Relying solely on black box testing simply offloads more QA to production users.
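As a toy illustration of that point (all names and inputs invented), here is a function whose black-box tests pass while the failure mode is only visible by reading the code:

    # A minimal sketch of why black-box tests alone can pass while the program
    # still misbehaves on inputs nobody thought to try.

    def average(xs: list) -> float:
        # Looks fine from the outside; the bug is only visible in the code:
        # an empty list divides by zero, which no "happy path" test exercises.
        return sum(xs) / len(xs)

    def test_average_black_box():
        assert average([2.0, 4.0]) == 3.0        # passes
        assert average([1.0, 1.0, 1.0]) == 1.0   # passes
        # average([]) raises ZeroDivisionError, found in production, not here

    if __name__ == "__main__":
        test_average_black_box()
        print("black-box suite passed; the empty-list case is still lurking")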
In a future world where humans rely entirely on machines to interpret their ambiguous instructions, who does the checking and what do the checks involve?
I'm still of the opinion that programming should be enjoyed more as a hobby/skill now. Just like a builder cannot build an entire house by himself, programmers now cannot build without an orchestrator.
Nonetheless, you can build a cabinet just for funsies and to feel like you've accomplished something.
>The AI bullish case has a historical analogy at its core: when high-level languages replaced assembly, [1] assembly programmers resisted.
This is a very common myth: there was never a period where everyone programmed in assembly and then high-level languages were introduced.
Pretty much since the first CPUs were released, there were already programming languages for them.
Alan Turing went from hooking up physical wires to writing Autocode for the Ferranti in less than a decade.
And it's not just that the period of assembly programming was brief, spanning almost a decade after WW2: these guys were writing in symbolic mathematical notation before they even sat down to draw up the physical schematics of their machines.
So no, there wasn't a period where we all programmed in Assembly and then we discovered programming languages and we saw that it was good.
The footnote on their sentence about assembly programmers: “I mean, I dunno. I'm not a historian. This is a vibes-level historical reconstruction. I would be curious if this is way off base though”
So, yeah. They just made it up because it felt right. (Which, I guess is what one would expect from AI related stuff these days.)
You’re definitely right though: it doesn’t take a deep dive into the history of computing and programming languages to find higher-than-assembly level languages emerging at the very dawn of computing.
Sometimes, in the interest of having something rather than nothing, I have to press publish. This entails getting things wrong, which is regrettable.
I will say that I'm trying to steelman the code-as-assembly POV, and I don't think the exact historical analogy is critical to it being right or wrong. The main thing is that "we've seen the level of abstraction go up before, and people complained, but this is no different" is the crux. In that sense, a folk history is fine as long as the pattern is real.