> "the cultural zeitgeist has centered on the idea of runaway progress: artificial intelligence recursively improving itself until we become subservient to it. far more plausible .. a world where the curve flattens and our rate of progress slows to a crawl."
> "our mission should remain the same: accelerate to the maximum extent possible."
I think you need to justify why hurrying to be AI servants should be our mission :-|
You're right, I should have tied that back to the opening.
The acceleration we've experienced has allowed us to "outrun" our problems. In earlier generations, that meant famine or disease. Today, it might be climate change. Tomorrow, it'll be something else entirely.
Technological progress has generally been the reason humanity should be optimistic against challenges: it gives us ever improving tools to solve our hardest problems faster than we succumb to them. Without it, that optimism becomes much harder to justify.
Even if there is a plateau we can't cross, if we believe we drive more benefit from technology than the problems it creates, it makes sense to extract as much progress as we can from the physics we have.
Re building a bigger colliders - it occurs to me that it would be difficult to find somewhere to put a 10,000 km collider, but that is roughly the circumference of the moon (which is already a vacuum, so no need for a pressure vessel), or build it in free space if that is not big enough.
"artificial intelligence recursively improving itself until we become subservient to it."
Hmm, the cultural zeitgeist is about LLMs.
Are LLMs improving anything (in the sense of optimization)? I think LLMs are enabling us to automate tasks which are tedious and don't really add value (eg, compliance tasks). And they are helping us create art, content, and ads. I'm not aware of LLMs optimizing systems, let alone themselves. But I'm not very tuned in to all the applications.
Most of modern history has been defined by our ability to outpace our problems through technological acceleration. This essay argues that, rather than an uncontrollable AI takeoff, we may be approaching physical, economic, and regulatory limits — a long plateau where progress slows.
I was thinking about this the other day; and realized this would probably end the tech industry as we know it.
No new unicorns, no new kernel designs, no need for new engineered software that often. With the industry in stasis, the industry is finally able to be regulated to the same degree as plumbing, haircutting, or other licensed fields. An industry no longer any more exceptional than any other. The gold rush is over, the boring process of subjecting it to the will of the people and politicians begins.
I think we're also getting to the limits, across the board, soon. Consider AWS S3, infrastructure for society. 2021 - 100 trillion objects. 2025 - 350 trillion objects. Objects that need new hard drives every 3-5 years to store, replenished on a constant cycle. How soon until we reach the point even a minor prolonged disruption to hard drives, or GPUs, or DRAM, forces hard choices?
> Objects that need new hard drives every 3-5 years to store, replenished on a constant cycle
The replenishment of these hard drives is baked into the cost of S3. If there is a major disruption of hard drive supply then S3 prices will definitely rise, and enterprises that currently store lots of garbage that they don't need, will be priced out of storing this data on hard drives, into Glacier or at worst full deletion of old junk data. That's not necessarily a bad thing, in my opinion.
There is lots of junk data in S3 that should probably be in cold storage rather than spinning metal, if merely for environmental reasons.
I think there is still a lot we can do within the current paradigm - most software, especially for enterprise, is still quite bad. And that will continue to drive employment and growth.
But w may one day have to contend with expecting fewer "new" paradigms and the ultra rapid industry growth that accompanies them (dotcom, SaaS, ML, etc). Will "software eating the world" be enough to counteract this long term? Hard to say
If the clock speed improvements had happened over a much longer stretch of time then we probably would have seen much earlier multi-core capable tooling. We are still mostly optimized for single threaded applications, extracting the maximum from a CPU is really hard work. So also consider the backlog in tooling, there is so much work that still needs to be done there.
The entire scope of human storage or memory is not a constraint, but comes bottlenecked before constraints at arbitrary symbols, images and metaphors. The data even though it's analog, is still bottlenecked. Nothing specific even embedded or parameterized geometrically solves the bottleneck (AI can't do it, that's it's Achilles Heel, as it is ours). Think of language as a virus or parasite here, add symbols and logic, all a rendered from the arbitrary. How come nobody talks about this? We're mediocre thinkers using largely folk science in lieu of very evidently absent direct perception. In other words: we bought the sensation of language without ever verifying it, or correlating it to brain events or real events. The slow down to extinction was inevitable across all these mediums, technologies, preproductions, databases. Nothing solves it.
TFA makes a compelling contrarian point we probably need to consider more than we do. As someone who's been in high tech for 40 years, progress in fundamental enabling drivers like semi scaling has slowed significantly since 2010 and the industry's projections don't foresee a return to the extraordinary rates we enjoyed from ~1970 to ~2010.
Yes, both software improvements and tailored hardware will continue to pay dividends (huge gains from TPUs, chips built specifically for inference, etc, even if the underlying process node is unchanged).
Slowing transistor scaling just gives us one less domain through which to depend on for improvements - the others are all still valid, and will probably be something we come to invest more effort into.
More over there are probably lots of avenues we haven't even attempted because when the node scales down we get quadratic benefits (right?).
Where as tailored hardware, software improvements are unlikely to continue yielding such payoff again and again.
So the argument that cost of improvements will go up is not wrong. And maybe improvements will be more linear than exponential.
We also don't know that current semi tech stack is the best. But it's fair to argue that the cost of moving off a local optimum to a completely different technology stack would be wild.
The plateau you speak of is just inevitable human evolution. There will come a point where we will start to edit our genome and see what's capable of. Before we get there, though -
In time, AI and VR/AR will converge to allow us to evolve new ways to educate/entertain new generations, to distill knowledge in a faster and much more reliable way. We will experience societal upheavals before the "plateau". Our current world order will probably experience major changes.
AGI will probably be a long, long way ahead of us -- in its current state, LLMs are not going to spontaneously develop sentience. There is a massive scale (power, resources, space, etc) issue to contend with.
I doubt that editing our genome will ever accomplish much. Maybe some minor improvements here and there, primarily with reducing the incidence of certain diseases. But there's no free lunch in biology and every advantage comes with disadvantages. For example, some people want to be taller but they don't realize that increases the risk of back problems later in life.
Most people don't even get close to their genetic limits anyway. We're capable of much more than we realize but we fall into mental traps and fool ourselves into thinking we're incapable. Some guy just set a world record by holding his breath for 29 minutes: a few years ago most people would have said that's impossible.
Explain to me how the, "Technological Singularity" isn't just Christian eschatology for dorks? As a Neon Genesis Evangelion fan this really gets me going but that's kind of why I ask.
I tend to dislike the term AGI/ASI, since it's become a marketing label more than a coherent concept (which everyone will define differently)
In this case I use "singularity", by which I mean it more abstractly: a hypothetical point where technological progress begins to accelerate recursively, with heavily reduced human intervention.
My point isn't theological or utopian, just that the physical limits of computation, energy, and scale make that kind of runaway acceleration far less likely IMO than many assume.
Thanks for the coherent response jayw_lead! I too dislike the term but I'm coming from the Noam Chomsky, "language models are a nothingburger in terms of applied linguistics" angle. As far as I can see this still seems like , "secular post-Enlightenment science culture dreams up the Millennium" and given Peter Thiel's recent commentary about the, "anti-christ" I'm terrified to the point of thinking it would be wise to buy an assault rifle.
These aren't very convincing arguments; why don't aircraft factories of the 1930s or the ocean-going steamship dockyards of the 1830s count under "massive upfront investment and large and complex" and therefore predict progress stopping ages ago?
Asimov's story The Last Question ends with the Multivac machine having collected all the data in the universe and still not answering the question "how can entropy be reversed?", so it spends an immeasurable amount of time processing the data in all possible ways. The article argues that we might not get to "the singularity" because progress will stop, but even if we can't make better transistors, we can make more of them, and we can spend longer processing data with them. If what we're missing in an AGI is architectural it might only need insight and distributed computing, not future computers.
> "We built our optimism during a rare century when progress got cheaper as it got faster. That era may be over."
This effect of progress building on progress goes back a hundred years before that, and a hundred years before that. The first practical steam engine was weak, inefficient, and coal-hungry in the early 1700s and what made it 'practical' is that it pumped water out of coal mines. Coalmine owners could get more coal by buying a steam engine; the engine made its fuel cheaper and easier and more coal to sell. Probably this pattern goes back a lot before that because everything builds on everything, but this was a key industrial revolution turning point a long time before the article's claim. The era may be another two hundred years away from being over.
> "There are still areas of promise for step-function improvements: fusion, quantum computing, high-temperature superconductors. But scientific progress is not guaranteed to continue."
Opening with the recursively improving AGI and then having a section of "areas of promise for step-function improvements" and not mentioning any chance of an AGI breakthrough? Neuralink style cyborg interfaces, biological, genetic, health, anti-ageing, new materials or meta-materials, nanotechnology, distributed computing, vibe coding, no possible areas for step changes in any of those?
> "But the burden of proof lies with those claims. Based on what we know today, a plateau is inevitable. Within that plateau, we can only speculate:"
Based on what we know today there isn't "a" plateau, there are many, and they give way to newer things. Steam power plateaued, propellor aircraft plateaued, sailboat speed and size plateaued, cog and gear computer speed plateaued, then electro-mechanical computer speed, then valve computer speed, then discrete logic speed, then integrated circuit speed, then single core, then what, CPUs, then GPUs, then TPUs...
> "Are therapies for broad set of complex autoimmune diseases ahead of the plateau? Probably."
How many autoimmune diseases have been cured, ever? Where does this "Probably" come from - the burden of proof very much lies with that probably.
> "Will we have Earth-based space elevators before the plateau? Probably not."
We don't have a rope strong enough to hang 36km or a way to make one or a way to lift that much mass into geostationary orbit in one go. But if we could make a cable thicker in space, thinner at the ground, launch it in pieces and join it together, we might not be that far away from plausible space elevator. Like if Musk got a bee in his bonnet and opened his wallet wide, I wouldn't bet against SpaceX having a basic one by 2040. Or 2035. I probably would bet against 2028.
> "massive upfront investment and large and complex" and therefore predict progress stopping ages ago?
Regulatory and economic barriers are probably the easiest to overcome. But they are an obstacle. All it takes is for public sentiment to turn a bit more hostile towards technology, and progress can stall indefinitely.
> Opening with the recursively improving AGI and then having a section of "areas of promise for step-function improvements" and not mentioning any chance of an AGI breakthrough?
The premise of the article is that the hardware that AGI (or really ASI) would depend on may itself reach diminishing returns. What if progress is severely hampered by the need for one or two more process improvements that we simply can’t eke out?
Even if the algorithms exist, the underlying compute and energy requirements might hit hard ceilings before we reach "recursive improvement."
> How many autoimmune diseases have been cured, ever? Where does this “Probably” come from — the burden of proof very much lies with that probably.
The point isn't that we're there now, or even close. It’s that we likely don’t need a step-function technological breakthrough to get there.
With incremental improvements in CAR-T therapies — particularly those targeting B cells — Lupus is probably a prime candidate for an autoimmune disease that could feasibly be functionally "cured" within the next decade or so (using extensions of existing technology, not new physics).
In fact, one of the strongest counterpoints to the article's thesis is molecular biology, which has a remarkable amount of momentum and a lot of room left to run.
> We might not be that far away from a plausible space elevator.
I haven't seen convincing arguments that current materials can get us there, at least not on Earth. But the moon seems a lot more plausible due to lower gravity and virtually no atmosphere.
But I'd be very happy to be wrong about this.
> Based on what we know today, there isn’t “a” plateau — there are many, and they give way to newer things.
True. But the point is that when a plateau is governed by physical limits (for example, transistor size), further progress depends on a step-function improvement — and there's no guarantee that such an improvement exists.
Steam and coal weren't limited by physics. Which is the same reason why I didn't mention lithium batteries in the article (surely we can move beyond lithium to other chemistries, so the ceiling on what lithium can deliver isn't relevant). But for fields bounded by fundamental constants or quantum effects, there may not necessarily be a successor.
The paradigms are in initial conditions. This end state of binary/arbitrary/symbol/stat units are largely irrelevant. They're mindless proxies. They bear no relationship to the initial conditions. They're imagination-emasculated, they show no feel for the reality of real information signaling, merely an acquiescence to leadership expedience in anything arbitrary (like tokens).
Try to see the binary as an impassable ceiling that turns us into craven, greedy, status junkie apes. Mediocrity gone wild. That's the binary, and it's seeped already into language, which is symbolic arbitrariness. We don't know how to confront this because we've never confronted it collectively. There was never a front page image on Time Magazine that stated: Are we arbitrary?
Yet we are, we're the poster child for the extinct and doesn't know it sentient.
Each stage of our steady drive to emasculate signaling in favor of adding value displays this openly. Each stage of taking action and expression and rendering them as symbol, then binary, then as counted token, into pretend intelligence showcases a lunatic drive to double down on folk science using logic as a beard in defiance of scientific reason.
> "the cultural zeitgeist has centered on the idea of runaway progress: artificial intelligence recursively improving itself until we become subservient to it. far more plausible .. a world where the curve flattens and our rate of progress slows to a crawl."
> "our mission should remain the same: accelerate to the maximum extent possible."
I think you need to justify why hurrying to be AI servants should be our mission :-|
You're right, I should have tied that back to the opening.
The acceleration we've experienced has allowed us to "outrun" our problems. In earlier generations, that meant famine or disease. Today, it might be climate change. Tomorrow, it'll be something else entirely.
Technological progress has generally been the reason humanity can stay optimistic in the face of its challenges: it gives us ever-improving tools to solve our hardest problems faster than we succumb to them. Without it, that optimism becomes much harder to justify.
Even if there is a plateau we can't cross, if we believe we derive more benefit from technology than harm from the problems it creates, it makes sense to extract as much progress as we can from the physics we have.
AI training helps us fight climate change? If anything, so far the opposite has been true.
Re building a bigger collider: it occurs to me that it would be difficult to find somewhere to put a 10,000 km collider, but that is roughly the circumference of the moon (which is already in vacuum, so no need for a pressure vessel), or we could build it in free space if that is not big enough.
"artificial intelligence recursively improving itself until we become subservient to it."
Hmm, the cultural zeitgeist is about LLMs.
Are LLMs improving anything (in the sense of optimization)? I think LLMs are enabling us to automate tasks which are tedious and don't really add value (e.g., compliance tasks). And they are helping us create art, content, and ads. I'm not aware of LLMs optimizing systems, let alone themselves. But I'm not very tuned in to all the applications.
Most of modern history has been defined by our ability to outpace our problems through technological acceleration. This essay argues that, rather than an uncontrollable AI takeoff, we may be approaching physical, economic, and regulatory limits — a long plateau where progress slows.
I was thinking about this the other day, and realized this would probably end the tech industry as we know it.
No new unicorns, no new kernel designs, no need for new engineered software as often. With the industry in stasis, it can finally be regulated to the same degree as plumbing, haircutting, or other licensed fields: an industry no longer any more exceptional than any other. The gold rush is over, and the boring process of subjecting it to the will of the people and politicians begins.
I think we're also getting to the limits, across the board, soon. Consider AWS S3, infrastructure for society. 2021 - 100 trillion objects. 2025 - 350 trillion objects. Objects that need new hard drives every 3-5 years to store, replenished on a constant cycle. How soon until we reach the point where even a minor, prolonged disruption to hard drives, or GPUs, or DRAM, forces hard choices?
> Objects that need new hard drives every 3-5 years to store, replenished on a constant cycle
The replenishment of these hard drives is baked into the cost of S3. If there is a major disruption of hard drive supply then S3 prices will certainly rise, and enterprises that currently store lots of garbage they don't need will be priced off hard drives and into Glacier, or at worst into outright deletion of old junk data. That's not necessarily a bad thing, in my opinion.
There is lots of junk data in S3 that should probably be in cold storage rather than on spinning metal, if only for environmental reasons.
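For what it's worth, pushing old objects into cold storage doesn't require any new technology; it's a single lifecycle rule on the bucket. A minimal sketch with boto3 (the bucket name, prefix, and thresholds are hypothetical, purely for illustration):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket, prefix, and thresholds -- illustrative only.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-junk-data",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-old-objects",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                # After 90 days, move objects off spinning disks into Glacier.
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                # After a year, delete them outright.
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```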
The golden age of software may indeed come to an end some day.
But I don't think the market is saturated just yet.
Even if we stop minting unicorns and SWE salaries come down, that will also open new opportunities.
I think there is still a lot we can do within the current paradigm - most software, especially for enterprise, is still quite bad. And that will continue to drive employment and growth.
But we may one day have to contend with expecting fewer "new" paradigms and the ultra-rapid industry growth that accompanies them (dotcom, SaaS, ML, etc). Will "software eating the world" be enough to counteract this long term? Hard to say.
If the clock-speed improvements had happened over a much longer stretch of time, we probably would have seen multi-core-capable tooling much earlier. We are still mostly optimized for single-threaded applications; extracting the maximum from a CPU is really hard work. So also consider the backlog in tooling: there is so much work that still needs to be done there.
And so much of our tech stack runs on old abstractions.
Do you even need an MMU if you have memory-safe languages?
Honestly, maybe semi scaling is a problem for AI, but for most other software today, the problems are bugs, bloat, and latency.
MMUs allow for overcommit and make it possible to dynamically allocate memory to applications that need it.
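A rough way to see what overcommit buys you (a sketch, assuming a Linux box; the mapping size is arbitrary, and with the default overcommit heuristic a request wildly larger than RAM plus swap may be refused):

```python
import mmap
import resource

# Reserve far more anonymous memory than the process will ever touch.
# With an MMU and demand paging, the kernel only backs the pages we write to.
SIZE = 16 * 1024**3  # 16 GiB of virtual address space

buf = mmap.mmap(-1, SIZE)  # anonymous mapping, no file behind it
buf[0] = 1                 # touch a single page; only it gets physical memory

# ru_maxrss is reported in KiB on Linux.
rss_kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"mapped {SIZE >> 30} GiB virtual, resident set ~{rss_kib // 1024} MiB")
```

Without virtual memory, every one of those gigabytes would have to be physically present (or explicitly managed by the application) up front.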
This is all very locally constrained.
The entire scope of human storage or memory is not a constraint, but comes bottlenecked before constraints at arbitrary symbols, images and metaphors. The data, even though it's analog, is still bottlenecked. Nothing specific, even embedded or parameterized geometrically, solves the bottleneck (AI can't do it; that's its Achilles heel, as it is ours). Think of language as a virus or parasite here, add symbols and logic, all rendered from the arbitrary. How come nobody talks about this? We're mediocre thinkers using largely folk science in lieu of very evidently absent direct perception. In other words: we bought the sensation of language without ever verifying it, or correlating it to brain events or real events. The slowdown to extinction was inevitable across all these mediums, technologies, preproductions, databases. Nothing solves it.
TFA makes a compelling contrarian point we probably need to consider more than we do. As someone who's been in high tech for 40 years, I can say that progress in fundamental enabling drivers like semi scaling has slowed significantly since 2010, and the industry's projections don't foresee a return to the extraordinary rates we enjoyed from ~1970 to ~2010.
True, semi scaling is slowing down.
But our software could be orders of magnitude more efficient. Am I wrong?
Yes, both software improvements and tailored hardware will continue to pay dividends (huge gains from TPUs, chips built specifically for inference, etc, even if the underlying process node is unchanged).
Slowing transistor scaling just takes away one domain we could depend on for improvements; the others are all still valid, and we will probably come to invest more effort in them.
Moreover, there are probably lots of avenues we haven't even attempted, because when the node scales down we get quadratic benefits (right? a 0.7x linear shrink gives roughly 0.7^2 ≈ 0.5x the area per transistor, i.e. about 2x the density).
Whereas tailored hardware and software improvements are unlikely to keep yielding that kind of payoff again and again.
So the argument that the cost of improvements will go up is not wrong. And maybe improvements will be more linear than exponential.
We also don't know that the current semi tech stack is the best. But it's fair to argue that the cost of moving off a local optimum to a completely different technology stack would be wild.
The plateau you speak of is just inevitable human evolution. There will come a point where we will start to edit our genome and see what it's capable of. Before we get there, though -
In time, AI and VR/AR will converge to allow us to evolve new ways to educate/entertain new generations, to distill knowledge in a faster and much more reliable way. We will experience societal upheavals before the "plateau". Our current world order will probably experience major changes.
AGI will probably be a long, long way ahead of us -- in its current state, LLMs are not going to spontaneously develop sentience. There is a massive scale (power, resources, space, etc) issue to contend with.
I doubt that editing our genome will ever accomplish much. Maybe some minor improvements here and there, primarily with reducing the incidence of certain diseases. But there's no free lunch in biology and every advantage comes with disadvantages. For example, some people want to be taller but they don't realize that increases the risk of back problems later in life.
Most people don't even get close to their genetic limits anyway. We're capable of much more than we realize but we fall into mental traps and fool ourselves into thinking we're incapable. Some guy just set a world record by holding his breath for 29 minutes: a few years ago most people would have said that's impossible.
https://www.uow.edu.au/media/2025/the-science-behind-a-freed...
Explain to me how the "Technological Singularity" isn't just Christian eschatology for dorks. As a Neon Genesis Evangelion fan this really gets me going, but that's kind of why I ask.
I tend to dislike the term AGI/ASI, since it's become a marketing label more than a coherent concept (which everyone will define differently).
In this case I use "singularity" more abstractly: a hypothetical point where technological progress begins to accelerate recursively, with heavily reduced human intervention.
My point isn't theological or utopian, just that the physical limits of computation, energy, and scale make that kind of runaway acceleration far less likely IMO than many assume.
Thanks for the coherent response jayw_lead! I too dislike the term, but I'm coming from the Noam Chomsky "language models are a nothingburger in terms of applied linguistics" angle. As far as I can see this still seems like "secular post-Enlightenment science culture dreams up the Millennium", and given Peter Thiel's recent commentary about the "anti-christ", I'm terrified to the point of thinking it would be wise to buy an assault rifle.
https://wiki.evageeks.org/S%C2%B2_Engine
These aren't very convincing arguments; why don't aircraft factories of the 1930s or the ocean-going steamship dockyards of the 1830s count under "massive upfront investment and large and complex" and therefore predict progress stopping ages ago?
Asimov's story The Last Question ends with the Multivac machine having collected all the data in the universe and still not answering the question "how can entropy be reversed?", so it spends an immeasurable amount of time processing the data in all possible ways. The article argues that we might not get to "the singularity" because progress will stop, but even if we can't make better transistors, we can make more of them, and we can spend longer processing data with them. If what we're missing in an AGI is architectural it might only need insight and distributed computing, not future computers.
> "We built our optimism during a rare century when progress got cheaper as it got faster. That era may be over."
This effect of progress building on progress goes back a hundred years before that, and a hundred years before that. The first practical steam engine of the early 1700s was weak, inefficient, and coal-hungry, and what made it 'practical' is that it pumped water out of coal mines. Coalmine owners could get more coal by buying a steam engine; the engine made its own fuel cheaper and easier to get, and gave them more coal to sell. This pattern probably goes back much further, because everything builds on everything, but this was a key industrial-revolution turning point long before the article's claim. The era may be another two hundred years away from being over.
> "There are still areas of promise for step-function improvements: fusion, quantum computing, high-temperature superconductors. But scientific progress is not guaranteed to continue."
Opening with the recursively improving AGI and then having a section of "areas of promise for step-function improvements" and not mentioning any chance of an AGI breakthrough? Neuralink style cyborg interfaces, biological, genetic, health, anti-ageing, new materials or meta-materials, nanotechnology, distributed computing, vibe coding, no possible areas for step changes in any of those?
> "But the burden of proof lies with those claims. Based on what we know today, a plateau is inevitable. Within that plateau, we can only speculate:"
Based on what we know today there isn't "a" plateau, there are many, and they give way to newer things. Steam power plateaued, propellor aircraft plateaued, sailboat speed and size plateaued, cog and gear computer speed plateaued, then electro-mechanical computer speed, then valve computer speed, then discrete logic speed, then integrated circuit speed, then single core, then what, CPUs, then GPUs, then TPUs...
> "Are therapies for broad set of complex autoimmune diseases ahead of the plateau? Probably."
How many autoimmune diseases have been cured, ever? Where does this "Probably" come from - the burden of proof very much lies with that probably.
> "Will we have Earth-based space elevators before the plateau? Probably not."
We don't have a rope strong enough to hang 36,000 km, or a way to make one, or a way to lift that much mass into geostationary orbit in one go. But if we could make a cable thicker in space and thinner at the ground, launch it in pieces and join it together, we might not be that far away from a plausible space elevator. Like if Musk got a bee in his bonnet and opened his wallet wide, I wouldn't bet against SpaceX having a basic one by 2040. Or 2035. I probably would bet against 2028.
> "massive upfront investment and large and complex" and therefore predict progress stopping ages ago?
Regulatory and economic barriers are probably the easiest to overcome. But they are an obstacle. All it takes is for public sentiment to turn a bit more hostile towards technology, and progress can stall indefinitely.
> Opening with the recursively improving AGI and then having a section of "areas of promise for step-function improvements" and not mentioning any chance of an AGI breakthrough?
The premise of the article is that the hardware that AGI (or really ASI) would depend on may itself reach diminishing returns. What if progress is severely hampered by the need for one or two more process improvements that we simply can’t eke out?
Even if the algorithms exist, the underlying compute and energy requirements might hit hard ceilings before we reach "recursive improvement."
> How many autoimmune diseases have been cured, ever? Where does this “Probably” come from — the burden of proof very much lies with that probably.
The point isn't that we're there now, or even close. It’s that we likely don’t need a step-function technological breakthrough to get there.
With incremental improvements in CAR-T therapies — particularly those targeting B cells — Lupus is probably a prime candidate for an autoimmune disease that could feasibly be functionally "cured" within the next decade or so (using extensions of existing technology, not new physics).
In fact, one of the strongest counterpoints to the article's thesis is molecular biology, which has a remarkable amount of momentum and a lot of room left to run.
> We might not be that far away from a plausible space elevator.
I haven't seen convincing arguments that current materials can get us there, at least not on Earth. But the moon seems a lot more plausible due to lower gravity and virtually no atmosphere.
But I'd be very happy to be wrong about this.
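A back-of-envelope way to see the materials gap is the "breaking length" sigma/(rho*g): the longest untapered length of a material that can hang under its own weight at a constant 1 g. It ignores taper and the fall-off of gravity with altitude, and the material figures below are ballpark, so treat it as a feel for scale rather than a design calculation:

```python
# Breaking length sigma / (rho * g): how long an untapered cable of a material
# could hang under its own weight at a constant 1 g. Ballpark figures only.
g = 9.81  # m/s^2

materials = {
    # name: (tensile strength in Pa, density in kg/m^3) -- rough, illustrative values
    "high-strength steel":           (2e9,   7850),
    "Kevlar":                        (3.6e9, 1440),
    "carbon nanotube (lab samples)": (50e9,  1300),
}

for name, (sigma, rho) in materials.items():
    breaking_length_km = sigma / (rho * g) / 1000
    print(f"{name:>30}: ~{breaking_length_km:,.0f} km")

# Geostationary altitude is ~36,000 km, so even before worrying about taper,
# only something in nanotube territory is in the right neighborhood.
```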
> Based on what we know today, there isn’t “a” plateau — there are many, and they give way to newer things.
True. But the point is that when a plateau is governed by physical limits (for example, transistor size), further progress depends on a step-function improvement — and there's no guarantee that such an improvement exists.
Steam and coal weren't limited by physics, which is the same reason I didn't mention lithium batteries in the article (surely we can move beyond lithium to other chemistries, so the ceiling on what lithium can deliver isn't relevant). But for fields bounded by fundamental constants or quantum effects, there may not be a successor.
The paradigms are in initial conditions. This end state of binary/arbitrary/symbol/stat units is largely irrelevant. They're mindless proxies. They bear no relationship to the initial conditions. They're imagination-emasculated, they show no feel for the reality of real information signaling, merely an acquiescence to leadership expedience in anything arbitrary (like tokens).
Try to see the binary as an impassable ceiling that turns us into craven, greedy, status junkie apes. Mediocrity gone wild. That's the binary, and it's seeped already into language, which is symbolic arbitrariness. We don't know how to confront this because we've never confronted it collectively. There was never a front page image on Time Magazine that stated: Are we arbitrary?
Yet we are; we're the poster child for the extinct-and-doesn't-know-it sentient.
Each stage of our steady drive to emasculate signaling in favor of adding value displays this openly. Each stage of taking action and expression and rendering them as symbol, then binary, then as counted token, into pretend intelligence showcases a lunatic drive to double down on folk science using logic as a beard in defiance of scientific reason.