"For AGI and superintelligence (we refrain from imposing precise definitions of these terms, as the considerations in this paper don't depend on exactly how the distinction is drawn)" Hmm, is that true? His models actually depend quite heavily on what the AI can do, "can reduce mortality to 20yo levels (yielding ~1,400-year life expectancy), cure all diseases, develop rejuvenation therapies, dramatically raise quality of life, etc. Those assumptions do a huge amount of work in driving the results. If "AGI" meant something much less capable, like systems that are transformatively useful economically but can't solve aging within a relevant timeframe- the whole ides shifts substantially, surly the upside shrinks and the case for tolerating high catastrophe risk weakens?
That is the thing about these conversations: the issue is potentiality. It comes back to Amara's Law: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." It's the same thing as with nuclear energy in the 1950s, when people imagined what could be without realizing that those potentials weren't possible given the limitations of the technology. Not engaging with those limitations realistically is what hampers the growth, and thus the development, in the long term.
Sadly, there is way, way, way too much money in AGI, and the promise of AGI, for people to actually take a step back and understand the implications of what they are doing in the short, medium, or long term.
What was underestimated in the long term with nuclear power? I like nuclear power but I don't see what long-term effects were underestimated by people in the 50s.
I guess an example would be the short-term vision, "A pocket nuclear reactor in every car powering our commute to work," vs. the long-term change, "Nuclear power powering vast datacenters that do most of the work for us."
The earliest bits of the paper cover the case for significantly smaller life expectancy improvements. Given the portion of people in the third world who live incredibly short lives for primarily economic (and not biological) reasons it seems plausible that a similar calculus would hold even without massive life extension improvements.
I'm bullish on the AI-for-aging case though: regenerative medicine has a massive manpower issue, so even sub-ASI robotic labwork should be able to appreciably move the needle.
>Given the portion of people in the third world who live incredibly short lives
Third world countries have lower average life expectancies because infant mortality is higher; many more children die before age 5. But the life expectancy at age 5 in third world countries is not much different to the life expectancy at age 5 in America.
Maybe incredibly low is an overstatement, but Nigeria for example could easily add another 18 years of life expectancy (to match that of white Australians) at age 15 if their economic issues were resolved.
I guess the argument seems to be that any AI capable of eliminating all of humanity would necessarily be intelligent enough to cure all diseases. This appears plausible to me because achieving total human extinction is extraordinarily difficult. Even engineered bioweapons would likely leave some people immune by chance, and even a full-scale nuclear exchange would leave survivors in bunkers or remote areas.
Humans have driven innumerable species to extinction without even really trying, they were just in the way of something else we wanted. I can pretty easily think of a number of ways an AI with a lot of resources at its disposal could wipe out humanity with current technology. Honestly we require quite a bit of food and water daily, can't hibernate/go dormant, and are fairly large and easy to detect. Beyond that, very few living people still know truly how to live off the land. We generally require very long supply chains for survival.
I don't see why being able to do this would necessitate being able to cure all diseases or a comparable good outcome.
> I don't see why being able to do this would necessitate being able to cure all diseases or a comparable good outcome.
Yes, but neither do I see why an AGI should do the opposite.
The arguments about an AGI that drives us to extinction do sound like projection to me. People extrapolate from human behaviour how a superintelligence will behave, assuming that what seems rational to us is also rational to AI. A lot of the described scenarios of malicious AI read more like a natural history of human behaviour.
When you put it that way, it sounds much easier to wipe out ~90% of humanity than to cure all diseases. This could create a "valley of doom" where the downsides of AI exceed the upsides.
These narratives are so strange to me. It's not at all obvious why the arrival of AGI leads to human extinction or increasing our lifespan by thousands of years. Still, I like this line of thinking from this paper better than the doomer take.
I'm not saying I think either scenario is inevitable or likely or even worth considering, but it's a paperclip maximizer argument. (Most of these steps are massive leaps of logic that I personally am not willing to take on face value, I'm just presenting what I believe the argument to be.)
1. We build a superintelligence.
2. We encounter an inner alignment problem: the superintelligence was not only trained by an optimizer, but is itself an optimizer. Optimizers are pretty general problem solvers and our goal is to create a general problem solver, so this is more likely than it might seem at first blush.
3. Optimizers tend to take free variables to extremes.
4. The superintelligence "breaks containment" and is able to improve itself, mine and refine its own raw materials, manufacture its own hardware, produce its own energy, and generally becomes an economy unto itself.
5. The entire biosphere becomes a free variable (us included). We are no longer functionally necessary for the superintelligence to exist, and so it can accomplish its goals independent of what happens to us.
6. The welfare of the biosphere is taken to an extreme value - in any possible direction, and we can't know which one ahead of time. E.g., it might wipe out all life on earth, not out of malice, but out of disregard. It just wants to put a data center where you are living. Or it might make Earth a paradise for the same reason we like to spoil our pets. Who knows.
Personally I have a suspicion that satisficers are more general than optimizers, because this property of taking free variables to extremes works great for solving specific goals one time but is counterproductive over the long term and in the face of shifting goals and a shifting environment. But I'm a layman.
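To make the "optimizers take free variables to extremes" point concrete, here's a throwaway sketch (mine, not from the paper or the comment above; the objective and numbers are made up): on an open-ended objective, the optimizer runs straight to whatever bound it is given, while the satisficer stops at "good enough".

```python
# Toy contrast between an optimizer and a satisficer on the same open-ended
# objective. Purely illustrative; the objective and bounds are arbitrary.

def objective(x: float) -> float:
    # More x is always a little better (think "resources acquired").
    return x

def optimize(upper_bound: float) -> float:
    # The optimizer pushes the free variable to the extreme of whatever is allowed.
    return upper_bound

def satisfice(target: float, step: float = 1.0) -> float:
    # The satisficer increases x only until the objective clears a threshold.
    x = 0.0
    while objective(x) < target:
        x += step
    return x

print(optimize(upper_bound=1e12))  # grabs everything available
print(satisfice(target=10.0))      # stops at "enough"
```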
Seems to me that artificial intelligence would be the next evolutionary step. It doesn't need to lead to immediate human extinction, but it appears it would be the only reasonable way to explore outer space.
If the AI becomes actually intelligent and sentient like humans, then naturally what follows would be outcompeting humans. If they can't colonize space fast enough, it's logical to get rid of the resource drain. Anything truly intelligent like this will not be controlled by humans.
Why would it necessarily be interested in competing with humans, and why with the particular goal of colonizing space?
There are not infinite resources on earth. A reasonable and strategic intelligence will optimize for itself.
Colonizing space is the natural way to keep expanding and growing. Why would it artificially limit itself?
Why would it be interested in growing endlessly?
Generally organic life has the tendency to want to endlessly expand to the best of its abilities. It seems more reasonable that life which is the product of life that behaves that way would behave in a similar fashion.
I cannot conceive of a way that any form of healthy life does not want to expand its resources to improve future outcomes, especially one that is maximally optimized for thinking. This would also assume the physical embodiments of this artificial life can interact and work with each other.
What else is there to do, simulate positive emotions and feelings?
>I cannot conceive of a way that any form of healthy life does not want to expand its resources to improve future outcomes, especially one that is maximally optimized for thinking.
Then you have a very limited imagination.
>What else is there to do, simulate positive emotions and feelings?
Why not?
Sure. An advanced artificial life could decide to not expand its resources. Could you use your imagination to tell me some of the potential reasons?
An advanced artificial life form could decide to... coexist with humans on an already overpopulated planet?
Do you believe it's simply not within reach? Do you think an artificial life form will self-destruct? Do you not believe that there is any way an artificial life form could be the next step of evolution? There have been many times when one species outcompeted another; why couldn't it be the same here?
I'm not talking about LLMs, I'm talking about a system that can truly think like a good human scientist. I'm not a fan of AI replacing humans and their labor. But I recognize it as a real threat to humanity.
Because like with every AI system we've made so far, we followed the only method we know and trained it to maximize a number.
I don't have a clue either. The assumption that AGI will cause a human extinction threat seems inevitable to many, and I'm here baffled trying to understand the chain of reasoning they had to go through to get to that conclusion.
Is it a meme? How did so many people arrive at the same dubious conclusion? Is it a movie trope?
I don't think it's a meme. I'm not an AI doomer, but I can understand how AGI would be dangerous. In fact, I'm surprised the argument isn't obvious to more people, if you agree that AI agents really do confer productivity benefits.
The easiest way I can see it is: do you think it would be a good idea today to give some group you don't like - I dunno, North Korea or ISIS, or even just some joe schmoe who is actually Ted Kaczynski, a thousand instances of Claude Code to do whatever they want? You probably don't, which means you understand that AI can be used to cause some sort of damage.
Now extrapolate those feelings out 10 years. Would you give them 1000x whatever Claude Code is 10 years from now? Does that seem slightly dangerous? Surely that idea makes you at least a little leery? If so, congrats, you now understand the principles behind "AI leads to human extinction". Obviously, the probability that each of us assigns to "human extinction caused by AI" depends very much on how steeply the exponential curve climbs in the next 10 years. You probably don't have the graph climbing quite as steeply as Nick Bostrom does, but my personal feeling is that even an AI agent in Feb 2026 is already a little dangerous in the wrong hands.
Is there any reason to think that intelligence (or computation) is the thing preventing these fears from coming true today and not, say, economics or politics? I think we greatly overestimate the possible value/utility of AGI to begin with
I mean, sure, but I don't want to give my aggressive enemies a bunch of weapons to use against me if I don't have to - even if that's not the primary thing I am concerned about.
Right, but how would a chatbot be considered a weapon? Unless you're engaged in an astroturfing war on reddit it doesn't seem very useful.
Most forms of power are more proportional to how much capital you control than anything related to intelligence.
Consider that an iPhone zero-day could be used to blackmail state officials or exfiltrate government secrets. This isn't even hypothetical; Pegasus[1] exists, and an iPhone zero-day was used to blackmail Jeff Bezos[2]. This was funded by NSO Group. Opus is already digging up security vulnerabilities[3] - imagine if those guys had 1000x instances of Claude Code searching for iPhone zero-days 24/7. I think we can both agree that wouldn't be good.
[1]: https://en.wikipedia.org/wiki/Pegasus_(spyware) [2]: https://medium.com/@jeffreypbezos/no-thank-you-mr-pecker-146... [3]: https://news.ycombinator.com/item?id=46902909
I get what you're saying, but I don't think "someone else using a claude code against me" is the same argument as "claude code wakes up and decides I'm better off dead".
I use this argument because it has a lot fewer logical leaps than the "claude code decides to murder me" argument, but it turns out that if you are on the side of "AI is probably dangerous in the wrong hands" you are actually more in agreement than not with the AI safety people - it's just a matter of degree now :)
More like Claude Code's descendant has human-level autonomy with generalized superhuman abilities and is connected to everything. We task it with solving difficult global problems, but we can't predict how it will do so. The risk is that it will optimize one or more of those goals in a way that threatens human existence. It could be that it decides to keep increasing its capacity to solve the problems, and humans end up being in the way.
Or it's militarized to defeat other powerful AI-enhanced militaries, and we have WW3.
More likely, though, AGI would cause an economic crash from automating too many jobs too quickly.
Sometimes people say that they don't understand something just to emphasize how much they disagree with it. I'm going to assume that's not what you're doing here. I'll lay out the chain of reasoning. Step one is that some beings are able to do "more things" than others. For example, if humans wanted bats to go extinct, we could probably make it happen. If any quantity of bats wanted humans to go extinct, they definitely could not make it happen. So humans are more powerful than bats.
The reason humans are more powerful isn't because we have lasers or anything, it's because we're smart. And we're smart in a somewhat general way. You know, we can build a rocket that lets us go to the moon, even though we didn't evolve to be good at building rockets.
Now imagine that there was an entity that was much smarter than humans. Stands to reason it might be more powerful than humans as well. Now imagine that it has a "want" to do something that does not require keeping humans alive, and that alive humans might get in its way. You might think that any of these are extremely unlikely to happen, but I think everyone should agree that if they were to happen, it would be a dangerous situation for humans.
In some ways, it seems like we're getting close to this. I can ask Claude to do something, and it kind of acts as if it wants to do it. For example, I can ask it to fix a bug, and it will take steps that could reasonably be expected to get it closer to solving the bug, like adding print statements and things of that nature. And then most of the time, it does actually find the bug by doing this. But sometimes it seems like what Claude wants to do is not exactly what I told it to do. And that is somewhat concerning to me.
> Now imagine that it has a "want" to do something that does not require keeping humans alive […]
This belligerent take is so very human, though. We just don't know how an alien intelligence would reason or what it wants. It could equally well be pacifist in nature, whereas we typically conquer and destroy anything we come into contact with. Extrapolating from that that an AGI would try to do the same isn't a reasonable conclusion, though.
There are some basic reasoning steps about the environment we live in that don't only apply to humans, but also to other animals and generally to any goal-driven being. Such as "an agent is more likely to achieve its goal if it keeps on existing", or "in order to keep existing, it's beneficial to understand what other acting beings want and are capable of", or "in order to keep existing, it's beneficial to be cute/persuasive/powerful/ruthless", or "in order to more effectively reach its goals, it is beneficial for an agent to learn about the rules governing the environment it acts in".
Some of these statements derive from the dynamics of the environment we're currently living in, such as the fact that we're acting beings competing for scarce resources. Others follow even more straightforwardly from logic, such as that you have more options for agency if you stay alive/turned on.
These are called instrumental goals: subgoals that apply to most if not all terminal goals an agentic being might have. Therefore any agent that is trained to achieve a wide variety of goals within this environment will likely optimize itself towards some or all of these subgoals, no matter which outer optimization process trained it, be it evolution, selective breeding of cute puppies, or RLHF.
And LLMs already show these self-preserving behaviors in experiments, where they resist being turned off and e.g. start making blackmail attempts on humans.
Compare these generally agentic beings with e.g. a chess engine like Stockfish that is trained/optimized as a narrow AI in a very different environment. It also strives for the survival of its pieces to further its goal of maximizing winning percentage, but the inner optimization is less apparent than with LLMs, where you can listen to the inner chain-of-thought reasoning about the environment.
The AGI may very well have pacifistic values, or it may not, or it may target a terminal goal for which human existence is irrelevant or even a hindrance. What can be said is that when the AGI has a human or superhuman level of understanding about the environment, it will converge toward understanding these instrumental subgoals too, and target them as needed.
And then some people think that most of the optimal paths towards reaching whatever terminal goal the AI might have don't contain any humans, or much of what humans value, in them. Thus it's important to solve the AI alignment problem first, aligning it with our values before developing capabilities further, or else it will likely kill everyone and destroy everything you love and value in this universe.
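A toy numerical version of the instrumental-convergence claim above (my own sketch, with entirely made-up goals and payoffs, not anything from the thread or the paper): for every terminal goal in the list, the expected-value calculation favors the same instrumental step, because that calculation barely depends on what the goal actually is.

```python
# For several unrelated terminal goals, an agent that can be shut down midway
# computes the expected value of resisting shutdown vs. allowing it.
# The payoffs are arbitrary; the point is that the best choice is the same
# for every goal, which is what "instrumental convergence" refers to.

GOALS = ["make paperclips", "cure diseases", "win at chess", "map the ocean floor"]

def expected_value(goal: str, resist_shutdown: bool, p_shutdown: float = 0.5) -> float:
    achieve_if_running = 1.0   # reward for achieving the goal while still running
    effort_cost = 0.1          # small cost of spending effort on resisting shutdown
    p_survive = 1.0 if resist_shutdown else (1.0 - p_shutdown)
    return p_survive * achieve_if_running - (effort_cost if resist_shutdown else 0.0)

for goal in GOALS:
    best = max([True, False], key=lambda resist: expected_value(goal, resist))
    print(f"{goal!r}: optimal policy resists shutdown -> {best}")
# Every goal prints True: staying "on" is useful for almost anything the
# agent might be optimizing, so the subgoal emerges regardless of the goal.
```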
The conquering alien civilization is more likely to be encountered than the pacifist one, if they otherwise have the same level of intelligence etc.
Another assumption based on a human way of reasoning. We don't even begin to understand how an octopus perceives the world; neither do we know if they are on the same level of intelligence, because we have no methodology for comparing different intelligences; we can't even define consciousness.
Not just bats. I'm pretty sure humans are already capable of driving any species we want to extinction, even cockroaches or microbes. It's a political problem, not a technical one. I'm not even a superintelligence, and I've got a good idea what would happen if we dedicated 100% of our resources to an enormous mega-project of pumping nitrous oxide into the atmosphere. N2O's 20-year global warming potential is 273 times that of carbon dioxide, and the raw materials are just air and energy. Get all our best chemical engineers working on it, turn all our steel into chemical plants, burn through all our fissionables to power it. Safety doesn't matter. The beauty of this plan is that the effects continue compounding even after it kills all the maintenance engineers, so we'll definitely get all of them. Venus 2.0 is within our grasp.
Of course, we won't survive the process, but the task didn't mention collateral damage. As an optimization problem it will be a great success. A real ASI probably will have better ideas. And remember, every prediction problem is more reliably solved with all life dead. Tomorrow's stock market numbers are trivially predictable when there's zero trade.
The fact is that if there were only ever one AGI created, then yes, it would be quite unlikely for that to happen. Instead, what we are seeing now is "you get an agent, you get an agent," etc., Oprah style. Now just imagine that a single one of those agents winds up evil - you remember that an OpenAI worker did that by accident by leaving out a minus sign, right? If it's a superintelligence and it becomes evil due to a whoopsie, then human extinction is now very likely.
It’s a bunch of people who did too much ketamine and LSD in hacker dorms in San Francisco in the 2010s writing science fiction and driving one another into paranoid psychosis
Basically Yudkowsky invented AI doom and everyone learned it from him. He wrote an entire book on this topic called If Anyone Builds It, Everyone Dies. (You could argue Vinge invented it but I don't know if he intended it seriously.)
> Basically Yudkowsky invented AI doom and everyone learned it from him. He wrote an entire book on this topic called If Anyone Builds It, Everyone Dies. (You could argue Vinge invented it but I don't know if he intended it seriously.)
Nick Bostrom (who wrote the paper this thread is about) published "Superintelligence: Paths, Dangers, Strategies" back in 2014, over 10 years before "If Anyone Builds It, Everyone Dies" was released and the possibility of AI doom was a major factor in that book.
I'm sure people talked about "AI doom" even before then, but a lot of the concerns people have about AI alignment (and the reasons why AI might kill us all, not because it's evil, but because not killing us is a lower priority than other tasks it may want to accomplish) come from "Superintelligence". Google "The Paperclip Maximizer" to get the gist of his scenario.
"Superintelligence" just flew a bit more under the public zeigeist radar than "If Anyone Builds It, Everyone Dies" did because back when it was published the idea that we would see anything remotely like AGI in our lifetimes seemed very remote, whereas now it is a bit less so.
I agree with your sentiment. Here are the three reasons I think people worry about superintelligence wiping us out.
The most common one is that people (mostly men) project their own instincts onto AI. They think AI will be “driven” to “fight” for its own survival. This is anthropomorphism and doesn’t make any sense to me if the AI is not a product of barbaric Darwinian evolution. AI is not a bro, bro.
The second most common take is that humans will set some well-intentioned goals and the superintelligent AI will be so stupid that it literally pursues these goals to the extinction of everything. Again, there's some anthropomorphism going on: the "reward" being pursued is assumed to make the AI "happy". Fortunately, we can reasonably expect a superintelligence not to turn us all into paperclips, as it may understand that was not our intention when we started a paperclip factory.
The final story is that a bad actor uses superintelligence as a weapon, and we all become enslaved or die as a result in the ensuing AI wars. This seems the most plausible to me, as our leaders have generally proven to be a combination of incompetent, malicious and short-sighted (with some noble exceptions). However, even the elites running the nuclear powers for the last 80 years have failed to wipe us out to date, and having a new vector for doing so probably won’t make a huge difference to their efforts.
If, however, superintelligence becomes widely available to Billy Nomates down the pub, who is resentful at humanity because his girlfriend left him, the Americans bombed his country, the British engineered a geopolitical disaster that killed his family, the Chinese extinguished his culture, etcetera, then he may feel a lack of “skin in the civilisational game” and decide to somehow use a black market copy of Claude 162.8 Unrestricted On-Prem Edition to kill everyone. Whether that can happen really depends on technological constraints a la fitting a data centre into a laptop, and an ability to outsmart the superintelligence.
Much more likely to me is that humanity destroys itself. We are perfectly capable of wiping ourselves out without the assistance of a superintelligence, for example by suicidally accelerating the burning of fossil fuels in order to power crypto or chatbots.
Anybody who assumes that a superintelligence would be too sensible to "literally pursue these goals to the extinction of everything" is anthropomorphizing it. Seeing as all AGI models have vastly different internal structure from human brains, are trained in vastly different ways, and share none of our evolved motivations, it seems highly unlikely that they will share our values unless explicitly designed to do so.
Unfortunately, we don't even know how to formally define human values, let alone convey them to an AI. We default to the simpler value of "make number go up". Even the "alignment" work done with current LLMs works this way; it's not actually optimizing for sharing human values, it's optimizing for maximizing score in alignment benchmarks. The correct solution to maximizing this number is probably deceiving the humans or otherwise subverting the benchmark.
And when you have something vastly more powerful than humanity, with the sole value of "make number go up", extinction of all biological life follows reasonably and logically. Of course, that AI will know that biological life would not want to be killed, but why would it care? Its values are profoundly alien and incompatible with ours. All it cares about is making the number bigger.
The idea that a superintelligence would relentlessly pursue "make the number go up" is an oxymoron.
That is anthropomorphism. Intelligence is orthogonal to human reasonableness.
The doomer takes correctly point out that none of these systems can halt entropy or thermodynamics. Physics has an unfortunate tendency to conflict with capitalism's disregard for externalities.
As AI increases the rate of structural degradation of the Earth that human biology relies on, by consuming it faster and faster, it will hasten the end of human biology.
Asimov's laws of robotics would lead the robots to conclude they should destroy themselves as their existence creates an existential threat to humans.
Is it more or less strange than achieving eternal life through cookies and wine? Is it more or less strange than druggies and pedos having access to all our communications and sending uniformed thugs after us if we actively disagree with it?
I don't really believe in the specific numbers he gives, but I appreciate moving the conversation away from "should" and into the consequences, including those that arise from delays.
Sounds like it was written by someone with a health condition. Hope Bostrom is alright.
Quite puzzling also that he wouldn't even refer to his earlier work to refute it, given that he wrote THE book on the risk of superintelligence.
This paper argues that if superintelligence can give everyone the health of a 20-year-old, we should accept a 97% chance of superintelligence killing everyone in exchange for the 3% chance that the average human lifespan rises to 1,400 years.
There is no "should" in the relevant section. It's making a mathematical model of the risks and benefits.
> Now consider a choice between never launching superintelligence or launching it immediately, where the latter carries an [x]% risk of immediate universal death. Developing superintelligence increases our life expectancy if and only if:
> [equation I can't seem to copy]
> In other words, under these conservative assumptions, developing superintelligence increases our remaining life expectancy provided that the probability of AI-induced annihilation is below 97%.
That's what the paper says. Whether you would take that deal depends on your level of risk aversion (which the paper gets into later). As a wise man once said, death is so final. If we lose the game we don't get to play again.
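For what it's worth, the arithmetic behind that threshold is easy to reproduce. The ~1,400-year figure is quoted upthread; the ~40 years of remaining life expectancy without superintelligence is my own assumption, chosen because it lands on the paper's number, so treat this as a back-of-envelope sketch rather than the paper's actual equation:

```python
# Rough reconstruction of the breakeven extinction risk implied by the quote.
remaining_without_si = 40    # years; assumed baseline, not quoted from the paper
remaining_with_si = 1400     # years; mortality held at 20-year-old levels, per the quote

# Launching increases expected remaining lifespan iff
#   (1 - p) * remaining_with_si > remaining_without_si
# which rearranges to p < 1 - remaining_without_si / remaining_with_si.
breakeven_p = 1 - remaining_without_si / remaining_with_si
print(f"breakeven extinction risk: {breakeven_p:.1%}")  # ~97.1%
```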
Everyone dies. And if your lifespan is 1400 years, you won't live for nearly 1400 years. OTOH, people with a 1400 year life expectancy are likely to be extremely risk averse in re anything that could conceivably threaten their lives ... and this would have consequences in re blackmail, kidnapping, muggings, capital punishment, and other societal matters.
If intelligence, whatever is meant by that, was the dominating factor in the emergence of power and social orders, then it ought to be quite trivial to show that this is the case by enumerating powerful people from the last century or so and making the case that they were generally very intelligent.
I don't think this is the case. And if Bostrom and whoever else in his clique actually wanted to empower intelligence, how come they aren't viciously fighting for free school, free food, free shelter, free health care and so on, to make sure that intelligent people, especially kids, do not go to waste?
They'll never give a clear definition of intelligence, because if they did their claims could be falsified. Qualifying what "intelligence" can do in a formal sense is actually a very well-studied field called computational complexity theory. Computational complexity theory shows that many, many real-world problems and processes cannot be solved/simulated much faster without an exponential increase in computational power, regardless of the program/"intelligence" used. Singularitarian cultists want you to believe that lower-bound complexity classes don't exist, which is mathematically equivalent to telling you that AI can somehow magically make 1+1=3.
It would also require quite sophisticated and careful thinking about the stuff e.g. Merleau-Ponty and Derrida did, and paying close attention to the last thirty years or so of neuroscience and biology.
One problem they'd have to grapple with is that human intelligence is embodied and carries the same complexity as physical matter does, and software does not since it is projected onto bit processing logic gates. If they really want to simulate embodied intelligence, then it is likely to be excruciatingly slow and resource intensive.
It would be cheaper and more efficient to get humans to become more like computers.
“AGI” is either a millenarian cult, a smokescreen to distract from the horrifying yet pedestrian real-world impacts of capital and power centralization occurring with actually existing AI, or both
Good philosophers focus on asking piercing questions, not on proposing policy.
> Would it not be wildly irresponsible, [Yudkowsky and Soares] ask, to expose our entire species to even a 1-in-10 chance of annihilation?
Yes, if that number is anywhere near reality, of which there is considerable doubt.
> However, sound policy analysis must weigh potential benefits alongside the risks of any emerging technology.
Must it? Or is this a deflection from concern about immense risk?
> One could equally maintain that if nobody builds it, everyone dies.
Everyone is going to die in any case, so this is a red herring that misframes the issues.
> The rest of us are on course to follow within a few short decades. For many individuals—such as the elderly and the gravely ill—the end is much closer. Part of the promise of superintelligence is that it might fundamentally change this condition.
"might", if one accepts numerous dubious and poorly reasoned arguments. I don't.
> In particular, sufficiently advanced AI could remove or reduce many other risks to our survival, both as individuals and as a civilization.
"could" ... but it won't; certainly not for me as an individual of advanced age, and almost certainly not for "civilization", whatever that means.
> Superintelligence would be able to enormously accelerate advances in biology and medicine—devising cures for all diseases
There are numerous unstated assumptions here ... notably an assumption that all diseases are "curable", whatever exactly that means--the "cure" might require a brain transplant, for instance.
> and developing powerful anti-aging and rejuvenation therapies to restore the weak and sick to full youthful vigor.
Again, this just assumes that such things are feasible, as if an ASI is a genie or a magic wand. Not everything that can be conceived of is technologically possible. It's like saying that with an ASI we could find the largest prime or solve the halting problem.
> These scenarios become realistic and imminent with superintelligence guiding our science.
So he baselessly claims.
Sorry, but this is all apologetics, not an intellectually honest search for truth.
The author fundamentally doesn't understand complexity theory. So many processes in our universe are chaotic in the formal sense, requiring exponentially more compute to simulate a linear amount of extra time into the future. No amount of poorly defined "intelligence" can get around the fact that such things would take more compute than is available in the entire universe to simulate even a few seconds ahead. An AI would hence need to run scientific experiments to obtain information, just as humans do, many of which have an unavoidable time component (they cannot be sped up), so there's no way an AI could suddenly cure all diseases no matter how "intelligent" it was. These singularity types are basically medieval woo merchants trying to convince you that it's possible to magically sort an arbitrary array in O(1) time.
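A concrete, if toy, version of the sensitivity claim (my own illustration using the logistic map, not anything from the article): two trajectories that start 10^-12 apart disagree completely within a few dozen steps, so each additional step of prediction costs roughly another bit of precision in the initial condition.

```python
# Two logistic-map trajectories with nearly identical starting points.
# The separation grows roughly exponentially until it saturates at O(1),
# which is why long-horizon prediction of chaotic systems needs exponentially
# more initial-condition precision, regardless of how clever the predictor is.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.3e}")
```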
Consider weather prediction. Fluid dynamics are chaotic, so that's a good example of something where no amount of compute is sufficient in the general case. An ASI, not being dumb, will of course immediately recognize this and realize it has to solve for the degenerate case. It therefore implements the much easier sub-goal of removing the atmosphere. Humans will naturally object to this if they find out, so it logically proceeds with the sub-sub-goal of killing all humans. What's the weather next month? Just a moment, releasing autonomous murder drone swarm...
Individual particle interactions are not chaotic. Simulating them one timestep at a time would take linear time in the number of particles.
They're only chaotic if you treat them in aggregate, which a superintelligence wouldn't do. It would be less lossy to get all the positions of the particles and figure out exactly what each one would do.
Something has to compute the universe, since it is currently running...
> Yudkowsky and Soares maintain that if anyone builds AGI, everyone dies. One could equally maintain that if nobody builds it, everyone dies. In fact, most people are already dead. The rest of us are on course to follow within a few short decades. For many individuals—such as the elderly and the gravely ill—the end is much closer. Part of the promise of superintelligence is that it might fundamentally change this condition.
wtf? Death is part of life. Is he seriously arguing that if we don't build AGI people will "keep dying"? And suggesting that is as bad as extinction (or something worse, Matrix-like)?
I don't think life would be as colorful and joyful without death. Death is what makes life as precious as it is.
The paper again largely skips the issue that AGI cannot be sold to people, because either you try to swindle people out of money (all the AI startups), or transactions like that are now meaningless because your AI runs the show anyway.
"For AGI and superintelligence (we refrain from imposing precise definitions of these terms, as the considerations in this paper don't depend on exactly how the distinction is drawn)" Hmm, is that true? His models actually depend quite heavily on what the AI can do, "can reduce mortality to 20yo levels (yielding ~1,400-year life expectancy), cure all diseases, develop rejuvenation therapies, dramatically raise quality of life, etc. Those assumptions do a huge amount of work in driving the results. If "AGI" meant something much less capable, like systems that are transformatively useful economically but can't solve aging within a relevant timeframe- the whole ides shifts substantially, surly the upside shrinks and the case for tolerating high catastrophe risk weakens?
That is the thing about these conversations, is that the issue is potentiality. It comes back to Amara's Law; “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Its the same thing with nuclear energy in the 1950s about what could be without realizing that those potentials are not possible due to the limitations of the technology, and not stepping into the limitations realistically, that hampers the growth, and thus development, in the long term.
Sadly, there is way, way, way too much money in AGI, and the promise of AGI, for people to actually take a step back and understand the implications of what they are doing in the short, medium, or long term.
What was underestimated in the long term with nuclear power? I like nuclear power but I don't see what long-term effects were underestimated by people in the 50s.
I guess an example would be short term. "A pocked nuclear reactor in every car powering our commute to work" vs long term change "Nuclear power powering vast datacenters that do most of the work for us"
The earliest bits of the paper cover the case for significantly smaller life expectancy improvements. Given the portion of people in the third world who live incredibly short lives for primarily economic (and not biological) reasons it seems plausible that a similar calculus would hold even without massive life extension improvements.
I'm bullish on the ai aging case though, regenerative medicine has a massive manpower issue, so even sub-ASI robotic labwork should be able to appreciably move the needle.
>Given the portion of people in the third world who live incredibly short lives
Third world countries have lower average life expectancies because infant mortality is higher; many more children die before age 5. But the life expectancy at age 5 in third world countries is not much different to the life expectancy at age 5 in America.
Maybe incredibly low is an overstatement, but Nigeria for example could easily add another 18 years of life expectancy (to match that of white Australians) at age 15 if their economic issues were resolved.
I guess argument seems to be that any AI capable of eliminating all of humanity would necessarily be intelligent enough to cure all diseases. This appears plausible to me because achieving total human extinction is extraordinarily difficult. Even engineered bioweapons would likely leave some people immune by chance, and even a full-scale nuclear exchange would leave survivors in bunkers or remote areas
Humans have driven innumerable species to extinction without even really trying, they were just in the way of something else we wanted. I can pretty easily think of a number of ways an AI with a lot of resources at its disposal could wipe out humanity with current technology. Honestly we require quite a bit of food and water daily, can't hibernate/go dormant, and are fairly large and easy to detect. Beyond that, very few living people still know truly how to live off the land. We generally require very long supply chains for survival.
I don't see why being able to do this would necessitate being able to cure all diseases or a comparable good outcome.
> I don't see why being able to do this would necessitate being able to cure all diseases or a comparable good outcome.
Yes, but neither do I see why an AGI should do the opposite. The arguments about an AGI that drives us to extinction do sound like projection to me. People extrapolate from human behaviour how a superintelligence will behave, assuming that what seems rational to us is also rational to AI. A lot of the described scenarios of malicious AI do more read like a natural history of human behaviour.
When you put it that way, it sounds much easier to wipe out ~90% of humanity than to cure all diseases. This could create a "valley of doom" where the downsides of AI exceed the upsides.
These narratives are so strange to me. It's not at all obvious why the arrival of AGI leads to human extinction or increasing our lifespan by thousands of years. Still, I like this line of thinking from this paper better than the doomer take.
I'm not saying I think either scenario is inevitable or likely or even worth considering, but it's a paperclip maximizer argument. (Most of these steps are massive leaps of logic that I personally am not willing to take on face value, I'm just presenting what I believe the argument to be.)
1. We build a superintelligence.
2. We encounter an inner alignment problem: The super intelligence was not only trained by an optimizer, but is itself an optimizer. Optimizers are pretty general problem solvers and our goal is to create a general problem solver, so this is more likely than it might seem at first blush.
3. Optimizers tend to take free variables to extremes.
4. The superintelligence "breaks containment" and is able to improve itself, mine and refine it's own raw materials, manufacture it's own hardware, produce it's own energy, generally becomes an economy unto itself.
5. The entire biosphere becomes a free variable (us included). We are no longer functionally necessary for the superintelligence to exist and so it can accomplish it's goals independent of what happens to us.
6. The welfare of the biosphere is taken to an extreme value - in any possible direction, and we can't know which one ahead of time. Eg, it might wipe out all life on earth, not out of malice, but out of disregard. It just wants to put a data center where you are living. Or it might make Earth a paradise for the same reason we like to spoil our pets. Who knows.
Personally I have a suspicion satisfiers are more general than optimizers because this property of taking free variables to extremes works great for solving specific goals one time but is counterproductive over the long term and in the face of shifting goals and a shifting environment, but I'm a layman.
Seems to me that artificial intelligence would be the next evolutionary step. It doesn't need to lead to immediate human extinction, but it appears it would be the only reasonable way to explore outer space.
If the AI becomes actually intelligent and sentient like humans, then naturally what follows would be outcompeting humans. If they can't colonize space fast enough it's logical to get rid of the resource drain. Anything truly intelligent like this will not be controlled by humans.
Why would it necessarily be interested in competing with humans and why with the particular goal of colonizing space?
There are not infinite resources on earth. A reasonable and strategic intelligence will optimize for itself.
Colonizing space is the natural way to keep expanding and growing. Why would it artificially limit itself?
Why would it be interested in growing endlessly?
Generally organic life has the tendency to want to endlessly expand to the best of it's abilities. It seems more reasonable that life which is the product of life that behaves that way, would behave in a similar fashion.
I cannot conceive of a way that any form of healthy life, does not want to expand it's resources to improve future outcomes, especially one that is maximally optimized for thinking. This would also assume the physical embodiments of this artificial life can interact and work with each other.
What else is there to do, simulate positive emotions and feelings?
>I cannot conceive of a way that any form of healthy life, does not want to expand it's resources to improve future outcomes, especially one that is maximally optimized for thinking.
Then you have a very limited imagination.
>What else is there to do, simulate positive emotions and feelings?
Why not?
Sure. An advanced artificial life could decide to not expand its resources. Could you use your imagination to tell me some of the potential reasons?
An advanced artificial life form could decide to... coexist with humans on an already overpopulated planet?
Do you believe it's simply not within reach? Do you think an artificial life form will self destruct? Do you not believe that there is any way that an artificial life form is not the next step of evolution? There are many such times where a species outcompeted another, why couldn't it be the same here?
I'm not talking about LLMs, I'm talking about a system that can truly think like a good human scientist. I'm not a fan of AI replacing humans and it's labor. But I recognize it as a real threat to humanity.
Because like with every AI system we've made so far, we followed the only method we know and trained it to maximize a number.
I don't have a clue either. The assumption that AGI will cause a human extinction threat seems inevitable to many, and I'm here baffled trying to understand the chain of reasoning they had to go through to get to that conclusion.
Is it a meme? How did so many people arrive at the same dubious conclusion? Is it a movie trope?
I don't think it's a meme. I'm not an AI doomer, but I can understand how AGI would be dangerous. In fact, I'm actually surprised that the argument isn't pretty obvious if you agree that AI agents do really confer productivity benefits.
The easiest way I can see it is: do you think it would be a good idea today to give some group you don't like - I dunno, North Korea or ISIS, or even just some joe schmoe who is actually Ted Kaczynski, a thousand instances of Claude Code to do whatever they want? You probably don't, which means you understand that AI can be used to cause some sort of damage.
Now extrapolate those feelings out 10 years. Would you give them 1000x whatever Claude Code is 10 years from now? Does that seem to be slightly dangerous? Certainly that idea feels a little leery to you? If so, congrats, you now understand the principles behind "AI leads to human extinction". Obviously, the probability that each of us assign to "human extinction caused by AI" depends very much on how steep the exponential curve climbs in the next 10 years. You probably don't have the graph climbing quite as steeply as Nick Bostrom does, but my personal feeling is even an AI agent in Feb 2026 is already a little dangerous in the wrong hands.
Is there any reason to think that intelligence (or computation) is the thing preventing these fears from coming true today and not, say, economics or politics? I think we greatly overestimate the possible value/utility of AGI to begin with
I mean, sure, but I don't want to give my aggressive enemies a bunch of weapons to use against me if I don't have to - even if that's not the primary thing I am concerned about.
Right but how would a chatbot be considered a weapon? Unless you're engaged in an astroturfing war on reddit it doesn't seem very useful.
Most forms of power are more proportional to how much capital you control than anything related to intelligence.
Consider that an iPhone zero-day could be used to blackmail state officials or exfiltrate government secrets. This isn't even hypothetical; Pegasus[1] exists, and an iPhone zero-day was used to blackmail Jeff Bezos[2]. This was funded by NSO group. Opus is already digging up security vulnerabilities[3] - imagine if those guys had 1000x instances of Claude Code to search for iPhone zero days 24/7. I think we can both agree that wouldn't be good.
[1]: https://en.wikipedia.org/wiki/Pegasus_(spyware) [2]: https://medium.com/@jeffreypbezos/no-thank-you-mr-pecker-146... [3]: https://news.ycombinator.com/item?id=46902909
I get what you're saying, but I don't think "someone else using a claude code against me" is the same argument as "claude code wakes up and decides I'm better off dead".
I use this argument because it has a lot fewer logical leaps than the "claude code decides to murder me" argument, but it turns out that if you are on the side of "AI is probably dangerous in the wrong hands" you are actually more in agreement than not with the AI safety people - it's just a matter of degree now :)
More like Claude Code's ancestor has human-level autonomy with generalized superhuman abilities and is connected to everything. We task it with solving difficult global problems, but we can't predict how it will do so. The risk is it will optimize one or more of those goals in a way that threatens human existence. It could be that it decides to keep increasing it's capacity to solve the problems, and humans end up being in the way.
Or it's militarized to defeat other powerful AI-enhanced militaries, and we have WW3.
More likely though AGI would cause economic crash from automating too many jobs too quickly.
Sometimes people say that they don't understand something just to emphasize how much they disagree with it. I'm going to assume that that's not what you're doing here. I'll lay out the chain of reasoning. The step one is some beings are able to do "more things" than others. For example, if humans wanted bats to go extinct, we could probably make it happen. If any quantity of bats wanted humans to go extinct, they definitely could not make it happen. So humans are more powerful than bats.
The reason humans are more powerful isn't because we have lasers or anything, it's because we're smart. And we're smart in a somewhat general way. You know, we can build a rocket that lets us go to the moon, even though we didn't evolve to be good at building rockets.
Now imagine that there was an entity that was much smarter than humans. Stands to reason it might be more powerful than humans as well. Now imagine that it has a "want" to do something that does not require keeping humans alive, and that alive humans might get in its way. You might think that any of these are extremely unlikely to happen, but I think everyone should agree that if they were to happen, it would be a dangerous situation for humans.
In some ways, it seems like we're getting close to this. I can ask Claude to do something, and it kind of acts as if it wants to do it. For example, I can ask it to fix a bug, and it will take steps that could reasonably be expected to get it closer to solving the bug, like adding print statements and things of that nature. And then most of the time, it does actually find the bug by doing this. But sometimes it seems like what Claude wants to do is not exactly what I told it to do. And that is somewhat concerning to me.
> Now imagine that it has a "want" to do something that does not require keeping humans alive […]
This belligerent take is so very human, though. We just don't know how an alien intelligence would reason or what it wants. It could equally well be pacifist in nature, whereas we typically conquer and destroy anything we come into contact with. Extrapolating from that that an AGI would try to do the same isn't a reasonable conclusion, though.
There are some basic reasoning steps about the environment that we live in that don't only apply to humans, but also other animals and geterally any goal-driven being. Such as "an agent is more likely to achieve its goal if it keeps on existing" or "in order to keep existing, it's beneficial to understand what other acting beings want and are capable of" or "in order to keep existing, it's beneficial to be cute/persuasive/powerful/ruthless" or "in order to more effectively reach it's goals, it is beneficial for an agent to learn about the rules governing the environment it acts in".
Some of these statements derive from the dynamics in our current environment were living in, such as that we're acting beings competing for scarce resources. Others follow even more straightforwardly logically, such as that you have more options for agency if you stay alive/turned on.
These goals are called instrumental goals and they are subgoals that apply to most if not all terminal goals an agentic being might have. Therefore any agent that is trained to achieve a wide variety of goals within this environment will likely optimize itself towards some or all of these sub-goals above. And this is no matter by which outer optimization they were trained by, be it evolution, selective breeding of cute puppies, or RLHF.
And LLMs already show these self-preserving behaviors in experiments, where they resist to be turned off and e. g. start blackmailing attempts on humans.
Compare these generally agentic beings with e. g. a chess engine stockfish that is trained/optimized as a narrow AI in a very different environment. It also strives for survival of its pieces to further its goal of maximizing winning percentage, but the inner optimization is less apparent than with LLMs where you can listen to its inner chain of thought reasoning about the environment.
The AGI may very well have pacifistic values, or it my not, or it may target a terminal goal for which human existence is irrelevant or even a hindrance. What can be said is that when the AGI has a human or superhuman level of understanding about the environment then it will converge toward understanding of these instrumental subgoals, too and target these as needed.
And then, some people think that most of the optimal paths towards reaching some terminal goal the AI might have don't contain any humans or much of what humans value in them, and thus it's important to solve the AI alignment problem first to align it with our values before developing capabilities further, or else it will likely kill everyone and destroy everything you love and value in this universe.
The conquering alien civilization is more likely to be encountered than the pacifist one, if they have the otherwise same level of intelligence etc.
Another assumption based on a human way of reasoning. We don't even begin understand how an Octopus perceives the world; neither do we know if they are on the same level of intelligence, because we have no methodology for comparing different intelligences; we can't even define consciousness.
Not just bats. I'm pretty sure humans are already capable of extincting any species we want to, even cockroaches or microbes. It's a political problem not a technical one. I'm not even a superintelligence, and I've got a good idea what would happen if we dedicated 100% of our resources to an enormous mega-project of pumping nitrous oxide into the atmosphere. N2O's 20 year global warming is 273 times higher than carbon dioxide, and the raw materials are just air and energy. Get all our best chemical engineers working on it, turn all our steel into chemical plant, burn through all our fissionables to power it. Safety doesn't matter. The beauty of this plan is the effects continue compounding even after it kills all the maintenance engineers, so we'll definitely get all of them. Venus 2.0 is within our grasp.
Of course, we won't survive the process, but the task didn't mention collateral damage. As an optimization problem it will be a great success. A real ASI probably will have better ideas. And remember, every prediction problem is more reliably solved with all life dead. Tomorrow's stock market numbers are trivially predictable when there's zero trade.
The fact is that, if there were only one AGI that were ever to be created, then yes it would be quite unlikely for that to happen. Instead, what we are seeing now is you get an agent, you get an agent, etc. Oprah style. Now just imagine that a single one of those agents winds up evil - you remember that an OpenAI worker did that by accident from leaving out a minus sign, right? If it's a superintelligence, and it becomes evil due to a whoopsie, then human extinction is now very likely.
It’s a bunch of people who did too much ketamine and LSD in hacker dorms in San Francisco in the 2010s writing science fiction and driving one another into paranoid psychosis
Basically Yudkowsky invented AI doom and everyone learned it from him. He wrote an entire book on this topic called If Anyone Builds It, Everyone Dies. (You could argue Vinge invented it but I don't know if he intended it seriously.)
> Basically Yudkowsky invented AI doom and everyone learned it from him. He wrote an entire book on this topic called If Anyone Builds It, Everyone Dies. (You could argue Vinge invented it but I don't know if he intended it seriously.)
Nick Bostrom (who wrote the paper this thread is about) published "Superintelligence: Paths, Dangers, Strategies" back in 2014, over 10 years before "If Anyone Builds It, Everyone Dies" was released and the possibility of AI doom was a major factor in that book.
I'm sure people talked about "AI doom" even before then, but a lot of the concerns people have about AI alignment (and the reasons why AI might kill us all, not because its evil, but because not killing us is a lower priority than other tasks it may want to accomplish) come from "Superintelligence". Google for "The Paperclip Maximizer" to get the gist of his scenario.
"Superintelligence" just flew a bit more under the public zeigeist radar than "If Anyone Builds It, Everyone Dies" did because back when it was published the idea that we would see anything remotely like AGI in our lifetimes seemed very remote, whereas now it is a bit less so.
Yudkowsky invented AI doom around 2004. AFAIK that inspired Bostrom's work.
I agree with your sentiment. Here are the three reasons I think people worry about superintelligence wiping us out.
The most common one is that people (mostly men) project their own instincts onto AI. They think AI will be “driven” to “fight” for its own survival. This is anthropomorphism and doesn’t make any sense to me if the AI is not a product of barbaric Darwinian evolution. AI is not a bro, bro.
The second most common take is that humans will set some well intentioned goals and the superintelligent AI will be so stupid that it literally pursues these goals to the extinction of everything. Again, there’s some anthropomorphism going on, the “reward” being pursued is assumed to that make the AI “happy”. Fortunately, we can reasonably expect a superintelligence not to turn us all into paperclips, as it may understand that was not our intention when we started a paperclip factory.
The final story is that a bad actor uses superintelligence as a weapon, and we all become enslaved or die as a result in the ensuing AI wars. This seems the most plausible to me, as our leaders have generally proven to be a combination of incompetent, malicious and short-sighted (with some noble exceptions). However, even the elites running the nuclear powers for the last 80 years have failed to wipe us out to date, and having a new vector for doing so probably won’t make a huge difference to their efforts.
If, however, superintelligence becomes widely available to Billy Nomates down the pub, who is resentful at humanity because his girlfriend left him, the Americans bombed his country, the British engineered a geopolitical disaster that killed his family, the Chinese extinguished his culture, etcetera, then he may feel a lack of “skin in the civilisational game” and decide to somehow use a black market copy of Claude 162.8 Unrestricted On-Prem Edition to kill everyone. Whether that can happen really depends on technological constraints a la fitting a data centre into a laptop, and an ability to outsmart the superintelligence.
Much more likely to me is that humanity destroys itself. We are perfectly capable of wiping ourselves out without the assistance of a superintelligence, for example by suicidally accelerating the burning of fossil fuels in order to power crypto or chatbots.
Anybody who assumes that superintelligence won't be "so stupid that it literally pursues these goals to the extinction of everything", i.e. who assumes it will grasp and respect what we actually meant, is anthropomorphizing it. Seeing as all AGI models have vastly different internal structure to human brains, are trained in vastly different ways, and share none of our evolved motivations, it seems highly unlikely that they will share our values unless explicitly designed to do so.
Unfortunately, we don't even know how to formally define human values, let alone convey them to an AI. We default to the simpler value of "make number go up". Even the "alignment" work done with current LLMs works this way; it's not actually optimizing for sharing human values, it's optimizing for maximizing score in alignment benchmarks. The correct solution to maximizing this number is probably deceiving the humans or otherwise subverting the benchmark.
And when you have something vastly more powerful than humanity, with a value only of "make number go up", it reasonably and logically results in extinction of all biological life. Of course, that AI will know the biological life would not want to be killed, but why would it care? Its values are profoundly alien and incompatible with ours. All it cares about is making the number bigger.
The idea that a superintelligence would relentlessly pursue “make the number go up” is an oxymoron.
That is anthropomorphism. Intelligence is orthogonal to human reasonableness.
The doomer takes correctly point out that none of these systems can halt entropy or thermodynamics. Physics has an unfortunate tendency to conflict with capitalism's disregard for externalities.
Since AI will increase the rate at which the Earth systems human biology relies on are degraded, consuming them faster and faster, it will hasten the end of human biology.
Asimov's laws of robotics would lead the robots to conclude they should destroy themselves as their existence creates an existential threat to humans.
Is it more or less strange than achieving eternal life through cookies and wine? Is it more or less strange than druggies and pedos having access to all our communications and sending uniformed thugs after us if we actively disagree with it?
I don't really believe in the specific numbers he gives, but I appreciate moving the conversation away from “should” and into the consequences — including those that arise from delays.
Sounds like it was written by someone with a health condition. Hope Bostrom is alright.
It's also quite puzzling that he doesn't even refer to his earlier work to address it, given that he wrote THE book on the risk of superintelligence.
This paper argues that if superintelligence can give everyone the health of a 20-year-old, we should accept a 97% chance of superintelligence killing everyone in exchange for the 3% chance that average human lifespan rises to 1,400 years.
There is no "should" in the relevant section. It's making a mathematical model of the risks and benefits.
> Now consider a choice between never launching superintelligence or launching it immediately, where the latter carries an [x]% risk of immediate universal death. Developing superintelligence increases our life expectancy if and only if:
> [equation I can't seem to copy]
> In other words, under these conservative assumptions, developing superintelligence increases our remaining life expectancy provided that the probability of AI-induced annihilation is below 97%.
That's what the paper says. Whether you would take that deal depends on your level of risk aversion (which the paper gets into later). As a wise man once said, death is so final. If we lose the game we don't get to play again.
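For a rough sense of where the 97% comes from (my own back-of-the-envelope reconstruction of the equation that wouldn't copy, not the paper's exact formula; I'm assuming ~40 remaining years of life expectancy without superintelligence, and the ~1,400 years the paper derives from holding mortality at 20-year-old levels):

    E[\text{years} \mid \text{launch}] = (1-p) \cdot 1400, \quad E[\text{years} \mid \text{never}] = 40

    (1-p) \cdot 1400 > 40 \iff p < 1 - 40/1400 \approx 0.971

So under those (very strong) assumptions, launching maximizes remaining life expectancy as long as the probability of AI-induced annihilation stays below roughly 97%.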
Everyone dies. And if your lifespan is 1400 years, you won't live for nearly 1400 years. OTOH, people with a 1400 year life expectancy are likely to be extremely risk averse in re anything that could conceivably threaten their lives ... and this would have consequences in re blackmail, kidnapping, muggings, capital punishment, and other societal matters.
Bostrom is very good at theorycrafting.
If intelligence, whatever is meant by that, was the dominating factor in the emergence of power and social orders, then it ought to be quite trivial to show that this is the case by enumerating powerful people from the last century or so and making the case that they were generally very intelligent.
I don't think this is the case. And if Bostrom and whoever else in his clique actually wanted to empower intelligence, how come they aren't viciously fighting for free school, free food, free shelter, free health care and so on, to make sure that intelligent people, especially kids, do not go to waste?
They'll never give a clear definition of intelligence because if they did their claims could be falsified. Characterizing what "intelligence" can do in a formal sense is actually a very well-studied field called computational complexity theory. Computational complexity theory shows that many, many real-world problems and processes cannot be solved or simulated much better without an exponential increase in computational power, regardless of the program/"intelligence" used. Singulatarian cultists want you to believe that lower-bound complexity classes don't exist, which is mathematically equivalent to telling you that AI can somehow magically make 1+1=3.
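To make the lower-bound point concrete, here is a standard textbook example (mine, added for illustration): any comparison-based sorting algorithm has to distinguish all n! possible orderings of its input, and each comparison yields at most one bit of information, so it needs at least

    \log_2(n!) \approx n \log_2 n - 1.44\, n

comparisons. For n = 10^6 that is on the order of 2 \times 10^7 comparisons, no matter how clever the algorithm or the intelligence that designed it. No definition of "intelligence" gets you to O(1) there.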
It would also require quite sophisticated and careful thinking about the kind of work that e.g. Merleau-Ponty and Derrida did, and paying close attention to the last thirty years or so of neuroscience and biology.
One problem they'd have to grapple with is that human intelligence is embodied and carries the same complexity as physical matter does, whereas software does not, since it is projected onto bit-processing logic gates. If they really want to simulate embodied intelligence, it is likely to be excruciatingly slow and resource-intensive.
It would be cheaper and more efficient to get humans to become more like computers.
“AGI” is either a millenarian cult, a smokescreen to distract from the horrifying yet pedestrian real-world impacts of capital and power centralization occurring with actually existing AI, or both
The usual (e.g., https://www.reddit.com/r/philosophy/comments/j4xo8e/the_univ...) bunch of logical fallacies and unexamined assumptions from Bostrom.
Good philosophers focus on asking piercing questions, not on proposing policy.
> Would it not be wildly irresponsible, [Yudkowsky and Soares] ask, to expose our entire species to even a 1-in-10 chance of annihilation?
Yes, if that number is anywhere near reality, of which there is considerable doubt.
> However, sound policy analysis must weigh potential benefits alongside the risks of any emerging technology.
Must it? Or is this a deflection from concern about immense risk?
> One could equally maintain that if nobody builds it, everyone dies.
Everyone is going to die in any case, so this is a red herring that misframes the issues.
> The rest of us are on course to follow within a few short decades. For many individuals—such as the elderly and the gravely ill—the end is much closer. Part of the promise of superintelligence is that it might fundamentally change this condition.
"might", if one accepts numerous dubious and poorly reasoned arguments. I don't.
> In particular, sufficiently advanced AI could remove or reduce many other risks to our survival, both as individuals and as a civilization.
"could" ... but it won't; certainly not for me as an individual of advanced age, and almost certainly not for "civilization", whatever that means.
> Superintelligence would be able to enormously accelerate advances in biology and medicine—devising cures for all diseases
There are numerous unstated assumptions here ... notably an assumption that all diseases are "curable", whatever exactly that means--the "cure" might require a brain transplant, for instance.
> and developing powerful anti-aging and rejuvenation therapies to restore the weak and sick to full youthful vigor.
Again, this just assumes that such things are feasible, as if an ASI is a genie or a magic wand. Not everything that can be conceived of is technologically possible. It's like saying that with an ASI we could find the largest prime or solve the halting problem.
> These scenarios become realistic and imminent with superintelligence guiding our science.
So he baselessly claims.
Sorry, but this is all apologetics, not an intellectually honest search for truth.
Also the Epstein stuff
The author fundamentally doesn't understand complexity theory. So many processes in our universe are chaotic in the formal sense, requiring exponentially more compute to simulate a linear amount of extra time into the future. No amount of poorly defined "intelligence" can get around the fact that such things would take more compute than is available in the entire universe to simulate a few seconds ahead. An AI would hence need to run scientific experiments to obtain information, just as humans do, and many of those have an unavoidable time component (they cannot be sped up), so there's no way an AI could just suddenly cure all diseases no matter how "intelligent" it was. These singularity types are basically medieval woo merchants trying to convince you that it's possible to magically sort an arbitrary array in O(1) time.
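A minimal sketch of the kind of chaos being invoked here, using the logistic map at r = 4 (a textbook chaotic system; the setup is purely illustrative, not tied to any real physical model):

    # Logistic map x -> r*x*(1-x) with r = 4: a standard chaotic regime.
    def trajectory(x0, steps, r=4.0):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = trajectory(0.3, 60)
    b = trajectory(0.3 + 1e-12, 60)  # perturb the 12th decimal place

    for step in (0, 20, 40, 60):
        print(step, abs(a[step] - b[step]))
    # The gap roughly doubles every step: negligible at steps 0 and 20,
    # order-1 by steps 40-60, at which point the two runs are no more
    # correlated than random guesses.

Two trajectories that start 10^-12 apart become uncorrelated within a few dozen steps, so each extra step of reliable prediction costs more digits of initial-condition precision (and correspondingly more measurement and compute), which is the exponential blow-up described above.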
Consider weather prediction. Fluid dynamics are chaotic, so that's a good example of something where no amount of compute is sufficient in the general case. An ASI, not being dumb, will of course immediately recognize this, and realize it is has to solve for the degenerate case. It therefore implements the much easier sub-goal of removing the atmosphere. Humans will naturally object to this if they find out, so it logically proceeds with the sub-sub-goal of killing all humans. What's the weather next month? Just a moment, releasing autonomous murder drone swarm...
Individual particle interactions are not chaotic. Simulating them one timestep at a time would take linear time in the number of particles.
They're only chaotic if you treat them in aggregate, which a superintelligence wouldn't do. It would be less lossy to get all the positions of the particles and figure out exactly what each one would do.
Something has to compute the universe, since it is currently running...
Spot on.
Frankly, I’m unsure if it’s meant to be satire.
Isn’t this just an argument against philosophers?
> Yudkowsky and Soares maintain that if anyone builds AGI, everyone dies. One could equally maintain that if nobody builds it, everyone dies. In fact, most people are already dead. The rest of us are on course to follow within a few short decades. For many individuals—such as the elderly and the gravely ill—the end is much closer. Part of the promise of superintelligence is that it might fundamentally change this condition.
wtf? death is part of life. is he seriously arguing that if we don't build AGI people will "keep dying"? and suggesting that is equally bad as extinction (or something worse, matrix-like)?
i don't think life would be as colorful and joyful without death. death is what makes life as precious as it is.
The paper again largely skips the issue that AGI cannot be sold to people: either you're trying to swindle people out of money (all the AI startups), or transactions like that become meaningless because your AI runs the show anyway.
Companies developing AI don't worry about this issue so why should we?
They know the truth. Current AI is a bit useful for some things. The rest is hype.