> As AI gets smarter, access to AI will be a fundamental driver of the economy, and maybe eventually something we consider a fundamental human right.
My product is going to be the fundamental driver of the economy. Even a human right!
> Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer.
How?
> We are particularly excited to build a lot of this in the US; right now, other countries are building things like chips fabs and new energy production much faster than we are, and we want to help turn that tide.
There's the appeal to the current administration.
> Over the next couple of months, we’ll be talking about some of our plans and the partners we are working with to make this a reality. Later this year, we’ll talk about how we are financing it
Beyond parody.
Not a word or whisper about environmental impact, either. I mean at least do some hand waving or something. I feel like a habitable planet is a fundamental right.
Not to defend what Altman is saying, but is OpenAI actually using, or going to use, that much power? This Reuters source says US power consumption will reach a record 4,187 TWh in 2025: https://www.reuters.com/business/energy/us-power-use-reach-r...
The US is the second most energy-hungry country on Earth (the first one being China, a country that accounts for almost 30% of the global manufacturing output), so I think it's better to compare it against the rest of the world:
https://en.m.wikipedia.org/wiki/List_of_countries_by_electri...
Every country below the top 40 consumes less than 87.6 TWh per year; that includes developed countries like Finland and Belgium. So yes, 10 gigawatts is a lot of power.
Adding 10 GW of continuous load, as they plan, works out to about 87.6 TWh per year, roughly 2% of that US total.
That's huge. The hope is that this can drive a renewables or nuclear rollout, but I'm not sure that hope is realistic.
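For anyone who wants to check the arithmetic, a quick back-of-envelope in Python (treating the 10 GW as continuous load; the 4,187 TWh figure is the Reuters estimate cited above):

    power_gw = 10
    hours_per_year = 24 * 365                         # 8760
    energy_twh = power_gw * hours_per_year / 1000     # GWh -> TWh, ~87.6
    us_total_twh = 4187                               # projected 2025 US consumption
    print(energy_twh, round(100 * energy_twh / us_total_twh, 1))  # 87.6 TWh, ~2.1%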
Altman is an investor in OKLO. I'm expecting an announcement with companies like that, where they can use federal dollars to speed up R&D.
No worries, once we have AGI we will ask it how to make up for all the emissions and solve climate change ...
AGI answer: "Individual people must change their habits"
Maybe it'll be "There is as yet insufficient data for a meaningful answer".
Duh, you just ask the magical AI "how fix global warming" and it fixes global warming, same way it'll cure cancer magically.
You skipped any attempt to prove his statements wrong; this is just reddit-level sneering with zero discussion material.
Sam Altman skipped any attempt to prove his own statements right, so...
So be better than him..?
It's not worth anyone's time to meticulously fact check known (and I'm being kind here) 'exaggerator' Sam Altman, because by the time you're done, he's already spread 10 more 'exaggerations'.
Sam Altman has been a joke for a while now; I've only heard his investors defend him, angling for a bump in their next round. Is that who you are?
There's nothing to seriously discuss here.
When people have access to food and shelter as human rights then we can entertain nonsense.
Maybe the other countries aren't rounding up engineers working on opening factories and detaining them in inhumane conditions.
Yes, this post ironically makes me more bearish
Stop thinking and give them money.
But for real, the leap from GPT-4 to GPT-5 was nowhere near as impressive as the one from GPT-3 to GPT-4. They'll have to do a lot more to give any weight to their usual marketing ultra-hype.
The jump from GPT-4 through o3 to GPT-5 was very impressive.
Agreed. Their naming conventions in a way really broke the perception of progress. GPT-4 to o3 or GPT-5 is truly impressive. The leap from GPT-4o to GPT-5 is less impressive but GPT-4o is generally recognized as GPT-4.
All that being said, it does seem like OpenAI and Anthropic are on a quest for more dollars by promoting fantasy futures where there is not a clear path from A to B, at least to those of us on the outside.
I think people have a lot of rose-tinted glasses and fondness for those early days, combined with general usability benchmarks being mostly saturated now. GPT-3.5 would say Dallas was the capital of the USA, but GPT-4 got it right every time!
GPT-4 launched with 8k context. It hallucinated regularly. It was slow. One-shotting code was unheard of, you had to iterate and iterate. It fell over even doing basic math problems.
GPT-5 Thinking, on the other hand, is so capable that the average person wouldn't really be able to test its abilities. It's really only experts operating in their own domain who can find its stumbling blocks.
I think because we have seen these constant incremental updates, it creates a staircase with small steps, but if you really reflect and look back, you'll see the actual capability gap from 3.5 to 4 is way, way smaller than the gap from 4 to 5. This is echoed in benchmarks too: GPT-5 is solving problems wildly beyond what GPT-4 was capable of.
What problems?
No offense, but your comment is basically HN parody. OpenAI created AI tech decades ahead of estimates. And they just signed a $100B deal with Nvidia. They are actually doing things that are astonishing.
Every engineer I see in coffee shops uses AI. All my coworkers use AI. I use AI. AI nearly solved protein folding. It is beginning to unlock personalized medicine. AI absolutely will be a fundamental driver of the economy in the future.
Being skeptical is still reasonable... but flippant dismissal of legitimately amazing accomplishments is peak HN top comment.
> OpenAI created AI tech decades ahead of estimates. And they just signed a $100B deal with Nvidia.
Definitely don't look into the financial details of that deal with Nvidia!
> Every engineer I see in coffee shops uses AI. All my coworkers use AI. I use AI.
Okay
> AI nearly solved protein folding.
Folding@home predates OpenAI by fifteen years and GPT-3 by twenty. Do not fall for Altman's conflation of LLMs with every other form of machine learning that he and OpenAI had nothing to do with!
> Definitely don't look into the financial details of that deal with Nvidia!
could you elaborate on what you mean?
I don't think there's any criticism of the (remarkable) things which have been achieved so far, more the breathless hype about how AI is going to solve all our current and future problems if we just keep shovelling money and energy in. Predicting the future is hard, and I don't think Sam is particularly better at knowing what's going to happen in ten years' time than anyone else.
AI will become "something we consider a fundamental human right", according to the guy that wants to sell you access to it
So it's going to be regulated like a utility?
Yeah, just like privatised utilities that operate solely for the profit of execs and investors, with a complete disregard for regulations or best practices, only to hide behind the government not regulating enough when things eventually go wrong.
No, the government is going to provide it to everyone with money that it prints or collects as taxes.
He'll have no problem with that.
You can get your drinking water from a utility, or you can get bottled water. Guess which one he's gonna be selling?
And if you think for a second that the "utility" models will be trained on data as pristine as the data that the "bottled" models will be trained on, I've got a bridge in Brooklyn to sell you. (The "utility" models will not even have any access to all of the "secret sauce" currently being hoarded inside these labs.)
Essentially we can all expect to go back to the Lexis-Google type dichotomy. You can go into court on Google searches, nothing's stopping you. But nearly everyone will pay for LexisNexis because they're not idiots and they actually want to compete.
Great analogy! Look up Dasani some time.
OpenAI is like privatizing water. It's a "fundamental right", but I am one of the few to provide it.
Y2K errors in old COBOL code kick-started the Indian IT sector, which then led to immense economic progress and mass-scale reduction in poverty. I hope LLMs pepper everything they touch with many such errors, so that nations of Africa and poorer parts of Latin America (which can't do cheap manufacturing due to a lack of infrastructure and capital) can also begin their upward economic journey by providing services to fix these mistakes.
In order to help reduce global poverty (much of which was caused by colonialism), it is the moral and ethical duty of the Global North to adopt LLMs on a mass scale and use them in every field imaginable, and then give jobs to the global poor to fix the resulting mess.
I am only 10% joking.
That's funny, but unless LLM bugs break foundational ML codebases beyond human repair (and somehow also delete all existing code, research, and researchers), the models will likely just get better than people at this in a couple years. I mean, the trajectory so far is obvious.
disco-stu-pointing-at-chart.gif
Go short some stocks if you're so certain the trend will change any minute now (like it was always about to for the past two years).
Unfortunately as they say, the market can remain irrational longer than I can remain solvent.
>(much of which was caused by colonialism)
I found the 10%
Reading replies like this, I am left to wonder if maybe an AI apocalypse isn't such a bad idea.
>it is the moral and ethical duty of the Global North
This is also a pretty good joke
I know, I wrote it. Glad you liked it, not so glad that you then chose to go full mask off.
What mask do you think I just took off?
Civility. Empathy. Humanity.
> We are particularly excited to build a lot of this in the US; right now, other countries are building things like chips fabs and new energy production much faster than we are, and we want to help turn that tide.
Did Donald call him?
they’re certainly cozy with the administration. this was in the openai RSS feed yesterday: https://openai.com/global-affairs/american-made-innovation/
interestingly, it doesn’t seem to be linked from the “news” section of their website.
> Our vision is simple: we want to create a factory that can produce a gigawatt of new AI infrastructure every week.
The moat will be how efficiently you convert electricity into useful behavior. Whoever industrializes evaluation and feedback loops wins the next decade.
So...
* Nvidia invests $5 billion in Intel
* Nvidia and OpenAI announce a partnership to deploy 10 gigawatts of NVIDIA systems (an investment of up to $100 billion)
* This indirectly benefits TSMC (which implies they'll be investing more in the US)
Looks like the US is cooking something...
Altman fatigue anyone?
What's the serious counter-argument to the idea that a) AI will become more ubiquitous and inexpensive and b) economic/geopolitical success will be tied in some way to AI ability?
Because I do agree with him on that front. The question is whether the AI industry will end up like airplanes: massively useful technology that somehow isn't a great business to be in. If indeed that is the case, framing OpenAI as a nation-bound "human right" is certainly one way to ensure its organizational existence if the market becomes too competitive.
I think the most compelling arguments are:
LLMs aren't AI. They are language-processing tools which are highly effective, and it turns out language is a large component of intelligence, but they aren't AI on their own.
Intelligence isn't the solution or the bottleneck to solving the world's most pressing problems. Famines are political. We know how to deploy clean energy.
Now, that doesn't quite answer your question, but I think it says two things. First, that the time horizon to real AI is still way longer than sama is currently considering. Second, that AI won't be as useful as many believe.
Right, but if you just replace AI with LLM in my comment, I'm not sure it really changes. "Real AI" might not be necessary to the two things I wrote.
I agree that all of the predictions regarding AI are probably overblown if they're just LLMs. But that might not matter if we're just talking about geopolitics.
Yea, that's fair. And if there's enough money behind something, even if it's not great, it can still bend the whole world. I think with comments like yours, people take them (at least I did) to be slanted toward actually saying something like "what's the argument against AI 2027?", which isn't fair, and is why the hype can be so damaging to honest discourse.
So I cannot think of a good argument for why this isn't going to change the world, even if that change looks more like the "AI as a normal technology"[0] argument or simply a slop-apocalypse.
0: https://knightcolumbia.org/content/ai-as-normal-technology
Maybe AI will become more ubiquitous. But I predict LLMs will be capped by the amount of training data present in the wild.
Ubiquity doesn't depend on the AI getting much better as much as it depends on the computational cost going down (i.e., better hardware + software optimizations). When you can put a ChatGPT-class model locally on every desktop or phone, people will use it even if the accuracy or safety isn't quite there.
Just look at how people are using Grok on Twitter, or how they're pasting ChatGPT output to win online arguments, or how they're trusting Google AI snippets. This is only gonna escalate.
That said, this is probably not the future Sam Altman is talking about. His vision for the future must justify the sky-high valuations of OpenAI, and cheap ubiquity of this non-proprietary tech runs counter to that. So his "ubiquity" is some sort of special, qualified ubiquity that is 100% dependent on his company.
>When you can put a ChatGPT-class model locally on every desktop or phone, people will use it even if the accuracy or safety isn't quite there.
Will they though?
>Just look at how people are using Grok on Twitter, or how they're pasting ChatGPT output to win online arguments, or how they're trusting Google AI snippets. This is only gonna escalate.
But will they though?
That's why the competitive moat for frontier LLMs is access to proprietary training data. OpenAI and their competitors are paying fortunes to license private data sets, and in some cases even hiring human experts to write custom documents on specific topics as additional training data. This is how they hope to stay ahead of open-source alternatives.
I think it'll be slightly different - without clearly marking AI-generated content, it'll be effectively impossible to find new content that isn't sold to you in pristine packages already, and even that you just sorta have to trust.
Of course, you can't train LLMs on LLM-generated content.
I'm more worried that publicly available LLMs "will be capped by the amount of training data present in the wild". But private LLMs, available only to the wealthy and powerful, will have additional, more pristine and accurate, data sources made available to them for training.
Think about the legal field. The masses tend to use Google, whereas the wealthy and powerful all use LexisNexis. Who do you think has been winning in court?
I don't think the masses are representing themselves in court... and even then, legal text is borderline obfuscation it's so poorly designed.
> provide customized tutoring to every student on earth
It could start by figuring out how to keep kids from using AI to write all their essays.
Can someone help me make some sense of the 10 gigawatts of compute?
Colossus has 230K GPUs (150,000 H100 GPUs, 50,000 H200 GPUs and 30,000 GB200 GPUs) [source: https://x.ai/colossus]
Energy usage: up to 150 megawatts of electricity [source: https://en.wikipedia.org/wiki/Colossus_(supercomputer)]
So, when SamA talks about 10 gigawatts of compute, does he mean per day, or GWh (gigawatt-hours)?
Video. It's for video.
We don't have the compute to do video on demand right now like we do images or text or audio.
Combining all the modalities together, smoothly, at speed, and for cheap, is going to take a hell of a lot of thinking sand powered by magic rocks.
I think it’s just scale to the moon rhetoric, like “what if we used 100x more compute?”. Since the units are power and not energy, I’m going with 10 GW continuous load (for training? inference?) but I think it’s not exactly meant literally
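Taking it literally anyway, a rough sanity check in Python against the Colossus figure cited upthread (the per-accelerator wattage here is my own assumption, and I'm treating the 10 GW as continuous IT load):

    colossus_mw = 150                 # Colossus power figure cited upthread
    target_mw = 10 * 1000             # 10 GW expressed in MW
    print(target_mw / colossus_mw)    # ~67 Colossus-sized clusters
    watts_per_accelerator = 1200      # assumed per-GPU draw incl. cooling/overhead
    print(target_mw * 1e6 / watts_per_accelerator)  # ~8.3 million accelerators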
"Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer. Or with 10 gigawatts of compute, AI can figure out how to provide customized tutoring to every student on earth."
what's in between this line and the next:
"or it might not. Now give me moar money!!!!!"
> Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer.
Something I've never understood: why do AGI perverts think that a superintelligence is any more likely to "cure cancer" than "create unstoppable super-cancer"
AI will do neither of those things because curing or creating cancer requires physical experiments and trials on real people or animals, as does all science outside of computer science (which is often more math than science).
I can see AI being helpful in generating hypotheses, or potential compounds to synthesize, or helping with literature search, but science is a physical process. You don't generally do science just by sitting there and pondering, despite what the movies suggest.
There are a few fully automated wet labs and many semi-autonomous. They are called "Cloud Labs", and they will only become more plentiful. AI can identify and execute the physical experiments after using simulations to filter and score the candidate hypotheses.
Sorry but your concept of AI is marketing driven. It's probabilistic, understanding is past your pay grade.
They're actually right in that there are several attempts to create automated labs to speed up the physical part. But in reality there are only a handful and they are very very narrowly scoped.
But yes, potentially in some narrow domains this will be possible, but it still only automates a part of the whole process when it comes to drugs. How a drug operates on a molecular test chip is often very different than how it works in the body.
Exactly - AI allows for intersections of concepts from the training data; it's up to the user to make sense of it. Thanks for stating this (I end up repeating the same thing in every conversation, but it's common sense).
Somehow it never crossed my mind, but human civilization could plausibly end in the next 10 years. Many thought that if it did, the cause would be a nuclear war; it turns out it's more like the 90's movie 12 Monkeys. I would love to be proven wrong, yet there is no international regulation on AI.
>I would love to be proven wrong, yet there is no international regulation on AI.
What are the chances of advancing in AI regulation before any monumental fuck up that changes the public opinion to "yeah this thing is really dangerous". Like a Hiroshima or Chernobyl but for AI.
Zero. Too much skin in the game all around for the train to stop now.
I'm not sure he's even talking about AGI (which feels unusual for Altman). He might be talking about GPT5 in agentic workflows. Or whatever their next model will be called
> Our vision is simple: we want to create a factory that can produce a gigawatt of new AI infrastructure every week.
If a tenth of this happens, and we don't build a new power plant every ten weeks... then what?
It will create demand for electricity. Demand always creates supply. Perhaps that was the missing link to the mass deployment of solar (because there's just no other way similar amounts of energy can be produced).
Perhaps, but Hank Green published a pretty convincing argument recently that electricity supply has nowhere the necessary elasticity, and the politicised nature of power generation in the US means that isn't going to change:
https://www.youtube.com/watch?v=39YO-0HBKtA
The fundamental driver of the economy will be people, as always, because only people define "value".
A word-generating machine will not "figure out how to cure cancer," but it could help, obviously. AI is an extremely valuable tool, but it does not work on my behalf, except in the same sense a coin sorter does. It's a tool. I still see this thing confuse left and right (no exaggeration), which would be fine - tools aren't perfect - except for all the bullshit from VCs. That is where the danger lies: not with the tool, but with the idolatry.
I am concerned the system encourages suicide, delusional thinking, etc. They need to work that out immediately. It must be held to at least the standard of a lawn mower or a couch with regard to safety. Probably should make it safe before hooking it up to a 10 GW power plant. It does not help perception that the author of this blog post about how awesome the future will be is also building a hideaway bunker for himself.
This is all on you at OpenAI, Anthropic, Microsoft, Twitter etc. Whatever happens.
Well, I see the case that those with great power want factories building AI factories, much like a Factorio simulation, to solve cancer. Sure sama, you have to name the most noble goal imaginable, one everyone agrees on, though we know that's not what drives you. And AI becoming a fundamental right won't work as some kind of subscription service; we also agree on this?
> Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer.
The growth in energy is because of the increase in the output tokens due to increased demand for them.
Models do not get smarter the more they are used.
So why does he expect them to solve cancer if they haven't already?
And why do we need to solve cancer more than once?
> If AI stays on the trajectory that we think it will, then amazing things will be possible. Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer.
At least the statement starts with a conditional, even if it is a silly one.
If you know your growth curve is ultimately going to be a sigmoid, fitting a model with only data points before the inflection point is underdetermined.
> If AI stays on the trajectory that we think it will
Is a statement that no amount of prior evidence can support.
Assuming AI stays on the trajectory they think it will doesn't mean they assume infinite exponential growth. If you know for a fact it's a sigmoid curve, presumably the path you think it will continue on is that sigmoid curve. The trillion-dollar question is whether performance plateaus before or after AI can do the really exciting stuff, and while I may not agree with it myself, the more optimistic position is not an unreasonable belief.
Also you can most certainly fit a sigmoid function only from past data points. Any projection will obviously have error, but your error at any given point should be smaller than for an exponential function with the same sampling.
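A toy sketch of what that fit looks like in practice (just a sketch, assuming numpy/scipy; the data here is synthetic and sampled before the inflection point, and the uncertainty on the fitted ceiling shows how well or poorly it's pinned down):

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, L, k, t0):
        # L: ceiling, k: growth rate, t0: inflection point
        return L / (1.0 + np.exp(-k * (t - t0)))

    rng = np.random.default_rng(0)
    t = np.arange(6.0)
    # synthetic "capability" data, all sampled well before the inflection at t0=8
    y = logistic(t, 100.0, 0.8, 8.0) + rng.normal(0.0, 0.5, t.size)

    popt, pcov = curve_fit(logistic, t, y, p0=[50.0, 1.0, 5.0], maxfev=20000)
    print("fitted ceiling:", popt[0], "+/-", np.sqrt(pcov[0, 0]))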
I think if you're critiquing AI, you should use harsher words.
AI boosters are going to spam the replies to your comment in attempts to muddy the waters.
I'm an AI booster, but he's right: these models are in the sigmoid elbow, and we're being hard-headed trying to push frontier models; it's not sustainable. We need to take a step back and work on the engineering of the systems around the frontier models while trying to find a new architecture that scales better.
That being said, the current models are transformative on their own; once the systems catch up to the models, that will be glaringly obvious to everyone.
Putting aside the questions of (as some comments here have) whether
* AI is a “fucking dud” (you have to be either highly ignorant or trolling to say this)
* Altman is a “charlatan” (definitely not, but it does look like he has some unsavory personal traits, quite common BTW for people at that level)
* the ridiculousness of touting a cancer cure (I guess the post is targeted at the technical hoi polloi, with whom such terminology resonates, but also see protein 3D structure discovery advances)
I found the following to be interesting in this post:
1. Altman clearly signaling affinity for the Abundance bandwagon with a clear reference right in the title. The post is shorter but has the flavor of Marc Andreessen's "It's Time to Build" post from 2020: https://a16z.com/its-time-to-build/
2. He advances the vision of "creat[ing] a factory that can produce a gigawatt of new AI infrastructure every week". This may be called frighteningly ambitious at a minimum: U.S. annual additions have been ~10-20 GW/year for solar builds (https://www.climatecentral.org/report/solar-and-wind-power-2...)
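Some quick arithmetic on that second point, using the solar figures cited above (a rough comparison, not a like-for-like one, since "AI infrastructure" here bundles compute with power):

    gw_per_week = 1
    print(gw_per_week * 52)           # 52 GW/year of new AI infrastructure
    us_solar_gw_per_year = (10, 20)   # recent US utility-scale solar additions cited above
    print(52 / us_solar_gw_per_year[1], 52 / us_solar_gw_per_year[0])  # ~2.6x to 5.2x that pace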
fix'd: Abundant Money
As a technical user of AI, I think there is certainly value in the capabilities of the current IDE/agentic systems, and as a builder of AI systems professionally I think there is enterprise value as well, although realizing that value in a meaningful way is an ongoing challenge/work in progress. There is also clearly a problem with AI slop, both in codebases and in other professional deliverables.
Having said that, what’s more interesting to me is whether we have seen AI produce novel and valuable outputs independently. Altman asserts that 10GW could possibly “cure cancer”, but frankly I’d like to see any discrete examples of AI advancing frontier knowledge areas and moving the needle in small but measurable ways that stand up to peer review.
Before we can cure cancer or have world peace through massive consumption of power and spend, I’d like to see a meaningful counterpoint to the argument that AI, as a technology derived from all human knowledge, is incapable of extending beyond the current limits of human knowledge. Intuitively I think AI should be capable of novel scientific advancement, but I feel like we’re short on examples.
I have stopped reading all the Sam Altman posts. Not because I am a hater or anything. Maybe AI is a bubble, maybe it is not, but one thing is clear: your ability to do something is reinforced in neural pathways that you exercise on a regular basis. If you let LLMs and AI do your architecture, database, coding, testing, and just about all the things you are dreaming of, then gradually but surely your brain will lose these pathways and you won't be able to process and IDEATE code anymore the way you do currently. This is one reason I don't intend to use AI for anything outside file format conversions and maybe image generation.
I hear you, but I don't understand the logic. If you were wealthy enough to have an executive assistant or a chief of staff who handled most of your email and admin on your behalf, would your instinct be to say, "I will still write my own emails because I don't want that skill to atrophy"? No. It's simply not high-value work. I'd assume you'd rather do something else.
You don't see the ultra-wealthy say "Oh no! Not my ability to still do 'architecture, database, coding, testing' on my own!" They just move further and further up the stack.
And I think this is a useful frame for everyone else: just move further up the stack.
Again, I hear you. As a fellow nerd, I love all of these activities too. Computers can be really fun, endlessly fulfilling, truly. But I have the awareness to say to myself, "Ya, but seems like that may have been a temporary phenomenon ... of getting to control and master these machines, just like I don't really crave to hack away at stone tools anymore, because that's not the time period I was born in."
Except you got one part of that wrong. Most beginners are trying to generate entire UIs, APIs, designs, database schemas, and god knows what else with AI. The concept of the so-called marginal AI user simply doesn't exist, and you know it.
I think that's a passing artifact of the current phase of AI development we find ourselves in. Capabilities (note how I'm not saying 'model intelligence', as I think various agentic flows and robust scaffolding/harnesses can lead to capabilities growth that goes beyond model intelligence plateauing) will continue to improve such that 'beginners' will gain greater and greater leverage.
If by 'marginal AI user' you mean a user who leverages AI tools to enhance the marginal utility of their labor or tasks (by making them more productive or efficient, broadly defined), then I do think that user archetype definitely exists.
In fact, Sama talks about this too:
https://archive.is/20250109011408/https://blog.samaltman.com...
"We are already in the phase of co-evolution — the AIs affect, effect, and infect us, and then we improve the AI. We build more computing power and run the AI on it, and it figures out how to build even better chips."
I think AI compute is one of the biggest grifts of the century. Capital being redistributed from talented people to this compute, when we can clearly see it is not making a huge difference (OpenAI vs DeepSeek), feels like a grift.
Why is everything so negative? Remember all the people joking about OpenAI when it started? Well... this thread will also be remembered, and I am posting here for future reference: once AI has revolutionized our lives, do not underestimate it.
"What do you mean you won't give us trillions of dollars? Don't you want to cure cancer??" The models are getting more efficient - there's something very weird going on here - like the AI bubble is trying to merge with an energy/datacenter bubble to create a mega bubble.
He might be right about intelligence becoming the new currency in a world where intelligence becomes fungible.
Lots of assumptions about the path to get there, though.
And interesting that he's measuring intelligence in energy terms.
A company is building technology more powerful than nuclear weaponry, and this comment section is thinking they're "overselling" it. Fun.
There is no world in which LLMs are more powerful or impactful than nuclear weapons.
A world where LLMs are used to trick or pit people against each to launch nuclear weapons?
What makes you think LLMs are "more powerful than nuclear weaponry" ?
Nobody will be afraid to use AI.
>technology more powerful than nuclear weaponry
God-like technology that you can avoid by touching grass?
Unless you're talking about a Skynet type rogue ASI, which is probably not gonna happen anytime soon.
Sam Altman gives me dark triad vibes.
Google: Do no Evil.
Apple: Privacy is a fundamental human right. That is why we must control everything. And stop our users from sharing any form of data with anyone other than Apple.
OpenAI: AI is a fundamental Human right.....
There is something about Silicon Valley that is philosophically very odd for the past 15 to 20 years.
Looks like they're trying to seem like more than a profit-seeking corporation: a way of life, a culture, and a value system.
Who decides what is good and evil, and what our human rights are? Is it any of their business? Through their actions they're shaping society.
let me put some more in nvidia now
I think reasonable people will agree that Bitcoin's energy consumption had huge impacts on the cost of power with very little to show for it, at least for the average user.
What are ways in which we can incentivize investment and place societal guardrails so that something similar doesn't happen with AI data centers?
Do governments need to invest in nuclear power?
Scale up energy generation in other ways through renewables?
Insulate or subsidize the average non-corporate electricity consumer through something like rent control?
using abundance discourse to market ai slop is the most innovative thing openai has done yet
"The factory must grow"
"As AI gets smarter, access to AI will be a fundamental driver of the economy, and maybe eventually something we consider a fundamental human right. Almost everyone will want more AI working on their behalf."
I don't buy it at all.
This sounds like complete and total bullshit to me.
Why do you feel that way?
The fundamental driver of the economy is people eating and clothing themselves, not writing memos that are never read.
Food is 13% of US consumer spending, and clothing is 2.7%. Both have declined steadily since the industrial revolution.
I assure you that people being alive is going to be the fundamental driver of the economy no matter what percent of consumer spending it is.
Now cut off people’s access to those and let’s see how the economy does
ELIZA is that way ->.
Try asking "what evidence supports your conclusions?".
years of consistent disappointment with the user experience, along with years of misleading internet propaganda dramatically overselling the quality and power of the underlying technology.
it's a fucking dud.
It surprises me that people still believe this! I've seen AI deliver incredible value over the past year. I believe the application level is utilizing <.5% (probably less) of the total value that can be derived from current foundation models.
It's only a niche weird opinion you'll find on forums like HN.
In the real world, it's immensely useful to millions of people. It's possible for a thing to both be incredibly useful and overhyped at the same time.
What evidence supports your conclusions?
What evidence are you aware of that counters it?
Based on your gung ho attitude I suspect that you are personally invested in "AI products" or otherwise work for a firm that creates "AI products"
Are you Jenny Holzer, “you are trapped on the earth so you will explode” conceptual artist?
My intelligence dropped a few points by reading anything from this charlatan.
It's intelligent to separate intelligence from power. At the macro level, intelligence is often overvalued.
Those guys are so stupid. AI is just going to get weaponized by companies and governments to enslave more people and accumulate riches for the few
Dorks with forks