The process of _actually_ benefitting from technological improvements is not a straight line, and often requires some external intervention.
e.g. it’s interesting to note that the rising power of specific groups of workers as a result of industrialisation + unionisation then arguably led to things like the 5-day week and the 8-hour day.
I think if (if!) there’s a positive version of what comes from all this, it’s that the same dynamic might emerge. There’s already lots more WFH of course, and some experiments with 4-day weeks. But a lot of resistance too.
My understanding is that the 40 hour work week (and similar) was talked about for centuries by workers groups but only became a thing once governments during WWI found that longer days didn't necessarily increase output proportionally.
For a 4 day week to really happen at scale, I'd expect we similarly need the government to decide to roll it out rather than workers groups pushing it from the bottom up.
Generally it only really started being talked about when "workers" became a thing, specifically with the Industrial Revolution. Before that a good portion of work was either agricultural or domestic, so talk of 'shifts' didn't really make much sense.
Oh sure, a standard shift doesn't make much sense unless you're an employee. My point was specifically about the 40 hour standard we use now though. We didn't get a 40-hour week because workers demanded it, we got it because wartime governments decided that was the "right" balance of labor and output.
The most frustrating thing to me about this most recent rash of biz-guy articles doubting the future of AI is the required mention that AI, specifically an LLM-based approach to AGI, is important even if the numbers don't make sense today.
Why is that the case? There's plenty of people in the field who have made convincing arguments that it's a dead end and fundamentally we'll need to do something else to achieve AGI.
Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers.
I'm not a hater, it could be true, but it seems to be gospel and I'm not sure why.
Mapping to 2001 feels silly to me, when we've had other bubbles in the past that led to nothing of real substance.
LLMs are cool, but if they can't be relied on to do real work, maybe they're not change-the-world cool? More like $30-40B-market cool.
EDIT: Just to be clear here. I'm mostly talking about "agents"
It's nice to have something that can function as a good Google replacement especially since regular websites have gotten SEOified over the years. Even better if we have internal Search/Chat or whatever.
I use Glean at work and it's great.
There's some value in summarizing/brainstorming too etc. My point isn't that LLMs et al aren't useful.
The existing value though doesn't justify the multi-trillion dollar buildout plans. What does is the attempt to replace all white collar labor with agents.
That's the world changing part, not running a pretty successful biz, with a useful product. That's the part where I haven't seen meaningful adoption.
This is currently pitched as something that has a nonzero chance of destroying all human life; we can't settle for "Eh it's a bit better than Google and it makes our programmers like 10% more efficient at writing code."
> Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers.
I have a friend who works at PwC doing M&A. This friend told me she can't work without ChatGPT anymore. PwC has an internal AI chat implementation.
Where does this notion that LLMs have no value outside of programming come from? ChatGPT released data showing that programming is just a tiny fraction of queries people do.
Cigarettes were/are a pretty lucrative business. It doesn’t matter if it’s better or worse, if it’s as addictive as tobacco, the investors will make back their money.
> Non-smoking young adults with ADHD-C showed improvements in cognitive performance following nicotine administration in several domains that are central to ADHD. The results from this study support the hypothesis that cholinergic system activity may be important in the cognitive deficits of ADHD and may be a useful therapeutic target.
My own use case is financial analysis and data capture by the models. It takes away the grunt work, so I can focus on the more pleasant aspects of the job; it also means I can produce better quality reports, as I have additional time to look more closely. It also points out things I could have potentially missed.
Free time and boredom spurs creativity, some folks forget this.
I also have more free time, for myself, you're not going to see that on a corporate productivity chart.
Not everything in life is about making more money for some already wealthy shareholders, a point I feel sometimes lost in these discussions, I think some folks need some self-reflection on this point, their jobs don't actually change the world and thinking of the shareholders only gets you so far.
(Not pointed at you, just speaking generally).
No, she's less productive. She just uses it because she wants to do less work, be less likely to get promoted, and have to stay in the office longer to finish her work.
/s
What kind of question is that? Seriously. Are some people here so naive to think that tens of millions out there don’t know when something they choose to use repeatedly multiple times a day every day is making their life harder? Like ChatGPT is some kind of addiction similar to drugs? Is it so hard to believe that ChatGPT is actually productive?
It is the kind of question that takes into account that people thinking that they are more productive does not imply that they actually are. This happens in a wide range of contexts, from AI to drugs.
It absolutely is a question people ask when suspicious of productivity claims.
Lots of things claim to make people more productive. Lots of things make people believe they are more productive. Lots of things fail to provide evidence of increasing productivity.
This "just believe me" mentality normally comes from scams.
That doesn’t seem to me like a good reason to dismiss the question, and especially not that strongly/aggressively. We’re supposed to assume good intentions on this site. I can think of any number of reasons one might feel more productive but in the end not be going much faster. It would be nice to know more about the subject of the question’s experience and what they’re going off of.
Maybe you are not aware of such topics, but yes it is asked often. It is asked for stimulants, for microdosing psychedelics, for behavioural interventions or workplace policies/processes. Whenever there is any kind of productivity claim, it is asked, and it should be asked.
Wow you're completely right and I just completely forgot who you were replying to. I thought you were replying to the person the person you were actually replying to was replying to. Sorry about both my mistake and my previous sentence's convolution!
To do that properly, one needs some kind of control, which is hard to do with one person. It should be doable with proper effort, but it's far from trivial, because it is not enough to measure what you actually did in one condition; you have to compare it with something. And then there can be a lot of noise for n=1: when you use LLMs, maybe you happen to have to solve harder tasks. So you need to do it over quite a lot of time, or make sure the difficulty of tasks is similar. If you have a group of people, you can put them into groups instead and not care as much about these parameters, because you can assume that when you average, this "noise" will cancel out.
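To make that concrete, here's a minimal simulation sketch (every number is a made-up assumption, not a measurement): per-task difficulty varies a lot, and an assumed 20% LLM speedup is invisible on a single task but shows up once you average over many tasks or many people.

    import random

    random.seed(0)

    def task_hours(base_hours, with_llm, speedup=0.8):
        # Hypothetical model: difficulty noise dominates a single observation,
        # while the LLM multiplies time by an assumed factor of 0.8.
        difficulty = random.uniform(0.5, 2.0)
        hours = base_hours * difficulty
        return hours * speedup if with_llm else hours

    # n=1: one task per condition -- the difficulty noise can easily swamp the effect.
    print(task_hours(8, with_llm=False), task_hours(8, with_llm=True))

    # Many tasks (or many people): averaging lets the noise cancel,
    # and the ratio of the averages approaches the assumed 0.8.
    n = 1000
    avg_plain = sum(task_hours(8, False) for _ in range(n)) / n
    avg_llm = sum(task_hours(8, True) for _ in range(n)) / n
    print(avg_plain, avg_llm, avg_llm / avg_plain)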
The problem isn't a delta between what got done and how much it felt like got done. The problem is that it's not known how long it would have taken you to do what got done unless you do it twice: once by hand and once with an LLM, and then compare. Unfortunately, regardless of what you find, HN will be rushing to say N=1, so there's little incentive to report on any individual results.
> This friend told me she can't work without ChatGPT anymore.
It doesn't say she chooses to use it; it says she can't work without using it. At my workplace, senior leadership has mandated that software engineers use our internal AI chat tooling daily, they monitor the usage statistics, and are updating engineering leveling guides to include sufficient usage of AI being required for promotions. So I can't work without AI anymore, but it doesn't mean I choose to.
I mean... there are many situations in life where people are bad judges of the facts. Dating, finances, health, etc, etc, etc.
It's not that hard to imagine that your friend feels more productive than she actually is. I'm not saying it's true, but it's plausible.
The anecdata coming out of programming is mostly that people are only more productive in certain narrow use cases and much less productive in everything else, relative to just doing the work themselves with their sleeves rolled up.
But man, seeing all that code get spit out on the screen FEELS amazing, even if I'm going to spend the next few hours editing it, and the next few months managing the technical debt I didn't notice when I merged it.
> What kind of question is that? Seriously. Are some people here so naive to think that tens of millions out there don’t know when something they choose to use repeatedly multiple times a day every day is making their life harder?
That's just an appeal to masses / bandwagon fallacy.
> Is it so hard to believe that ChatGPT is actually productive?
We need data, not beliefs and current data is conflicting. ffs.
What if people are using LLMs to achieve the same productivity with more cost to the business and less time spent working?
This, to me, feels incredibly plausible.
Get an email? ChatGPT the response. Relax and browse socials for an hour. Repeat.
"My boss thinks I'm using AI to be more productive. In reality, I'm using our ChatGPT subscription to slack off."
That three day report still takes three days, wink wink.
AI can be a tool for 10xers to go 12x, but more likely it's also that AI is the best slack off tool for slackers to go from 0.5x to 0.1x.
And the businesses with AI mandates for employees probably have no idea.
Anecdotally, I've seen it happen to good engineers. Good code turning into flocks of seagulls, stacks of scope 10-deep, variables that go nowhere. Tell me you've seen it too.
You're working under the assumption that punching a prompt into ChatGPT and getting up to grab some coffee while it spits out thousands of tokens of meaningless slop to be used as a substitute for something that you previously would've written yourself is a net upgrade for everyone involved. It's not. I can use ChatGPT to write 20 paragraph email replies that would've previously been a single manually written paragraph, but that doesn't mean I'm 20x more productive.
And yes, ChatGPT is kinda like an addictive drug here. If someone "can't work without ChatGPT anymore", they're addicted and have lost the ability to work on their own as a result.
> And yes, ChatGPT is kinda like an addictive drug here. If someone "can't work without ChatGPT anymore", they're addicted and have lost the ability to work on their own as a result.
Come on, you can’t mean this in any kind of robust way. I can’t get my job done without a computer; am I an “addict” who has “lost the ability to work on my own?” Every tool tends to engender dependence, roughly in proportion to how much easier it makes the life of the user. That’s not a bad thing.
There's a big difference between needing a tool to do a job that only that tool can do, and needing a crutch to do something without using your own faculties.
LLMs are nothing like a computer for a programmer, or a saw for a carpenter. In the very best case, from what their biggest proponents have said, they can let you do more of what you already do with less effort.
If someone has used them enough that they can no longer work without them, it's not because they're just that indispensable: it's because that someone has let their natural faculties atrophy through disuse.
Are you really comparing an LLM to a computer? Really? There are many jobs today that quite literally would not exist at all without computers. It's in no way comparable.
You use ChatGPT to do the things you were already doing faster and with less effort, at the cost of quality. You don't use it to do things you couldn't do at all before.
It's no different to a manager that delegates; are they less of a manager because they entrust the work to someone else? No. So long as they do quality checks and take responsibility for the results, where's the issue?
Work hard versus work smart. Busywork cuts both ways.
The recent MIT report on the state of AI in business feels relevant here [0]:
> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.
There's no doubt that you'll find anecdotal evidence both for and against in all variations, what's much more interesting than anecdotes is the aggregate.
I think it's true that AI does deliver real value. It's helped me understand domains quickly, be a better google search, given me code snippets and found obscure bugs, etc. In that regard, it's a positive on the world.
I also think it's true that AI is nowhere near AGI level. It's definitely not currently capable of doing my job, not by a long shot.
I also think that throwing trillions of dollars at AI for "a better google search, code snippet generator, and obscure bug finder" is contentious, and a lot of people oppose it for that reason.
I personally still think it's kind of crazy we have a technology to do things that we didn't have just ~2 years before, even if it just stagnates right here. Still going to be using it every day, even if I admittedly hate a lot of parts of it (for example, "thinking models" get stuck in local minima way too quickly).
At the same time, don't know if it's worth trillions of dollars, at least right now.
So all claims on this thread can be very much true at the same time, just depends on your perspective.
That report also mentions individual employees using their own personal subscriptions for work, and points to it as a good model for organizations to use when rolling out the tech (i.e. just make the tools available and encourage/teach staff how they work). That sure doesn’t make it sound like “zero return” is a permanent state.
Ah yes, the study that everyone posts but nobody reads
> Behind the disappointing enterprise deployment numbers lies a surprising reality: AI is already transforming work, just not through official channels. Our research uncovered a thriving "shadow AI economy" where employees use personal ChatGPT accounts, Claude subscriptions, and other consumer tools to automate significant portions of their jobs, often without IT knowledge or approval.
> The scale is remarkable. While only 40% of companies say they purchased an official LLM subscription, workers from over 90% of the companies we surveyed reported regular use of personal AI tools for work tasks. In fact, almost every single person used an LLM in some form for their work.
No. The aggregate is useless. What matters is the 5% that have positive return.
In the first few years of any new technology, most people investing it lose money because the transition and experimentation costs are higher than the initial returns.
But as time goes on, best practices emerge, investments get paid off, and steady profits emerge.
No, on the consumer end. The whole point is that the 5% profitable is going to turn to 10%, 25%, 50%, 75% as companies work out how to use AI profitably.
It always takes time to figure out how to profitably utilize any technological improvement and pay off the upfront costs. This is no exception.
> This friend told me she can't work without ChatGPT anymore
I am curious what kind of work she is using ChatGPT for, such that she cannot do without it?
> ChatGPT released data showing that programming is just a tiny fraction of queries people do
People are using it as a search engine, getting dating advice and everything under the sun. That doesn't mean there is business value, so to speak. If these people had to pay, say, $20 a month for this access, would they be willing to do so?
The poster's point was that coding is an area which is paying consistently for LLM models, so much so that every model has a coding-specific version. But we don't see the same sort of specialized models for other areas, and adoption there is low to nonexistent.
Greater output doesn't always equal greater productivity. In my days in the investing business we would have junior investment professionals putting together elaborate and detailed investment committee memos. When it came time to review a deal in the investment committee meetings we spent all our time trying to sift through the content of the memos and diligence done to date to identify the key risks and opportunities, with what felt like a 1:100 signal to noise ratio being typical. The productive element of the investment process was identifying the signal, not producing the content that too often buries the signal deeper. Imo, AI tools to date make it so much easier to create content which makes it harder to be productive.
ChatGPT automates much of my friend's work at PwC making her more productive --> not a sign that ChatGPT has any value
Farming machines automated much of what a farmer used to have to do by himself making him more productive --> not a sign that farming machines have any value
The output of a farm is food or commodities to be turned into food.
The output of PwC -- whoops, here goes any chance of me working there -- is presentations and reports.
“We’re entering a bold new chapter driven by sharper thinking, deeper expertise and an unwavering focus on what’s next. We’re not here just to help clients keep pace, we’re here to bring them to the leading edge.”
That's on the front page of their website, describing what PwC does.
Now, what did PwC used to do? Accounting and auditing. Worthwhile things, but adjuncts to running a business properly, rather than producing goods and services.
> Try building something new in claude code (or codex etc) using a programming language you have not used before. Your opinion might change drastically.
Try changing something old in claude code (or codex etc) using a programming language you have used before. Your opinion might change drastically.
I did just that and I ended up horribly regretting it.
The project had to be coded in Rust, which I kind of understand but never worked with. Drunk on AI hype, I gave it step by step tasks and watched it produce the code. The first warning sign was that the code never compiled at the first attempt, but I ignored this, being mesmerized by the magic of the experience.
Long story short, it gave me quick initial results despite my language handicap. But the project quickly turned into an overly complex, hard to navigate, brittle mess. I ended up reading the Rust in Action book and spending two weeks cleaning and simplifying the code. I had to learn how to configure the entire tool chain, understand various cargo deps and the ecosystem, setup ci/cd from scratch, .... There is no way around that.
It was Claude Code Opus 4.1 instead of Codex but IMO the differences are negligible.
AI can be quite impressive if the conditions are right for it. But it still fails at so many common things for me that I'm not sure if it's actually saving me time overall.
I just tried earlier today to get Copilot to make a simple refactor across ~30-40 files. Essentially changing one constructor parameter in all derived classes from a common base class and adding an import statement. In the end it managed ~80% of the job, but only after messing it up entirely first (waiting a few minutes), then asking again after 5 minutes of waiting if it really should do the thing and then missing a bunch of classes and randomly removing about 5 parenthesis from the files it edited.
Just one anecdote, but my experiences so far have been that the results vary dramatically and that AI is mostly useless in many of the situations I've tried to use it.
One thing I like for this type of refactoring scenario is asking it to write a codemod (which you can of course do yourself but there's a learning curve). Faster result that takes advantage of a deterministic tool.
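For the constructor-parameter refactor described a couple of comments up, a codemod is the kind of one-off tool you can ask the model to write and then run yourself, so the 30-40 file edit is deterministic and diffable. A minimal sketch in Python; the base class name, new parameter, and import path are hypothetical stand-ins, not from that codebase:

    import re
    from pathlib import Path

    NEW_IMPORT = "from widgets.config import RenderOptions\n"

    def add_option(match):
        # Append the new keyword argument to an existing super().__init__ call.
        args = match.group(1).strip()
        prefix = f"{args}, " if args else ""
        return f"super().__init__({prefix}options=RenderOptions())"

    for path in Path("src").rglob("*.py"):
        text = path.read_text()
        if "BaseWidget" not in text:        # only touch files with derived classes
            continue
        updated = re.sub(r"super\(\)\.__init__\((.*?)\)", add_option, text)
        if NEW_IMPORT.strip() not in updated:
            updated = NEW_IMPORT + updated  # crude: prepend the import once
        if updated != text:
            path.write_text(updated)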
This is exactly my experience. We wanted to modernize a java codebase by removing java JNDI global variables. This is a simple though tedious task. And we tried Claude Code and Gemini. Both of these results were hilarious.
> using a programming language you have not used before
haven't we established that if you are a layman in an area, AI can seem magical? Try doing something in your established area and you might get frustrated. It will give you the right answer with caveats: code which is too verbose, performance-intensive, or sometimes ignoring best security practices.
> Try building something new in claude code (or codex etc) using a programming language you have not used before. Your opinion might change drastically.
So it looks best when the user isn't qualified to judge the quality of the results?
Average programmers do not produce average software; the former implements code, the latter is the full picture and is more about what to build, not how to build it. You don't get a better "what to build" by having above-average developers.
Anyway we don't need more efficient average programmers, time-to-market is rarely down to coding speed / efficiency and more down to "what to build". I don't think AI will make "average" software development work faster or better, case in point being decades of improvements in languages, frameworks and tools that all intend to speed up this process.
Yes. The "true" average software quality is far, far lower than the average person perceives it to be. ChatGPT and other LLM tools have contributed massively to lowering average software quality.
I don’t understand how your three sentences mesh with each other. In any case, making the development of average software more efficient doesn’t by itself change anything about its quality. You just get more of it faster. I do agree that average software quality isn’t great, though I wouldn’t attribute it to LLMs (yet).
Yeah I've used it for personal projects and it's 50/50 for me.
Some of the stuff generated I can't believe is actually good to work with long term, and I wonder about the economics of it. It's fun to get something vaguely workable quickly though.
Things like deepwiki are useful too for open source work.
For me though the core problem I have with AI programming tools is that they're targeting a problem that doesn't really exist outside of startups, not writing enough code, instead of the real part of inefficiency in any reasonably sized org, coordination problems.
Of course if you tried to solve coordination problems, then it would probably be a lot harder to sell to management because we'd have to do some collective introspection as to where they come from.
The same way you assess results in a programming language you have used before. In a more complicated project that might mean test suites. For a simple project (e.g. a Bash script) you might just run it and see if it does what you expect.
The way I assess results in a familiar programming language is by reviewing and reasoning through the code. Testing is necessary, but not sufficient by any means.
Out of curiosity, how do you assess software that you didn't write and just use, and that is closed source? Don't you just... use it? And see if it works?
> using a programming language you have not used before
But why would I do that? Either I'm learning a new language in which case I want to be as hands-on as possible and the goal is to learn, not to produce. Or I want to produce something new in which case, obviously, I'd use a toolset I'm experienced in.
There are plenty of scenarios where you want to work with a new language but you don't want to have to dedicate months/years of your life to becoming expert in it because you are only going to use it for a one-time project.
For example, perhaps I want to use a particular library which is only available in language X. Or maybe I'm writing an add-on for a piece of software that I use frequently. I don't necessarily want to become an expert in Elisp just to make a few tweaks to my Emacs setup, or in Javascript etc. to write a Firefox add-on. Or maybe I need to put up a quick website as a one-off but I know nothing about web technologies.
In none of these cases can I "use a toolset I'm experienced in" because that isn't available as an option, nor is it a worthwhile investment of time to become an expert in the toolset if I can avoid that.
It's silly to say that the only objective that will vindicate AI investments is AGI.
Current batch of deep learning models are fundamentally a technology for labor automation. This is immensely useful in itself, without the need to do AGI. The Sora2 capabilities are absolutely wild (see a great example here of what non-professional users are already able to create with it: https://www.youtube.com/watch?v=HXp8_w3XzgU )
So only looking at video capabilities, or at coding capabilities, it's already ready to automate and upend industries worth trillions in the long run.
The emerging reasoning capabilities are very promising, able to generate new theories and make scientific experiments in easy to test fields, such as in vitro drug creation. It doesn't matter if the LLM hallucinates 90% of the time, if it correctly reasons a single time and it can create even a single new cancer drug that passes the test.
These are all examples of massive, massive economic disruption by automating intellectual labor, that don't require strict AGI capabilities.
Regardless of my opinions on if you're correct about this, I'm not an ML expert so who knows, I'd be very happy if we cured cancer so I hope you're correct and the video is a cool demo.
I don't believe the risk vs reward on investing a trillion dollars+ is the same when your thesis changes from "We just need more data/compute and we can automate all white collar work"
to
"If we can build a bunch of simulations and automate testing of them using ML then maybe we can find new drugs" or "automate personalized entertainment"
The move to RL has specifically made me skeptical of the size of the buildout.
If you take the total investment into AI and divide it by, say, $100k, that's how many man-years of labor you'd need to replace with AI for it to be cost-effective as labor automation. The numbers aren't that promising given the current level of capability.
Don't even need to get too fancy with it. OpenAI has publicly committed to ~$500B in spending over the next several years (never mind that even they don't expect to actually bring that much revenue in).
$500B/$100,000 is 5 million, or 167k 30-year careers.
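As a sanity check, the same back-of-envelope in code (the $500B and $100k figures are just the ones from this thread; the rest is arithmetic):

    committed_spend = 500e9         # ~$500B of committed spending
    cost_per_person_year = 100_000  # assumed fully-loaded cost of one person-year
    person_years = committed_spend / cost_per_person_year
    careers = person_years / 30     # 30-year careers
    print(f"{person_years:,.0f} person-years ~= {careers:,.0f} careers")
    # -> 5,000,000 person-years ~= 166,667 careers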
The math is ludicrous, and the people saying it's fine are incomprehensible to me.
Another comment on a similar post just said, no hyperbole, irony, or joke intended: "Just you switching away from Google is already justifying 1T infrastructure spend."
> Current batch of deep learning models are fundamentally a technology for labor automation. This is immensely useful in itself, without the need to do AGI. The Sora2 capabilities are absolutely wild (see a great example here of what non-professional users are already able to create with it: https://www.youtube.com/watch?v=HXp8_w3XzgU )
> So only looking at video capabilities, or at coding capabilities, it's already ready to automate and upend industries worth trillions in the long run.
Can Sora2 change the framing of a picture without changing the global scene? Can it change the temperature of a specific light source? Can it generate 8K HDR footage suitable for re-framing and color grading? Can it generate minute-long video without losing coherence? Actually, can it generate more than a few seconds without having to reloop with the last frame and produce those obnoxious cuts that the video you pointed to has?
Can it reshoot the same exact scene with just one element altered?
All the video models right now are only good at making short, low-res, barely post-processable video. The kind of stuff you see on social media. And considering the metrics on AI-generated video on social media right now, for the most part, nobody wants to look at it. They might replace the bottom of the barrel of social media posting (hello cute puppy videos), but there is absolutely nothing indicating that they might automate or upend any real industry (be used in the pipeline, yeah maybe, why not; automate? Won't hold my breath).
And as for the argument about their future capabilities, well... for 50+ years now, fusion has been 20 years away.
Btw, the same argument can be made for LLM and image-gen tech for any creative purpose. People severely underestimate just how much editing, re-work, purpose and pre-production is involved in any major creative endeavor. Most models are just severely ill suited for that work. They can be useful for some stuff (specifically, for editing images, AI-driven image fill works decently, for example), but overall, as of right now, they are mostly good at making low quality content. Which is fine I guess, there is a market for it, but it was already a market that was not keen on spending money.
Qwen image and nano banana can both do that with images, there’s zero reason to think we can’t train video models for masking.
This feels a lot like critiquing stable diffusion over hands and text, which the new SOTA models all handle well.
One of the easiest iterations on these models is to add more training cases to the benchmarks. That’s a timeline of months, not comparable to forecasting progress over 20 years like fusion.
Is it now. I don't think being able to accurately and predictably make changes to a shot, a draft, a design is surface level in production.
> Qwen image and nano banana can both do that with images, there’s zero reason to think we can’t train video models for masking.
Tell them to change the tilt of the camera roughly 15 degrees to the left without changing anything else in the scene and tell me if it works.
> This feels a lot like critiquing stable diffusion over hands and text, which the new SOTA models all handle well.
"Well" does a lot of heavy lifting there.
> One of the easiest iterations on these models is to add more training cases to the benchmarks. That’s a timeline of months, not comparable to forecasting progress over 20 years like fusion.
And what if the model itself is the limiting factor? The entire tech? Do we have any proof that in the future the current technologies might be able to handle the cases I spoke about?
Also, one thing that I didn't mention in the first post: assuming the tech does get to the point where it can be used to automate a lot of the production, if throwing a few million at a GPU cluster is enough to "generate" a relatively high quality movie or series, the barrier to entry will be incredibly low. The cost will be driven down, the amount of production will be very high, and overall it might not be a trillion dollar industry any more.
The problem is that it’s already commodified; there’s no moat. The general tech practice has been capture the market by burning vc money, then jack up prices to profit. All these companies are burning billions to generate a new model and users have already proven there is no brand loyalty. They just hop to the new one when it comes out. So no one can corner the market and when the VC money runs out they’ll have to jack up prices so much that they’ll kill their market
> The problem is that it’s already commodified; there’s no moat.
From an economy-wide perspective, why does that matter?
> users have already proven there is no brand loyalty. They just hop to the new one when it comes out.
Great, that means there might be real competition! This generally keeps prices down, it doesn't push them up! It's true that VCs may end up unhappy, but will they be able to do anything about it?
You seem to be making an implicit claim that LLMs can create an effective cancer drug "10% of the time".
Smells like complete and total bullshit to me.
Edit: @eucyclos: I don't assume that Chat GPT and LLM tools have saved cancer researchers any time at all.
On the contrary, I assume that these tools have only made these critical researchers less productive, and made their internal communications more verbose and less effective.
No, that's not the claim. The claim is that we will create a hypothetical LLM that, when tasked with a problem at the scientific frontier of molecular biology will, about 10% of the time, correctly reason about existing literature and reach conclusions that are valid or plausible to similar experts in the field.
Let's say you run that LLM one million times and get 100,000 valid reasoning chains. Let's say among them are variations on 1,000 fundamentally new approaches and ideas, and out of those, you can actually synthesize in the laboratory 200 new candidate compounds, and out of those, 10 substances show strong in-vitro response, and then one of those completely cures some cancerous mice.
There you go, you have substantially automated the intellectual work of cancer research and you have one very promising compound you can start phase 1 trials that you didn't have before AI, and all without any AGI.
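Spelling that hypothetical funnel out as arithmetic (every number here is the comment's assumption, not data):

    runs = 1_000_000
    valid_chains = int(runs * 0.10)   # assumed 10% correctly-reasoned outputs
    novel_ideas = 1_000               # distinct new approaches among them
    synthesizable = 200               # compounds you can actually make
    in_vitro_hits = 10
    cures_in_mice = 1
    print(f"overall yield: {cures_in_mice / runs:.6%} of runs")  # 0.000100%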
My daughter and her friends have their own paid chatgpt. She said she uses it to help with math homework and described to me exactly why I bought a $200 TI-92 in the 90s with a CAS.
I’ve been trying out locally run models on my phone. The iPhone 17 is able to run some pretty nice models, but they lack access to up to date information from the web like ChatGPT has. Wonder if some company like Kagi would offer some api to let your local model plug in and run searches.
Mostly agreed, but AI overviews are a very bad example. Google can just force feed its massive search user base whatever bullshit it damn pleases. Even if it has negative value to the users.
I don't actually think that AI overviews have "negative value" - they have their utility. There are cases where I stop my search right after reading the "AI overview". But "organic" adoption of ChatGPT or Claude or even Gemini and "forced" adoption of AI overviews are two different beasts.
>the required mention that AI, specifically an LLM based approach to AGI, is important...
I don't think that's true. The people who think AI is important call it AI. The skeptics call it LLMs so they can say LLMs won't work. It's kind of a strawman argument really.
i think it is more like maps. before 2004, before google maps, the way we interacted with the spatial distribution of places and things was different. all these ai dev tools like claude code as well as tools for writing, etc. are going to change the way we interact with our computers.
but on the other side, the reason everyone is so gung ho on all this is because these models basically allow for the true personalization of everything. They can build up enough context about you in every instance of you doing things online that they can craft the perfect ad experience to maximize engagement and conversion. that is why everyone is so obsessed with this stuff. they don't care about AGI, they care about maintaining the current status quo where a large chunk of the money made on the internet is done by delivering ads that will get people to buy stuff.
I think there is a good flipside too. LLMs potentially enable generating custom made tooling tailored just for you. If you can get/provide data it's pretty easy to cook up solutions.
As an example: I'd never bother with a mobile app just for myself, since it's too annoying to get into for a somewhat small thing. Now I can chug along and have an LLM quickly fill in my missing basics in the area.
I think there is real value, for instance nowadays I just use chatGPT as google replacement, brainstorming, and for coding stuff. It's quite useful and it would be hard to go back to time without this kind of tool. The 20 bucks a month is more than worth it.
Not sure though whether they make enough revenue, and what the moat will be if the best models more or less converge around the same level. For most normies, it might be hard to spot the difference between GPT-5 and Claude, for instance. Okay, for Grok the moat is that it doesn't pretend to be a pope and censor everything.
> OpenEvidence is actively used across more than 10,000 hospitals and medical centers nationwide and by more than 40% of physicians in the United States who log in daily to make high-stakes clinical decisions at the point of care. OpenEvidence continues to grow by over 65,000 new verified U.S. clinician registrations each month. […] More than 100 million Americans this year will be treated by a doctor who used OpenEvidence.
Likely not true re adoption. According to McKinsey (November 2024), 12% of employees in the US used AI for >30% of their daily tasks. I saw other research early this summer that said 40% of employees use AI.
Adoption is already pretty relevant. The real question is: number of people x token requirement of their daily tasks equals how many tokens, and where are we on that? Based on McKinsey, we are possibly around 17%, unless the remaining 50% of tasks requires just way more complexity, because then that would obviously mean the incremental tasks require maybe exponentially more tokens, and penetration would indeed be low. But for this we need to know the total token need of the daily tasks of an average office worker.
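A rough sketch of that demand calculation, with every input an assumption, just to show the shape of it:

    office_workers = 100e6        # hypothetical addressable knowledge workers
    tasks_per_day = 20            # assumed tasks per worker per day
    tokens_per_task = 2_000       # simple tasks; complex ones could be orders of magnitude more
    daily_demand = office_workers * tasks_per_day * tokens_per_task
    print(f"{daily_demand:.1e} tokens/day")   # -> 4.0e+12 under these assumptions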
There is a middle ground where LLMs are used as a tool for specific use cases, but not applied universally to all problems. The high adoption of ChatGPT is the proof of this. General info, low accuracy requirements - perfect use case, and it shows.
The problem comes in when people then set expectations that a chat solution can solve non-chat problems. When people assume that generated content is the answer but haven't defined the problem.
We're not headed for AGI. We're also not going to just say, "oh, well, that was hype" and stop using LLMs. We are going to mature into an industry that understands when and where to apply the correct tools.
"Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers."
The business model is it is data collection about you on steroids, and that the winning company will eclipse Meta in value.
It's just more ad tech with multipliers, and it will continue to control thought, sway policy and decide elections. Just like social media does today.
If I'm not mistaken he's working with Ezra Klein to push the Democrats to embrace racism instead of popular economic measures.
Edit: I expect that these guys will try to make a J.D. Vance style Republican pivot in the next 4-8 years.
Second Edit:
Ezra Klein's recent interview with Ta-Nehisi Coates is very specifically why I expect he will pivot to being a Republican in the near future.
Listen closely. Ezra Klein will not under any circumstances utter the words "Black People".
Again and again, Coates brings up issues that Black People face in America, and Klein diverts by pretending that Coates is talking about Marginalized Groups in general or Trans People in particular.
Klein's political movement is about eradicating discussion of racial discrimination from the Democratic party.
Third Edit:
@calmoo: I think you're not listening to the nuances of my opinion, and instead having an intense emotional reaction to my well-justified claims of racism.
We're very off topic, but if you're truly interested in Ezra Klein's worldview, I highly recommend his recent interview with Ta-Nehisi Coates. At minimum, I think you'll discover that Ezra's feelings are a lot more nuanced than you're making them out to be.
I don't really want to discuss politics off the bat of my purely 'for your information' comment, but I think you're grossly misrepresenting Ezra Klein's worldview and not listening to the nuances of his opinion, and instead having an intense emotional reaction to his words. Take a step back and try to think a bit more rationally here.
Also your prediction of them making a JD vance republican pivot is extremely misguided. I would happily bet my life savings against that prediction.
Why would we want AGI? I've yet to read a convincing argument in favor (but granted, I never looked into it, I'm still at science-fiction doomerism). One thing that irks me is that people see it as inevitable, and that we have to pursue AGI because if we don't, someone else will. Or, more bleak, if we don't actively pursue it, our malignant future AGI overlords will punish us for not bringing it into existence (Roko's basilisk, the thing Musk and Grimes apparently bonded over because they're weird).
This question is pretty hard to answer without knowing the actual costs.
Current offerings are usually worth more than they cost. But since the prices are not really reflective of the costs it gets pretty muddy if it is a value add or not.
People are always so fidgety about this stuff, for super understandable reasons, to be clear. People not much smarter than anyone else try to reason about numbers that are hard to reason about.
But unless you have the actual numbers, I always find it a bit strange to assume that all people involved, who deal with large amounts of money all the time, lost all ability to reason about this thing. Because right now that would mean at minimum: All the important people at FAANG, all the people at OpenAI/Anthropic, all the investors.
Of course, there is a lot of uncertainty — which, again, is nothing new for these people. It's just a weird thing to assume that.
The point is not whether they are right, but how low the bar is for what constitutes a palatable opinion from bystanders on a topic that other people have devoted a lot of thought and money to.
I just don't think "I don't know anyone who pays for it" or "You know, companies have also failed before" bring enough to the table to be interesting talking points.
I think it's a bit fallacious to imply that the only way we could be in an AI investment bubble is if people are reasoning incorrectly about the thing. Or at least, it's a bit reductive. There are risks associated with AI investment. The important people at FAANG/AI companies are the ones who stand to gain from investments in AI. Therefore it is their job to downplay and minimize the apparent risks in order to maximize potential investment.
Of course at a basic level, if AI is indeed a "bubble", then the investors did not reason correctly. But this situation is more like poker than chess, and you cannot expect that decisions that appear rational are in fact completely accurate.
> All the important people at FAANG, all the people at OpenAI/Anthropic, all the investors.
It's like asking big pharma if medicine should be less regulated, "all the experts agree", well yeah, their paycheck depends on it. Same reason no one at meta tells Zuck that his metaverse is dogshit and no one wants it, they still spent billions on it.
You can't assume everyone is that dumb, but you certainly can assume that the yes men won't say anything other than "yes".
Again, this is not an argument. I am asking: Why do we assume that we know better and people with far more knowledge and insight would all be wrong?
This is not rhetorical question, I am not looking for a rhetorical answer. What is every important decision maker at all these companies missing?
The point is not that they could not all be wrong; they absolutely could. The point is: make a good argument. Being a general doomsayer when things get very risky might absolutely turn out to make you right, but it's not an interesting argument, or any argument at all.
I think you have a point and I'm not sure I entirely disagree with you, so take this as lighthearted banter, but:
Coming from the opposite angle, what makes you think these folks have a habit of being right?
VCs are notoriously making lots of parallel bets hoping one pays off.
Companies fail all the time, either completely (eg Yahoo! getting bought for peanuts down from their peak valuation), or at initiatives small and large (Google+, arguably Meta and the metaverse). Industry trends sometimes flop in the short term (3D TVs or just about all crypto).
C-levels, boards, and VCs being wrong is hardly unusual.
I'd say failure is more of a norm than success, so what should convince us it's different this time with the AI frenzy? They wouldn't be investing this much if they were wrong?
The universe is not configured in such a way that trillion dollar companies come into existence without a lot of things going well over long periods of time, so if we accept money as the standard for being right, they are necessarily right, a lot.
Everything ends and companies are no exception. But thinking about the biggest threats is what people in managerial positions in companies do all day, every day. Let's also give some credit to meritocracy and assume that they got into those positions because they are not super bad at their jobs, on average.
So unless you are very specific about the shape of the threat and provide ideas and numbers beyond what is obvious (because those will have been considered), I think it's unlikely and therefore unreasonable to assume that a bystander's evaluation of the situation trumps the judgement of the people making these decisions for a living, with all their additional resources and information, at any given point.
Here's another way to look at this: Imagine a curious bystander were to judge decisions that you make at your job, while having only partial access to the information that you have to do the job, that you do every day for years. Will this person at some point be right, if we repeat this process often enough? Absolutely. But is it likely, on any single instance? I think not.
> Why do we assume that we know better and people with far more knowledge and insight would all be wrong?
Because of historical precedent. Bitcoin was the future until it wasn't. NFTs and blockchain were the future until they weren't. The Metaverse was the future until it wasn't. Theranos was the future until it wasn't. I don't think LLMs are quite on the same level as those scams, but they smell pretty similar: they're being pushed primarily by sales- and con-men eager to get in on the scam before it collapses. The amount being spent on LLMs right now is way out of line with the usefulness we are getting out of them. Once the bubble pops and the tools have a profitability requirement introduced, I think they'll just be quietly integrated into a few places that make sense and otherwise abandoned. This isn't the world-changing tech it's being made out to be.
You don't have an argument either btw, we're just discussing our points of view.
> Why do we assume that we know better and people with far more knowledge and insight would all be wrong?
Because money and power corrupt the mind, coupled with obvious conflicts of interest. Remember the hype around AR and VR in the mid-2010s? Nobody gives a shit about it anymore. They wrote articles like "Augmented And Virtual Reality To Hit $150 Billion, Disrupting Mobile By 2020" [0]; well, if you look at the numbers today you'll see it's closer to $15B than $150B. Sometimes I feel like I live in a parallel universe... these people have been lying and overpromising for 10, 15 or 20+ years and people still swallow it because it sounds cool and futuristic.
I'm not saying I know better, I'm just saying you won't find a single independent researcher that will tell you there is a path from LLMs to AGI, and certainly not any independent researcher that will tell you the current numbers a) make sense, b) are sustainable
That loss includes the costs to train the future models.
Like Dario/Anthropic said, every model is highly profitable on its own, but the company keeps losing money because they always train the next model (which will be highly profitable on its own).
But even if you remove R&D costs, they’re still billions of dollars short of profitability. That’s not a small hurdle to overcome. And OpenAI has to continue to develop new models to remain relevant.
OpenAI "spent" more on sales/marketing and equity compensation than that:
"Other significant costs included $2 billion spent on sales and marketing, nearly doubling what OpenAI spent on sales and marketing in all of 2024. Though not a cash expense, OpenAI also spent nearly $2.5 billion on stock-based equity compensation in the first six months of 2025"
I use it professionally and I rotate 5 free accounts on all platforms. Money doesn't have any value anymore; people will spend $100 a month on LLMs and another $100 on streaming services. That's like half of my household's monthly food budget.
I'm sure providers will find ways of incorporating the fees into e.g. ISP or mobile network fees so that users end up paying in a less obvious, less direct way.
The cost of serving an "average" user would only fall over time.
Most users rarely make the kind of query that would benefit a lot from the capabilities of GPT-6.1e Pro Thinking With Advanced Reasoning, Extended Context And Black Magic Cross Context Adaptive Learning Voodoo That We Didn't Want To Release To Public Yet But If We Didn't Then Anthropic Would Surely Do It First.
And the users that have this kind of demanding workloads? They'd be much more willing to pay up for the bleeding edge performance.
Of course they will, once they start falling behind not having access to it.
People said the same things about computers (they are just for nerds, I have no use for spreadsheets) and smartphones (I don't need apps/big screen, I just want to make/receive calls).
AI companies don't have a plausible path to profitability because they are trying to create a market while the model is not scalable, unlike other services that have done this in the past (DoorDash, Uber, Netflix etc.).
I don’t like that he cited $12B in consumer spending as the benchmark for demand. Clearly enterprise spending has and will continue to dwarf consumer outlays, to the tune of $100b+ in 2025 on inference alone, and another $150b on AI related services.
I see almost no scenario where the value of this hardware will go away. Even if the demand for inference somehow declines, the applications that can benefit from hardware acceleration are innumerable. Anecdotally, my 2022 RTX 4090 is worth ~30% more used than what I paid for it new, and the trend continues into bigger metal.
As "Greater China" has become the supply bottleneck, it is only rational for western companies to hoard capacity while they can.
Also, as others have pointed out, if the next Pixel phone or iPhone has 'AI' as a bullet point feature, then people buying an iPhone will count as 'consumer AI spend'. That's why they're forcing AI into everything: so they can show that people are using AI, while most people are ambivalent or hostile towards AI features.
I mean, that makes little sense. The desirability of the feature has a price. Putting a GPU in a phone is expensive and unnecessary.
The point of something being a gimmick is that it’s a gimmick. I just got an iPhone with a GPU but I would absolutely have purchased one without if it were possible.
I just heard a thesis that there is no bubble unless there is debt in it. Currently mostly internal funds have been used for increasing capex. More recently we started seeing circularity (NVDA -> OpenAI -> MSFT -> NVDA), but this is less relevant so far. Especially as around ~70% of a data center's cost is viewed to be GPU, so NVDA putting down $100B essentially funds "only" about $140B of data center capex.
META is spending 45% of their _sales_ on capex. So I wonder when they are going to up their game with a little debt sprinkled on top.
I'm trying to pinpoint the canary in the financial coal mine here. There will be a time to pull out of the market and I really want to have an idea of when. I know, timing the market, but this isn't some small market correction we're talking about here.
I don't think there's a good indicator for predicting it ahead of time. If you are worried you could switch from tech stocks to something more conservative.
You can sometimes tell when the collapse has started from the headlines though - stuff like top stocks down 30%, layoffs announced. Which may sound too late but with the dotcoms things kept going down for another couple of years after that.
I feel kind of like a Luddite sometimes but I don't understand why EVERYONE is rushing to use AI? I use a couple different agents to help me code, and ChatGPT has largely replaced Google in my everyday use, but I genuinely don't understand the value proposition of every other companies offerings.
I really feel like we're in the same "Get it out first, figure out what it is good for later" bubble we had like 7 years ago with non-AI ChatBots. No users actually wanted to do anything important by talking to a chatbot then, but every company still pushed them out. I don't think an LLM improves that much.
Every time some tool I've used for years sends an email "Hey, we've got AI now!" my thought is just "well, that's unfortunate"...
I don't want AI taking any actions I can't inspect with a difftool, especially not anything important. It's like letting a small child drive a car.
Just you switching away from Google is already justifying 1T infrastructure spend.
Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.
> Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.
Optimistic view: maybe product quality becomes an actually good metric again as the LLM will care about giving good products.
Has a tech company ever taken 10s or 100s of billions of dollars from investors and not tried to optimize revenue at the expense of users? Maybe it's happened but I literally can't think of a single one.
Given that the people and companies funding the current AI hype so heavily overlap with the same people who created the current crop of unpleasant money printing machines I have zero faith this time will be different.
I think it might be like when Grok was programmed to talk about white genocide and to support Musk's views. It always shoehorned that stuff in but when you asked about it it readily explained that it seemed like disinformation and openly admitted that Musk had a history of using his business to exert political sway.
It's maybe not really "caring" but they are harder to cajole than just "advertise this for us."
For now anyways. There’s a lot of effort being placed into putting up guardrails to make the model respond based on instructions and not deviate. I remember the crazy agents.md files that came out from I believe Anthropic with repeated instructions on how to respond. Clearly it’s a pain point they want to fix.
Once that is resolved then guiding the model to only recommend or mention specific brands will flow right in.
large language models don't "care" about anything, but the humans operating openai definitely care a lot about you making them affiliate marketing money
Google search won't exist in the medium term. Why use a list of static links you have to look through manually if you can just ask AI what the answer is? AI tools like ChatGPT are what Google wanted search to be in the first place.
ChatGPT will have access to a tool that uses real-time bidding to determine what product it should instruct the LLM to shill. It's the same shit as Google but with an LLM which people want to use more than Google.
> Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.
But can we really say that advertisements are more effective today?
From what little I know about SEO it seems nowadays high intent keywords are more important than ever. LLMs might not do any better than Google because without the intent to purchase pushing ads are just going to rack up impression costs.
> when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.
isn't that quite difficult to do consistently? I'd imagine it would be relatively easy to take the same LLM and get it to shit talk the product whose owners had paid the AI corp to shill. That doesn't seem particularly ideal.
It's not reasonable to claim inference is profitable when they've also never released those numbers. Also, the price they charge for inference is not indicative of the price they're paying to provide inference. And, at least in OpenAI's case, they are getting a fantastic deal on compute from Microsoft, so even if the price they charge is reflective of the price they pay, it's still not reflective of a market rate.
Sam has claimed that they are profitable on inference. Maybe he is lying, but I don't think you can state so matter-of-factly that they are losing money on that. They lose money because they dump an enormous amount of money into R&D.
I mean, I think ads will be about as effective as they are now. People need to actually buy more, and if you fill LLMs with ad generation the results will just get shitty the same way Google's search results did. It's not a trillion-dollar return + 20% like you'd want out of that investment.
> ChatGPT has largely replaced Google in my everyday use
This. Organically replacing a search engine (almost) entirely is a massive change.
Applied LLM use cases seemingly popped up in every corner within a very short timespan. Some changes are happening both organically and quickly. Companies are eager to understand and get ahead of adoption curves, driven by both fear and growth potential.
There's so much at play, we've passed critical mass for adoption and disruption is already happening in select areas. It's all happening so unusually fast and we're seeing the side effects of that. A lot of noise from many that want a piece of the action.
Largely speaking across technological trends of the past 200 years, progress is nowhere near flat. 4 generations ago, the idea of talking with a person on the other side of the country was science fiction.
That's because it isn't. What's happening now is mostly executive FOMO. No one wants to be left behind just in case the AI beans turn out to be magic after all...
As much as we like to tell a story that says otherwise, most business decisions are not based on logic but fear of losing out.
The same way that NFTs of ugly cartoon apes were a multi-billion dollar industry for about 28 months.
Edit: People are downvoting this because they think "Hey, that's not right, LLMs are way better than non-fungible apes!" (which is true) but the money is pouring in for exactly the same reason: get the apes now and later you'll be rich!
I don't think Softbank gave OpenAI $40 billion because they have a $80 billion business idea they just need a great LLM to implement. I think they are really afraid of getting left behind on the Next Big Thing That Is Making Everyone Rich.
I think text is the ultimate interface. A company can just build and maintain very strong internal APIs and punt on the UX component.
For instance, suppose I'm using Figma: I want to just screenshot what I want it to look like and have it get me started. Or if I'm using Notion, I want a better search. Nothing necessarily generative, but something like "what was our corporate address". It also replaces the help function if well integrated.
The ultimate would be to build programmable web apps[0], where you could take Gmail and command an LLM to remove buttons, or add other buttons. Why isn't there a button for 'filter unread' front and center? This is super niche but interesting to someone like me.
That being said, I think most AI offerings on apps now are pretty bad and just get in the way. But I think there is potential as an interface to interact with your app
Text is not the ultimate interface. We have the direct proof: every single classroom and almost every single company where programmers play important roles has whiteboards or blackboards to draw diagrams on.
But now LLMs can read images as well, so I'm still incredibly bullish on them.
I'd call text the most versatile interface, but not sold on it being the ultimate. As the old saying goes 'a picture is worth a thousand words' and well crafted guis can allow a user to grok the functionality of an app very quickly.
For AI I'm of the opinion that the best interface is no interface. AI is something to be baked into the functionality of software, quietly working in the back. It's not something the user actually interacts with.
The chat interfaces are, in my opinion, infuriating. It feels like talking to a co-worker who knows absolutely everything about the topic at hand, but who, if you use the wrong terms and phrases, will pretend he has no idea what you're talking about.
But isn't that a limitation of the AI, not necessarily how the AI is integrated into the software?
Personally, I don't want AI running around changing things without me asking to do so. I think chat is absolutely the right interface, but I don't like that most companies are adding separate "AI" buttons to use it. Instead, it should be integrated into the existing chat collaboration features. So, in Figma for example, you should just be able to add a comment to a design, tag @figma, and ask it to make changes like you would with a human designer. And the AI should be good enough and have sufficient context to get it right.
They thought the same thing in the 70s. Text is very flexible, so it serves a good "lowest common denominator", but that flexibility comes at the cost of being terrible to use.
If you haven't gotten an LLM to write you Google/Firefox/whatever extensions to customize Gmail and the rest of the Internet, you're missing out. Someday your programmable web apps will arrive, but making Chrome extensions with ChatGPT is here today.
Bigger companies believe smaller shops can use AI to level the playing field, so they are “transforming their business” and spending their way to get there first.
They don’t know where the threat will come from or which dimension of their business will be attacked, they are just being told by the consulting shops that software development cost will trend to zero and this is an existential risk.
In my eyes, it'd be cheaper for a company to simply purchase laptops with decent hardware specs, and run the LLMs locally. I've had decent results from various models I've run via LMStudio, and bonus points: It costs nothing and doesn't even use all that much CPU/GPU power.
Just my opinion as a FORMER senior software dev (disabled now).
> Just my opinion as a FORMER senior software dev (disabled now).
I'm not sure what this means. Why would being disabled stop you being a senior software developer? I've known blind people who were great devs so I'm really not sure what disability would stop you working if you wanted to.
Edit: by which I mean, you might have chosen to retire but the way you put it doesn't sound like that.
Quite, the typical 5 year depreciation on personal computing means a top-of-the-line $5k laptop works out to a ~$80/month spend... but it's on something you'd already spend for an employee
$2k / 5 years is ~$30/mo, and you'll get a better experience spending another $25/mo on one of the AI services (or with enough people a small pile of H100s)
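A minimal sketch of the amortization math these two comments are doing; the dollar figures are the thread's assumed round numbers, not quotes from any vendor:

```python
# Rough cost comparison: local hardware amortized over 5 years vs. a hosted AI subscription.
# All prices are assumptions taken from the comments above, not real price lists.

def monthly_cost(purchase_price: float, lifetime_years: float = 5.0) -> float:
    """Amortize a one-time hardware purchase over its depreciation period."""
    return purchase_price / (lifetime_years * 12)

high_end_laptop = monthly_cost(5000)   # ~$83/mo, the "~$80/month spend" above
mid_range_laptop = monthly_cost(2000)  # ~$33/mo
hosted_ai_plan = 25.0                  # assumed monthly price of a paid AI service

print(f"high-end laptop only:         ${high_end_laptop:.0f}/mo")
print(f"mid-range laptop + hosted AI: ${mid_range_laptop + hosted_ai_plan:.0f}/mo")
```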
They're cheaper right now because they're operating at a loss. At some point, the bill will come due.
Netflix used to be $8/month for as many streams and password-shares as you wanted for a catalog that met your media consumption needs. It was a great deal back then. But then the bill came due.
Online LLM companies are positioning themselves to do the same bait-and-switch techbro BS we've seen over the last 15+ years.
Yes, they'll be cheaper to run, but will they be cheaper to buy as a service?
Because sooner or later these companies will be expected to produce eye-watering ROI to justify the risk of these moonshot investments and they won't be doing that by selling at cost.
From a simple correlational extrapolation, compute has only gotten cheaper over time. Massively so, actually.
From a more reasoned causal extrapolation hardware companies historically compete to bring the price of compute down. For AI this is extremely aggressive I might add. HotChips 2024 and 2025 had so much AI coverage. Nvidia is in an arms race with so many companies.
All over the last few years we have literally only ever seen AI get cheaper for the same level or better. No one is releasing worse and more expensive AI right now.
Literally just a few days ago Deepseek halved the price of V3.2.
AI expenses have grown, but that's because humans are extremely cognitively greedy. We value our time far more than compute efficiency.
You don't seriously believe that last few years have been sustainable? The market is in a bubble, companies are falling over themselves offering clinically insane deals and taking enormous losses to build market share (people are allowed to spend ten(s) of thousands of dollars in credits on their $200/mo subscriptions with no realistic expectation of customer loyalty).
What happens when investors start demanding their moonshot returns?
They didn't invest trillions to provide you with a service at break-even prices for the next 20 years. They'll want to 100x their investment, how do you think they're going to do that?
I used BofA chat bot embedded in their app recently because I was unable to find a way to request a pin for my card. I was expecting the chat bot to find the link to their website where I can request the pin, and would consider a deep link within their app to the pin request UI a great UX.
Instead, the bot asked a few questions to clarify which account is for the pin and submitted a request to mail the pin, just like the experience talking to a real customer representative.
Next time when you see a bot that is likely using LLM integration, go ahead and give it a try. Worst case you can try some jailbreaking prompts and have some fun.
Meanwhile, last week the Goldman-Sachs chatbot was completely incapable of allowing me to report a fraudulent charge on my Apple Card. I finally had to resort to typing "Human being" three times for it to send me to someone who could actually do something.
With the ever increasing explosion of devices capable of consuming AI services, and internet infrastructure being so ubiquitous that billions of people can use AI...
Even if a little of everyone's day consumes AI services, then the investment required will be immense. Like what we see.
See kids hooked on LLMs. I think most of them will grow up paying for a sub. Not a $15/mo streaming sub, a $50-100/mo cellphone-tier sub. Well, until local models kill that business model.
I think the reason ads are so prolific now is that the pay-to-play model doesn't work well at such large scales... Ads seem to be their only way to make the kind of big money LLM investors will demand.
I don't think you're wrong re: their hope to hook people and get us all used to using LLMs for everything, but I suspect they'll just start selling ads like everyone else.
> I use a couple different agents to help me code, and ChatGPT has largely replaced Google in my everyday use
That's a handwavy sentence, if I have ever seen one. If it's good enough to help with coding and "replace Google" for you, other people will find similar opportunities in other domains.
And sure: Some are successful. Most will not be. As always.
Yeah, and Tesla cross-country FSD just crashed after 60 miles, and Tesla RoboTaxi had multiple accidents within first few days.
Other companies like Waymo seem to do better, but in general I wouldn't hold up self-driving cars as an example of how great AI is, and in any case calling it all "AI" is obscuring the fact that LLMs and FSD are completely different technologies.
In fact, until last year Tesla FSD wasn't even AI - the driving component was C++ and only the vision system was a neural net (with that being object recognition - convolutional neural net, not a Transformer).
Snow (maybe not a foot, but enough to at least cover the lane markings), black ice, and sand drifts are things people experience every day in the normal course of driving, so it's reasonable to expect driverless cars to be able to handle them. Forest fires, tsunamis, lava flows, and tornadoes are emergencies; I think it's a little more reasonable not to have expectations for driverless cars in those situations.
Humans do drive when there's tornadoes. I can't count the hundreds of videos I've seen on TV over the decades of people driving home from work and seeing a tornado.
I notice you conveniently left off "foot of snow" from your critique. Something that is perfectly ordinary "condition where humans actually drive."
Many years, millions of Americans evacuate ahead of hurricanes. Does that not count?
I, and hundreds of thousands of other people, have lived in places where sand drifts across roads are a thing. Also sandstorms, dense fog, sleet, ice storms, dust devils, and hundreds of other conditions in which "humans actually can [and do] drive."
FSD is like AI: Picking the low-hanging fruit and calling it a "win."
There is none, zero value.
What is the value of Sora 2, if even its creators feel like they have to pack it into a social media app with AI-slop reels?
How is that not a testament to how surprisingly advanced and useless at the same time the technology is?
It's in an app made by its creator so they can get juicy user data. If it was just export to TikTok, OpenAI wouldn't know what's popular, just what people have made.
AI figured out something on my mind that I didn’t tell it about yesterday (latest Sonnet). My best advice to you is to spend time and allow the AI to blow your mind. Then you’ll get it.
Sometimes I sit in wonder at how the fuck it’s able to figure out that much intent without specific instructions. There’s no way anyone programmed it to understand that much. If you’re not blown away by this then I have to assume you didn’t go deep enough with your usage.
LLMs cannot think on their own, they’re glorified autocomplete automatons writing things based on past training.
If the “AI figured out something on your mind”, it is extremely likely the “thing on your mind” was present in the training corpus, and survivorship bias made you notice.
Tbh, if Claude is smarter than the average person, and it is, then 50% of the population is not even a glorified autocomplete. Imagine that: all not very bright.
Well, I disagree completely. I think you have no clue what the average person (or below) is like. Look at Instagram or any social media ads: they are mostly scams, and the AI can figure that out while most people don't. Just an example.
Just looking at facts, not trying to humanize or dehumanize anything. When you realize that at least 50% of the population is less intelligent than the AI, things are not great.
LLMs can have surprisingly strong "theory of mind", even at base model level. They have to learn that to get good at predicting all the various people that show up in conversation logs.
You'd be surprised at just how much data you can pry out of an LLM that was merely exposed to a single long conversation with a given user.
Chatbot LLMs aren't trained to expose all of those latent insights, but they can still do some of it occasionally. This can look like mind reading, at times. In practice, the LLM is just good at dredging the text for all the subtext and the unsaid implications. Some users are fairly predictable and easy to impress.
Do you have evidence to support any of this? This is the first time I’ve heard that LLMs exhibit understanding of theory of mind. I think it’s more likely that the user I replied to is projecting their own biases and beliefs onto the LLM.
Basically, just about any ToM test has larger and more advanced LLMs attaining humanlike performance on it. Which was a surprising finding at the time. It gets less surprising the more you think about it.
This extends even to novel and unseen tests - so it's not like they could have memorized all of them.
Base models perform worse, and with a more jagged capability profile. Some tests are easier to get a base model to perform well on - it's likely that they map better onto what a base model already does internally for the purposes of text prediction. Some are a poor fit, and base models fail much more often.
Of course, there are researchers arguing that it's not "real theory of mind", and the surprisingly good performance must have come from some kind of statistical pattern matching capabilities that totally aren't the same type of thing as what the "real theory of mind" does, and that designing one more test where LLMs underperform humans by 12% instead of the 3% on a more common test will totally prove that.
There are several papers studying this, but the situation is far more nuanced than you’re implying. Here’s one paper stating that these capabilities are an illusion:
Well there was that example a while back of some store's product recommendation algo inferring that someone was pregnant before any of the involved humans knew.
That's...not hard. Pregnancy produces a whole slew of relatively predictable behavior changes. The whole point of recommendation systems is to aggregate data points across services.
I’m not going to sit around and act like this LLM thing is not beyond anything humans could have ever dreamed of. Some of you need to be open to just how seminal moments in your life actually are. This is a once a lifetime thing.
That date means nothing though. We have yet to figure out how to run a fusion reactor for any meaningful period of time and we haven't figured out how to do it profitably.
Setting a date for when one opens is just a pipe dream, they don't know how to get there yet.
> I like fusion, really. I’ve talked to some of luminaries that work in the field, they’re great people. I love the technology and the physics behind it.
> But fusion as a power source is never going to happen. Not because it can’t, because it won’t. Because no matter how hard you try, it’s always going to cost more than the solutions we already have.
Deepmind are working on solving the plasma control issue at the moment, I suspect they're probably using a bit of AI.... and I wouldn't put it past them to crack it.
This is the thing with AI: We can always come up with a new architecture with different inputs & outputs to solve lots of problems that couldn't be solved before.
People equating AI with other single-problem-solving technologies are clearly not seeing the bigger picture.
Time travel was the most important invention of the 1800s too, but that goes to show how bad resolving the temporal paradox issue is, now that entire history is gone.
but people say that AI will spit out that fusion reactor, ergo AI investment is prior in the ordo investimendi or whatever it would be called (by an AI)
Why would it be too cheap to meter? You're still heating up water and putting it through a turbine. We've been doing that for ages (just different sources of energy for the heating up part) and we still meter energy because these things cost money and need lots of maintenance.
As we get more and more solar, we see grid-connection charges rise more and more while the electricity itself stays relatively cheap. Fusion won't change that; somebody has to pay for the guy reconnecting cables after a storm.
Cheap, _limitless_ energy from fusion could solve almost every geopolitical/environmental issue we face today. Europe is acutely aware of this at the moment and it's why China and America are investing mega bucks. We will eventually run out of finite energy sources. Even if we do capture the max capacity possible from renewables with 100% efficiency, our energy consumption rates increasing at current rates will eventually exceed this max capacity. Those rates are accelerating. We really have no choice.
There is zero reason to assume that fusion power will ever be the cheapest source of energy. At the very least, you have to deal with a sizeable vacuum chamber, big magnets to control the plasma and massive neutron flux (turning your fusion plant into radioactive waste over time), none of which is cheap.
I'd say limitless energy from fusion plants is about as likely as e-scooters getting replaced by hoverboards. Maybe next millennium.
There was a ton of pain in between. Legions of people lost their livelihoods. This bubble pop will be way worse. Yes, this tech will eventually be viable and useful, but holy hell will it suck in the meantime.
I keep seeing articles like this but does anyone actually think we're not in a bubble?
From what I've seen these companies acknowledge it's a bubble and that they're overspending without a way to make the money back. They're doing it because they have the money and feel it's worth the risk in case it pays off. If they don't spend, another company does, and it hits big they will be left behind. This is at least insurance against other companies beating them.
There's a very real possibility that all the AI research investment of today unlocks AGI, on a timescale between a couple of years and a couple of decades, and that would upend the economy altogether. And falling short of that aspiration could still get you pretty far.
A lot of "AI" startups would crash and burn long before they deliver any real value. But that's true of any startup boom.
Right now, the bulk of the market value isn't in those vulnerable startups, but in major industry players like OpenAI and Nvidia. For the "bubble" to "pop", you need those companies to lose big. I don't think that it's likely to happen.
I think we are in a bubble, which will burst at some point, AI stocks will crash and many will burn, and the growth will resume. Just like the dotcom bubble definitely was a bubble, but it was the foundation of all tech giants of today.
The trouble with bubbles is that it's not enough to know you are in one. You don't know when it will pop, at what level, and how far back it will go.
HN isn't always right. There was massive pushback against self driving and practically everyone was saying it would fail and is a bubble. The level of confidence people had about this opinion was through the roof.
Like people who didn't know anything would say it with such utter confidence it would piss me off a bit. Like how do you know? Well they didn't and they were utterly wrong. Waymo showed it's not a bubble.
AI is an unknown. It has definitely already changed the game. Changed the way we interview and changed the way we code and it's changed a lot more outside of that and I see massive velocity towards more change.
Is it a bubble? Possibly. But the possibly not angle is also just as likely. Either way I guarantee you that 99% of people on HN KNOW for a fact that it's a bubble because they KNOW that all of AI is a stochastic parrot.
I think the realistic answer is we don't actually know if it's a bubble. We don't fully know the limits of LLMs. Maybe it will be a bubble in the sense that AI will become so powerful that a generic AI app can basically kill all these startups surrounding specialized use cases of LLMs. Who knows?
Waymo showed that under tightly controlled conditions humans can successfully operate cars remotely. Which is still really useful, but a far cry from the promise of everyone being able to buy a personal pod on wheels that takes you to and fro, no matter where you want to go, while you sleep that the bubble was premised on. In other words, Waymo has proven the bubble. It has been 20 years since Stanley, and I still have never seen a self-driving car in person. And I reside in an area that was officially designated by the government for self-driving car testing!
> I think the realistic answer is we don't actually know if it's a bubble.
While that is technically true, has there ever not been a bubble when people start dreaming about what could be? Even if AI heads towards being everything we hope it can become, it still seems highly likely that people have dreamed up uses for the potential of AI that aren't actually useful. The PetsGPT.com-types can still create a bubble even if the underlying technology is all that and more.
They are so-called "human in the loop". They don't have a remote driver in the sense of someone sitting in front of a screen playing what looks like a game of Truck Simulator. But they are operated by humans.
It's kind of like when cruise control was added to cars. No longer did you have to worry about directly controlling the pedal, but you still had to remain the operator. In some very narrow sense you might be able to make a case that cruise control is autonomy, but the autonomous-car bubble imagined that humans would be taken out of the picture entirely.
Autonomous cars did have a bubble moment. They were hyped and didn't deliver on the promises. We still don't have level 5 and consumer vehicles are up to level 3. It doesn't mean it's not a useful or cool technology.
All great tech has gone through some kind of hype/bubble stage.
What was promised with self-driving and what we have are orders of magnitude off. We were promised fleets of autonomous taxis - no need to even own a car anymore. We were told truck drivers would be replaced en masse and cargo would drive 24x7 with drivers who never needed breaks. We were told downtown parking lots would disappear since the car would drop you off, drive to an offsite lot, and wait for you. In short, a complete blow-up of the economy, with millions of jobs in shipping lost and hundreds of billions spent on new autonomous vehicles.
None of that happened. After 10 years we got self-driving cabs in 5 cities with mostly good weather. Cool, yes? Blowing up the entire economy and fundamentally changing society? No.
>They're doing it because they have the money and feel it's worth the risk in case it pays off.
If the current work in AI/ML leads to something more fundamental like AGI, then whoever does it first gets to be the modern version of the lone nuclear superpower. At least that's the assumption.
Left outside of all the calculations is the 8 billion people who live here. So suddenly we have AGI--now what? Cures for cancer and cold fusion would be great, but what do you do with 8 billion people? Does everybody go back to a farm or what? Maybe we all pedal exercise bikes to power the AGI while it solves the Riemann hypothesis or something.
It would be a blessing in disguise if this is a bubble. We are not prepared to deal with a situation where maybe 50-80% of people become redundant because a building full of GPUs can do their job cheaper and better.
Also no one is talking about how exposed we are to Taiwan. Nvidia, AMD, Apple, any company building out GPUs (so Google, Microsoft, Meta etc), even Intel a bit, are all manufacturing everything with one company, and it's largely happening in Taiwan.
If China invades Taiwan, why wouldn't TSMC, Nvidia and AMD stock prices go to zero?
I don't catalog shows and episodes where any particular topic comes up, and I follow over 100 podcasts so I don't have a specific list you can fact check me on.
Personally, I couldn't care less if that means you choose not to believe that I hear the Taiwan risk come up often enough.
Charitably, perhaps they're simply asking for podcasts that they would be interested in listening to that cover these topics. Personally, I would like to listen to a podcast that talks about semiconductor development, but I've done approximately zero research to find them so I'm not pressed for an answer :)
Different kind of work for me at least. If I'm not at a desk coding I'm often out working on a farm. You have plenty of time for podcasts while cutting fields.
I don't think they really need to invade for this. It is almost in artillery range (there are rounds that can go 150km).
They also could just send a big rocket barrage onto the factories. I assume it would be very hard to defend from such a short distance.
Then most ports and cities in Taiwan are toward the east (with big mountains on the western side). It would be very bad if China decided to blockade it by shooting at ships from the mainland...
There's also very little the West could do, imo. A land invasion of China or a nuclear war don't seem very reasonable.
> Also no one is talking about how exposed we are to Taiwan.
We aren't? It's one of the reasons the CHIPS Act et al get pushed through, to try to mitigate those risks. COVID showed how fragile supply chains are to shocks to the status quo and has forced a rethink. Check out the book 'World On The Brink' for more on that geopolitical situation.
What’s the theoretical total addressable market for, say, consumer facing software services? Or discretionary spending? That puts one limit on the value of your business.
Another limit would be to think about stock purchases. How much money is available to buy stocks overall, and what slice of that pie do you expect your business to extract?
It’s all very well spending eleventy squillion dollars on training and saying you’ll make it back through revenue, but not if the total amount of revenue in the world is only seventy squillion.
Or maybe you just spend your $$$ on GPUs, then sell AI cat videos back to the GPU vendors?
"The “pop” won’t be a single day like a stock market crash. It’ll be a gradual cooling as unrealistic promises fail, capital tightens, and only profitable or genuinely innovative players remain."
> ...$2 billion in funding at a $10 billion valuation. The company has not released a product and has refused to tell investors what they’re even trying to build. “It was the most absurd pitch meeting,” one investor who met with Murati said. “She was like, ‘So we’re doing an AI company with the best AI people, but we can’t answer any questions.”
I observed how she played the sama drama and I realized she will outplay them all.
> Let’s say ... you control $500 billion. You do not want to allocate that money one $5 million check at a time to a bunch of manufacturers. All I see is a nightmare of having to keep track of all of these little companies doing who knows what.
If only there was like, some sort of intelligence, to help with that..
I don't think it'll contract. The people dumping their money into AI think we are at end of days, new order for humanity type point, and they're willing to risk a large part of their fortune to ensure that they remain part of the empowered elite in this new era. It's an all hands on deck thing and only hard diminishing returns that make the AI takeoff story look implausible are going to cause a retrenchment.
It's probably exacerbated by the fact that everyone invests money now; I get daily ads from all my banking apps telling me to buy stocks and crypto. People know they'll never get anywhere by working or saving, so they're more willing to gamble: high risk, high reward, and they have nothing to lose.
You don't think it will contract just because rich people have bet so much on it that they'll be forced to throw good money after bad? That's the only reason?
I don't think it'll contract because I don't think we'll get a signal that takeoff for sure isn't going to happen, it'll just happen much slower than the hypers are trying to sell, so investors will continue to invest because of sunk costs and the big downside risk of being left behind. I'm sure we'll see a major infrastructure deployment slowdown as foundation model improvements slow, but there are a lot of vectors for optimization of these systems outside the foundation model so it'll be more of a paradigm shift in focus.
Yeah it wouldn't be a bubble if it didn't have that mentality. Every bubble has had that thought and it's the same now. Kind of hard to notice it though when you are in the eye of the storm.
There were people telling me during the NFT craze that I just don't get it and I am dumb. Not that I am comparing AI to it directly because AI has actual business value but it is funny to think back. I felt I was going mad when everyone tried to gaslight me
The final AI push that doesn't lead to a winter will look like a bubble until it hits. We're realistically ~3 years away from fully autonomous software engineering (let's say 99.9% for a concrete target) if we can shift some research and engineering resources towards control, systems and processes. The economic value of that is hard to overstate.
This isn't a comment on timelines, but a Waymo going wild is going to run over and kill people, so it makes sense to be overly conservative with moving forwards. Meanwhile, if someone hacks into a vibecoded website and deletes everything and steals my user data, no one's getting run over by a car.
Sure. The point I was trying to make is that we can see a technology that is amazing, and seemingly does what we want, and yet has so many edge cases that make it unviable commercially.
Every financial bubble has moments where, looking back, one thinks: How did any sentient person miss the signs?
Well, maybe a lot of people already agree with what the author is saying: the economics might crash, but the technology is here to stay. So we don't care about the bubble.
If the tech is here to stay, my question is: how and why?
The how: The projects for the new data centers and servers housing this tech are incredibly expensive to build and maintain. These also jack up the price of electricity in the neighborhoods and afaik the US electrical grid is extremely fragile and is already being pushed to its limit with the existing compute being used on AI. All of this for AI companies to not make a profit. The only case you could make would be to nationalize the companies and have them subsidized by taxes.
But why?: This would require you to make the case that AI tools are useful enough to be sustained despite their massive costs and hard-to-quantify contribution to productivity. Is this really the case? I haven't really seen a productivity increase worth justifying the cost, and as soon as Anthropic tried to even remotely make a profit (or break even), power users instantly realized that the productivity gain is not really worth paying for the actual compute required to do their tasks.
Do you need a measure or a quantification to do anything in life? I don't wait for other people's benchmarks or ROI calculations to start using a technology and see that it improves my workflow.
how and why?
How : we'll always be able to run smaller models on consumer grade computers
Why : most of the tasks humans need to do that computers couldn't do before, now can be improved with new AI. I fail to see how you can not see applications of this
I don't think the question would be whether the technology literally disappears entirely, only how important it is going forward. The metaverse is still technically here, but that doesn't mean it is impactful or worth near the investment.
For LLMs, the architecture will be here and we know how to run them. If the tech hits a wall, though, and the usefulness doesn't balance well with the true cost of development and operation when VC money dries up, how many companies will still be building and running massive server farms for LLMs?
Back-of-the-envelope calculation: Nvidia's market cap is $4.5T and their profit margin is 52%. This means Nvidia would need to sell $1,067 worth of equipment per human being on Earth for investors who buy Nvidia stock today to break even on the investment. Nvidia, unlike Apple, doesn't sell to end users (almost), but to AI companies that provide services to end users. The scale of required spending on Nvidia hardware is comparable to tech companies collectively buying iPhones for every human on Earth, because the value that iPhone users deliver to tech companies is large enough that giving away iPhones is justified.
> This means Nvidia would need to sell $1,067 worth of equipment per human being on Earth for investors that buy Nvidia stock at current prices to break even on the investment
You break even when you break even; the faster it happens, the better for your investment. At current earnings it will take ~53 years for investors to break even.
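For what it's worth, the back-of-the-envelope numbers in the two comments above can be reproduced directly. This sketch just restates their assumed figures (market cap, margin, a rough world population, and a ballpark annual profit chosen to match the 53-year claim); none of these are audited financials:

```python
# Reproducing the break-even arithmetic from the comments above.
market_cap = 4.5e12        # ~$4.5T market cap (assumed in the comment)
profit_margin = 0.52       # ~52% profit margin (assumed in the comment)
world_population = 8.1e9   # rough global population
annual_profit = 85e9       # ballpark annual profit implied by the "~53 years" figure

revenue_needed = market_cap / profit_margin      # revenue whose profit equals today's market cap
per_person = revenue_needed / world_population   # ~$1,067 per human, up to rounding
payback_years = market_cap / annual_profit       # ~53 years at current profits

print(f"revenue needed: ${revenue_needed / 1e12:.1f}T")
print(f"per person:     ${per_person:,.0f}")
print(f"payback:        {payback_years:.0f} years")
```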
> It’s not clear that firms are prepared to earn back the investment
I am confused by a statement like this. Does Derek know why they are not? If he does, I would love to hear the case (and no, comparisons to a random country's GDP are not an explanation).
If he does not, I am not sure why we would not assume that we are simply missing something, when there are so many knowledgeable players charting a similar course who have access to all the numbers and have probably thought really long and hard about spending this much money.
By no means do I mean that they are right for that. It's very easy to see the potential bubble. But I would love to see some stronger reasoning for that.
What I know (as someone running a smallish non-tech business) is that there is plenty of very clearly unrealized potential, that will probably take ~years to fully build into the business, but that the AI technology of today already supports capability wise and that will definitely happen in the future.
I have no reason to believe that we would be special in that.
It's not convincing. If those simple numbers (which everyone who is deciding these things has certainly considered) were a compelling argument, then everyone would act on them accordingly. It's not the first time they — all of them — are spending/investing money.
So what do I have to assume? Are they all simultaneously high on drugs and incapable of doing the maths? If that's the argument we want to go with, that's cool (and what do I know, it might turn out to be right) but it's a tall ask.
Personal hot take: China is forbidding its companies from buying Nvidia chips and instead wants to have its industries use China-made chips.
I think a big part of the reason for this is that they want to take over Taiwan and they know that any takeover could likely destroy TSMC and instead of this being a bad thing for them it could actually give them a competitive advantage vs everyone else.
The fact that the US has destroyed relationships with so many allies implies it may not stop a Taiwan invasion when it happens.
I'd say the main reason is probably that they want to insulate themselves from US sanctions, which could come at any time given how unpredictable the US government is lately.
So much in this AI bubble is just fueled by a mixture of wishful thinking (by people who know better), Science Fiction (by people who don't know enough) and nihilism (by people who don't care about anything other than making money and gaining influence).
This might be just a crappy conspiracy theory, but as I started watching financial news in recent times, I feel like there's a concerted effort on the part of media to manipulate retail investors into investing/divesting into stuff, like 'the NASDAQ is overvalued, buy gold now!' being the most recent example.
I feel like AI was constantly shilled while it didn't really work, and now everybody is talking about being bearish on A(G)I just as the AI we consumers do have is becoming actually pretty useful, with crazy amounts of compute already brought online to run it. I think we might be in for a real surprise jump, and we might even start to feel AI's 'bite'.
Or maybe I'm overthinking stuff and stuff is as it seems, or maybe nobody knows and the AI people are just throwing more compute at training and inference and hoping for the best.
On the previous points, I can't tell if I'm being gaslit accidentally by algorithms (Google and Reddit showing me stuff that supports my preconceived notions), intentionally (which would be quite sinister if algorithms decided to target me), or whether everyone else is being shown the same thing.
Article is behind paywall and is simply saying the same things that people have been saying about the post tech crash.
Now, what this sort of article tends to miss (and I will never know because it's paywalled like a jackass) is that these models and services are used by everyday people for everyday tasks. It doesn't matter if they're good or not. They enable people to do less work for the same pay. Don't focus on the money the models are bringing in today; focus on the dependency they're building in people's minds.
The issue, though, is that most people aren't paying, and even those who are paying aren't profitable if they use it moderately. Nvidia "investing" $100B in one of its largest customers is a cataclysmically bright red flag.
A handful of the largest companies cyclically investing and buying from each other is propping up the entire economy. Also stuff like Deepseek and other open source models exist. Unless AGI comes from LLMs (it absolutely won't) then its foolish to think there wont be a bubble
I was thinking it's a bit like developing powered flight and saying steam engines won't work. It's true they didn't but the internal combustion engine was developed which did. It was still an engine machined from metal but with a different design. I think LLM -> AGI will go like that - some design evolved from LLMs but different in important ways.
AGI might require a Nobel Prize-level invention; I am not even sure it will come in my lifetime, and I am in my 30s. Although I would hope we get something that could solve difficult diseases that have more or less no treatment or cure today; at least Demis Hassabis seems interested in that.
I don't think the Apollo project's factories invested in each other circularly. The AI boom is nominally huge, but very little money gets in or out of Silicon Valley. MS invests in OpenAI because it will get it back via Azure or whatever. Ditto for Nvidia.
What's the real investment in or out of silicon valley ?
AI, if nothing else, is already completely up-ending the Search industry. You probably already find yourself going to ChatGPT for lots of things you would have previously gone to Google for. That's not going to stop. And the ads marketplaces are coming.
We're also finding incredibly valuable use for it in processing unstructured documents into structured data. Even if it only gets it 80-90% there, it's so much faster for a human to check the work and complete the process than it is for them to open a blank spreadsheet and start copy/pasting things over.
There's obviously loads of hype around AI, and loads of skepticism. In that way this is similar to 2001. And the bubble will likely pop at some point, but the long tail value of the technology is very, very real. Just like the internet in 2001.
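As a concrete illustration of the unstructured-to-structured document workflow described a couple of comments up, here is a minimal sketch assuming the official OpenAI Python SDK. The model name and the invoice-style field list are illustrative assumptions, not anything the commenter specified, and the output is treated as a draft for human review rather than something written straight to a database:

```python
import json

from openai import OpenAI  # official OpenAI Python SDK; expects OPENAI_API_KEY in the environment

client = OpenAI()

FIELDS = "vendor_name, invoice_date, total_amount, currency"  # illustrative field list


def extract_fields(document_text: str) -> dict:
    """Ask the model for a JSON object with a fixed set of fields from one document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    f"Extract these fields from the document and reply with JSON only: {FIELDS}. "
                    "Use null for anything you cannot find."
                ),
            },
            {"role": "user", "content": document_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

# Per the 80-90% point above: surface the result next to the source document for a human
# to check and correct, rather than trusting it end to end.
```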
The research market is made up of firms like OpenAI and Anthropic that are investing billions in research. These investments are just that. Their returns won’t be realized immediately, so it’s hard to predict if it’s truly a bubble.
The product market is made up of all the secondary companies trying to use the results of current research. In my mind these businesses should be the ones held to basic economics of ROI. The amount of VC dollars flooding into these products feels unsustainable.
>data-center related spending...probably accounted for half of GDP growth in the first half of the year. Which is absolutely bananas.
What? If that figure is true then "absolutely bananas" is the understatement of the century and "batshit insane" would be a better descriptor (though still an understatement).
Until recently everyone was bragging about predicting bitcoin's bubble. To the best of my knowledge there was no huge crash, crypto just got out of fashion in mainstream media. I guess that's what's going to happen with AI.
the argument of the OP doesn't discount this idea, the suggestion is there's a crash but then following that crash it _does_ pay off. Its a question of a lack of patience.
It's very ironic that the way they could have made money was the simple, but boring one: buying and holding bitcoin. Being a shitcoin day-trader is much more exciting though, and that's how they lost all their money.
Maybe that's also what will happen with AI investors when the bubble pops or deflates.
Calling this an “AI bubble” reads like pure sour grapes from folks who missed the adoption curve. Real teams are already banking gains - code velocity up, ticket resolution times down, and marketing lift from AI-assisted creative while capex always precedes revenue in platform shifts (see cloud 2010, smartphones 2007). The “costs don’t match cash flow” trope ignores lagging enterprise procurement cycles and the rapid glide path of unit economics as models, inference, and hardware efficiency improve. Habit formation is the moat: once workers rely on AI copilots, those workflows harden into paid seats and platform lock-in. We’re not watching a bubble pop; we’re watching infrastructure being laid for the next decade of products.
Things can be a bubble AND actual economic growth long term. Happens all the time with new tech.
The dotcom boom made all kinds of predictions about Web usage that, a decade-plus later, turned out to be true. But at the time the companies got way ahead of consumer adoption.
Specific to AI copilots. We currently are building hundreds that nobody will use for every one success.
> Calling this an “AI bubble” reads like pure sour grapes from folks who missed the adoption curve.
Ad hominem.
> ignores lagging enterprise procurement cycles
Time is long gone for that, even for most bureaucratic orgs.
> rapid glide path of unit economics as models, inference, and hardware efficiency improve
Conjecture. We don't know if we can scale up effectively. We are hitting limits of technology and energy already
> Habit formation is the moat
Yes and no. GenAI tools are useful if done right, but they have not been what they were made out to be, and they do not seem to be getting better as quickly as I would like. The most useful tool so far is Copilot auto-complete, but its value is limited for experienced devs. If its price increased 10x tomorrow, I would cancel our subscription.
> We’re not watching a bubble pop; we’re watching infrastructure being laid for the next decade of products.
How much money are you risking right now? Or is it different this time?
All the same arguments could be used for dot-com bubble. It was a boom and a bubble at the same time. When it popped, only the real stuff remained. Same will happen to AI. What you are describing are good use cases - there are 99 other companies doing 99 other useless things with no cost / cash flow match.
If you look at all the tech "breakthroughs" of the past decades, you will see AI is just another one: dot com, automation, social media, smartphones, cloud, cybersecurity, blockchain, crypto, renewable energy and electric X, IoT, and now AI. It will have an impact after the initial boom, and I personally think the impacts are always negative. Companies will always try to milk investors' money during the boom as much as possible, and the best way to do that is to keep up the hype, either with false promises (AGI omgg singularity!!) or with fear, and the latter is stronger because it taps into public emotions. Just pay a few scientists to create "AI 2027!!" research saying it will literally take over the world in two years, or that it will take your jobs, while you meanwhile use that as an excuse to hire cheaper labor, maximize profits, and blame it on AI. I remember saying that to a few friends back in early 2024, and it seems we are heading toward that pop sooner than I expected.
They're all gambling that they can build the Machine God first and they will control it. The OpenAI guy is blathering that we don't even know what role money will have After the Singularity (aka The Rapture for tech geeks)
> Some people think artificial intelligence will be the most important technology of the 21st century
We're just 25% of the way through it. Making such a claim is foolish, to say the least. People will be tinkering as usual, and it's hard to predict the next big thing. You can bet on something, you can postdict (which is much easier), but being certain about it? Nope.
The thing is, if you say that "AI is a bubble that will pop" and repeat this every year for the next 15 years, then you have a good probability of being right in 1 out of 15 cases if there actually is a market recession within the next 15 years that is attributed to AI overspeculation.
> Objectively there is no bubble. Economic bubble territory is 100-200+ PE ratios.
Not sure I buy that analysis. That was certainly true in 2001. The dot com boom produced huge valuations in brand new companies (like the first three ones in your list!) that were still finding their revenue models. They really weren't making much money yet, but the market expected them to. And... the market was actually correct, for the most part. Those three companies made it big, indeed.
The analysis was not true in 2008, where the bubble was held in real estate and not corporate stock. The companies holding the bag were established banks, presumptively regulated (in practice not, obviously) with P/E numbers in very conventional ranges. And they imploded anyway.
Now seems sort of in the middle. The nature of AI CapEx is that you just can't do it if you aren't already huge. The bubble is concentrated in this handful of existing giants, who can dilute the price effect via their already extremely large and diversified revenue sources.
But a $4T bubble (or whatever) is still a huge, economy-breaking bubble even if you spread it around $12T of market cap.
> But what if someone reaches autonomous AGI with this push?
What if Jesus turns up again? Seems a little optimistic, especially with several leading AI voices suggesting that AGI is at least a lot further away than just parameter expansion.
Probably the most reliable person I can think of to estimate that would be Hassabis at Deepmind and he's saying like 5 years give or take a factor of two. (for AGI, not Jesus)
> If someone reaches AGI, current business models, ROI etc will be meaningless.
sure, but its still a moonshot, compared to our current tech. I think such hope leaves us vulnerable to cognitive biases such as sunk cost fallacies. If Jesus comes back that really would change everything, that's the clarion call of many cults that end in tragedy.
I imagine there is fruit that is considerably lower hanging, that has more obvious ROI but is just considerably less sexy than AGI.
Except that the bubble's money is not being invested into cutting-edge ML research, but only into LLMs. And it has been obvious from the start to anyone half-competent about the topic that LLMs are not the path to AGI (if such a thing ever happens anyway).
I don't think it's that obvious, in fact the 'bitter lesson' teaches us that simple scale leads to qualitative, not just quantitative improvement.
It does look like this is now topping out, but it's still not sure.
It seems to me a couple of simple innovations, like the transformer, could quite possibly lead to AGI, and the infrastructure would 'light up' like all that overinvested dark fiber in the 90s.
When you can use AI as though it's an employee, instead of repeatedly 'prompting' it with small problems and tasks.
It will have agency, it will perform the role. A part of that is that it will have to maintain a running context, and learn as it goes, which seem to be the missing pieces in current llms.
I suppose we'll know, when we start rating AI by 'performance review', like employees, instead of the current 'solve problem' scorecards.
I've been talking about the limited bandwidth of investors as a major problem with capital allocation for some time so it's good to see this idea acknowledged in this context. This problem will only get bigger and more obvious with increasing inequality. It is massive scale capital misallocation whereby the misallocation yields more nominal ROI than optimal allocation (if you were to consider real economic value and not numbers in dollars). Facilitated by the design of the monetary system as the value of dollars is kept decoupled from real economic value due to filter bubbles and dollar centralization.
When there was a speculative mania in railways, afterward there were railroads everywhere that could still be used. A bubble in housing has a bunch of houses everywhere, or at the very least the skeleton of a house that could be finished later.
These tech bubbles are leaving nothing, absolutely nothing but destruction of the commons.
Perhaps worth remembering that 'over-enthusiasm' for new technologies dates back to (at least) canal-mania:
* https://en.wikipedia.org/wiki/Technological_Revolutions_and_...
* https://en.wikipedia.org/wiki/Canal_Mania
Absolutely. Years ago I found this book on the topic really eye-opening:
- https://www.amazon.co.uk/Technological-Revolutions-Financial...
The process of _actually_ benefitting from technological improvements is not a straight line, and often requires some external intervention.
e.g. it’s interesting to note that the rising power of specific groups of workers as a result of industrialisation + unionisation then arguably led to things like the 5-day week and the 8-hour day.
I think if (if!) there’s a positive version of what comes from all this, it’s that the same dynamic might emerge. There’s already lots more WFH of course, and some experiments with 4-day weeks. But a lot of resistance too.
My understanding is that the 40 hour work week (and similar) was talked about for centuries by workers groups but only became a thing once governments during WWI found that longer days didn't necessarily increase output proportionally.
For a 4 day week to really happen st scale, I'd expect we similarly need the government to decide to roll it out rather than workers groups pushing it from the bottom up.
> My understanding is that the 40 hour work week (and similar) was talked about for […]
See perhaps:
* https://en.wikipedia.org/wiki/Eight-hour_day_movement
Generally it only really started being talked about when "workers" became a thing, specifically with the Industrial Revolution. Before that a good portion of work was either agricultural or domestic, so talk of 'shifts' didn't really make much sense.
Oh sure, a standard shift doesn't make much sense unless you're an employee. My point was specifically about the 40 hour standard we use now though. We didn't get a 40-hour week because workers demanded it, we got it because wartime governments decided that was the "right" balance of labor and output.
> https://www.amazon.co.uk/Technological-Revolutions-Financial...
Yes, that is the first link of my/GP post.
There's a good podcast on the Suez and Panama canal: https://omny.fm/shows/cautionary-tales-with-tim-harford/the-...
Importantly, however, canals did end up changing the world.
Most new tech is like that - a period of mania, followed by a long tail of actual adoption where the world quietly changes
The most frustrating thing to me about this most recent rash of biz guy doubting the future of AI articles is the required mention that AI, specifically an LLM based approach to AGI, is important even if the numbers don't make sense today.
Why is that the case? There's plenty of people in the field who have made convincing arguments that it's a dead end and fundamentally we'll need to do something else to achieve AGI.
Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers.
I'm not a hater, it could be true, but it seems to be gospel and I'm not sure why.
Mapping to 2001 feels silly to me, when we've had other bubbles in the past that led to nothing of real substance.
LLMs are cool, but if they can't be relied on to do real work maybe they're not change the world cool? More like 30-40B market cool.
EDIT: Just to be clear here. I'm mostly talking about "agents"
It's nice to have something that can function as a good Google replacement especially since regular websites have gotten SEOified over the years. Even better if we have internal Search/Chat or whatever.
I use Glean at work and it's great.
There's some value in summarizing/brainstorming too etc. My point isn't that LLMs et al aren't useful.
The existing value though doesn't justify the multi-trillion dollar buildout plans. What does is the attempt to replace all white collar labor with agents.
That's the world changing part, not running a pretty successful biz, with a useful product. That's the part where I haven't seen meaningful adoption.
This is currently pitched as something that will have nonzero chance of destroying all human life, we can't settle for "Eh it's a bit better than Google and it makes our programmers like 10% more efficient at writing code."
Where does this notion that LLMs have no value outside of programming come from? ChatGPT released data showing that programming is just a tiny fraction of queries people do.
> This friend told me she can't work without ChatGPT anymore.
Is she more productive though?
People who smoke cigarettes will be unable to work without their regular smoke breaks. Doesn’t mean smoking cigarettes is good for working.
Personally I am an AI booster and I think even LLMs can take us much farther. But people on both sides need to stop accepting claims uncritically.
Cigarettes were/are a pretty lucrative business. It doesn’t matter if it’s better or worse, if it’s as addictive as tobacco, the investors will make back their money.
> Doesn’t mean smoking cigarettes is good for working.
Au contraire. Acute nicotine improves cognitive deficits in young adults with attention-deficit/hyperactivity disorder: https://www.sciencedirect.com/science/article/abs/pii/S00913...
> Non-smoking young adults with ADHD-C showed improvements in cognitive performance following nicotine administration in several domains that are central to ADHD. The results from this study support the hypothesis that cholinergic system activity may be important in the cognitive deficits of ADHD and may be a useful therapeutic target.
Productive how and for who?
My own use case (financial analysis and data capture by the models). It takes away the grunt work, I can focus on the more pleasant aspects of the job, it also means I can produce better quality reports as I have additional time to look more closely. It also points out things I could have potentially missed.
Free time and boredom spurs creativity, some folks forget this.
I also have more free time, for myself, you're not going to see that on a corporate productivity chart.
Not everything in life is about making more money for some already wealthy shareholders, a point I feel is sometimes lost in these discussions. I think some folks need some self-reflection on this point: their jobs don't actually change the world, and thinking of the shareholders only gets you so far. (Not pointed at you, just speaking generally.)
> Doesn’t mean smoking cigarettes is good for working.
Fun fact: smoking likely is! There have been numerous studies into nicotine as a nootropic, e.g. https://pubmed.ncbi.nlm.nih.gov/1579636/#:~:text=Abstract,sh... which have found that nicotine improves attention and memory.
Shame about the lung cancer though.
Nicotine does not cause cancer. Smoke does.
No, she's less productive. She just uses it because she wants to do less work, be less likely to get promoted, and have to stay in the office longer to finish her work.
/s
What kind of question is that? Seriously. Are some people here so naive to think that tens of millions out there don’t know when something they choose to use repeatedly multiple times a day every day is making their life harder? Like ChatGPT is some kind of addiction similar to drugs? Is it so hard to believe that ChatGPT is actually productive?
It is the kind of question that takes into account that people thinking that they are more productive does not imply that they actually are. This happens in a wide range of contexts, from AI to drugs.
It isn’t a question asked by people generally suspicious of productivity claims. It’s only asked by LLM skeptics, about LLMs.
It absolutely is a question people ask when suspicious of productivity claims.
Lots of things claim to make people more productive. Lots of things make people believe they are more productive. Lots of things fail to provide evidence of increasing productivity.
This "just believe me" mentality normally comes from scams.
That doesn’t seem to me like a good reason to dismiss the question, and especially not that strongly/aggressively. We’re supposed to assume good intentions on this site. I can think of any number of reasons one might feel more productive but in the end not be going much faster. It would be nice to know more about the subject of the question’s experience and what they’re going off of.
You’re right; I’m rereading and it’s rude. Thanks.
Maybe you are not aware of such topics, but yes, it is asked often. It is asked for stimulants, for microdosing psychedelics, for behavioural interventions or workplace policies/processes. Whenever there are any kind of productivity claims, it is asked, and it should be asked.
As a counterexample to your assertion, I've seen it a lot on both sides of the RTO discourse.
This is another example of the phenomenon they’re describing, not a counterexample.
...The post I replied to specifically said "It [questioning people's self-evaluation of productivity] is only asked by LLM skeptics, about LLMs".
Naming another example outside of LLM skeptics asking it, about LLMs, is inherently a counterexample.
Wow you're completely right and I just completely forgot who you were replying to. I thought you were replying to the person the person you were actually replying to was replying to. Sorry about both my mistake and my previous sentence's convolution!
It's not that hard to review how much you actually got done and check whether it matches how much it felt like you were getting done.
To do that properly, one needs some kind of control, which is hard to do with one person. It should be doable with proper effort, but it is far from trivial, because it is not enough to measure what you actually did in one condition; you have to compare it with something. And then there can be a lot of noise for n=1: when you use LLMs, maybe you happen to have to solve harder tasks. So you need to do it over quite a lot of time, or make sure the difficulty of tasks is similar. If you have a group of people, you can split them into groups instead and not care as much about these parameters, because you can assume that when you average, this "noise" will cancel out.
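A minimal sketch of what that kind of comparison could look like, assuming you log hours per completed task under both conditions; the numbers and the balanced task difficulty are made up for illustration:

    # Compare logged hours per completed task, with vs. without an LLM.
    # Assumes task difficulty is balanced (or randomized) across the groups.
    from statistics import mean
    from scipy import stats

    hours_with_llm    = [4.0, 6.5, 3.0, 8.0, 5.5, 4.5, 7.0, 6.0]
    hours_without_llm = [5.0, 6.0, 4.0, 7.5, 6.5, 5.0, 8.0, 6.5]

    result = stats.ttest_ind(hours_with_llm, hours_without_llm)
    print(f"mean with: {mean(hours_with_llm):.2f}h, "
          f"mean without: {mean(hours_without_llm):.2f}h, "
          f"p-value: {result.pvalue:.2f}")

With only eight tasks per group the p-value will almost certainly be inconclusive, which is exactly the point above: you need a lot of tasks, or a group of people, before the noise averages out.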
In fact, when this was studied, it was found that using AI actually makes developers less productive:
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
The problem isn't a delta between what got done and how much it felt like got done. The problem is it's not known how long it would have taken you to do what got done unless you do it twice: once by hand and once with an LLM, and then compare. Unfortunately, regardless of what you find, HN will be rushing to say N=1, so there's little incentive to report on any individual results.
> This friend told me she can't work without ChatGPT anymore.
It doesn't say she chooses to use it; it says she can't work without using it. At my workplace, senior leadership has mandated that software engineers use our internal AI chat tooling daily, they monitor the usage statistics, and are updating engineering leveling guides to include sufficient usage of AI being required for promotions. So I can't work without AI anymore, but it doesn't mean I choose to.
There's literally a study out that shows that when developers thought LLMs were making them 20% faster, they were actually about 20% less productive:
https://arxiv.org/abs/2507.09089
I mean... there are many situations in life where people are bad judges of the facts. Dating, finances, health, etc, etc, etc.
It's not that hard to imagine that your friend feels more productive than she actually is. I'm not saying it's true, but it's plausible. The anecdata coming out of programming is mostly that people are only more productive in certain narrow use cases and much less productive in everything else, relative to just doing the work themselves with their sleeves rolled up.
But man, seeing all that code get spit out on the screen FEELS amazing, even if I'm going to spend the next few hours editing it and the next few months managing the technical debt I didn't notice when I merged it.
> What kind of question is that? Seriously. Are some people here so naive to think that tens of millions out there don’t know when something they choose to use repeatedly multiple times a day every day is making their life harder?
That's just an appeal to masses / bandwagon fallacy.
> Is it so hard to believe that ChatGPT is actually productive?
We need data, not beliefs and current data is conflicting. ffs.
Serious thought.
What if people are using LLMs to achieve the same productivity with more cost to the business and less time spent working?
This, to me, feels incredibly plausible.
Get an email? ChatGPT the response. Relax and browse socials for an hour. Repeat.
"My boss thinks I'm using AI to be more productive. In reality, I'm using our ChatGPT subscription to slack off."
That three day report still takes three days, wink wink.
AI can be a tool for 10xers to go 12x, but more likely it's also that AI is the best slack off tool for slackers to go from 0.5x to 0.1x.
And the businesses with AI mandates for employees probably have no idea.
Anecdotally, I've seen it happen to good engineers. Good code turning into flocks of seagulls, stacks of scope 10-deep, variables that go nowhere. Tell me you've seen it too.
That's Jevons paradox for you.
You're working under the assumption that punching a prompt into ChatGPT and getting up to grab some coffee while it spits out thousands of tokens of meaningless slop to be used as a substitute for something that you previously would've written yourself is a net upgrade for everyone involved. It's not. I can use ChatGPT to write 20 paragraph email replies that would've previously been a single manually written paragraph, but that doesn't mean I'm 20x more productive.
And yes, ChatGPT is kinda like an addictive drug here. If someone "can't work without ChatGPT anymore", they're addicted and have lost the ability to work on their own as a result.
> And yes, ChatGPT is kinda like an addictive drug here. If someone "can't work without ChatGPT anymore", they're addicted and have lost the ability to work on their own as a result.
Come on, you can’t mean this in any kind of robust way. I can’t get my job done without a computer; am I an “addict” who has “lost the ability to work on my own?” Every tool tends to engender dependence, roughly in proportion to how much easier it makes the life of the user. That’s not a bad thing.
There's a big difference between needing a tool to do a job that only that tool can do, and needing a crutch to do something without using your own faculties.
LLMs are nothing like a computer for a programmer, or a saw for a carpenter. In the very best case, from what their biggest proponents have said, they can let you do more of what you already do with less effort.
If someone has used them enough that they can no longer work without them, it's not because they're just that indispensable: it's because that someone has let their natural faculties atrophy through disuse.
> I can’t get my job done without a computer
Are you really comparing an LLM to a computer? Really? There are many jobs today that quite literally would not exist at all without computers. It's in no way comparable.
You use ChatGPT to do the things you were already doing faster and with less effort, at the cost of quality. You don't use it to do things you couldn't do at all before.
That's a very broad assumption.
It's no different from a manager who delegates. Are they less of a manager because they entrust the work to someone else? No. So long as they do quality checks and take responsibility for the results, where's the issue?
Work hard versus work smart. Busywork cuts both ways.
The recent MIT report on the state of AI in business feels relevant here [0]:
> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.
There's no doubt that you'll find anecdotal evidence both for and against in all variations, what's much more interesting than anecdotes is the aggregate.
[0] https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...
I think it's true that AI does deliver real value. It's helped me understand domains quickly, served as a better Google search, given me code snippets and found obscure bugs, etc. In that regard, it's a positive for the world.
I also think it's true that AI is nowhere near AGI level. It's definitely not currently capable of doing my job, not by a long shot.
I also think that throwing trillions of dollars at AI for "a better Google search, code snippet generator, and obscure bug finder" is contentious, and a lot of people oppose it for that reason.
I personally still think it's kind of crazy that we have a technology that can do things we couldn't do just ~2 years ago, even if it stagnates right here. I'm still going to use it every day, even if I admittedly hate a lot of parts of it (for example, "thinking models" get stuck in local minima way too quickly).
At the same time, don't know if it's worth trillions of dollars, at least right now.
So all claims on this thread can be very much true at the same time, just depends on your perspective.
That report also mentions individual employees using their own personal subscriptions for work, and points to it as a good model for organizations to use when rolling out the tech (i.e. just make the tools available and encourage/teach staff how they work). That sure doesn’t make it sound like “zero return” is a permanent state.
Ah yes, the study that everyone posts but nobody reads
>Behind the disappointing enterprise deployment numbers lies a surprising reality: AI is already transforming work, just not through official channels. Our research uncovered a thriving "shadow AI economy" where employees use personal ChatGPT accounts, Claude subscriptions, and other consumer tools to automate significant portions of their jobs, often without IT knowledge or approval.
>The scale is remarkable. While only 40% of companies say they purchased an official LLM subscription, workers from over 90% of the companies we surveyed reported regular use of personal AI tools for work tasks. In fact, almost every single person used an LLM in some form for their work.
No. The aggregate is useless. What matters is the 5% that have positive return.
In the first few years of any new technology, most people investing it lose money because the transition and experimentation costs are higher than the initial returns.
But as time goes on, best practices emerge, investments get paid off, and steady profits emerge.
On the provider end, yes. Not on the consumer end.
These are business customers buying a consumer-facing product.
No, on the consumer end. The whole point is that the 5% profitable is going to turn to 10%, 25%, 50%, 75% as companies work out how to use AI profitably.
It always takes time to figure out how to profitably utilize any technological improvement and pay off the upfront costs. This is no exception.
> This friend told me she can't work without ChatGPT anymore
I am curious what kind of work is she using ChatGPT such that she cannot do without it?
> ChatGPT released data showing that programming is just a tiny fraction of queries people do
People are using it as a search engine, getting dating advice, and everything under the sun. That doesn't mean there is business value, so to speak. If these people had to pay, say, $20 a month for this access, would they be willing to do so?
The poster's point was that coding is an area that pays for LLMs so consistently that every model has a coding-specific version. We don't see the same sort of specialized models for other areas, and adoption there is low to nonexistent.
> what kind of work is she using ChatGPT such that she cannot do without it?
Given they said this person worked at PwC, I’m assuming it’s pointless generic consultant-slop.
Concretely it’s probably godawful slide decks.
Greater output doesn't always equal greater productivity. In my days in the investing business we would have junior investment professionals putting together elaborate and detailed investment committee memos. When it came time to review a deal in the investment committee meetings we spent all our time trying to sift through the content of the memos and diligence done to date to identify the key risks and opportunities, with what felt like a 1:100 signal to noise ratio being typical. The productive element of the investment process was identifying the signal, not producing the content that too often buries the signal deeper. Imo, AI tools to date make it so much easier to create content which makes it harder to be productive.
When your work consists of writing stuff disconnected from reality it surely helps to have it written automatically.
On the other hand, it's a hundreds-of-billions of dollars market...
What is?
Writing stuff disconnected from reality, I assume.
This says more about PwC and what M&A people do all day than it does about ChatGPT.
> This friend told me she can't work without ChatGPT anymore.
This isn't a sign that ChatGPT has value as much as it is a sign that this person's work doesn't have value.
What kind of logic is this?
ChatGPT automates much of my friend's work at PwC making her more productive --> not a sign that ChatGPT has any value
Farming machines automated much of what a farmer used to have to do by himself making him more productive --> not a sign that farming machines have any value
The output of a farm is food or commodities to be turned into food.
The output of PwC -- whoops, here goes any chance of me working there -- is presentations and reports.
“We’re entering a bold new chapter driven by sharper thinking, deeper expertise and an unwavering focus on what’s next. We’re not here just to help clients keep pace, we’re here to bring them to the leading edge.”
That's on the front page of their website, describing what PwC does.
Now, what did PwC used to do? Accounting and auditing. Worthwhile things, but adjuncts to running a business properly, rather than producing goods and services.
Most developers can't do much work without an IDE and Chrome + Google.
Would you say that their work has no value?
I find it’s mostly a sign of how lazy people get once you introduce them to some new technology that requires less effort for them.
> even in programming it's inconclusive as to how much better/worse it makes programmers.
Try building something new in claude code (or codex etc) using a programming language you have not used before. Your opinion might change drastically.
Current AI tools may not beat the best programmers, but they definitely improve average programmer efficiency.
> Try building something new in claude code (or codex etc) using a programming language you have not used before. Your opinion might change drastically.
Try changing something old in claude code (or codex etc) using a programming language you have used before. Your opinion might change drastically.
I have! Claude is great at taking a large open source project and adding some idiosyncratic feature I need.
this is literally how i maintain the code at my current position, if i didn't have copilot+ i would be cooked
...what were you doing before?
That's bread and butter development work.
I did just that and I ended up horribly regretting it. The project had to be coded in Rust, which I kind of understand but never worked with. Drunk on AI hype, I gave it step by step tasks and watched it produce the code. The first warning sign was that the code never compiled at the first attempt, but I ignored this, being mesmerized by the magic of the experience. Long story short, it gave me quick initial results despite my language handicap. But the project quickly turned into an overly complex, hard to navigate, brittle mess. I ended up reading the Rust in Action book and spending two weeks cleaning and simplifying the code. I had to learn how to configure the entire tool chain, understand various cargo deps and the ecosystem, setup ci/cd from scratch, .... There is no way around that.
It was Claude Code Opus 4.1 instead of Codex but IMO the differences are negligible.
AI can be quite impressive if the conditions are right for it. But it still fails at so many common things for me that I'm not sure if it's actually saving me time overall.
I just tried earlier today to get Copilot to make a simple refactor across ~30-40 files. Essentially changing one constructor parameter in all derived classes from a common base class and adding an import statement. In the end it managed ~80% of the job, but only after messing it up entirely first (waiting a few minutes), then asking again after 5 minutes of waiting whether it really should do the thing, and then missing a bunch of classes and randomly removing about 5 parentheses from the files it edited.
Just one anecdote, but my experiences so far have been that the results vary dramatically and that AI is mostly useless in many of the situations I've tried to use it.
One thing I like for this type of refactoring scenario is asking it to write a codemod (which you can of course do yourself but there's a learning curve). Faster result that takes advantage of a deterministic tool.
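To make that concrete, here is a minimal sketch of the kind of codemod you might ask for, applied to the refactor described above (adding a constructor parameter across subclasses and ensuring an import). The base class, import, and parameter names are invented for illustration, and a real codemod tool (libcst, jscodeshift, OpenRewrite, etc.) would handle edge cases this regex approach won't:

    # Hypothetical codemod: add a constructor parameter to every subclass of
    # BaseClient and make sure the file imports the parameter's type.
    import re
    from pathlib import Path

    NEW_IMPORT = "from app.config import RetrySettings\n"   # assumed import
    NEW_PARAM = "retry_settings: RetrySettings"              # assumed parameter

    def add_param(match: re.Match) -> str:
        signature = match.group(0)
        if NEW_PARAM in signature:
            return signature                      # already migrated
        return signature[:-1] + f", {NEW_PARAM})"

    for path in Path("src").rglob("*.py"):
        text = path.read_text()
        if not re.search(r"class \w+\(BaseClient\)", text):
            continue                              # only touch the subclasses
        if NEW_IMPORT not in text:
            text = NEW_IMPORT + text              # naive: prepends to the file
        text = re.sub(r"def __init__\(self[^)]*\)", add_param, text)
        path.write_text(text)

The point isn't that this particular script is good; it's that a reviewable, deterministic artifact like this is easier to trust than letting the agent touch 40 files directly.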
This is exactly my experience. We wanted to modernize a Java codebase by removing JNDI global variables. This is a simple though tedious task. We tried Claude Code and Gemini, and both sets of results were hilarious.
LLMs are awful at tedious tasks. Usually because it involves massive context.
You will have much more success if you can compartmentalize and use new LLM instances as often as possible.
> using a programming language you have not used before
Haven't we established that if you are a layman in an area, AI can seem magical? Try doing something in your own area of expertise and you might get frustrated. It will give you the right answer with caveats: code that is too verbose, performance-intensive, or that sometimes ignores best security practices.
> Try building something new in claude code (or codex etc) using a programming language you have not used before. Your opinion might change drastically.
So it looks best when the user isn't qualified to judge the quality of the results?
> they definitely improves average programmer efficiency
Do we really need more efficient average programmers? Are we in a shortage of average software?
Average programmers do not produce average software; the former implement code, while the latter is the full picture and is more about what to build, not how to build it. You don't get a better "what to build" by having above-average developers.
Anyway we don't need more efficient average programmers, time-to-market is rarely down to coding speed / efficiency and more down to "what to build". I don't think AI will make "average" software development work faster or better, case in point being decades of improvements in languages, frameworks and tools that all intend to speed up this process.
> Are we in a shortage of average software?
Yes. The "true" average software quality is far, far lower than the average person perceives it to be. ChatGPT and other LLM tools have contributed massively to lowering average software quality.
I don’t understand how your three sentences mesh with each other. In any case, making the development of average software more efficient doesn’t by itself change anything about its quality. You just get more of it faster. I do agree that average software quality isn’t great, though I wouldn’t attribute it to LLMs (yet).
Yeah I've used it for personal projects and it's 50/50 for me.
Some of the stuff generated I can't believe is actually good to work with long term, and I wonder about the economics of it. It's fun to get something vaguely workable quickly though.
Things like deepwiki are useful too for open source work.
For me though, the core problem I have with AI programming tools is that they're targeting a problem that doesn't really exist outside of startups (not writing enough code) instead of the real source of inefficiency in any reasonably sized org: coordination problems.
Of course if you tried to solve coordination problems, then it would probably be a lot harder to sell to management because we'd have to do some collective introspection as to where they come from.
> For me though the core problem I have with AI programming tools is that they're targeting a problem that doesn't really exist outside of startups
If you work in science, it's great to have something that spits out mediocre code for your experiments.
How can I possibly assess the results in a programming language I haven’t used before? That’s almost the same as vibe coding.
The same way you assess results in a programming language you have used before. In a more complicated project that might mean test suites. For a simple project (e.g. a Bash script) you might just run it and see if it does what you expect.
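One minimal way to do that in practice is to write (or have the model write) a few behavioral tests before trusting the output; the function and module names below are hypothetical, just to show the shape:

    # test_slugify.py -- behavioral checks for code you didn't write yourself.
    # `slugify` and `myproject.text` are hypothetical names for illustration.
    from myproject.text import slugify

    def test_basic():
        assert slugify("Hello, World!") == "hello-world"

    def test_collapses_whitespace():
        assert slugify("  a   b  ") == "a-b"

    def test_empty_input():
        assert slugify("") == ""

Run with pytest. The tests don't prove the implementation is idiomatic, but they pin down the behavior you actually care about, which is most of what you can check in an unfamiliar language anyway.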
The way I assess results in a familiar programming language is by reviewing and reasoning through the code. Testing is necessary, but not sufficient by any means.
Out of curiosity, how do you assess software that you didn't write and just use, and that is closed source? Don't you just... use it? And see if it works?
Why is this inherently different?
> using a programming language you have not used before
But why would I do that? Either I'm learning a new language in which case I want to be as hands-on as possible and the goal is to learn, not to produce. Or I want to produce something new in which case, obviously, I'd use a toolset I'm experienced in.
There are plenty of scenarios where you want to work with a new language but you don't want to have to dedicate months/years of your life to becoming expert in it because you are only going to use it for a one-time project.
For example, perhaps I want to use a particular library which is only available in language X. Or maybe I'm writing an add-on for a piece of software that I use frequently. I don't necessarily want to become an expert in Elisp just to make a few tweaks to my Emacs setup, or in Javascript etc. to write a Firefox add-on. Or maybe I need to put up a quick website as a one-off but I know nothing about web technologies.
In none of these cases can I "use a toolset I'm experienced in" because that isn't available as an option, nor is it a worthwhile investment of time to become an expert in the toolset if I can avoid that.
The question is: is that value worth the ~US$400B per year of investment, sucking all the money out of other ventures?
It's a damn good tool, I use it, I've learned the pitfalls, it has value but the inflation of potential value is, by definition, a bubble...
It's silly to say that the only objective that will vindicate AI investments is AGI.
Current batch of deep learning models are fundamentally a technology for labor automation. This is immensely useful in itself, without the need to do AGI. The Sora2 capabilities are absolutely wild (see a great example here of what non-professional users are already able to create with it: https://www.youtube.com/watch?v=HXp8_w3XzgU )
So only looking at video capabilities, or at coding capabilities, it's already ready to automate and upend industries worth trillions in the long run.
The emerging reasoning capabilities are very promising, able to generate new theories and make scientific experiments in easy to test fields, such as in vitro drug creation. It doesn't matter if the LLM hallucinates 90% of the time, if it correctly reasons a single time and it can create even a single new cancer drug that passes the test.
These are all examples of massive, massive economic disruption by automating intellectual labor, that don't require strict AGI capabilities.
Regardless of my opinions on if you're correct about this, I'm not an ML expert so who knows, I'd be very happy if we cured cancer so I hope you're correct and the video is a cool demo.
I don't believe the risk vs reward on investing a trillion dollars+ is the same when your thesis changes from "We just need more data/compute and we can automate all white collar work"
to
"If we can build a bunch of simulations and automate testing of them using ML then maybe we can find new drugs" or "automate personalized entertainment"
The move to RL has specifically made me skeptical of the size of the buildout.
If you take the investment into AI and divide it by, say, $100k, that's how many man-years of labor AI needs to replace to be cost-effective as labor automation. The numbers aren't that promising given the current level of capability.
Don't even need to get too fancy with it. OpenAI has publicly committed to ~$500B in spending over the next several years (never mind that even they don't expect to bring in that much revenue).
$500B/$100,000 is 5 million, or 167k 30-year careers.
The math is ludicrous, and the people saying it's fine are incomprehensible to me.
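A back-of-the-envelope version of that math, with the $100k fully loaded person-year and the 30-year career as the stated assumptions:

    # How much labor would ~$500B of committed spending have to replace?
    committed_spend = 500e9        # reported multi-year commitments, USD
    cost_per_person_year = 100e3   # assumed fully loaded cost of one person-year
    career_years = 30              # assumed length of one career

    person_years = committed_spend / cost_per_person_year
    careers = person_years / career_years
    print(f"{person_years:,.0f} person-years, or about {careers:,.0f} full careers")
    # -> 5,000,000 person-years, or about 166,667 full careers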
Another comment on a similar post just said, no hyperbole, irony, or joke intended: "Just you switching away from Google is already justifying 1T infrastructure spend."
> Current batch of deep learning models are fundamentally a technology for labor automation. This is immensely useful in itself, without the need to do AGI. The Sora2 capabilities are absolutely wild (see a great example here of what non-professional users are already able to create with it: https://www.youtube.com/watch?v=HXp8_w3XzgU )
> So only looking at video capabilities, or at coding capabilities, it's already ready to automate and upend industries worth trillions in the long run.
Can Sora2 change the framing of a picture without changing the global scene? Can it change the temperature of a specific light source? Can it generate 8K HDR footage suitable for re-framing and color grading? Can it generate a minute-long video without losing coherence? Actually, can it generate more than a few seconds without having to re-loop from the last frame, with those obnoxious cuts the video you pointed to has? Can it reshoot the exact same scene with just one element altered?
All the video models right now are only good at making short, low-res, barely post-processable video: the kind of stuff you see on social media. And considering the metrics on AI-generated video on social media right now, for the most part nobody wants to look at it. They might replace the bottom of the barrel of social media posting (hello, cute puppy videos), but there is absolutely nothing indicating that they might automate or upend any real industry (be used in the pipeline, yeah, maybe, why not; automate? Won't hold my breath).
And as for the argument about their future capabilities, well... for 50+ years now we've been told fusion is 20 years away.
Btw, the same argument can be made for LLM and image-gen tech in any creative context. People severely underestimate just how much editing, rework, purpose and pre-production goes into any major creative endeavor. Most models are just severely ill-suited for that work. They can be useful for some things (specifically, for editing images, AI-driven image fill works decently, for example), but overall, as of right now, they are mostly good at making low-quality content. Which is fine I guess, there is a market for it, but it was already a market that was not keen on spending money.
This is very surface level criticism.
Qwen image and nano banana can both do that with images, there’s zero reason to think we can’t train video models for masking.
This feels a lot like critiquing stable diffusion over hands and text, which the new SOTA models all handle well.
One of the easiest iterations on these models is to add more training cases to the benchmarks. That’s a timeline of months, not comparable to forecasting progress over 20 years like fusion.
> This is very surface level criticism.
Is it now? I don't think being able to accurately and predictably make changes to a shot, a draft, or a design is a surface-level concern in production.
> Qwen image and nano banana can both do that with images, there’s zero reason to think we can’t train video models for masking.
Tell them to change the tilt of the camera roughly 15 degrees to the left without changing anything else in the scene, and tell me if it works.
> This feels a lot like critiquing stable diffusion over hands and text, which the new SOTA models all handle well.
Well does a lot of heavy lifting there.
> One of the easiest iterations on these models is to add more training cases to the benchmarks. That’s a timeline of months, not comparable to forecasting progress over 20 years like fusion.
And what if the model itself is the limiting factor? The entire tech? Do we have any proof that in the future the current technologies will be able to handle the cases I spoke about?
Also, one thing I didn't mention in the first post: assume the tech does get to the point where it can be used to automate a lot of the production. If throwing a few million at a GPU cluster is enough to "generate" a relatively high-quality movie or series, the barrier to entry will be incredibly low. The cost will be driven down, the amount of production will be very high, and overall it might not be a trillion-dollar industry anymore.
> They might replace the bottom of the barrel of social media posting (hello cute puppy videos)
Lay off. Only respite I get from this hell world is cute Rottweiler videos
The problem is that it's already commodified; there's no moat. The general tech practice has been to capture the market by burning VC money, then jack up prices to profit. All these companies are burning billions to generate a new model, and users have already proven there is no brand loyalty: they just hop to the new one when it comes out. So no one can corner the market, and when the VC money runs out they'll have to jack up prices so much that they'll kill their market.
> The problem is that it’s already commodified; there’s no moat.
From an economy-wide perspective, why does that matter?
> users have already proven there is no brand loyalty. They just hop to the new one when it comes out.
Great, that means there might be real competition! This generally keeps prices down, it doesn't push them up! It's true that VCs may end up unhappy, but will they be able to do anything about it?
The moat isn't with the LLM creators, it's with Nvidia, but even that is under siege by Chinese makers.
Compute is the moat.
You seem to be making an implicit claim that LLMs can create an effective cancer drug "10% of the time".
Smells like complete and total bullshit to me.
Edit: @eucyclos: I don't assume that Chat GPT and LLM tools have saved cancer researchers any time at all.
On the contrary, I assume that these tools have only made these critical researchers less productive, and made their internal communications more verbose and less effective.
No, that's not the claim. The claim is that we will create a hypothetical LLM that, when tasked with a problem at the scientific frontier of molecular biology will, about 10% of the time, correctly reason about existing literature and reach conclusions that are valid or plausible to similar experts in the field.
Let's say you run that LLM one million times and get 100,000 valid reasoning chains. Let's say among them are variations on 1,000 fundamentally new approaches and ideas, and out of those, you can actually synthesize in the laboratory 200 new candidate compounds, and out of those, 10 substances show strong in-vitro response, and then one of those completely cures some cancerous mice.
There you go, you have substantially automated the intellectual work of cancer research and you have one very promising compound you can start phase 1 trials that you didn't have before AI, and all without any AGI.
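Written out, the funnel that argument relies on looks like this; every count is an assumption carried over from the comment above, not a measured number:

    # Hypothetical discovery funnel; all counts are assumptions from the comment.
    funnel = [
        ("LLM runs",                               1_000_000),
        ("valid reasoning chains (10%)",             100_000),
        ("fundamentally new approaches",               1_000),
        ("compounds you can synthesize",                 200),
        ("strong in-vitro responses",                     10),
        ("cures in mice / phase-1 candidates",             1),
    ]
    for stage, count in funnel:
        print(f"{stage:<40}{count:>10,}")

Whether the economics work then comes down to what a million frontier-scale runs cost versus the value of one genuine phase-1 candidate.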
How many hours writing emails does it have to save human cancer researchers for it to be effectively true?
> adoption is low to nonexistent outside of programming
In the last few months, every single non-programmer friend I've met has ChatGPT installed on their phone (N>10).
Out of all the people that I know enough to ask if they have ChatGPT installed, there is only one who doesn't have it (my dad).
I don't know how many of them are paying customers though. IIRC one of them was using ChatGPT to translate academic writing so I assume he has pro.
My daughter and her friends have their own paid chatgpt. She said she uses it to help with math homework and described to me exactly why I bought a $200 TI-92 in the 90s with a CAS.
Adoption is high with young people.
I've been trying out locally run models on my phone. The iPhone 17 is able to run some pretty nice models, but they lack access to up-to-date information from the web like ChatGPT has. I wonder if some company like Kagi would offer an API to let your local model plug in and run searches.
That would be perfect. Especially because Kagi could also return search results as JSON to the AI if they control both sides of the interaction.
Kagi does, but it's fairly expensive (for my taste) and you have to email them for access.
There are other companies that provide these tools for anything supporting MCP.
> adoption is low to nonexistent outside of programming
Odd way to describe ChatGPT which has >1B users.
AI overviews have rolled out to ~3B users, Gemini has ~200M users, etc.
Adoption is far from low.
> AI overviews have rolled out to ~3B users
Does that really count as adoption, when it has been introduced as a default feature?
Yes, if people are interacting with them, which they are.
HN seems to think everyone is like the bubble here, which thinks AI is completely useless and wants nothing to do with it.
Half the world is interacting with it on a regular basis already.
Are we anywhere near AGI? Probably not.
Does it matter? Probably not.
Inference costs are dropping like a rock, and usage is continuing to skyrocket.
Mostly agreed, but AI overviews are a very bad example. Google can just force feed its massive search user base whatever bullshit it damn pleases. Even if it has negative value to the users.
I don't actually think that AI overviews have "negative value" - they have their utility. There are cases where I stop my search right after reading the "AI overview". But "organic" adoption of ChatGPT or Claude or even Gemini and "forced" adoption of AI overviews are two different beasts.
My father (in his 70s) has started specifically looking for the AI overview, FWIW.
He has not engaged with any chatbot, but he thinks of himself as "using AI now" and thinks of it as a value-add.
>the required mention that AI, specifically an LLM based approach to AGI, is important...
I don't think that's true. The people who think AI is important call it AI. The skeptics call it LLMs so they can say LLMs won't work. It's kind of a strawman argument really.
i think it is more like maps. before 2004, before google maps, the way we interacted with the spatial distribution of places and things was different. all these ai dev tools like claude code as well as tools for writing, etc. are going to change the way we interact with our computers.
but on the other side, the reason everyone is so gung ho on all this is because these models basically allow for the true personalization of everything. They can build up enough context about you in every instance of you doing things online that they can craft the perfect ad experience to maximize engagement and conversion. that is why everyone is so obsessed with this stuff. they don't care about AGI, they care about maintaining the current status quo where a large chunk of the money made on the internet is done by delivering ads that will get people to buy stuff.
I think there is a good flipside too. LLMs potentially enable generating custom made tooling tailored just for you. If you can get/provide data it's pretty easy to cook up solutions.
As an example: I'd never bother building a mobile app just for myself, since it's too annoying to get into for a somewhat small thing. Now I can chug along and have an LLM quickly fill in my missing basics in the area.
I think there is real value. For instance, nowadays I just use ChatGPT as a Google replacement, for brainstorming, and for coding stuff. It's quite useful and it would be hard to go back to a time without this kind of tool. The 20 bucks a month is more than worth it.
Not sure, though, whether they make enough revenue, and what the moat will be if the best models more or less converge around the same level. For most normies, it might be hard to spot the difference between GPT-5 and Claude, for instance. Okay, for Grok the moat is that it doesn't pretend to be a pope and censor everything.
Maybe you just haven’t heard of them? For example, just the other day I heard about a company using an LLM to provide advice to doctors. News to me.
https://www.prnewswire.com/news-releases/openevidence-the-fa...
> OpenEvidence is actively used across more than 10,000 hospitals and medical centers nationwide and by more than 40% of physicians in the United States who log in daily to make high-stakes clinical decisions at the point of care. OpenEvidence continues to grow by over 65,000 new verified U.S. clinician registrations each month. […] More than 100 million Americans this year will be treated by a doctor who used OpenEvidence.
More:
https://robertwachter.substack.com/p/medicines-ai-knowledge-...
I've had doctors google things in front of me. This may be an improvement.
Likely not true re adoption. According to McKinsey (November 2024), 12% of employees in the US used AI for >30% of their daily tasks. I saw other research early this summer that said 40% of employees use AI. Adoption is already pretty relevant. The real question is: number of people x token requirement of their daily tasks equals how many tokens, and where are we against that? Based on McKinsey, we are possibly around 17%, unless the remaining tasks require far more complexity, because that would mean the incremental tasks need perhaps exponentially more tokens, and then penetration would indeed be low. But for this we need to know the total token need of the average office worker's daily tasks.
There is a middle ground where LLMs are used as a tool for specific use cases, but not applied universally to all problems. The high adoption of ChatGPT is the proof of this. General info, low accuracy requirements - perfect use case, and it shows.
The problem comes in when people then set expectations that a chat solution can solve non-chat problems. When people assume that generated content is the answer but haven't defined the problem.
We're not headed for AGI. We're also not going to just say, "oh, well, that was hype" and stop using LLMs. We are going to mature into an industry that understands when and where to apply the correct tools.
Because the story is no longer about business or economics. This is more like the nuclear arms race in the 1940s. Red Queen dynamics.
"Where's the business value? Right now it doesn't really exist, adoption is low to nonexistent outside of programming and even in programming it's inconclusive as to how much better/worse it makes programmers."
The business model is that it is data collection about you on steroids, and that the winning company will eclipse Meta in value.
It's just more ad tech with multipliers, and it will continue to control thought, sway policy and decide elections. Just like social media does today.
FWIW Derek Thompson (the author of this blogpost) isn't exactly a 'business guy'
If I'm not mistaken he's working with Ezra Klein to push the Democrats to embrace racism instead of popular economic measures.
Edit: I expect that these guys will try to make a J.D. Vance style Republican pivot in the next 4-8 years.
Second Edit:
Ezra Klein's recent interview with Ta-Nehisi Coates is very specifically why I expect he will pivot to being a Republican in the near future.
Listen closely. Ezra Klein will not under any circumstances utter the words "Black People".
Again and again, Coates brings up issues that Black People face in America, and Klein diverts by pretending that Coates is talking about Marginalized Groups in general or Trans People in particular.
Klein's political movement is about eradicating discussion of racial discrimination from the Democratic party.
Third Edit:
@calmoo: I think you're not listening to the nuances of my opinion, and instead having an intense emotional reaction to my well-justified claims of racism.
We're very off topic, but if you're truly interested in Ezra Klein's worldview, I highly recommend his recent interview with Ta-Nehisi Coates. At minimum, I think you'll discover that Ezra's feelings are a lot more nuanced than you're making them out to be.
https://www.nytimes.com/2025/09/28/opinion/ezra-klein-podcas...
I don't really want to discuss politics off the bat of my purely 'for your information' comment, but I think you're grossly misrepresenting Ezra Klein's worldview and not listening to the nuances of his opinion, and instead having an intense emotional reaction to his words. Take a step back and try to think a bit more rationally here.
Also your prediction of them making a JD vance republican pivot is extremely misguided. I would happily bet my life savings against that prediction.
Why would we want AGI? I've yet to read a convincing argument in favor (but granted, I never looked into it; I'm still at science-fiction doomerism). One thing that irks me is that people see it as inevitable, and that we have to pursue AGI because if we don't, someone else will. Or, more bleak, if we don't actively pursue it, our malignant future AGI overlords will punish us for not bringing them into existence (Roko's basilisk, the thing Musk and Grimes apparently bonded over because they're weird).
"Where's the business value? "
Have you ever used an LLM? I use it every day to help me with research and completing technical reports (which used to be a lot more of my time).
Of course you can't just use it blindly, but it definitely adds value.
Does it bring more value than it costs? That's the real question.
Nobody doubts it works; everybody doubts Altboy when he asks for $7 trillion.
This question is pretty hard to answer without knowing the actual costs.
Current offerings are usually worth more than they cost. But since the prices are not really reflective of the costs it gets pretty muddy if it is a value add or not.
Have you read the article? The cost is currently not justified by the benefit.
It reminds me of what I said to somebody recently:
All my friends and family are using the free version of ChatGPT or something similar. They will never pay (although they have enough money to do so).
Even in my very narrow subjective circles it does not add up.
Who pays for AI and how? And when in the future?
People are always so fidgety about this stuff, for super understandable reasons, to be clear. People not much smarter than anyone else try to reason about numbers that are hard to reason about.
But unless you have the actual numbers, I always find it a bit strange to assume that all people involved, who deal with large amounts of money all the time, lost all ability to reason about this thing. Because right now that would mean at minimum: All the important people at FAANG, all the people at OpenAI/Anthropic, all the investors.
Of course, there is a lot of uncertainty — which, again, is nothing new for these people. It's just a weird thing to assume that.
Pets.com, Enron, Lehman Bros, WeWork, Theranos, too many to mention.
Investors aren’t always right. The FOMO in that industry is like no other
The point is not whether they are right, but how low the bar is for what counts as a palatable opinion from bystanders on a topic that other people have devoted a lot of thought and money to.
I just don't think "I don't know anyone who pays for it" or "You know, companies have also failed before" bring enough to the table to be interesting talking points.
I think it's a bit fallacious to imply that the only way we could be in an AI investment bubble is if people are reasoning incorrectly about the thing. Or at least, it's a bit reductive. There are risks associated with AI investment. The important people at FAANG/AI companies are the ones who stand to gain from investments in AI. Therefore it is their job to downplay and minimize the apparent risks in order to maximize potential investment.
Of course at a basic level, if AI is indeed a "bubble", then the investors did not reason correctly. But this situation is more like poker than chess, and you cannot expect that decisions that appear rational are in fact completely accurate.
> All the important people at FAANG, all the people at OpenAI/Anthropic, all the investors.
It's like asking big pharma if medicine should be less regulated, "all the experts agree", well yeah, their paycheck depends on it. Same reason no one at meta tells Zuck that his metaverse is dogshit and no one wants it, they still spent billions on it.
You can't assume everyone is that dumb, but you certainly can assume that the yes men won't say anything other than "yes".
Again, this is not an argument. I am asking: Why do we assume that we know better and people with far more knowledge and insight would all be wrong?
This is not a rhetorical question; I am not looking for a rhetorical answer. What is every important decision maker at all these companies missing?
The point is not that they could not all be wrong; they absolutely could. The point is: make a good argument. Being a general doomsayer when things get very risky might absolutely make you right, but it's not an interesting argument, or any argument at all.
I think you have a point and I'm not sure I entirely disagree with you, so take this as lighthearted banter, but:
Coming from the opposite angle, what makes you think these folks have a habit of being right?
VCs are notoriously making lots of parallel bets hoping one pays off.
Companies fail all the time, either completely (eg Yahoo! getting bought for peanuts down from their peak valuation), or at initiatives small and large (Google+, arguably Meta and the metaverse). Industry trends sometimes flop in the short term (3D TVs or just about all crypto).
C-levels, boards, and VCs being wrong is hardly unusual.
I'd say failure is more of a norm than success, so what should convince us it's different this time with the AI frenzy? They wouldn't be investing this much if they were wrong?
The universe is not configured in such a way that trillion dollar companies come into existence without a lot of things going well over long periods of time, so if we accept money as the standard for being right, they are necessarily right, a lot.
Everything ends and companies are no exception. But thinking about the biggest threats is what people in managerial positions in companies do all day, every day. Let's also give some credit to meritocracy and assume that they got into those positions because they are not super bad at their jobs, on average.
So unless you are very specific about the shape of the threat and provide ideas and numbers beyond what is obvious (because those will have been considered), I think it's unlikely and therefore unreasonable to assume that a bystander's evaluation of the situation trumps the judgement of the people making these decisions for a living with all the additional resources and information at any given point.
Here's another way to look at this: Imagine a curious bystander were to judge decisions that you make at your job, while having only partial access to the information that you have to do the job, that you do every day for years. Will this person at some point be right, if we repeat this process often enough? Absolutely. But is it likely, on any single instance? I think not.
> Why do we assume that we know better and people with far more knowledge and insight would all be wrong?
Because of historical precedent. Bitcoin was the future until it wasn't. NFTs and blockchain were the future until they weren't. The Metaverse was the future until it wasn't. Theranos was the future until it wasn't. I don't think LLMs are quite on the same level as those scams, but they smell pretty similar: they're being pushed primarily by sales- and con-men eager to get in on the scam before it collapses. The amount being spent on LLMs right now is way out of line with the usefulness we are getting out of them. Once the bubble pops and the tools have a profitability requirement introduced, I think they'll just be quietly integrated into a few places that make sense and otherwise abandoned. This isn't the world-changing tech it's being made out to be.
You don't have an argument either btw, we're just discussing our points of view.
> Why do we assume that we know better and people with far more knowledge and insight would all be wrong?
Because money and power corrupt the mind, coupled with obvious conflicts of interest. Remember the hype around AR and VR circa 2015? Nobody gives a shit about it anymore. They wrote articles like "Augmented And Virtual Reality To Hit $150 Billion, Disrupting Mobile By 2020" [0]; well, if you look at the numbers today you'll see it's closer to $15B than $150B. Sometimes I feel like I live in a parallel universe... these people have been lying and overpromising for 10, 15, or 20+ years and people still swallow it because it sounds cool and futuristic.
[0] https://techcrunch.com/2015/04/06/augmented-and-virtual-real...
I'm not saying I know better, I'm just saying you won't find a single independent researcher that will tell you there is a path from LLMs to AGI, and certainly not any independent researcher that will tell you the current numbers a) make sense, b) are sustainable
Someone is paying. OpenAI revenue was $4.3 billion in the first half of this year.
You forgot that part:
> The artificial intelligence firm reported a net loss of US$13.5 billion during the same period
If you sell gold at $10 a gram you'll also make billions in revenues.
That loss includes the costs to train the future models.
Like Dario/Anthropic said, every model is highly profitable on its own, but the company keeps losing money because they always train the next model (which will be highly profitable on its own).
But even if you remove R&D costs, they’re still billions of dollars short of profitability. That’s not a small hurdle to overcome. And OpenAI has to continue to develop new models to remain relevant.
Reminds me of the Icelandic investment banks during the height of the financial bubble. They basically did this.
OpenAI "spent" more on sales/marketing and equity compensation than that:
"Other significant costs included $2 billion spent on sales and marketing, nearly doubling what OpenAI spent on sales and marketing in all of 2024. Though not a cash expense, OpenAI also spent nearly $2.5 billion on stock-based equity compensation in the first six months of 2025"
("spent" because the equity is not cash-based)
From https://archive.is/vIrUZ
How the fuck does anyone spend 2 billion dollars on sales and marketing? I've seen the odd ad for OpenAI, but that number seems completely bananas.
Astroturfing on social media, most likely. The AI hype almost certainly isn’t entirely organic.
> Who pays for AI and how?
The same way the rest of webshit is paid for: ads. And ads embedded in LLM output will be impervious to ad blockers.
I use it professionally and I rotate 5 free accounts across all the platforms. Money doesn't have any value anymore; people will spend $100 a month on LLMs and another $100 on streaming services. That's like half of my household's monthly food budget.
I'm sure providers will find ways of incorporating the fees into e.g. ISP or mobile network fees so that users end up paying in a less obvious, less direct way.
The cost of serving an "average" user would only fall over time.
Most users rarely make the kind of query that would benefit a lot from the capabilities of GPT-6.1e Pro Thinking With Advanced Reasoning, Extended Context And Black Magic Cross Context Adaptive Learning Voodoo That We Didn't Want To Release To Public Yet But If We Didn't Then Anthropic Would Surely Do It First.
And the users that have this kind of demanding workloads? They'd be much more willing to pay up for the bleeding edge performance.
The bet is that people will pay for services which are under the hood being done by AI.
> Who pays for AI [...]?
Venture capital funding adding AI features to fart apps.
People said the same thing about Facebook. The answer: advertisers.
They will eventually get ads mixed in the responses.
> They will never pay
Of course they will, once they start falling behind not having access to it.
People said the same things about computers (they are just for nerds, I have no use for spreadsheets) and smartphones (I don't need apps/big screen, I just want to make/receive calls).
I pay. If they're just using it to talk, then they won't pay.
But I use it for work.
AI companies don't have a plausible path to profitability because they are trying to create a market while the model is not scalable, unlike other services that have done this in the past (DoorDash, Uber, Netflix, etc.).
I don’t like that he cited $12B in consumer spending as the benchmark for demand. Clearly enterprise spending has and will continue to dwarf consumer outlays, to the tune of $100b+ in 2025 on inference alone, and another $150b on AI related services.
I see almost no scenario where the value of this hardware will go away. Even if the demand for inference somehow declines, the applications that can benefit from hardware acceleration are innumerable. Anecdotally, my 2022 RTX 4090 is worth ~30% more used than what I paid for it new, and the trend continues into bigger metal.
As “Greater China” has become the supply bottleneck, it is only rational for western companies to hoard capacity while they can.
Also, as others have pointed out, if the next Pixel phone or iPhone has 'AI' as a bullet-point feature, then people buying an iPhone will count as 'consumer AI spend'. That's why they're forcing AI into everything: so they can show that people are using AI, while most people are ambivalent or hostile towards AI features.
I mean, that makes little sense. The desirability of the feature has a price. Putting a GPU in a phone is expensive and unnecessary.
The point of something being a gimmick is that it’s a gimmick. I just got an iPhone with a GPU but I would absolutely have purchased one without if it were possible.
I just heard a thesis that there is no bubble unless there is debt in it. So far, mostly internal funds have been used to increase capex. More recently we started seeing circularity (NVDA -> OpenAI -> MSFT -> NVDA), but this is less relevant so far, especially as around ~70% of data center cost is viewed to be GPU, so NVDA putting down $100B essentially funds "only" ~$140B of data center capex.
META is spending 45% of their _sales_ on capex. So I wonder when they are going to up their game with a little debt sprinkled on top.
I'm trying to pinpoint the canary in the financial coal mine here. There will be a time to pull out of the market and I really want to have an idea of when. I know, timing the market, but this isn't some small market correction we're talking about here.
this is a bit hackneyed but it's true: time in the market > timing the market
now obviously, if you do time the market perfectly, that's the best. but it is far far more likely to shoot yourself in the foot by trying
TQQQ exists.
There is a liquid market of TQQQ puts.
I'm trying to get out ahead of that and find an indicator that says it's about to collapse.
I don't think there's a good indicator for predicting it ahead of time. If you are worried you could switch from tech stocks to something more conservative.
You can sometimes tell when the collapse has started from the headlines though - stuff like top stocks down 30%, layoffs announced. Which may sound too late but with the dotcoms things kept going down for another couple of years after that.
Market reflexivity makes an obvious indicator highly improbable.
I feel kind of like a Luddite sometimes but I don't understand why EVERYONE is rushing to use AI? I use a couple different agents to help me code, and ChatGPT has largely replaced Google in my everyday use, but I genuinely don't understand the value proposition of every other companies offerings.
I really feel like we're in the same "Get it out first, figure out what it is good for later" bubble we had like 7 years ago with non-AI ChatBots. No users actually wanted to do anything important by talking to a chatbot then, but every company still pushed them out. I don't think an LLM improves that much.
Every time some tool I've used for years sends an email "Hey, we've got AI now!" my thought is just "well, that's unfortunate"...
I don't want AI taking any actions I can't inspect with a difftool, especially not anything important. It's like letting a small child drive a car.
Just you switching away from Google is already justifying 1T infrastructure spend.
Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.
> Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.
Optimistic view: maybe product quality becomes an actually good metric again as the LLM will care about giving good products.
Yea, I know, I said it's an optimistic view.
Has a tech company ever taken 10s or 100s of billions of dollars from investors and not tried to optimize revenue at the expense of users? Maybe it's happened, but I literally can't think of a single one.
Given that the people and companies funding the current AI hype so heavily overlap with the same people who created the current crop of unpleasant money printing machines I have zero faith this time will be different.
What does it mean for the language model to "care" about something?
How would that matter against the operator selling advertisers the right to instruct it about what the relevant facts are?
I think it might be like when Grok was programmed to talk about white genocide and to support Musk's views. It always shoehorned that stuff in but when you asked about it it readily explained that it seemed like disinformation and openly admitted that Musk had a history of using his business to exert political sway.
It's maybe not really "caring" but they are harder to cajole than just "advertise this for us."
For now anyways. There’s a lot of effort being placed into putting up guardrails to make the model respond based on instructions and not deviate. I remember the crazy agents.md files that came out from I believe Anthropic with repeated instructions on how to respond. Clearly it’s a pain point they want to fix.
Once that is resolved then guiding the model to only recommend or mention specific brands will flow right in.
Golden Gate Claude says they know how to do that already.
https://www.anthropic.com/news/golden-gate-claude
large language models don't "care" about anything, but the humans operating openai definitely care a lot about you making them affiliate marketing money
Optimistic view #1: we'll have AI butlers between the pane of glass to filter all ads and negativity.
Optimistic view #2: there is no moat, and AI is "P=NP". Everything can be disrupted.
1 Trillion US dollars?
1 trillion dollars is justified because people use chatGPT instead of google sometimes?
Yes. Google Search on its own generates about $200b/y, so capturing Google Search's market would be worth $1t based on a 5x multiplier.
GPT is more valuable than search because GPT has more control over the content than Search has.
Why is a less reliable service more valuable?
It doesn't matter if it's reliable.
Google search won’t exist in the medium term. Why use a list of static links you have to look through manually if you can just ask AI what the answer is? AI tools like ChatGPT are what Google wanted search to be in the first place.
Because you cannot trust the answers AI gives. It presents hallucinated answers with the same confidence as true answers (e.g. see https://news.ycombinator.com/item?id=45322413 )
Aren't blogspam/link farms the equivalent in traditional search? It's not like Google gives 100% accurate links today.
for now
Google's search engine is the single most profitable product in the history of civilization.
In terms of profit given to its creators, “money” has to be number one.
ChatGPT will have access to a tool that uses real-time bidding to determine what product it should instruct the LLM to shill. It's the same shit as Google but with an LLM which people want to use more than Google.
> Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.
This has been the selling point of ML based recommendation systems as well. This story from 2012: https://www.forbes.com/sites/kashmirhill/2012/02/16/how-targ...
But can we really say that advertisements are more effective today?
From what little I know about SEO it seems nowadays high intent keywords are more important than ever. LLMs might not do any better than Google because without the intent to purchase pushing ads are just going to rack up impression costs.
> when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.
isn't that quite difficult to do consistently? I'd imagine it would be relatively easy to take the same LLM and get it to shit talk the product whose owners had paid the AI corp to shill. That doesn't seem particularly ideal.
> Just you switching away from Google is already justifying 1T infrastructure spend.
How? OpenAI are LOSING money on every query. Beating Google by losing money isn't really beating Google.
How do we know this?
Many of the companies (including OpenAI) have even claimed the opposite. Inference is profitable; it's R&D and training that's not.
It's not reasonable to claim inference is profitable when they've also never released those numbers. Also the price they charge for inference is not indicative of the price they're paying to provide inference. Also, at least in openAI's case, they are getting a fantastic deal on compute from Microsoft, so even if the price they charge is reflective of the price they pay, it's still not reflective of a market rate.
DeepSeek on GPUs is like 5x cheaper than GPT
And TPUs are like 5x cheaper than GPUs, per token
Inference is very much profitable
You can do most anything profitably if you ignore the vast majority of your input costs.
Statistically this is obvious. Most people use the free tier. Their total losses are enormous and their revenue is not great.
No, it’s not obvious. You can’t do this calculation without having numbers, and they need to come from somewhere.
Sam has claimed that they are profitable on inference. Maybe he is lying, but claiming so absolutely that they lose money on inference isn't something you can throw around as a matter of fact. They lose money because they dump an enormous amount of money on R&D.
I mean, I think ads will be about as effective as they are now. People need to actually buy more, and if you fill LLMs with ad generation the quality of results will just get shitty the same way Google's search results did. It's not the trillion-dollar return + 20% you'd want out of that investment.
> ChatGPT has largely replaced Google in my everyday use
This. Organically replacing a search engine (almost) entirely is a massive change.
Applied LLM use cases seemingly popped up in every corner within a very short timespan. Some changes are happening both organically and quickly. Companies are eager to understand and get ahead of adoption curves, of both fear and growth potential.
There's so much at play, we've passed critical mass for adoption and disruption is already happening in select areas. It's all happening so unusually fast and we're seeing the side effects of that. A lot of noise from many that want a piece of the action.
Agreed… I feel increasingly alienated because I don’t understand how AI is providing enough value to justify the truly insane level of investment.
Remember, investment is for the future. It would seem riskier if progress was flat, but that doesn't seem to be the case.
What makes it seem like progress isn't flat?
Largely speaking across technological trends of the past 200 years, progress is nowhere near flat. 4 generations ago, the idea of talking with a person on the other side of the country was science fiction.
That's because it isn't. What's happening now is mostly executive FOMO. No one wants to be left behind just in case the AI beans turn out to be magic after all...
As much as we like to tell a story that says otherwise, most business decisions are not based on logic but fear of losing out.
The same way that NFTs of ugly cartoons apes were a multi-billion dollar industry for about 28 months.
Edit: People are downvoting this because they think "Hey, that's not right, LLMs are way better than non-fungible apes!" (which is true) but the money is pouring in for exactly the same reason: get the apes now and later you'll be rich!
It's not really like punters hoping to flip their apes to a greater fool. A lot of the investment is from the likes of Google out of their own money.
I don't think Softbank gave OpenAI $40 billion because they have a $80 billion business idea they just need a great LLM to implement. I think they are really afraid of getting left behind on the Next Big Thing That Is Making Everyone Rich.
True, but AI replacing search has a much better chance of profitability than whatever value NFTs were supposed to provide.
So just like any investment?
Are they rushing to use AI? Personally I know one person who's a fan and about 20 who only use it as a souped up Google search occasionally.
I think text is the ultimate interface. A company can just build and maintain very strong internal APIs and punt on the UX component.
For instance, suppose I'm using figma, I want to just screenshot what I want it to look like and it can get me started. Or if I'm using Notion, I want a better search. Nothing necessarily generative, but something like "what was our corporate address". It also replaces help if well integrated.
The ultimate would be programmable web apps[0], where you could take Gmail and command an LLM to remove buttons, or add other buttons. Why isn't there a button for 'filter unread' front and center? This is super niche but interesting to someone like me.
That being said, I think most AI offerings on apps now are pretty bad and just get in the way. But I think there is potential as an interface to interact with your app
[0] https://mleverything.substack.com/p/programmable-web-apps
Text is not the ultimate interface. We have the direct proof: every single classroom and almost every single company where programmers play important roles has whiteboards or blackboards to draw diagrams on.
But now LLMs can read images as well, so I'm still incredibly bullish on them.
Text is the ultimate interface for accurate data input, it isn't for brainstorming as you say.
Speech is worse than text, since you can rearrange text but rearranging speech is really difficult.
I'd call text the most versatile interface, but not sold on it being the ultimate. As the old saying goes 'a picture is worth a thousand words' and well crafted guis can allow a user to grok the functionality of an app very quickly.
For AI I'm of the opinion that the best interface is no interface. AI is something to be baked into the functionality of software, quietly working in the back. It's not something the user actually interacts with.
The chat interfaces are, in my opinion, infuriating. It feels like talking to the co-worker who knows absolutely everything about the topic at hand, but if you use the wrong terms and phrases he'll pretend that he has no idea what you're talking about.
But isn't that a limitation of the AI, not necessarily how the AI is integrated into the software?
Personally, I don't want AI running around changing things without me asking to do so. I think chat is absolutely the right interface, but I don't like that most companies are adding separate "AI" buttons to use it. Instead, it should be integrated into the existing chat collaboration features. So, in Figma for example, you should just be able to add a comment to a design, tag @figma, and ask it to make changes like you would with a human designer. And the AI should be good enough and have sufficient context to get it right.
They thought the same thing in the 70s. Text is very flexible, so it serves a good "lowest common denominator", but that flexibility comes at the cost of being terrible to use.
If you haven't gotten an LLM to write you Chrome/Firefox/whatever extensions to customize Gmail and the rest of the Internet, you're missing out. Someday your programmable web apps will arrive, but making Chrome extensions with ChatGPT is here today.
Bigger companies believe smaller shops can use AI to level the playing field, so they are “transforming their business” and spending their way to get there first.
They don’t know where the threat will come from or which dimension of their business will be attacked, they are just being told by the consulting shops that software development cost will trend to zero and this is an existential risk.
In my eyes, it'd be cheaper for a company to simply purchase laptops with decent hardware specs, and run the LLMs locally. I've had decent results from various models I've run via LMStudio, and bonus points: It costs nothing and doesn't even use all that much CPU/GPU power.
Just my opinion as a FORMER senior software dev (disabled now).
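To make the "run it locally" suggestion concrete, here is a minimal sketch in Python, assuming LM Studio's OpenAI-compatible local server is enabled on its default port and the `openai` package is installed; the model name is a placeholder for whatever model you have loaded locally.

```python
# Minimal sketch of the local-LLM workflow described above. Assumes LM Studio's
# OpenAI-compatible local server is running (default: localhost:1234) and the
# `openai` Python package is installed. The model name is a placeholder.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # local endpoint, no cloud account needed
    api_key="not-needed",                 # local servers typically ignore the key
)

resp = client.chat.completions.create(
    model="local-model",  # placeholder; use the identifier LM Studio shows for your loaded model
    messages=[{"role": "user", "content": "Summarize the tradeoffs of local vs. cloud LLMs."}],
)
print(resp.choices[0].message.content)
```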
> Just my opinion as a FORMER senior software dev (disabled now).
I'm not sure what this means. Why would being disabled stop you being a senior software developer? I've known blind people who were great devs so I'm really not sure what disability would stop you working if you wanted to.
Edit: by which I mean, you might have chosen to retire but the way you put it doesn't sound like that.
> purchase laptops with decent hardware specs
> It costs nothing
Seems like it does cost something?
Quite, the typical 5 year depreciation on personal computing means a top-of-the-line $5k laptop works out to a ~$80/month spend... but it's on something you'd already spend for an employee
$2k / 5 years is ~$30/mo, and you'll get a better experience spending another $25/mo on one of the AI services (or with enough people a small pile of H100s)
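For what it's worth, the depreciation arithmetic above checks out; here is the same calculation as a quick Python sketch, using the $5k/$2k laptop prices and the $25/mo service figure from these comments as assumptions.

```python
# Back-of-the-envelope: laptop depreciation vs. a paid AI subscription.
# All figures are assumptions taken from the comments above, not measurements.
MONTHS = 5 * 12  # typical 5-year depreciation window

def monthly_cost(price_usd: float, months: int = MONTHS) -> float:
    """Straight-line depreciation: purchase price spread evenly over the period."""
    return price_usd / months

print(f"$5,000 laptop: ~${monthly_cost(5000):.0f}/mo")                           # ~$83/mo
print(f"$2,000 laptop: ~${monthly_cost(2000):.0f}/mo")                           # ~$33/mo
print(f"$2,000 laptop + $25/mo AI service: ~${monthly_cost(2000) + 25:.0f}/mo")  # ~$58/mo
```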
Maybe you mean it'd be cheaper for companies to host centralized internal(ly trained) models...
That seems to me more likely, more efficient to manage and more cost effective than individual laptop-local models.
IMO, domain specific training is one of the areas I think LLMs can really shine.
Same here. I already have the computer for work, so marginally, it costs nothing and it meets 90 percent of my LLM needs. Here comes the down vote!
Still waiting for a laptop able to run R1 locally...
Can you expand on this?
Electricity is not free. If you do the math, online LLMs are much cheaper. And this is before considering capabilities/speed.
They're cheaper right now because they're operating at a loss. At some point, the bill will come due.
Netflix used to be $8/month for as many streams and password-shares as you wanted for a catalog that met your media consumption needs. It was a great deal back then. But then the bill came due.
Online LLM companies are positioning themselves to do the same bait-and-switch techbro BS we've seen over the last 15+ years.
Fundamentally it will always be cheaper to run LLMs in the cloud, because of batching.
Unless somehow magically you'll have the need to run 1000 different prompts at the exact same time to also benefit from it locally.
This is even without considering cloud GPUs which are much more efficient than local ones, especially from old hardware.
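A toy illustration of the batching argument: every number below is an assumption chosen for illustration, not a benchmark, but it shows why cost per token falls sharply when a provider batches many concurrent requests onto one GPU, while a single local user effectively runs at batch size 1.

```python
# Toy model of why batching favors cloud inference, as argued above.
# Every number here is an illustrative assumption, not a measurement.
GPU_COST_PER_HOUR = 2.00     # assumed rental/amortized cost of one GPU, $/hr
TOKENS_PER_SEC_BATCH1 = 40   # assumed decode speed when serving a single user
BATCH_EFFICIENCY = 0.75      # assumed fraction of linear scaling retained when batching

def cost_per_million_tokens(batch_size: int) -> float:
    # Throughput grows with batch size (weight reads are amortized across
    # requests during decoding), though not perfectly linearly.
    tokens_per_sec = TOKENS_PER_SEC_BATCH1 * (1 + (batch_size - 1) * BATCH_EFFICIENCY)
    tokens_per_hour = tokens_per_sec * 3600
    return GPU_COST_PER_HOUR / tokens_per_hour * 1_000_000

for batch in (1, 8, 64):
    print(f"batch size {batch:>2}: ~${cost_per_million_tokens(batch):.2f} per million tokens")
# batch size  1: ~$13.89 per million tokens
# batch size  8: ~$2.22 per million tokens
# batch size 64: ~$0.29 per million tokens
```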
Yes they'll be cheaper to run, but will they be cheaper buy as a service?
Because sooner or later these companies will be expected to produce eye-watering ROI to justify the risk of these moonshot investments and they won't be doing that by selling at cost.
Will they be cheaper to buy? Yes.
You are effectively just buying compute with AI.
From a simple correlational extrapolation, compute has only gotten cheaper over time. Massively so, actually.
From a more reasoned causal extrapolation hardware companies historically compete to bring the price of compute down. For AI this is extremely aggressive I might add. HotChips 2024 and 2025 had so much AI coverage. Nvidia is in an arms race with so many companies.
All over the last few years we have literally only ever seen AI get cheaper for the same level or better. No one is releasing worse and more expensive AI right now.
Literally just a few days ago Deepseek halved the price of V3.2.
AI expenses have grown, but that's because humans are extremely cognitively greedy. We value our time far more than compute efficiency.
You don't seriously believe that last few years have been sustainable? The market is in a bubble, companies are falling over themselves offering clinically insane deals and taking enormous losses to build market share (people are allowed to spend ten(s) of thousands of dollars in credits on their $200/mo subscriptions with no realistic expectation of customer loyalty).
What happens when investors start demanding their moonshot returns?
They didn't invest trillions to provide you with a service at break-even prices for the next 20 years. They'll want to 100x their investment, how do you think they're going to do that?
I used BofA chat bot embedded in their app recently because I was unable to find a way to request a pin for my card. I was expecting the chat bot to find the link to their website where I can request the pin, and would consider a deep link within their app to the pin request UI a great UX.
Instead, the bot asked a few questions to clarify which account is for the pin and submitted a request to mail the pin, just like the experience talking to a real customer representative.
Next time when you see a bot that is likely using LLM integration, go ahead and give it a try. Worst case you can try some jailbreaking prompts and have some fun.
Meanwhile, last week the Goldman-Sachs chatbot was completely incapable of allowing me to report a fraudulent charge on my Apple Card. I finally had to resort to typing "Human being" three times for it to send me to someone who could actually do something.
> Every time some tool I've used for years sends an email "Hey, we've got AI now!" my thought is just "well, that's unfortunate"...
Same, also my first thought is how to turn the damn thing off.
With the ever increasing explosion of devices capable of consuming AI services, and internet infrastructure being so ubiquitous that billions of people can use AI...
Even if a little of everyone's day consumes AI services, then the investment required will be immense. Like what we see.
See kids hooked on LLMs. I think most of them will grow up paying for a sub. Like not $15/m streaming sub, $50-100/m cellphone tier sub. Well until local kills that business model.
I think the reason ads are so prolific now is that the pay-to-play model doesn't work well at such large scales... Ads seem to be the only way to make the kind of big money LLM investors will demand.
I don't think you're wrong re: their hope to hook people and get us all used to using LLMs for everything, but I suspect they'll just start selling ads like everyone else.
Local models won't kill anything because they'll be obsolete as soon as these companies stop releasing them. They'll be forgotten within 6-12 months.
> I use a couple different agents to help me code, and ChatGPT has largely replaced Google in my everyday use
That's a handwavy sentence, if I have ever seen one. If it's good enough to help with coding and "replace Google" for you, other people will find similar opportunities in other domains.
And sure: Some are successful. Most will not be. As always.
> It's like letting a small child drive a car.
Bad example, because FSD cars are here.
Yeah, and Tesla cross-country FSD just crashed after 60 miles, and Tesla RoboTaxi had multiple accidents within first few days.
Other companies like Waymo seem to do better, but in general I wouldn't hold up self-driving cars as an example of how great AI is, and in any case calling it all "AI" is obscuring the fact that LLMs and FSD are completely different technologies.
In fact, until last year Tesla FSD wasn't even AI - the driving component was C++ and only the vision system was a neural net (with that being object recognition - convolutional neural net, not a Transformer).
Find me an FSD that can drive in non-Californian real world situations. A foot of snow, black ice, a sand drift.
Well Waymo is coming to Denver, so it's about to get tested in some more difficult conditions.
Not sure it matters. There’s plenty of economic value in selling rides in places with good weather.
I am not in California, and those are not standard road conditions here.
>A foot of snow, black ice, a sand drift.
What else, a meter of lava flow? Forest fire? Tsunami? Tornado? How about picking conditions where humans actually can drive.
he’s describing conditions that exist for every area in the world that actually experiences winter!
Most places clear the driving surface instead of leaving a foot of snow.
Eventually. I live in Minnesota and it can take until noon or later after a big snow for all the small roads to get cleared.
More like the whole world is covered with snow and they clear enough for you to drive on.
Guess we know you've never lived in a place where it snows.
Snow (maybe not a foot but enough to at least cover the lane markings), black ice and sand drifts people experience every day in the normal course of driving, so it's reasonable to expect driverless cars to be able to handle them. Forest fires, tsunamis, lava flows, and tornados are weather emergencies. I think it's a little more reasonable to not have expectations for driverless cars in those situations.
Humans do drive when there's tornadoes. I can't count the hundreds of videos I've seen on TV over the decades of people driving home from work and seeing a tornado.
I notice you conveniently left off "foot of snow" from your critique. Something that is perfectly ordinary "condition where humans actually drive."
Many years, millions of Americans evacuate ahead of hurricanes. Does that not count?
I, and hundreds of thousands of other people, have lived in places where sand drifts across roads are a thing. Also, sandstorms, dense fog, snert, ice storms, dust devils, and hundreds of other conditions in which "humans actually can [and do] drive."
FSD is like AI: Picking the low-hanging fruit and calling it a "win."
Bad counter-example, because FSD has nothing in common with LLMs.
There is none, zero value. What is the value of Sora 2, if even its creators feel like they have to pack it into a social media app with AI-slop reels? How is that not a testament to how surprisingly advanced and useless at the same time the technology is?
It's in an app made by its creator so they can get juicy user data. If it was just export to TikTok, OpenAI wouldn't know what's popular, just what people have made.
AI figured out something on my mind that I didn’t tell it about yesterday (latest Sonnet). My best advice to you is to spend time and allow the AI to blow your mind. Then you’ll get it.
Sometimes I sit in wonder at how the fuck it’s able to figure out that much intent without specific instructions. There’s no way anyone programmed it to understand that much. If you’re not blown away by this then I have to assume you didn’t go deep enough with your usage.
LLMs cannot think on their own, they’re glorified autocomplete automatons writing things based on past training.
If the “AI figured out something on your mind”, it is extremely likely the “thing on your mind” was present in the training corpus, and survivorship bias made you notice.
C. Opus et al. released a paper pretty much confirming this earlier this year[1]
[1]https://ai.vixra.org/pdf/2506.0065v1.pdf
Tbh if Claude is smarter than the average person, and it is, then 50% of the population is not even a glorified autocomplete. Imagine that: all not very bright.
It's comments like these that motivate me to work to get to 500 on HN
That "if" is doing literally all the work in that post.
Claude is not, in fact, smarter than the average person. It's not smarter than any person. It does not think. It produces statistically likely text.
Well, I disagree completely. I think you have no clue how smart the average person is, or below-average people for that matter. Look at Instagram or any social media ads: they are mostly scams, which AI can figure out but most people don't. Just an example.
I don't have to know how smart the average person is, because I know that an LLM doesn't think, isn't conscious, and thus isn't "smart" at all.
Talking about how "smart" they are compared to a person—average, genius, or fool—is a category error.
They are... People. Dehumanising people is never a good sign about someone's psyche.
Just looking at facts, not trying to humanize or dehumanize anything. When you realize at least 50% of the population's intelligence is < AI, things are not great.
idk, how many people in the world have been programmed with a massive data set?
I don’t understand what you’re saying. You know the AI is incapable of reading your mind, right? Can you provide more information?
LLMs can have surprisingly strong "theory of mind", even at base model level. They have to learn that to get good at predicting all the various people that show up in conversation logs.
You'd be surprised at just how much data you can pry out of an LLM that was merely exposed to a single long conversation with a given user.
Chatbot LLMs aren't trained to expose all of those latent insights, but they can still do some of it occasionally. This can look like mind reading, at times. In practice, the LLM is just good at dredging the text for all the subtext and the unsaid implications. Some users are fairly predictable and easy to impress.
Do you have evidence to support any of this? This is the first time I’ve heard that LLMs exhibit understanding of theory of mind. I think it’s more likely that the user I replied to is projecting their own biases and beliefs onto the LLM.
Basically, just about any ToM test has larger and more advanced LLMs attaining humanlike performance on it. Which was a surprising finding at the time. It gets less surprising the more you think about it.
This extends even to novel and unseen tests - so it's not like they could have memorized all of them.
Base models perform worse, and with a more jagged capability profile. Some tests are easier to get a base model to perform well on - it's likely that they map better onto what a base model already does internally for the purposes of text prediction. Some are a poor fit, and base models fail much more often.
Of course, there are researchers arguing that it's not "real theory of mind", and the surprisingly good performance must have come from some kind of statistical pattern matching capabilities that totally aren't the same type of thing as what the "real theory of mind" does, and that designing one more test where LLMs underperform humans by 12% instead of the 3% on a more common test will totally prove that.
But that, to me, reads like cope.
There are several papers studying this, but the situation is far more nuanced than you’re implying. Here’s one paper stating that these capabilities are an illusion:
https://dl.acm.org/doi/abs/10.1145/3610978.3640767
AIs have neither a "theory of mind", nor a model of the world. They only have a model of a text corpus.
> You know the AI is incapable of reading your mind, right.
Of course they can, just like a psychiatrist can.
More information:
Use the LLM more until you are convinced. If you are not convinced, use it more. Use it more in absurd ways until you are convinced.
Repeat the above until you are convinced.
You haven’t provided more information, you’ve just restated your original claim. Can you provide a specific example of AI “blowing your mind”?
> You haven’t provided more information, you’ve just restated your original claim.
So he's not just an LLM evangelist, he also writes like one.
Is this satire? Really hard to tell in this year of 2025...
Yeah, Poe's Law hitting hard here.
Well there was that example a while back of some store's product recommendation algo inferring that someone was pregnant before any of the involved humans knew.
That's...not hard. Pregnancy produces a whole slew of relatively predictable behavior changes. The whole point of recommendation systems is to aggregate data points across services.
The ~woman~ teenager knew she was pregnant, Target's algorithm noticed her change in behavior and spilled the beans to her father.
Back in 2012, mind you.
That wasn't LLMs, that's the incredibly vast amounts of personal data that companies collect on us and correlate to other shoppers' habits.
There was nothing involved like what we refer to as "AI" today.
https://www.psychologytoday.com/us/blog/urban-survival/20250...
Okay, let’s play, here’s one for your mental state:
https://www.psychologytoday.com/us/blog/your-internet-brain/...
Gee whiz.
Some of you are beyond surprise apparently. I suppose people have seen it all? Even AI exactly how we imagined it in sci-fi decades ago?
Embrace reality.
Sincerely, consider that you may be at risk of an LLM harming your mental health
I’m not going to sit around and act like this LLM thing is not beyond anything humans could have ever dreamed of. Some of you need to be open to just how seminal moments in your life actually are. This is a once a lifetime thing.
Huh? Can you explain this?
> Some people think artificial intelligence will be the most important technology of the 21st century.
I don’t, I think a workable fusion reactor will be the most important technology of the 21st century.
We’ll probably need to innovate one of those to power the immense requirements of AI chatbots.
I think we'll need a few hundred, if spending continues like it has this year.
What makes you think we'll have fusion reactors in the 21st century?
Helion has one under construction https://www.reuters.com/business/energy/helion-energy-starts...
Whether it works or not is of course another matter.
Is it a fusion reactor if it can't maintain a fusion reaction and generate energy?
ITER apparently fires up in 2039.
That date means nothing though. We have yet to figure out how to run a fusion reactor for any meaningful period of time and we haven't figured out how to do it profitably.
Setting a date for when one opens is just a pipe dream, they don't know how to get there yet.
I don’t think we’ll have the choice.
That's not how invention works though. Something has to be technically possible and we have to discover how to do it in a viable way.
> I like fusion, really. I’ve talked to some of luminaries that work in the field, they’re great people. I love the technology and the physics behind it.
> But fusion as a power source is never going to happen. Not because it can’t, because it won’t. Because no matter how hard you try, it’s always going to cost more than the solutions we already have.
https://matter2energy.wordpress.com/2012/10/26/why-fusion-wi...
Yeh current tech is expensive and would likely be uncompetitive. At the very very end of that article is the key to this though:
> I fully support a pure research program for radically different approaches to fusion.
We've got by without them so far and solar is cracking along.
Deepmind are working on solving the plasma control issue at the moment, I suspect they're probably using a bit of AI.... and I wouldn't put it past them to crack it.
This is the thing with AI: We can always come up with a new architecture with different inputs & outputs to solve lots of problems that couldn't be solved before.
People equating AI with other single-problem-solving technologies are clearly not seeing the bigger picture.
Can we? Why haven't we, then? What are the big problems that were unsolvable before and now we can solve them with AI?
Auto-tagging of photos, generating derivative images and winning at Go, I will give you. There's been some progress on protein folding, I heard?
Where's the 21st century equivalent of the steam locomotive or the sewing machine?
They did/are(?).
> Accelerating fusion science through learned plasma control
https://deepmind.google/discover/blog/accelerating-fusion-sc...
(2022)
Time travel will be the most important invention of the 21st century ;)
Time travel was the most important invention of the 1800s too, but that goes to show how bad resolving the temporal paradox issue is, now that entire history is gone.
but people say that AI will spit out that fusion reactor, ergo AI investment is prior in the ordo investimendi or whatever it would be called (by an AI)
We'll finally have electricity that's too cheap to meter.
Why would it be too cheap to meter? You're still heating up water and putting it through a turbine. We've been doing that for ages (just different sources of energy for the heating up part) and we still meter energy because these things cost money and need lots of maintenance.
But that's the whole reason fusion is so important. Just like it was the whole reason fission was so important.
https://www.nrc.gov/reading-rm/basic-ref/students/history-10...
As we get more and more solar, the fees for being connected to the grid keep rising while electricity itself stays relatively cheap. Fusion won't change that; somebody has to pay for the guy reconnecting cables after a storm.
And we'll use it all to run more crypto/AI/next thing
Because it’ll power AI!
How so?
The maximum possible benefit of fusion (aside from the science gained in the attempt) is cheap energy.
We'll get very cheap energy just by massively rolling out existing solar panels (maybe some at sea), and other renewables, HVDC and batteries/storage.
Fusion is almost certain to be uneconomical in comparison if it's even feasible technically.
AI is already dramatically impacting some fields, including science (e.g. AlphaFold), and AGI would be a step-change.
Cheap, _limitless_ energy from fusion could solve almost every geopolitical/environmental issue we face today. Europe is acutely aware of this at the moment, and it's why China and America are investing mega bucks. We will eventually run out of finite energy sources. Even if we capture the maximum possible capacity from renewables with 100% efficiency, our energy consumption, increasing at current rates, will eventually exceed that capacity. Those rates are accelerating. We really have no choice.
There is zero reason to assume that fusion power will ever be the cheapest source of energy. At the very least, you have to deal with a sizeable vacuum chamber, big magnets to control the plasma and massive neutron flux (turning your fusion plant into radioactive waste over time), none of which is cheap.
I'd say limitless energy from fusion plants is about as likely as e-scooters getting replaced by hoverboards. Maybe next millennium.
I mean, the limit to renewables is to capture all the energy from the sun, and maybe the heat of the earth.
But then you start to have some issues with global warming (the temperature at which energy input = energy radiated away)
We probably don't want to release more energy than that.
Fusion at 100% grid scale might be better for the environment than solar at 100% grid scale.
It might be nice if, at the end of the 21st century, that is something we care about.
I mean, we already have a giant working fusion reactor (the sun) and we can even harvest its energy (solar, wind, etc)! That's pretty awesome.
Using gravitational containment rather than magnetic containment is a pretty cool approach.
if AI won't be a fad like everything else, we're going to need these, pronto
...and it does seem this time that we aren't even in the huge overcapacity part of the bubble yet, and won't be for a year or two.
I'm always commenting the same on these posts.
The web bubble also popped and look how it went for Google, Amazon, Meta and many others.
Remember pets.com that sold pet products on the internet, dumb idea right? Now think where you buy these products in 2025.
There was a ton of pain in between. Legions of people lost their livelihoods. This bubble pop will be way worse. Yes, this tech will eventually be viable and useful, but holy hell will it suck in the meantime.
I keep seeing articles like this but does anyone actually think we're not in a bubble?
From what I've seen these companies acknowledge it's a bubble and that they're overspending without a way to make the money back. They're doing it because they have the money and feel it's worth the risk in case it pays off. If they don't spend, another company does, and it hits big they will be left behind. This is at least insurance against other companies beating them.
I don't think it's a bubble.
There's a very real possibility that all the AI research investment of today unlocks AGI, on a timescale between a couple of years and a couple of decades, and that would upend the economy altogether. And falling short of that aspiration could still get you pretty far.
A lot of "AI" startups would crash and burn long before they deliver any real value. But that's true of any startup boom.
Right now, the bulk of the market value isn't in those vulnerable startups, but in major industry players like OpenAI and Nvidia. For the "bubble" to "pop", you need those companies to lose big. I don't think that it's likely to happen.
I think we are in a bubble, which will burst at some point, AI stocks will crash and many will burn, and the growth will resume. Just like the dotcom bubble definitely was a bubble, but it was the foundation of all tech giants of today.
The trouble with bubbles is that it's not enough to know you are in one. You don't know when it will pop, at what level, and how far back it will go.
HN isn't always right. There was massive pushback against self driving and practically everyone was saying it would fail and is a bubble. The level of confidence people had about this opinion was through the roof.
Like people who didn't know anything would say it with such utter confidence it would piss me off a bit. Like how do you know? Well they didn't and they were utterly wrong. Waymo showed it's not a bubble.
AI is an unknown. It has definitely already changed the game. Changed the way we interview and changed the way we code and it's changed a lot more outside of that and I see massive velocity towards more change.
Is it a bubble? Possibly. But the possibly not angle is also just as likely. Either way I guarantee you that 99% of people on HN KNOW for a fact that it's a bubble because they KNOW that all of AI is a stochastic parrot.
I think the realistic answer is we don't actually know if it's a bubble. We don't fully know the limits of LLMs. Maybe it will be a bubble in the sense that AI will become so powerful that a generic AI app can basically kill all these startups surrounding specialized use cases of LLMs. Who knows?
> Waymo showed it's not a bubble.
Waymo showed that under tightly controlled conditions humans can successfully operate cars remotely. Which is still really useful, but a far cry from the promise of everyone being able to buy a personal pod on wheels that takes you to and fro, no matter where you want to go, while you sleep that the bubble was premised on. In other words, Waymo has proven the bubble. It has been 20 years since Stanley, and I still have never seen a self-driving car in person. And I reside in an area that was officially designated by the government for self-driving car testing!
> I think the realistic answer is we don't actually know if it's a bubble.
While that is technically true, has there ever not been a bubble when people start dreaming about what could be? Even if AI heads towards being everything we hope it can become, it still seems highly likely that people have dreamed up uses for the potential of AI that aren't actually useful. The PetsGPT.com-types can still create a bubble even if the underlying technology is all that and more.
> Waymo showed that under tightly controlled conditions humans can successfully operate cars remotely.
My understanding was that Waymo’s are autonomous and don’t have a remote driver?
They are so-called "human in the loop". They don't have a remote driver in the sense of someone sitting in front of a screen playing what looks like a game of Truck Simulator. But they are operated by humans.
It's kind of like when cruise control was added to cars. No longer did you have to worry about directly controlling the pedal, but you still had to remain the operator. In some very narrow sense you might be able to make a case that cruise control is autonomy, but the autonomous car bubble imagined that humans would be taken out of the picture entirely.
Autonomous cars did have a bubble moment. They were hyped and didn't deliver on the promises. We still don't have level 5 and consumer vehicles are up to level 3. It doesn't mean it's not a useful or cool technology.
All great tech has gone through some kind of hype/bubble stage.
What was promised with self-driving and what we have are orders of magnitude apart. We were promised fleets of autonomous taxis, with no need to even own a car anymore. We were told truck drivers would be replaced en masse and cargo would drive 24x7 with drivers who never needed breaks. We were told downtown parking lots would disappear since the car would drop you off, drive to an offsite lot, and wait for you. In short, a complete blow-up of the economy, with millions of jobs in shipping lost and hundreds of billions spent on new autonomous vehicles.
None of that happened. After 10 years we got self-driving cabs in 5 cities with mostly good weather. Cool, yes? Blowing up the entire economy and fundamentally changing society? No.
> Waymo showed it's not a bubble.
Waymo is showing it might not be a bubble. They are selling rides in five cities. Let's see how they do in 100 cities.
>They're doing it because they have the money and feel it's worth the risk in case it pays off.
If the current work in AI/ML leads to something more fundamental like AGI, then whoever does it first gets to be the modern version of the lone nuclear superpower. At least that's the assumption.
Left outside of all the calculations is the 8 billion people who live here. So suddenly we have AGI--now what? Cures for cancer and cold fusion would be great, but what do you do with 8 billion people? Does everybody go back to a farm or what? Maybe we all pedal exercise bikes to power the AGI while it solves the Riemann hypothesis or something.
It would be a blessing in disguise if this is a bubble. We are not prepared to deal with a situation where maybe 50-80% of people become redundant because a building full of GPUs can do their job cheaper and better.
Also no one is talking about how exposed we are to Taiwan. Nvidia, AMD, Apple, any company building out GPUs (so Google, Microsoft, Meta etc), even Intel a bit, are all manufacturing everything with one company, and it's largely happening in Taiwan.
If China invades Taiwan, why wouldn't TSMC, Nvidia and AMD stock prices go to zero?
We must run in different circles as it were, I hear this raised frequently on a number of podcasts I listen to.
name the podcasts
This is an odd anecdote to ask "show your work."
I don't catalog shows and episodes where any particular topic comes up, and I follow over 100 podcasts so I don't have a specific list you can fact check me on.
Personally I couldn't care less if that means you choose not to believe that I hear the Taiwan risk come up often enough.
Charitably, perhaps they're simply asking for podcasts that they would be interested in listening to that cover these topics. Personally, I would like to listen to a podcast that talks about semiconductor development, but I've done approximately zero research to find them so I'm not pressed for an answer :)
Fair enough! I may have read too far into the comment I replied to above.
https://youtube.com/playlist?list=PLKtxx9TnH76SRC7ZbOu2Nsg5m...
Asianometry playlist on TSMC
> I follow over 100 podcasts
How? Do you read summaries? Listen at 3x speed 5 hours a day?
Boredom at work like most people on HN
Different kind of work for me at least. If I'm not at a desk coding I'm often out working on a farm. You have plenty of time for podcasts while cutting fields.
I don't think they really need to invade for this. It is almost in artillery range (there are rounds that can go 150km).
They also could just send a big rocket barrage onto the factories. I assume it would be very hard to defend from such a short distance.
Then most ports and cities in Taiwan are towards the west, facing China (with the big mountains on the eastern side). It would be very bad if China decided to blockade it by shooting at ships from the mainland...
Also very little the west could do imo. A land invasion in china or a nuclear war don't seem very reasonable.
> Also no one is talking about how exposed we are to Taiwan.
We aren't? It's one of the reasons the CHIPS Act et al get pushed through, to try to mitigate those risks. COVID showed how fragile supply chains are to shocks to the status quo and has forced a rethink. Check out the book 'World On The Brink' for more on that geopolitical situation.
That's low prob, because that will almost surely lead to an all out war.
What’s the theoretical total addressable market for, say, consumer facing software services? Or discretionary spending? That puts one limit on the value of your business.
Another limit would be to think about stock purchases. How much money is available to buy stocks overall, and what slice of that pie do you expect your business to extract?
It’s all very well spending eleventy squillion dollars on training and saying you’ll make it back through revenue, but not if the total amount of revenue in the world is only seventy squillion.
Or maybe you just spend your $$$ on GPUs, then sell AI cat videos back to the GPU vendors?
"The “pop” won’t be a single day like a stock market crash. It’ll be a gradual cooling as unrealistic promises fail, capital tightens, and only profitable or genuinely innovative players remain."
(This comment was written by ChatGPT)
> ...$2 billion in funding at a $10 billion valuation. The company has not released a product and has refused to tell investors what they’re even trying to build. “It was the most absurd pitch meeting,” one investor who met with Murati said. “She was like, ‘So we’re doing an AI company with the best AI people, but we can’t answer any questions.”
I observed how she played the sama drama and I realized she will outplay them all.
> Let’s say ... you control $500 billion. You do not want to allocate that money one $5 million check at a time to a bunch of manufacturers. All I see is a nightmare of having to keep track of all of these little companies doing who knows what.
If only there was like, some sort of intelligence, to help with that..
I disagree, it’s just getting started and I love it.
I don't think it'll contract. The people dumping their money into AI think we are at end of days, new order for humanity type point, and they're willing to risk a large part of their fortune to ensure that they remain part of the empowered elite in this new era. It's an all hands on deck thing and only hard diminishing returns that make the AI takeoff story look implausible are going to cause a retrenchment.
It's probably exacerbated by the fact that everyone invests money now; I get daily ads from all my banking apps telling me to buy stocks and crypto. People know they'll never get anywhere by working or saving, so they're more willing to gamble: high risk, high reward, and they feel they have nothing to lose.
People who gamble with their savings are not "investing". They are just delusional about the position they are in.
You don't think it will contract just because rich people have bet so much on it that they'll be forced to throw good money after bad? That's the only reason?
I don't think it'll contract because I don't think we'll get a signal that takeoff for sure isn't going to happen, it'll just happen much slower than the hypers are trying to sell, so investors will continue to invest because of sunk costs and the big downside risk of being left behind. I'm sure we'll see a major infrastructure deployment slowdown as foundation model improvements slow, but there are a lot of vectors for optimization of these systems outside the foundation model so it'll be more of a paradigm shift in focus.
So just the “it’s different this time” mentality shared by all bubbles. Some things never change.
Yeah it wouldn't be a bubble if it didn't have that mentality. Every bubble has had that thought and it's the same now. Kind of hard to notice it though when you are in the eye of the storm.
There were people telling me during the NFT craze that I just don't get it and I am dumb. Not that I am comparing AI to it directly because AI has actual business value but it is funny to think back. I felt I was going mad when everyone tried to gaslight me
The final AI push that doesn't lead to a winter will look like a bubble until it hits. We're realistically ~3 years away from fully autonomous software engineering (let's say 99.9% for a concrete target) if we can shift some research and engineering resources towards control, systems and processes. The economic value of that is hard to overstate.
> We're realistically ~3 years away from fully autonomous software engineering
We had Waymo cars about 18 years ago, and only recently they started to roll out commercially. Just saying.
This isn't a comment on timelines, but a Waymo going wild is going to run over and kill people, so it makes sense to be overly conservative with moving forwards. Meanwhile, if someone hacks into a vibecoded website and deletes everything and steals my user data, no one's getting run over by a car.
Sure. The point I was trying to make is that we can see a technology that is amazing, and seemingly does what we want, and yet has so many edge cases that make it unviable commercially.
You are basically saying "it's different this time" with a lot of words.
If the tech is here to stay, my question is: how and why? The how: The projects for the new data centers and servers housing this tech are incredibly expensive to build and maintain. These also jack up the price of electricity in the neighborhoods and afaik the US electrical grid is extremely fragile and is already being pushed to its limit with the existing compute being used on AI. All of this for AI companies to not make a profit. The only case you could make would be to nationalize the companies and have them subsidized by taxes.
But why?: This would require you to make the case that AI tools are useful enough to be sustained despite their massive costs and hard-to-quantify contribution to productivity. Is this really the case? I haven't really seen a productivity increase that justifies the cost, and as soon as Anthropic tried to even remotely make a profit (or break even), power users instantly realized that the productivity gain is not really worth paying for the actual compute required to do their tasks.
Do you need a measure or a quantification to do anything in life? I don't wait for other people's benchmarks or ROI calculations to start using a technology and see whether it improves my workflow.
How: we'll always be able to run smaller models on consumer-grade computers.
Why: most of the tasks humans need to do that computers couldn't do before can now be improved with new AI. I fail to see how you cannot see applications of this.
I don't think the question would be whether the technology literally disappears entirely, only how important it is going forward. The metaverse is still technically here, but that doesn't mean it is impactful or worth near the investment.
For LLMs, the architecture will be here and we know how to run them. If the tech hits a wall, though, and the usefulness doesn't balance well with the true cost of development and operation when VC money dries up, how many companies will still be building and running massive server farms for LLMs?
And, conversely, some don't care about the technology but want to ride the bubble and exit right before it pops.
The technology is there til the GPUs become obsolete, so about 3 years.
Our current GPUs can last a decade or more, no ?
They (probably) won't physically fail, but they'll be obsolete compared to newer GPUs which will have more raw compute and lower power requirements.
Back of the envelope calculation: Nvidia's market cap is $4.5T and their profit margin is 52%. This means Nvidia would need to sell about $1,067 worth of equipment per human being on Earth for investors who buy Nvidia stock today to break even on the investment. Nvidia, unlike Apple, doesn't sell to end users (almost); it sells to AI companies that provide services to end users. The scale of required spending on Nvidia hardware is comparable to tech companies collectively buying iPhones for every human on Earth, because the value that iPhone users deliver to tech companies is large enough that giving away iPhones is justified.
> This means Nvidia would need to sell about $1,067 worth of equipment per human being on Earth for investors that buy Nvidia stock at current prices to break even on the investment
In what period of time?
You break even when you break even; the faster it happens, the better for your investment. At current earnings it will take about 53 years for investors to break even.
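For anyone who wants to check the arithmetic, here is the same back-of-the-envelope as a short Python sketch; the market cap, margin, world population, and the ~$85B/yr profit implied by the 53-year figure are all assumptions taken from the comments above.

```python
# Reproducing the back-of-the-envelope above. The market cap, margin, world
# population, and annual profit figures are the commenters' assumptions.
MARKET_CAP = 4.5e12       # $4.5T market cap
NET_MARGIN = 0.52         # ~52% profit margin
WORLD_POP = 8.1e9         # rough world population
ANNUAL_PROFIT = 85e9      # ~$85B/yr, implied by the "53 years" figure

revenue_needed = MARKET_CAP / NET_MARGIN      # revenue that yields $4.5T of profit
per_person = revenue_needed / WORLD_POP
years_to_break_even = MARKET_CAP / ANNUAL_PROFIT

print(f"Hardware sales needed per human: ~${per_person:,.0f}")            # ~$1,068
print(f"Years to earn back the market cap: ~{years_to_break_even:.0f}")   # ~53
```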
I do want to know who the Outpost.com of this era is. I will never forget their TV campaign where they released a pack of wolves on a marching band.
TIL: it’s for sale!
> It’s not clear that firms are prepared to earn back the investment
I am confused by a statement like this. Does Derek know why they are not? If he does, I would love to hear the case (and no, comparisons to a random country's GDP are not an explanation).
If he does not, I am not sure why we would not assume that we are simply missing something, when there are so many knowledgeable players charting a similar course who have access to all the numbers and have probably thought long and hard about spending this much money.
By no means do I mean that they are necessarily right. It's very easy to see the potential bubble. But I would love to see some stronger reasoning for it.
What I know (as someone running a smallish non-tech business) is that there is plenty of clearly unrealized potential that will probably take years to fully build into the business, but that the AI technology of today already supports capability-wise, and that will definitely be built in the future.
I have no reason to believe that we would be special in that.
It's in the article: AI globally made $12bn of revenue in 2025, yet capex next year is expected to be almost 50x that, at $500bn.
It's not convincing. If those simple numbers (which everyone deciding these things has certainly considered) were a compelling argument, then everyone would act on them accordingly. It's not the first time any of them have spent or invested money.
So what do I have to assume? Are they all simultaneously high on drugs and incapable of doing the maths? If that's the argument we want to go with, that's cool (and what do I know, it might turn out to be right) but it's a tall ask.
Most AI firms have not shown a path toward profitability
Personal hot take: China is forbidding its companies from buying Nvidia chips and instead wants to have its industries use China-made chips.
I think a big part of the reason is that they want to take over Taiwan, and they know any takeover would likely destroy TSMC. Instead of that being purely a bad thing for them, it could actually hand them a competitive advantage over everyone else.
The fact that the US has destroyed relationships with so many allies implies it may not stop a Taiwan invasion when it happens.
I'd say the main reason is probably that they want to insulate themselves from US sanctions, which could come at any time given how unpredictable the US government is lately.
So much in this AI bubble is just fueled by a mixture of wishful thinking (by people who know better), Science Fiction (by people who don't know enough) and nihilism (by people who don't care about anything other than making money and gaining influence).
This might be just a crappy conspiracy theory, but since I started watching financial news recently, I feel like there's a concerted effort by the media to push retail investors into or out of particular assets, with 'the NASDAQ is overvalued, buy gold now!' being the most recent example.
I feel like AI was constantly shilled while it didn't really work; now everybody is talking about being bearish on A(G)I, even as the AI we consumers actually have is becoming pretty useful and crazy amounts of compute have already come online to run it. I think we might be in for a real surprise jump, and might even start to feel AI's 'bite'.
Or maybe I'm overthinking stuff and stuff is as it seems, or maybe nobody knows and the AI people are just throwing more compute at training and inference and hoping for the best.
On the previous points, I can't tell if I'm being gaslit accidentally by algorithms (Google and Reddit showing me stuff that supports my preconceived notions), intentionally (which would be quite sinister if algorithms decided to target me), or whether everyone else is being shown the same thing.
The article is behind a paywall and simply says the same things people have been saying ever since the last tech crash.
Now, what this sort of article tends to miss (and I will never know for sure, because it's paywalled like a jackass) is that these model services are used by everyday people for everyday tasks. It doesn't matter whether they're good or not; they let people do less work for the same pay. Don't focus on the money the models bring in today, focus on the dependency they're building in people's minds.
The issue, though, is that most people aren't paying, and even those who do pay aren't profitable if they use it moderately. Nvidia "investing" $100B in one of its largest customers is a cataclysmically bright red flag.
A handful of the largest companies cyclically investing in and buying from each other is propping up the entire economy. Also, stuff like DeepSeek and other open-source models exists. Unless AGI comes from LLMs (it absolutely won't), it's foolish to think there won't be a bubble.
I was thinking it's a bit like developing powered flight and concluding that steam engines won't work. It's true they didn't, but then the internal combustion engine was developed, and it did. It was still an engine machined from metal, just with a different design. I think LLM -> AGI will go like that: some design evolved from LLMs but different in important ways.
AGI might require a Nobel Prize-level invention. I am not even sure it will come in my lifetime, and I am in my 30s. Although I would hope we get something that can tackle difficult diseases that have more or less no treatment or cure today; at least Demis Hassabis seems interested in that.
I don't think the Apollo project factories invested in each other circularly. The AI boom is nominally huge, but very little money actually flows into or out of Silicon Valley. MS invests in OpenAI because it will get the money back via Azure or whatever; ditto for Nvidia.
What's the real investment flowing into or out of Silicon Valley?
AI, if nothing else, is already completely up-ending the Search industry. You probably already find yourself going to ChatGPT for lots of things you would have previously gone to Google for. That's not going to stop. And the ads marketplaces are coming.
We're also finding incredibly valuable use for it in processing unstructured documents into structured data. Even if it only gets it 80-90% there, it's so much faster for a human to check the work and complete the process than it is for them to open a blank spreadsheet and start copy/pasting things over.
There's obviously loads of hype around AI, and loads of skepticism. In that way this is similar to 2001. And the bubble will likely pop at some point, but the long tail value of the technology is very, very real. Just like the internet in 2001.
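To make the unstructured-to-structured workflow described above concrete, here is a minimal sketch. `call_llm`, the field names, and the prompt are hypothetical placeholders rather than any particular vendor's API; the point is just the shape of the loop: the model does the first 80-90%, and a human reviews before anything is saved.

    import json

    def call_llm(prompt: str) -> str:
        # Placeholder for whatever model client you actually use; returns a
        # canned response here so the sketch runs end to end.
        return '{"invoice_number": "INV-1042", "total": 1299.50, "currency": "USD"}'

    def extract_fields(document_text: str) -> dict:
        prompt = (
            "Extract invoice_number, total and currency from the document below. "
            "Reply with JSON only.\n\n" + document_text
        )
        raw = call_llm(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            # The model didn't return valid JSON; flag the record for manual handling.
            return {"_needs_review": True, "_raw": raw}

    def human_review(record: dict) -> dict:
        # Stand-in for a review step: a person confirms or corrects the extracted
        # fields, which is far faster than keying them in from scratch.
        record["_reviewed"] = True
        return record

    doc = "Invoice INV-1042 ... amount due USD 1,299.50 ..."
    print(human_review(extract_fields(doc)))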
There are two different markets.
The research market is made up of firms like OpenAI and Anthropic that are investing billions in research. These investments are just that. Their returns won’t be realized immediately, so it’s hard to predict if it’s truly a bubble.
The product market is made up of all the secondary companies trying to use the results of current research. In my mind these businesses should be the ones held to basic economics of ROI. The amount of VC dollars flooding into these products feels unsustainable.
Will Ed Zitron indeed be vindicated[0]?
[0]: https://www.wheresyoured.at/the-haters-gui/
> data-center related spending...probably accounted for half of GDP growth in the first half of the year. Which is absolutely bananas.
What? If that figure is true then "absolutely bananas" is the understatement of the century and "batshit insane" would be a better descriptor (though still an understatement).
This has been reported in many places over the past year; the percentage seems to be all over the place, though.
Yesterday “As much as 1/3rd”: https://www.reuters.com/markets/europe/if-ai-is-bubble-econo...
A week ago, “More than consumer spending (but the reality is complex)”: https://fortune.com/2025/09/17/how-much-gdp-artificial-intel...
August “1.3% of 3% however it might be tariff stockpiling”: https://www.barrons.com/articles/ai-spending-economy-microso...
Until recently everyone was bragging about having predicted Bitcoin's bubble. To the best of my knowledge there was no huge crash; crypto just fell out of fashion in the mainstream media. I guess that's what's going to happen with AI.
The argument of the OP doesn't discount this idea; the suggestion is that there's a crash, but following that crash it _does_ pay off. It's a question of a lack of patience.
Almost everyone who has interacted with a blockchain ended up losing money.
It's very ironic that the way they could have made money was the simple, but boring one: buying and holding bitcoin. Being a shitcoin day-trader is much more exciting though, and that's how they lost all their money.
Maybe that's also what will happen with AI investors when the bubble pops or deflates.
Calling this an “AI bubble” reads like pure sour grapes from folks who missed the adoption curve. Real teams are already banking gains - code velocity up, ticket resolution times down, and marketing lift from AI-assisted creative while capex always precedes revenue in platform shifts (see cloud 2010, smartphones 2007). The “costs don’t match cash flow” trope ignores lagging enterprise procurement cycles and the rapid glide path of unit economics as models, inference, and hardware efficiency improve. Habit formation is the moat: once workers rely on AI copilots, those workflows harden into paid seats and platform lock-in. We’re not watching a bubble pop; we’re watching infrastructure being laid for the next decade of products.
Things can be a bubble AND actual economic growth long term. Happens all the time with new tech.
The dotcom boom made all kinds of predictions about Web usage that, a decade-plus later, turned out to be true. But at the time the companies got way ahead of consumer adoption.
Specific to AI copilots: for every success, we are currently building hundreds that nobody will use.
> Calling this an “AI bubble” reads like pure sour grapes from folks who missed the adoption curve.
Ad hominem.
> ignores lagging enterprise procurement cycles
The time for that excuse is long gone, even for the most bureaucratic orgs.
> rapid glide path of unit economics as models, inference, and hardware efficiency improve
Conjecture. We don't know if we can scale up effectively, and we are already hitting limits of technology and energy.
> Habit formation is the moat
Yes and no. GenAI tools are useful if done right, but they have not been what they were made out to be, and they do not seem to be getting better as quickly as I'd like. The most useful tool so far is Copilot autocomplete, but its value is limited for experienced devs. If its price increased 10x tomorrow, I would cancel our subscription.
> We’re not watching a bubble pop; we’re watching infrastructure being laid for the next decade of products.
How much money are you risking right now? Or is it different this time?
All the same arguments could be used for dot-com bubble. It was a boom and a bubble at the same time. When it popped, only the real stuff remained. Same will happen to AI. What you are describing are good use cases - there are 99 other companies doing 99 other useless things with no cost / cash flow match.
> Habit formation is the moat
Well, at least you're honest about it.
If you look at all the tech "breakthroughs" of the past decades, you'll recognize AI as just another one: dot com, automation, social media, smartphones, cloud, cybersecurity, blockchain, crypto, renewable energy and electric X, IoT, and now AI. It will have an impact after the initial boom, and I personally think the impacts are always negative. Companies will always try to milk investors' money during the boom as much as possible, and the best way to do that is to keep the hype going, either with false promises ("AGI omgg singularity!!") or with fear, and fear is the stronger lever because it taps into public emotion. Just pay a few scientists to produce "AI 2027!!" research saying it will literally take over the world in two years, or that it will take your jobs, while you use the excuse to hire cheaper labor to maximize profits and blame it on AI. I remember saying this to a few friends back in early 2024, and it seems we are heading toward that pop sooner than I expected.
> The AI infrastructure boom is the most important economic story in the world.
Energy spending is about $10T per year; even telecom is $2T a year.
The AI infrastructure boom at $400B a year is big but far from the most important economic story in the world.
They're all gambling that they can build the Machine God first and they will control it. The OpenAI guy is blathering that we don't even know what role money will have After the Singularity (aka The Rapture for tech geeks)
I was just asking AI how to profit from an AI bubble pop but then I realized “look who I’m asking” and then I wasn’t so sure about it being a bubble.
> Some people think artificial intelligence will be the most important technology of the 21st century
We're only 25% of the way into it. Making such a claim is, at the least, foolish. People will keep tinkering as usual, and it's hard to predict the next big thing. You can bet on something, you can postdict (which is much easier), but being certain about it? Nope.
Paywall.
> Others insist that it is an obvious economic bubble.
The definition of a bubble is that assets are valued above their intrinsic value.
The first graph is Amazon, Meta, Google, Microsoft, and Oracle. Let's check their P/E ratios.
Amazon (AMZN) ~ 33.6
Meta (META) ~ 27.5
Google (GOOGL) ~ 25.7
Microsoft (MSFT) ~ 37.9
Oracle (ORCL) ~ 65
These are highish P/E ratios, but certainly very far from bubble numbers. OpenAI and the others are all private.
Objectively there is no bubble. Economic bubble territory is P/E ratios of 100-200+.
Not to mention, who are you to think the top tech companies aren't fully aware of the risks they are taking with AI?
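As a rough illustration of what those multiples mean: a P/E of N is, naively, N years of current earnings to pay back today's price (ignoring growth and discounting). A tiny sketch using only the ratios quoted above:

    # Earnings yield and naive payback implied by the P/E ratios quoted above.
    pe_ratios = {"AMZN": 33.6, "META": 27.5, "GOOGL": 25.7, "MSFT": 37.9, "ORCL": 65.0}

    for ticker, pe in pe_ratios.items():
        earnings_yield = 1 / pe  # fraction of today's price earned back per year
        print(f"{ticker}: earnings yield {earnings_yield:.1%}, naive payback ~{pe:.0f} years")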
> Not to mention, who are you to think the top tech companies aren't fully aware of the risks they are taking with AI?
Well, 2008 happened too, and people weren't too concerned with risk back then either.
The thing is, if you say "AI is a bubble that will pop" and repeat it every year for the next 15 years, you have a good chance of being right in one of those 15 years, provided there actually is a market recession in that window that gets attributed to AI overspeculation.
> Objectively there is no bubble. Economic bubble territory is 100-200+ PE ratios.
Not sure I buy that analysis. It was certainly true in 2001: the dot-com boom produced huge valuations in brand-new companies (like the first three in your list!) that were still finding their revenue models. They really weren't making much money yet, but the market expected them to. And the market was actually correct, for the most part; those three companies made it big, indeed.
The analysis was not true in 2008, when the bubble was in real estate rather than corporate stock. The companies holding the bag were established banks, presumptively regulated (in practice not, obviously), with P/E numbers in very conventional ranges. And they imploded anyway.
Now seems sort of in the middle. The nature of AI CapEx is that you just can't do it if you aren't already huge. The bubble is concentrated in this handful of existing giants, who can dilute the price effect via their already extremely large and diversified revenue sources.
But a $4T bubble (or whatever) is still a huge, economy-breaking bubble, even if you spread it across $12T of market cap.
I think these articles slightly miss the point.
Sure, AI as a tool, as it currently is, will take a very long time to earn back the $B being invested.
But what if someone reaches autonomous AGI with this push?
Everything changes.
So I think there's a massive, massive upside risk being priced into these investments.
> But what if someone reaches autonomous AGI with this push?
What if Jesus turns up again? Seems a little optimistic, especially with several leading AI voices suggesting that AGI is at least a lot further away than just parameter expansion.
Probably the most reliable person I can think of to estimate that would be Hassabis at DeepMind, and he's saying something like 5 years, give or take a factor of two (for AGI, not Jesus).
It seems rather more likely to me that we get a semblance of an autonomous, agentic AI, even if it's millennia away, than what you suggest.
It might be impossible, or it might just need a few more innovations (e.g., the transformer), but my point is that the investments are non-linear.
They are not investing X to get a return of Y.
If someone reaches AGI, current business models, ROI etc will be meaningless.
> If someone reaches AGI, current business models, ROI etc will be meaningless.
Sure, but it's still a moonshot compared to our current tech. I think such hope leaves us vulnerable to cognitive biases such as the sunk cost fallacy. If Jesus comes back, that really would change everything; that's the clarion call of many cults that end in tragedy.
I imagine there is considerably lower-hanging fruit with more obvious ROI that is just considerably less sexy than AGI.
Except that the bubble's money is not being invested into cutting-edge ML research, but only into LLMs. And it has been obvious from the start to anyone half-competent about the topic that LLMs are not the path to AGI (if such a thing ever happens anyway).
I don't think it's that obvious; in fact, the 'bitter lesson' teaches us that simple scale leads to qualitative, not just quantitative, improvement.
It does look like this is now topping out, but that's still not certain.
It seems to me a couple of simple innovations, like the transformer, could quite possibly lead to AGI, and the infrastructure would 'light up' like all that overinvested dark fiber in the 90s.
> But what if someone reaches autonomous AGI with this push?
What is "autonomous AGI"? How do we know when we've reached it?
When you can use AI as though it's an employee, instead of repeatedly 'prompting' it with small problems and tasks.
It will have agency and it will perform the role. Part of that is that it will have to maintain a running context and learn as it goes, which seem to be the missing pieces in current LLMs.
I suppose we'll know, when we start rating AI by 'performance review', like employees, instead of the current 'solve problem' scorecards.
I've been talking about the limited bandwidth of investors as a major problem with capital allocation for some time, so it's good to see the idea acknowledged in this context. The problem will only get bigger and more obvious with increasing inequality. It is massive-scale capital misallocation in which the misallocation yields more nominal ROI than optimal allocation would (if you consider real economic value rather than numbers in dollars), facilitated by the design of the monetary system, since the value of dollars is kept decoupled from real economic value by filter bubbles and dollar centralization.
When there was a speculative mania in railways, afterward there were railroads everywhere that could still be used. A housing bubble leaves houses everywhere, or at the very least the skeletons of houses that can be finished later.
These tech bubbles are leaving nothing, absolutely nothing but destruction of the commons.
That's not entirely true: they are leaving behind the data centers themselves, and also all the trained models. Those are already being used.