Are We in an A.I. Bubble? I Suspect So

(gideons.substack.com)

57 points | by paulpauper 10 hours ago

73 comments

  • kristianp 7 hours ago

    My concern is that the "Magnificent 7" stocks are heavily into AI (except Apple) and because of their huge market caps, the most popular etfs have a large component of Mag 7. Even the "global" Vanguard fund is about 20% Mag 7 (1), for example (and the next one on their list is TSMC). The Vanguard S&P 500 etf is 36% Mag 7 (2).

    If there's an AI pop, most people and most huge funds will lose money, draining liquidity from the global financial system.

    I've started investing in a mix of a Europe ETF and a global ETF, instead of just the global ETF, because the global ETF is so exposed to US computer companies.

    (1) https://investor.vanguard.com/investment-products/etfs/profi...

    (2) https://investor.vanguard.com/investment-products/etfs/profi...
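
    A rough sketch of the diversification arithmetic behind splitting across funds (the weights and exposures below are illustrative round numbers, not Vanguard's actual figures):

```python
def blended_exposure(weights, exposures):
    """Weighted-average exposure to one group of stocks across several funds.

    weights: fraction of the portfolio held in each fund (should sum to 1)
    exposures: each fund's fraction invested in the group (e.g. the Mag 7)
    """
    return sum(w * e for w, e in zip(weights, exposures))

# Hypothetical: a global ETF at ~20% Mag 7, blended 50/50 with a
# Europe ETF at ~0% Mag 7, roughly halves the concentration.
mix = blended_exposure([0.5, 0.5], [0.20, 0.0])
print(mix)  # 0.1
```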

    • binary132 4 hours ago

      almost as though extreme market consolidation is a bad thing

  • datadrivenangel 9 hours ago

    "So are we in an A.I. bubble? It sure looks like it to me. That doesn’t mean we won’t get large economic advances (and disruptions) out of A.I. "

    This is the most plausible looking path forward: LLMs + conventional ML + conventional software inverts how our economy operates over the next few decades, but over the next few years a lot of people are going to lose a lot of money when the singularity is actually a sigmoid curve.
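
    The singularity-vs-sigmoid point is easy to see numerically: early on, logistic growth is nearly indistinguishable from an exponential, and only later does the ceiling bite. A minimal sketch with an arbitrary growth rate and ceiling (illustrative parameters, not a forecast):

```python
import math

def exponential(t, r=1.0):
    """Unbounded exponential growth, normalized to 1 at t=0."""
    return math.exp(r * t)

def logistic(t, r=1.0, k=100.0):
    """Sigmoid (logistic) growth: starts at 1, saturates at the ceiling k."""
    return k / (1 + (k - 1) * math.exp(-r * t))

for t in (0, 2, 4, 8, 12):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
# At t=2 the two curves still track closely; by t=12 the exponential has
# run off past 160,000 while the logistic is pinned just under 100.
```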

    • 4 hours ago
      [deleted]
    • AlecSchueler 8 hours ago

      What is an inverted economy?

      • malfist 8 hours ago

        Poor people pay rich people to work?

        • arthurcolle 8 hours ago

          Renters paying mortgage holders to pay their mortgage in exchange for conceptual gains like "freedom"

          We've been there for a long fucking time

          • Spivak 7 hours ago

            If you're in a market where landlords can charge the full mortgage you desperately need more housing. Rent around here costs slightly more than the interest on the mortgage.

            • dontlaugh 6 hours ago

              I don’t think there’s anywhere in the UK where a mortgage is more than the rent for the same property. Renters are generally the breadwinners in their landlord’s families.

            • arthurcolle 7 hours ago

              Every city is like this

      • walleeee 7 hours ago

        ambiguous enough to work no matter who you're talking to

      • jrflowers 7 hours ago

        It’s when tuna finally starts snatching man from the sea

    • Lionga 8 hours ago

      It's not a sigmoid curve but rather the typical hype cycle curve. LLMs will surely not "invert" the economy (whatever that means). LLMs will just be a tiny part of it, like any other tool.

  • airstrike 8 hours ago

    > Even my moderate view, though, is premised on the assumption that A.I. will continue to advance up the steep part of the sigmoid curve for a while before hitting one or another physical constraint that creates a new inflection point and slows advances further.

    I feel like we've pretty much already hit a fundamental barrier in compute that is unlikely to be overcome in the near future barring a profound, novel algorithmic approach or an entirely novel computing model.

    • lucianbr 8 hours ago

      Are there any events in the last, maybe 6 months, that seem to be still on the steep part of the sigmoid curve? I'm probably not well informed, but I can't think of any. The GPT-5 launch sure does not feel like a steep advance. What else was there?

    • dwaltrip 5 hours ago

      $400 billion is being spent on new data centers this year alone (per yesterday's hard fork episode), for better or worse.

  • OptionOfT 7 hours ago

    I think in the future we'll see the productivity boosts negated by more burnout. Sometimes meaningless work is OK to recharge the brain.

    That is, if the work produced is actually useful; more and more we see that unless we're hyper-specific, we don't get what we want. We have to painfully iterate with non-deterministic output, hoping it gets things right.

  • estearum 8 hours ago

    IMO there seems to be more consensus among even the biggest hypemen that this is a bubble. But to protect their bag (and their LP money) they have to find some way that the silly investments they made weren't totally irresponsible.

    So here's the line of thinking we'll see more of:

    "Yes it's a bubble, but so were the railroads, and yet plenty of people made out big time! The railroads themselves were left over and hugely valuable!"

    Then the slightly more astute observer would say, well hold on, that's not quite analogous because the depreciation on AI buildout is way faster than in railroads.

    Then the even more astute observer would say, even that downplays the stupidity: the actual value creation from railroads was in the land itself.

    There is no analogous dynamic in AI-land, so the pop will probably be far more broadly catastrophic for the bubble blowers.

    • throwaway31131 5 hours ago

      Maybe not a railroad but I imagine the data centers won't be worth nothing...

      I imagine the AI bubble bust will be like the .com bubble bust. Of the 20 (or whatever number) of companies that have a shot, something like 3 will survive and do well. The problem is we don't know which 3.

      Many, many, dot-com era companies died during the .com bubble and tons of money was lost, but not everything. For example: Amazon.com, eBay.com, Intuit, etc.

      • estearum 5 hours ago

        The data centers will depreciate in value way faster than railroads do, and infinitely faster than the land underneath the railroads does.

        The AI boom is way way way bigger than dotcom was.
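
        To put rough numbers on the depreciation gap (the useful lives below are illustrative assumptions, not audited figures), straight-line depreciation writes off a short-lived compute asset an order of magnitude faster than a multi-decade rail asset, while land is not depreciated at all:

```python
def annual_straight_line(cost, useful_life_years):
    """Straight-line depreciation: an equal write-off each year of useful life."""
    return cost / useful_life_years

capex = 1_000_000  # same capital deployed in each asset class
gpu_cluster = annual_straight_line(capex, 5)   # 200_000/yr: gone in half a decade
rail_asset = annual_straight_line(capex, 40)   # 25_000/yr
land = 0                                       # land is not depreciated

print(gpu_cluster / rail_asset)  # 8.0x faster write-off
```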

    • SantalBlush 7 hours ago

      I think this bubble makes sense. Any investor who believes AI is the future--that is, that AI will disrupt entire industries and replace labor in quite a few ways--must have a slice of the AI pie. They know full well that most of their AI investments will go bust, but that doesn't matter as long as they have a significant piece of one of the few companies left standing when the bubble bursts. So they take shots at anything that looks like it could be one of those survivors. When it all pops, they will have some of the winners and they will write off the losers.

    • skydhash 8 hours ago

      Also railroads were deprecated because of the advantages of other transports, like the flexibility of cars and trucks, and the speed of planes. So not a bubble that pops up, just something that got replaced, like CRTs by led panels or fax by email.

      • estearum 8 hours ago

        Railroads are not deprecated. Huge amounts of freight moves by rail in the US still.

        There definitely was a speculative bubble, but it left behind a lot of real value. That value, though, was not in the railroad business nor in the railroads themselves, but in the gigantic amount of land grants and development that supported the bubble.

        • skydhash 7 hours ago

          Depreciation is the wrong word (not a native speaker). I meant moved into the background and not that much in the general public eyes (like oil drills and weather stations).

          • binary132 4 hours ago

            For what it’s worth, depreciation and deprecation mean very different things. :)

          • 7 hours ago
            [deleted]
  • mkbelieve 8 hours ago

    Of course we are and I suspect there are also an absolutely absurd number of Ponzi schemes underway as well.

  • Lauris100 8 hours ago

    I do feel that AI has been overhyped a bit for now, but what happens when we scale our electricity and GPU production 10x, 100x, go nuclear, etc., and can 100x AI models? Let's see. It's too early to tell, really.

    • Q6T46nT668w6i3m 8 hours ago

      Evidence suggests we’re data- rather than compute-scarce. Nowadays models are trained using other models, and we’ve started to see domain collapse.

      • Balinares 7 hours ago

        The amusing thing is that it takes several orders of magnitude less data to bring up a human to reasonably competent adulthood, which means that there is something fundamentally flawed in the brute-force approach to training LLMs, if the goal is to get to human-equivalent competency.

        Also the fact that 30B models, while less capable than 300B+ models, are not quite one whole order of magnitude less capable, suggests that all things being equal, capability scales sub-linearly to parameter count. It's even more flagrant with 4B models, honestly. The fact that those are serviceable at all is kind of amazing.

        Both factors add up to the hunch that a point of diminishing returns must soon be met, if it hasn't already. But as long as no one asks where all the money went I suppose we can keep going for a while still. Just a few more trillions bro, we're so close.
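
        That sub-linear hunch can be put in toy terms: if capability followed a power law N^alpha in parameter count N with alpha < 1 (the exponent below is made up for illustration, not a measured scaling law), then 10x the parameters buys roughly 2x, not 10x:

```python
def toy_capability(n_params_b, alpha=0.3):
    """Hypothetical power law: capability ~ N**alpha; alpha < 1 means sub-linear."""
    return n_params_b ** alpha

# 30B -> 300B: 10x the parameters...
ratio = toy_capability(300) / toy_capability(30)
print(round(ratio, 2))  # ...for only about 2x the "capability" (10**0.3 ≈ 2)
```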

        • janalsncm 7 hours ago

          I suspect there’s a good deal baked into a human brain we’re not fully aware of. So babies aren’t starting from zero, they have billions of years of evolution to bootstrap from.

          For example, language might not be baked in, but the software for quickly learning human languages certainly is.

          To take a simple example, spiders aren’t taught by their mothers how to spin webs. They do it instinctively. So that means the software for spinning a web is in the spider’s DNA.

      • skydhash 7 hours ago

        I don't think we can even do a "Handbook of LLM techniques" at this point and have something thick enough to raise a monitor. It's all in the Data. First they started with copyrighted material, then public sources (maybe private sources as well), and now they are hammering every server in existence to get moooorre..

    • dns_snek 8 hours ago

      > what happens when we scale our electricity and GPU production 10x, 100x

      Nothing interesting without some fundamental breakthrough IMO. Model/agent providers add another level of "thinking" that uses 10x the energy for 10% gain on benchmarks.

      • richardw 8 hours ago

        They also typically dilute human input with self-talk. You can steer them in a direction, but the internal conversation can convince itself otherwise. It’s not frequent but it’s very frustrating when it happens.

  • WalterSear 8 hours ago

    If other engineers are getting the productivity boost that I am from AI, then we are just scratching the surface of its effect on the economy.

    And if they aren't, then they will be soon enough.

    • majormajor 8 hours ago

      I'm waiting to see the output.

      There should be some amazing new end-user-facing software, or features in existing software, or reduced amounts of bugs in software, any day now...?

      • srcreigh 7 hours ago

        Here's what I've been able to do:

        https://pxehost.com

        It's a cross platform PXE server. You just run it (root-less, no config) and the other computers on your LAN can boot up via PXE and (via netboot.xyz and iPXE) automatically download a Linux installer

        The tool itself and the website both started their life as extremely functional 1-shots from GPT-5. pxehost was in a ChatGPT chat, and pxehost.com began as a 1-shot in a Codex CLI.

        To me it's really cool that something like pxehost exists, but the fact that it began life as a fully working prototype from a single ChatGPT response is insane.

        • nativeit 6 hours ago

          That’s almost certainly because it actually started life as a human-developed GitHub repo.

          Not suggesting it wasn’t useful, or that it’s not remarkably convenient. It’s just easy to forget how much went into providing it, in terms of the human labor involved with its training data, financial investments, and raw resources. In the broader context, it’s an unbelievably inefficient way to get to where we got. But as long as we are here, I guess we should enjoy it.

          • 5 hours ago
            [deleted]
          • antonvs an hour ago

            > That’s almost certainly because it actually started life as a human-developed GitHub repo.

            That claim would have much more force if you could point to a repo. Otherwise, it just seems like blind bias.

      • WalterSear 7 hours ago

        My beta starts end of October, if you are interested.

        https://youtube.com/@groove-tronic

        I'm disabled (which is why I've been forced into starting my own venture) and can only work a few hours a day, but I'm still more productive than I have been in 25 years, even when I had a team of engineers and a designer working for me.

        The code is also cleaner - because refactoring is cheap now.

        And I'm working in a language (C++) that I had barely touched when I started.

        • majormajor 6 hours ago

          That does look very cool, but I have no knowledge of the industry, so I can't really judge if there are any big advances compared to state-of-the-art preexisting tools.

          The 0-to-1 learning curve reduction is definitely very real, but the open question for me is what's next.

        • yahoozoo 7 hours ago

          Very cool. Love the Amen break.

      • janalsncm 7 hours ago

        Is ChatGPT not the killer app? It’s pretty damn amazing compared to what we had 10 years ago.

        • nativeit 5 hours ago

          I will cede the very true point that ChatGPT was indeed very unhelpful in 2015.

      • rafaelmn 7 hours ago

        I mean, the obvious proof that the claims are false is that AI tooling companies are no better at standard software development than the rest of the industry (at best).

        Given their access to models, tooling, and insane funding/talent, they still suck at standard software engineering like the rest of us, if not more so because of the pace.

        All the AI integrations so far have been a joke, PoC level quality software. Talk to me when AI helps them rebuild core products into something more impressive.

      • xnx 8 hours ago

        This has clearly already happened in image creation and editing.

        • majormajor 6 hours ago

          No, there are models that do amazing work on images, but that's not the same as massive increases in coding into non-model-driven features in image creation or editing.

          • xnx 6 hours ago

            > or features in existing software

            Ah, I misunderstood. I guess I don't care much if AI was used to create the code that enabled a revolutionary feature vs. AI being the revolutionary feature.

            • majormajor 6 hours ago

              I don't in terms of tools I use.

              But I do in terms of "is it an industry-changing advance in software development, the profession", which is what this thread seemed to be about.

        • skydhash 7 hours ago

          Point us then to the GIMP, Krita, or ImageMagick of AI.

      • Lionga 8 hours ago

        I see so many on HN say they got 10 times more productive but nobody ever has anything to show.

        • Balinares 7 hours ago

          I can readily believe that undiscriminating enthusiasts are getting 10x as much code out.

          Why that might not translate to an increase in software quality and feature count is left as an exercise to the weary senior reader.

          • WalterSear 7 hours ago

            I've been coding for 25+ years. My code has never been cleaner.

            With AI, refactoring is cheap and safe.

        • WalterSear 7 hours ago

          I don't think that my product demos will tell you much about how effective AI has been, but here:

          https://youtube.com/@groove-tronic

          Now you can't say nobody has anything to show.

          This does real-time DSP (time stretching, effects, analog emulation), in a language I didn't know when I started (C++), that I started a couple of months ago.

          I could not have created this without AI, cannot make progress without AI. When I use up my Pro account limits, I'm done for the day - I'm too slow without it for continuing to make sense.

          • weikju 4 hours ago

            > I could not have created this without AI, cannot make progress without AI. When I use up my Pro account limits, I'm done for the day - I'm too slow without it for continuing to make sense.

            And this is why untold billions are sunk into AI. It creates a dependency on their services and removes agency from end users.

            Edit: yes, I’m aware it adds agency in the sense that some people can do things they couldn’t do before… as long as that’s permitted/surveilled, which is where the agency is lost.

    • bossyTeacher 7 hours ago

      Productivity boost ain't free though. Once the transformer tech stops being subsidised and you have to pay for it, we will see if the productivity gains are worth the subscription cost

      • WalterSear 7 hours ago

        It doesn't entirely negate what you are saying, but Anthropic says they are already cash positive on inference.

    • dgfitz 8 hours ago

      Slinging code is not really the point. Making money off said code is the point. Making more money than you spend maintaining it is really the point.

      I don’t think over the last 5-8 years there has been a shortage of code-slingers, as evidenced by all the tech layoffs. Using LLMs to generate more code does not equal productivity. There’s that famous story about the -2000 LOC commit, etc.

      • WalterSear 7 hours ago

        It's not about an engineer shortage - it's about exponentially cheaper and faster engineering.

        I could not have attempted to create the application I have created without it.

        As far as money-making - I wager it's helped me a great deal more in regards to preparation for marketing and sales, since I barely knew where to start with that. But I don't have the tangible proof yet, since I'm just starting that process up.

      • throwaway31131 8 hours ago

        Also, the big challenges in software organizations I deal with are getting a group of a dozen or so “code-slingers” to work effectively as a team. That’s really hard and it’s been the fundamental management problem for the entire time I’ve been in the industry, which is decades. Not enough LOC has never really been a thing.

        • WalterSear 7 hours ago

          AI solves that problem with brute force: if you need 5x fewer engineers working on an application, your team will find coordination 5x easier.

        • skydhash 8 hours ago

          Also the many communications roundtrips to know what to code-sling. That's just as difficult as doing code-slinging well as a group.

        • delusional 7 hours ago

          I've only been in the industry for 7 years. So far, the problem I've seen at my two employers has been stopping the "code-slingers" from making absolutely worthless systems that create a ton of noise in the organization, and redirecting their efforts towards long-term valuable and productive ends.

          From observation and induction, it seems very easy to contribute net-negative value in a software development position, and delivering long-term net positive value really requires a lot of discipline from the developers, or a lot of wrangling from non-technical members.

          PS: Discipline here does not mean "being picky about what things to accept" it means being picky about what things you make up. From observation, a lot of the most valuable work is super fucking boring, and it takes discipline not to make up a more intellectually stimulating task.

  • measurablefunc 7 hours ago

    The definition I use for a financial bubble in case of AI is the following: financial debt obligations/investments that can not be repaid/serviced w/ future growth. Unlike pure software, the growth in AI is coupled w/ huge investments in real infrastructure (data centers, power plants, network interconnect, etc). This means that regardless of what happens the infrastructure is not going away & given how software can be so easily reprogrammed these days there is no way that all of it will suddenly go up in a puff of smoke. OpenAI & others will figure out how to redeploy the infrastructure for different use cases & keep the money flowing. This is unlike office buildings & general real estate b/c it's not like an office building or a family home can be quickly repurposed for something else to keep the money flowing to whoever owns the underlying infrastructure.

    I don't think it's a bubble. The numbers seem large but that's b/c the underlying infrastructure costs a lot of money & unlike other forms of infrastructure computers can be used for all sorts of different things by simply redeploying different software to it that consumers will find compelling (even if it's no longer 6 second clips of cats doing backflips from diving boards).

  • arbirk 8 hours ago

    I suspect there is a substantial first mover disadvantage right now. The extreme investments will not be profitable, leading to the bubble bursting at some point in the not so distant future. This will lead to short term price increases for inference and slower innovation in a period, the tech will emerge more mature and stable etc.

    As one who clearly sees the huge potential of this tech, I find this an interesting outlook; make sure to make your products resilient to changing vendors and price hikes and it will probably be fine.

    Side note: Google seems to be playing the long game..

  • airstrike 8 hours ago

    I think any comment about an AI bubble needs to start by defining who the players are and how it affects them differently.

    AI foundries, Nvidia, the hyperscalers, enterprise buyers of AI, consumers, the US, China, the rest of the world, startups, investors, FOSS, students, teachers, coders, lawyers, publishers, artists... each stand to win or lose in profoundly different ways.

    Otherwise we all end up talking past each other.

  • FergusArgyll 7 hours ago

    I have a simple rule. It's too cute by half but its track record has been good since I've deployed it.

    Markets cannot be forecast; therefore, if everyone is predicting a downturn, the downturn will not come. There needs to be some ambiguity, a kind of FergusArgyll uncertainty principle.

  • lowsong 8 hours ago

    Effectively everyone, including Sam Altman [0], is saying we're in a bubble.

    The only questions left, the only ones that matter, are:

    - When is it going to pop? Tomorrow? Next year? 2030?

    - How hard is the crash going to be? Only a bunch of AI startups and one or two of the big "AI" companies (OpenAI, Anthropic) go down, or a global financial crash that wipes out hundreds of companies and hundreds of thousands of jobs.

    - What's left of "AI" tech in the wreckage. Once the hype is over, what real use-cases exist?

    [0] https://www.cnbc.com/2025/08/18/openai-sam-altman-warns-ai-m...

  • zeroonetwothree 8 hours ago

    The rare violation of Betteridge’s Law.

    • rightbyte 8 hours ago

      The headline does not end with a question mark though.

  • indigodaddy 8 hours ago

    Pretty decent analysis by Google AI mode:

    https://share.google/aimode/49jtBuQy9X3wSKy5R