The AI emperor has no clothes

(jeffgeerling.com)

51 points | by warrenm 3 hours ago ago

59 comments

  • gkoberger 3 hours ago ago

    The premise is true, but I disagree with the conclusion. There are many companies with outsized valuations, and few will ultimately survive. The valuations of these companies will go down; there clearly is no way the market can handle it all.

    But the revenue going into AI will not go down. The job loss will not go down. What you're seeing isn't AI being an empty promise, but rather many different people trying to win the market.

    It's like "tech"... 20 years ago, there was a category of companies we called tech and it was getting outsized valuations. Now every company is a tech company, whatever that means – there are as many programmers at Goldman Sachs or Walmart as there are at Airbnb or Dropbox.

  • logtrees 3 hours ago ago

    "There is no way the trillions of dollars of valuation placed on AI companies can be backed by any amount of future profit."

    This is just a case of the user being unable to see far enough into the future. Yes, there's huge future profit to be had.

    • DocSavage 3 hours ago ago

      Aside from the better versions of what AI is visibly doing now (software dev, human language translation, video gen, etc), many of the AI bears are dismissing the potential impact of hooking AI up with automated experimentation so it's able to generate new types of data to train itself. The impact on drug discovery, material science, and other domains are likely to be very significant. The Nobel Prize in Chemistry for AlphaFold is just a glimpse of this future.

      • logtrees 3 hours ago ago

        Completely agreed. It won't even displace the people who were diligent in all of those crafts. It will supercharge them. And there will be novel combinations producing new services/products. It's going to be great.

      • AtlasBarfed 3 hours ago ago

        "automated experimentation so it's able to generate new types of data to train itself"

        AIs don't understand reality. This type of data generation would need a specific sort of validator function to work: we call this reality. That's what "experimentation" requires: reality.

        We already have this right now, with AI training ingesting AI crapgen while StackOverflow posts no longer happen. That would seem to point to a degrading AI training set, not an improving one.

        • DocSavage 3 hours ago ago

          A number of startups are working in verifiable domains where they can provide realistic data. This is an interesting thread from one of those startups: https://x.com/khoomeik/status/1973056771515138175

          Here's a discussion with Isomorphic Labs (Google DeepMind spinoff) on this line of thinking: https://www.youtube.com/watch?v=XpIMuCeEtSk

        • 2 hours ago ago
          [deleted]
        • ethagnawl 2 hours ago ago

          Side note: I happened to look at the SO "Community activity" widget earlier this week and was quite surprised to see just how far engagement has fallen off. I don't have historical entries to reference but I'm _fairly certain_ there used to be hundreds of thousands of users (if not more) online during the middle of an average work day (I'm in America/New_York) and there are currently ... 16,785.

      • gjsman-1000 3 hours ago ago

        A more sane answer is garbage in, garbage out, and this future never materializes.

        • logtrees 3 hours ago ago

          Don't send garbage in!

          • rkomorn 3 hours ago ago

            Isn't that about as tangible as "don't write bugs"?

            • logtrees 2 minutes ago ago

              Not really. One is a conscious design choice for what you choose to build as your ethos or Magnum Opus or what have you. The other is a consequence of dealing with hard technical engineering and scientific matters. :)

            • 2 hours ago ago
              [deleted]
    • potato3732842 3 hours ago ago

      I think a lot of this viewpoint comes from the fact that the median software engineer doesn't really have much exposure to mature, and therefore often regulated, industries, or to how much make-work and ass-covering paper pushing there is in them.

      I have no idea what fraction of our economic productivity is wasted doing these sort of TPS reports but it's surely so massive that any software that lets us essentially develop more software on the fly to cut that back even slightly is highly valuable.

      Previously only the most moneyed interests and valuable endeavors could justify such software, like for example banks flagging sus transactions. Current AI is precariously close to being able to provide this sort of "dumb first pass set of eyes" look at bulk data cheaply to lesser use cases for which "normal" software is not economically viable.

      • bryanlarsen 2 hours ago ago

        AI will not reduce the amount of time wasted on paperwork. It'll massively increase the amount generated and consumed.

      • _DeadFred_ 2 hours ago ago

        The problem is that those same workers have maybe 5% of key stuff they do, based on knowledge and depth they probably wouldn't have without all the surrounding 'TPS'-style bs. Definitely not knowledge you can take from 10 separate workers, each with their 5%, and somehow get 1 worker working on that stuff 50% of the time.

        Boring-ass code reviews come in super handy because of the better familiarity: getting exposure to the code slowly, seeing the 'whys' as they are implemented instead of trying to figure them out later. The same goes for buyers looking over boring paperwork, team leads, production planners. Automating all of that is going to create worse outcomes.

        In a sane world if we could take the fluff away we would have those people only working 5% of the time for the same pay, but we live in a capitalist system where that can't be allowed, we need 100% utilization.

        • potato3732842 2 hours ago ago

          > based on knowledge and depth they probably wouldn't have without all the surrounding 'TPS' style bs.

          >Boring ass code reviews come in super handy because of the better familiarity, getting exposure to the code slowly, exposures to the 'whys' as they are implemented not trying to figure out later.

          But to what extent is this truly necessary vs a post-hoc justification? Workers are pushed to work right to the limit of "how little can you know about the thing without causing bad results" all the time anyway.

          >In a sane world if we could take the fluff away we would have those people only working 5% of the time for the same pay, but we live in a capitalist system where that can't be allowed, we need 100% utilization.

          <laughs in Soviet bureaucracy>.

          The Catholic Church was making fake work for itself for about 500 years before it caused big problems for them. It's not the capitalism that's the problem. It's the concentration of power/influence/wealth/resources that seems to breed these systems.

    • computerphage 3 hours ago ago

      Indeed. There are trillions of dollars /per year/ paid to workers in the US alone.

      • computerphage 3 hours ago ago

        Like, there is an argument that can be made here, but "there's just not enough money in the world to justify this" definitely isn't it

        • cpgxiii 28 minutes ago ago

          Just because trillions are currently spent on employees does not mean that another few trillion exist to spend on AI. And if, instead, one's position is that those trillions will be spent on AI instead of employees, then one is envisioning a level of mass unemployment and ensuing violence that will result in heads on pikes.

    • moffkalast 3 hours ago ago

      Trillions of dollars is pocket change if you wait for enough inflation.

  • lazystar 3 hours ago ago

    > There are good use cases for machine learning, AI, etc. But sadly, the best ones are masked by the hype-train AI junk that is either useless or incredibly expensive in comparison to the amount of money being charged for it.

    So I've been in the industry ~8 years now, and this is the exact same discussion that was held during the NFT/blockchain hype. This cycle seems inevitable for any new disruptive tech.

    • filoleg 3 hours ago ago

      The difference is that NFTs always seemed to be just hype for hype's sake, with zero functional/helpful use-case at all. No, "trading cards [more like hashes/serial-number receipts of those, tbh], but digital" is not a functional use-case.

      And afaik, that was the only use of NFTs that actually worked. The whole “omg you can buy skin in this one game as an nft, and then have it across multiple games” (as well as other bs of a similar nature) was always a ridiculous thing, as it still requires gamedevs to collab and actually do all the required integration work (which is rather infeasible). And even if they decided to ever do that, they would almost certainly find it much easier to accomplish without relying on NFT tech.

      AI also has a ton of hype and pipedreams that are vaporware, of course (just look at all the batches of ChatGPT-wrapper startups in the past couple of years). However, what the AI hype has (that NFT hype never did) is a number of actual functional use-cases, and people utilize it heavily for actual work/productivity/daily help with great success.

      Claude Code/Gemini CLI alone can be such a boon in the hands of a good software engineer, it is undeniable.

      • bigstrat2003 3 hours ago ago

        AI also doesn't have good use cases yet. It sucks at the one thing it's supposed to be good at (writing code).

        • filoleg 21 minutes ago ago

          > It sucks at the one thing it's supposed to be good at (writing code).

          It sucks at replacing software engineers. So yes, if you expect it to design a complex scalable system and implement it for you from start to finish, then sure, you can call it bad at "writing code".

          I don't care about that use-case. I think I am decently good at writing code, and I rather enjoy doing it. I find Claude Code/Gemini CLI extremely helpful at saving me lots of time by freeing me from dealing with annoying boilerplate, so I can focus more on actual system design (which LLMs fail at terribly, if we are talking about real production apps that need to scale) and the more difficult/fun parts of the code (which LLMs cannot handle either).

          That's the real power of it, multiplying the productive output of good SWEs + making the work feel more enjoyable for them, by letting those SWEs focus on actual tricky/difficult parts, instead of forcing them to spend a good half of the time just dealing with boilerplate.

          And I am not even gonna bother getting into the tons of other non-coding tasks it is already very useful for. I find it amazing that I can now not only get a transcript of my work meetings (which is already massively helpful for me to review later, as opposed to listening to a ~25min video recording of it), but also ask an LLM to summarize it for me or parse/extract info from it.

        • Ukv 2 hours ago ago

          Transcription, translation, material/product defect detection, weather forecasting/early warning systems, OCR, spam filtering, protein folding, tumor segmentation, drug discovery/interaction prediction, etc. seem fairly promising to me, with machine learning approaches often blowing traditional approaches out of the water.

          I feel the idea that there's only "one thing it's supposed to be good at (writing code)" is largely down to availability bias, in that we're in programming circles where LLM code completion gets talked about a lot. ChatGPT reportedly has 700 million weekly users - I'd assume many using it for tasks that I'm not even aware of (automating some tedious/repetitive part of their job).

        • dwaltrip an hour ago ago

          If Terence Tao says it can save hours of work for him, you might be holding it wrong.

    • lucisferre 3 hours ago ago

      Does NFT/blockchain actually have a cycle? I have not seen any real killer use cases.

      AI may be overhyped but the actual use cases and potential ones seem very clear.

      • mikepurvis 3 hours ago ago

        I agree that this is an odd comparison— blockchain and especially NFTs really are a scam and a pyramid scheme, whereas modern generative AI has dozens of real applications where it's already been massively disruptive.

        Maybe compare it with something like the development of motion pictures, with dozens of small studios trying different approaches to storytelling, different formats, etc., with live performances being the legacy thing getting disrupted. And then the market rapidly maturing and consolidating down to a few winners that lasted for decades.

      • samcat116 3 hours ago ago

        I think we’ll start to see a bunch of fintech companies use stablecoins for things, but as more of an implementation detail and not really a speculative market like it was before.

        • WJW 3 hours ago ago

          Well stablecoins by design cannot really fluctuate a lot in value (unless they collapse), so speculation on them is pretty much out.

          It's still pretty unclear to me why you'd actually want a blockchain for managing it though, rather than a traditional database hosted by the central bank that is responsible for the currency that the stablecoin is following. You'd get vastly more throughput at much lower costs, and it's not like you really need decentralization for such a system anyway. The stablecoin is backed by something the central bank already has authority over.

          • galenmarchetti 3 hours ago ago

            then why hasn't the central bank done this yet

            • WJW 2 hours ago ago

              Depends. Several central banks are working on exactly that; see for example the recent speech by the Fed and the whole GENIUS Act regulation framework.

              But to be honest where I am (northwest Europe) we already have subsecond person-to-person transactions via the normal banking system, no matter which bank the sender and receiver use. So stablecoin-ifying the Euro wouldn't make a huge difference. There might be more to gain if the region doesn't have that kind of payment infrastructure yet.

      • warrenm 3 hours ago ago

        The valid use cases for blockchain are relatively few

        The use cases where it gets applied are far more than the valid ones

        As for NFTs ... there never was (and never will be) a valid use case - it does not matter if you "own" a digital asset (like an image): a screenshot of it is good enough for 99.99999...% of people, so why pay for the "real" thing?

      • kelvinjps10 3 hours ago ago

        I don't know about NFTs, but blockchains allow me to send money to my family in USDT/USDC for cents on the dollar. And in countries like Venezuela, besides using it for receiving and sending money, some people use it to save money, because you cannot trust the government and banks.

        • roland35 2 hours ago ago

          I feel like this isn't really a new or novel thing? Storing cash in a bank has risks, storing crypto in any sort of method also has risks, right?

          Sadly for those in unstable countries there are lots more risks in life, including financial. I'm not sure crypto is the best long term solution here

        • drivebyhooting 2 hours ago ago

          That’s because organized crime provides the demand and liquidity needed for cryptocurrency. Without cartels and the like trying to bypass money laundering and capital control laws you would not easily transfer bitcoin to and from Venezuela.

      • filoleg 3 hours ago ago

        Blockchain does, NFTs not so much.

    • fabian2k 3 hours ago ago

      NFTs never had a real use case. Blockchains do have limited use, and cryptocurrencies are different enough that I don't think it's a valid comparison.

  • cs702 3 hours ago ago

    Hmm... I'd say most AI emperors have no clothes -- that is, no profits and no prospect of them in the foreseeable future.[a]

    A tiny number of them are actually wearing something, like a fig leaf. They are the ones who won't be caught in embarrassment when an innocent child points out that most emperors are walking around in the buff.

    ---

    [a] Cloud providers, who do earn profits, are service providers to emperors, not emperors on their own.

    • warrenm 3 hours ago ago

      This is precisely why Nvidia is providing GPUs to AI companies, and not trying to be one itself ... there's loads more money to be made selling shovels to prospectors than in being a prospector.

  • DrNosferatu 3 hours ago ago

    Few companies will survive, but the ones that do will be in a world that will never be the same again.

  • criddell 3 hours ago ago

    Are the AI improvements of the past few years going to lead to advances in self driving cars? How close are we to where a robot could get into a car from 2010 and drive me around?

    • warrenm 3 hours ago ago

      >How close are we to where a robot could get into a car from 2010 and drive me around?

      A long way away

      And here is why - driverless cars are a thing ... essentially making the car the "robot"

      General-purpose robots are an amusing scifi trope, but have no practical benefit in reality

      Purpose-built robots (even ones that can flex within that purpose to different applications) make far more sense (and have already been around for decades)

      • drivebyhooting 2 hours ago ago

        Genetically engineered, properly “educated”, politically controlled, and brain washed humans would be far more useful than electromechanical robots we can build any time soon. This too is a common scifi trope.

        And it’s true. Industries have shown time and again that they’d rather send the work to paupers’ hands in countries without, than automate it to metal hands within.

        • pixl97 34 minutes ago ago

          >And it’s true. Industries have shown time and again that they’d rather send the work to paupers’ hands in countries without rather than automate

          I disagree, to the point I'd say the statement is nearly false.

          You have to look at what the 'expensive' point of the work is.

          In some cases it's electricity, those things generally don't get sent overseas, but instead are highly automated.

          In some cases it's waste byproduct. Those get sent to countries where they don't care if you dump it and poison the land.

          In some cases it's proximity to other manufacturing of parts you need.

          In some cases it's cheap labor without the need for paying healthcare.

  • drcongo 3 hours ago ago

    This popped up just after I'd finished my daily chore of adding AI spammers to my mail filters. I'm up to about 10 per day now.

  • 3 hours ago ago
    [deleted]
  • fbn79 3 hours ago ago

    Is this true even if AI-2027.com is right?

  • andrewmutz 3 hours ago ago

    The AI bubble will be like the dot com bubble. There is too much hype and lots of investors will lose money. But at the same time, lots of successful and world-changing businesses will be created.

    Does that mean the AI emperor has no clothes? I don't think so.

    • warrenm 3 hours ago ago

      Most of the self-proclaimed 'emperors' have no clothes

      The actual emperors? They are dressed (or, at least, in the fabric section of the hobby store and have a sewing machine at home)

    • WJW 3 hours ago ago

      It might be like the dot com bubble, where eventually a few winners emerged that went on to be hugely valuable. It could also be like the tulip bubble, where eventually prices collapsed and never recovered. Or for a more modern example, the NFT bubble.

      • andrewmutz 3 hours ago ago

        I'm not saying that all bubbles build real businesses that have a big impact on the world, I am saying specifically this AI bubble will do so.

        I've already seen enough significant, real-world value being created by AI technology that I am a believer that it will have a big impact. It will also destroy a lot of investment money along the way, but that's just normal bubble stuff.

  • gjsman-1000 3 hours ago ago

    I'm also giving it just 2-3 years before the test scores from high schoolers are so bad, across the board, that a politician calls for age verification on AI platforms. Even if solely to force them to do schoolwork, other harms ignored. Especially on Windows 11: who at Microsoft decided educators needed to compete with a colorful taskbar button that can't be turned off with parental controls?

    Watch the engagement plunge, and with that, the dreams of profit.

    • pixl97 32 minutes ago ago

      > that a politician calls for age verification on AI platforms

      And MS et al. will grease the palms of the politicians running against them. Lobbying is easy when you're a huge corporation.

  • keernan 3 hours ago ago

    When OpenAI made its Nov 2022 ChatGPT announcement, why did they try so hard to hype it in anthropomorphic terms? Why did they push the idea that it was a 'black box' whose workings they themselves didn't understand? That it was a serious threat to humanity and that government oversight was urgently needed (leading to a meeting at the White House)?

    I'm not suggesting GPT has no value. But in hindsight everything I described above was pure bull. I suppose it could be suggested that Sam Altman and his engineers didn't understand their own algorithms. But I don't buy that fairytale.

    • warrenm 3 hours ago ago

      >why did they try so hard to hype it in anthropomorphic terms?

      Marketing, pure and simple

      People relate better to something that sounds humanesque (even though it is not) vs calling it what it is (in this case, a massively scaled, LLM-based Markov chain generator)
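
      The "Markov chain generator" comparison can be sketched in a few lines: a classical word-level Markov chain just maps each state (the last few words) to its observed successors and samples among them. This is a loose analogy only — an LLM conditions on a long context through learned continuous representations, not a frequency table — and the function names and toy corpus below are made up for the sketch:

      ```python
      import random
      from collections import defaultdict

      def build_chain(text, order=1):
          # Map each `order`-word state to the list of words observed to follow it.
          words = text.split()
          chain = defaultdict(list)
          for i in range(len(words) - order):
              chain[tuple(words[i:i + order])].append(words[i + order])
          return chain

      def generate(chain, start, length=10, seed=0):
          # Walk the chain: from the current state, sample one observed successor,
          # emit it, and shift the state window forward. Stop at a dead end.
          rng = random.Random(seed)
          out = list(start)
          state = tuple(start)
          for _ in range(length):
              successors = chain.get(state)
              if not successors:
                  break
              out.append(rng.choice(successors))
              state = tuple(out[-len(start):])
          return " ".join(out)

      chain = build_chain("the cat sat on the mat the cat ran")
      print(generate(chain, ("the",), length=5))
      ```

      The analogy holds only at the level of "predict the next token from the current state"; the scale and the learned state representation are what separate an LLM from this table lookup.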

  • babuloseo 3 hours ago ago

    This is a really insightful discussion! It's fascinating to see the parallels drawn with the dot-com bubble and the varying perspectives on the sustainability of the current AI boom. Mark Harrison's point about the internet having "a lot of clothes" even after the initial crash is particularly thought-provoking, suggesting that foundational infrastructure built during hype cycles can still lead to long-term value. It's also interesting to consider Jay G's observation about the "credits" pricing model. If these tools are genuinely providing enough value that users are willing to pay per use, it could indeed indicate a more robust economic model than some might assume, even with high compute costs. The debate between immediate overvaluation and future potential is clearly at the heart of the matter. Thanks for sharing this!

    • BizarroLand 3 hours ago ago

      Isn't it rather foolish to insert an AI generated response into a conversation between humans?