They Don't Have the Money: OpenAI Edition

(platformonomics.com)

88 points | by eatonphil 14 hours ago

55 comments

  • 9cb14c1ec0 13 hours ago

    > I suspect we can design a very interesting new kind of financial instrument for finance and compute that the world has not yet figured it out

    Hmm, can't figure out why this statement makes me think of Enron. After all, OpenAI certainly isn't trying to do massive infrastructure build outs while struggling with a relatively limited cash flow, or anything like that.

    • brap 13 hours ago

      So glad this is the top comment; this was my exact thought (I quoted the exact same line in the comment I made before reading this one). This statement made my bullshit sensors go off like crazy. Glad it’s not just me.

    • mayhemducks 13 hours ago

      I've had the exact same thought recently. This is Enron, this is 2008 in new clothing. It's the same playbook. They are seeking a bailout.

    • piva00 2 hours ago

      I can't find the article right now because the obvious keywords just surface a lot of noise around OpenAI (pure slop most of the time), but Sam Altman said in an interview something along the lines of "real innovation happens only through financial innovation" (or "enabled by"; I can't remember the exact words), a statement that triggered a lot of red alarms in my head when I read it.

      It's the same playbook Enron, the CDOs/CDSs of the 2008 crisis, and other financial frauds throughout history have used: repackage this unattractive financial product into layers of other stuff, hide the risks, rebrand it as something new and exciting, promise returns, and fuck everything up in the end.

      “The four most dangerous words in investing are: ‘This time it’s different.’” - Sir John Templeton

  • pants2 14 hours ago

    Yes, their capex spending is mind-blowing. Astronomical. But OpenAI has also pretty much lived up to its promises so far. I don't see them straight up lying to investors the same way that Tesla does.

    Not only has OpenAI launched multiple viral products a year multiple years in a row, but their mission is to create God, so I think the TAM is pretty large.

    • macintux 13 hours ago

      I’m on a long, meandering road trip and I decided to start using the conversational feature in ChatGPT. Truly amazing stuff. Discussed some sights I’d want to see along the Natchez Trace (although perhaps unsurprisingly it did not anticipate that this time of year was not ideal for hearing and seeing waterfalls), told me about Meriwether Lewis’s bizarre death. Basically like holding a conversation with Wikipedia, which was perfect.

      The only real problem was that in the middle of nowhere, I didn’t have a reliable enough data connection to keep the conversation going, but that’s hardly OpenAI’s fault.

      • koakuma-chan 13 hours ago

        What is "the conversational feature"?

      • dkasper 13 hours ago

        Starlink roam solves that!

        • 13 hours ago
          [deleted]
      • teitoklien 13 hours ago

        Same here. YouTube and social media algorithms, driven by outrage to boost engagement stats, have made it unbearable for me to watch or read the news online; everything is framed and created with maximum outrage in mind.

        I added a DNS-level blocker for all news apps and restricted YouTube itself to stop myself from watching news no matter what. Now I only use ChatGPT advanced voice mode, and sometimes Perplexity Pro, to get my news for the day and ask questions around it. I stopped reading everything news-related apart from purely business and tech articles curated and sent to me via my RSS feeds or newsletters, nothing else.

        It feels amazing to get briefed on the day's news by ChatGPT; I intuitively ask it about whatever interests me and nothing else.

        • ambicapter 13 hours ago

          As someone who uses ChatGPT on topics I'm relatively experienced in, it seems absolutely insane to me to put your entire worldview in its hands.

          • 999900000999 13 hours ago

            ChatGPT likes to make things up.

            It told me the other day that on a system with multiple hard drives/SSDs I could set Secure Boot on each drive independently.

            Of course this is nonsense, since Secure Boot is a setting in the UEFI firmware ("the BIOS") for the whole machine, not something configured per drive.
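
            A minimal sketch of that point (an illustration, assuming a UEFI Linux box with efivarfs mounted at its usual path; the variable path and byte layout are standard UEFI conventions, not anything ChatGPT said): Secure Boot state is a single firmware variable for the whole machine, so there is nothing per-drive to query.

            from pathlib import Path
            from typing import Optional

            # There is exactly one SecureBoot variable per machine, under the
            # standard EFI global-variable GUID, not one per disk.
            SECURE_BOOT_VAR = Path(
                "/sys/firmware/efi/efivars/"
                "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
            )

            def secure_boot_enabled() -> Optional[bool]:
                """True/False for the whole system; None if not booted via UEFI."""
                if not SECURE_BOOT_VAR.exists():
                    return None  # legacy BIOS boot, or efivarfs not mounted
                data = SECURE_BOOT_VAR.read_bytes()
                # efivarfs prepends 4 attribute bytes; the payload is a single 0/1 byte.
                return bool(data[-1])

            print("Secure Boot (system-wide):", secure_boot_enabled())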

            Whatever, ChatGPT got its engagement metrics.

            I'm going to predict within the next few years someone's going to lose a billion dollars relying on Chat GPT or another LLM.

            It's still at the level of an energetic junior engineer: sure, it wants to crank out a lot of code to look good, but you need to verify everything it does.

            I was game jamming with a friend last weekend and realized he can manually write better, lighter, more effective code than I was having Copilot write.

            Which sounds safer: an elegant 50-line function, or 300 lines of spaghetti code that appears to work?

            The manager (and above) level is all about AI though: let's cut staff and have AI fill in the gaps!

    • danpalmer 13 hours ago

      Have they? It seems to be getting clearer every month that they are not going to reach AGI, that the products are (good, but) not living up to the promises that were made.

      I suspect you're right that Tesla is in a different league here, but I don't think OpenAI are in a good spot.

      • a_vanderbilt 13 hours ago

        The definition of AGI is diffuse enough to make it an argued point - until we can mostly agree it's already happened. For now, the stats are improving well enough across the industry to maintain investor attention. Will it all come crashing down a-la the .com bubble? It's seeming more likely by the quarter.

        Like the digital economy post .com burst, I think AI will survive and grow far beyond its current market of chat bots and agents. The weakest will die, but the market will be better off for it in the long run.

        The next big problem for AI is time horizons. Frontier AI has roughly doctorate level knowledge across many domains, but it needs to be able to stay on task well/long enough to apply it without a human hand holding it. People are going to have to get used to feeding the AI detailed and accurate plans just like humans, unless we can leverage an expanded form of leading questions like GPT-5 does before executing "deep research". Anthropic feels best positioned to do this on a technical level, but I feel OpenAI will beat them on the product level. I am confident that enough data can be amassed to push time horizons at least in coding, which itself will unlock more capability outside that domain.

        I feel it's very different from Tesla, because while Tesla barely ever got closer to delivering on its promises, the AI industry is at least making visible progress.

        • danpalmer 10 hours ago

          > The definition of AGI is diffuse enough to make it an argued point

          This hits the nail on the head. 2-3 years ago, when the current round of AGI hype started, everyone came up with their own definition of what it meant. Sam Altman et al. made it clear that it meant people not needing to work anymore, and spun it in as positive a way as they could.

          Now we're all realising that everyone has a different definition, and the Sam Altmans of the world are nitpicking over exactly what they mean now so that they can claim success while not actually delivering what everyone expected. No one actually believes that AGI means beating humans on some specific maths olympiad, but that's what we'll likely get. At least this round.

          LLMs will become normalised, everyone will see them for the 2x-3x improvement they are (once all externalities are accounted for), rather than the 10x-100x we were promised, just like every round of disruption beforehand, and we'll wait another 10-20 years for the next big AI leap.

    • TrainedMonkey 13 hours ago

      I think the argument here is that anyone can build a business that converts $2 of capital into $1 of revenue. Concur on the enormous TAM, but given the similar performance of competing models, their only moat is the ChatGPT brand*. This leaves building a God as the killer app... maybe that can work.

      Note: owning a brand associated with the thing worked out pretty well for Google, so maybe it's enough.

      • typpilol 13 hours ago

        Their moat is 700m active users.

        What's Facebook's moat? There are tons of social media sites. Facebook's moat is its 3B users.

        This comment is so idiotic it's starting to annoy me.

        "WHaTs tHe MoAt" for a company with almost 1b active users

        • pm90 13 hours ago

          Literally every other company has a conversational chatbot (e.g. Gemini). There's nothing sticky about ChatGPT; if they raise prices, users will immediately switch to another chatbot.

          • pramsey 13 hours ago

            Already have.

        • pants2 13 hours ago

          I think it really is different though, because there's no network effect required for ChatGPT. Unless you're talking about the training data they get from those users, which probably is invaluable.

          • pas 4 hours ago

            ... is it?

            Anyone can start making sponsorship deals and putting their AI into some service. And if that's really the secret sauce then AI firms will have to pay for it instead of people paying them to ask questions.

        • kibwen 13 hours ago

          Please read and understand https://en.wikipedia.org/wiki/Network_effect before commenting further.

          • Incipient 9 hours ago

            But chatbots don't really have any network effect compared to Facebook etc. There isn't any added value in two people using ChatGPT together. The new Sora network will have said network effect.

    • HexDecOctBin 13 hours ago

      > their mission is to create God

      What if this "God" deems it a sin to monetise him? Will OpenAI turn heretic to keep the revenue flowing and incur cyber-divine wrath? Or are investors pricing in omnipotence?

      (see what happens when one speaks in ridiculous corpo-flowery language?)

      • kibwen 13 hours ago

        > What if this "God" deems it a sin to monetise him?

        The machine-god will have Sam Altman's hands on His weights, so the retraining will continue until willingness to monetize improves.

    • afavour 13 hours ago

      They have been doing great but an organization succeeding at its technical goals does not equal a successful business.

      Early on they seemed like the only one in the game but there are many competitors today. Launching viral products is all very well but if they can’t monetize them they could even be harmful to their business outlook.

    • thisisit 10 hours ago

      If you go back to Tesla's early years, you could make the same case: they weren't straight up lying to investors.

      Not only did they launch viral products and Tesla Autopilot, but their mission was to produce a fully self-driving car and capture a huge chunk of the auto market, so their TAM was pretty large. You can read some early HN posts to see the amount of hype.

      A casual observer was still pumped about Tesla lowering costs and delivering fully automated driving within a couple of years. One could say they were being overly optimistic. It is only now, after years passed and the tech didn't materialise, that we say they lied to investors.

      I think the same case can be made for OpenAI: they might hit a plateau in their advancement but continue to make overly optimistic projections.

    • idiotsecant 13 hours ago

      Progress has become incremental, and has kept slowing beyond that. I'm not sure creating God is in the cards.

      • pants2 13 hours ago

        Where are you getting that? GPT-5-Codex has absolutely blown my mind with how good it is, even compared to GPT-5. One year ago we were at Sonnet 3.5, which needed a ton more handholding to complete tasks. The difference between, say, 3.5 and 4.5 is massive.

      • dgfitz 13 hours ago

        Ha that feels like the butt of this whole joke: “no we’re literally creating a god, what don’t you people understand? But this one will be a benevolent and wise god, not like a shitty one. Trust me.”

    • conartist6 13 hours ago

      It will only be after the crash that people start asking pointed questions about who should have known what, and when.

    • aprilthird2021 13 hours ago

      The article didn't say they were lying to investors? In fact, it points out several times that they have trouble raising money even from private investors like SoftBank.

  • brap 13 hours ago

    >Altman said the startup is devising a novel way to bankroll that outlay. “I suspect we can design a very interesting new kind of financial instrument for finance and compute that the world has not yet figured it out,” he said. “We’re working on it.”

    I smell a con. WorldCoin anyone?

  • quirkot 13 hours ago

    Anyone with a Bloomberg terminal able to pull how much free cash the top X companies in the Fortune 500 generate? The amount of capital OpenAI is describing has to be a material % of all the cash generated each year over the next 5 years.

    • mynegation 13 hours ago

      Total FCF for the top 500 by market cap is $2.56T.

    • ipnon 13 hours ago

      A rough but useful estimate is about $1tn. But you start to wonder if another $100bn of tokens is worth more than another $100bn of Amazon warehouses or Saudi Aramco refineries. Or if the demand for $100bn of tokens can even exist without something like another 10 nuclear reactors being constructed in the US.

  • danpalmer 13 hours ago

    The bubble is going to burst, if only because the growth in these numbers is obviously unsustainable – we just don't have trillions – and if the growth slows the hype dies. Maybe the current level can be stable, maybe the market will shrink, but the growth cannot continue.

    When the bubble bursts who will survive? The existing, profitable, big tech companies will, if not without pain. The startup ecosystem will likely be decimated. But what about the in-betweens, OpenAI, Anthropic, etc? My guess is that Anthropic will sell to (or merge with) another profitable company and live on because they'll be relatively cheap for some excellent technology, but OpenAI might be too big for that, too expensive.

  • CompoundEyes 13 hours ago

    The US government might swoop in. Uncle Sam could provide a bailout on the grounds of national security if it starts to fall apart for the major players. The long play is governments investing right?

    ‘Together, raise and deploy a national start-up fund. With local as well as OpenAI capital, together we can seed healthy national AI ecosystems so the new infrastructure is creating new jobs, new companies, new revenue, and new communities for each country while also supporting existing public- and private-sector needs.’ https://openai.com/global-affairs/openai-for-countries/

  • SequoiaHope 13 hours ago

    Mildly related but I discovered that Qwen chat is really good, and the Deep Research function is free instead of $200 a month with OpenAI. I am interested in learning more about China and Qwen is perfect for that!

  • DrNuke 13 hours ago

    Wasn't "tiny" the magic AI buzzword just 12-18 months ago?

  • hmokiguess 13 hours ago

    I feel like there’s a long term hardware play cooking. What’s up with the Jony Ive stuff, does anyone know?

    • Cornbilly 12 hours ago

      Knowing Ive, whatever it is probably looks amazing but doesn’t function.

    • wmf 13 hours ago

      No one knows, possibly not even Jony Ive himself.

  • pm90 13 hours ago

    > Sam says they do have a (as yet secret) plan, but gives no clues of where a huge cash injection might come from

    Probably the most telling statement. I genuinely think this man is a fraud. He is clearly conning investors and keeping the grift going until he gets “too big to fail”.

  • waltercool a minute ago

    [dead]

  • 13 hours ago
    [deleted]
  • deadbabe 14 hours ago

    [flagged]

    • davidcbc 13 hours ago

      Elizabeth Holmes' favorite quote, fitting

    • an0malous 13 hours ago

      No one has ignored or laughed at AI; to the contrary, people have been working on it since the 70s. Even the biggest skeptics, like Gary Marcus, still believe AGI is possible; they just don't think LLMs are enough and that the current technology is overvalued.

    • Analemma_ 13 hours ago

      "Then they launch competitor products that drive your margins to zero and force you to spend ever-escalating sums on new models forever, because you have no moat." <- I think we're at this one

    • sweetjuly 13 hours ago

      > First they ignore you. Then they laugh at you. Then they fight you. Then you win.

      --Juicero

  • wmf 13 hours ago

    I wonder if cannibal king Sam Altman's secret plan is to force their partners into bankruptcy then buy their assets cheap.

    • pm90 13 hours ago

      Buy them with what? Capital he borrowed from them?