Behind OpenAI's plan to make A.I. flow like electricity

(nytimes.com)

133 points | by typon 3 days ago ago

168 comments

  • ChrisArchitect 3 days ago ago
  • kurthr 3 days ago ago

    Wait, he wants his open product to be like a utility with low profits for maximized reach?

    Or he wants his suppliers to be a regulated utility while he sells 100% margin products on top of it?

    Or he wants all the greater fools in now, wherever they are, so he can get out before the collapse?

    It feels like this is a case of following the money to understand the real goals, since it's unclear to me that AGI is that goal.

    • dogcomplex 2 days ago ago

      Yes.

      >Or he wants his suppliers to be a regulated utility while he sells 100% margin products on top of it?

      Near term, yep: as soon as regulatory capture drops, they'll make out like bandits.

      >Or he wants all the greater fools now wherever they are so he can get out before the collapse?

      Then this, by the time competition catches up anyway, because there are no real moats here besides regulation.

      >Wait, he wants his open product to be like a utility with low profits for maximized reach?

      Then this is what remains, in a sea of other providers.

      Financially? Pump and Dump, baby. Though I reckon the end result will still be intelligence flowing like water.

    • skywhopper 2 days ago ago

      All he wants is to find ways to keep propping up the unsustainable and insatiable beast he’s bullied into existence. The promise of AGI is one-half threat and one-half desperate hope, because there’s no other way to keep the bubble growing without government resources.

    • ben_w 3 days ago ago

      > it's unclear to me that AGI is that goal.

      Ironically, much of the observed behaviour is instrumentally convergent for most of the suggested ultimate goals.

      Trying to make a safe and aligned AGI or a statutory government monopoly would both get about half the things we've seen.

      The other half is stuff which is collectively mutually exclusive on all goals, but humans aren't perfect logical spheres in a vacuum, so it could still be basically any of them.

    • krapp 2 days ago ago

      The goal is to be the man who stands at the right hand of God, holding the keys to the kingdom.

      • riehwvfbk 2 days ago ago

        The sales pitch for the investors, of course, is that they will get to be God in this picture.

        • krapp 2 days ago ago

          No, the AI is God. Investors just need to pay in to be one of the saints.

          • lupire 2 days ago ago

            Saints don't profit. Priests do.

      • kylehotchkiss 2 days ago ago

        yeah the whole fixation on "superintelligence" really feels like a modern retelling of the Tower of Babel. Except this time we won't end up with a lot of different languages or whatever.

        at this point, the lofty goals are a distraction from celebrating the utility of what we have, incremental upgrades, and reduced resource usage.

        • exe34 2 days ago ago

          this time the tower speaks all languages.

    • 2 days ago ago
      [deleted]
    • dotancohen 2 days ago ago

        > As the availability of electricity became more widespread, people found better ways of using it.
      
      It's actually a nice analogy.
      • mrbungie 2 days ago ago

        The level of hubris when comparing something that brings literal physical light to something that is not even public or reproducible.

        • dotancohen 2 days ago ago

          When electricity was first brought to people's homes, it wasn't something that could be easily produced in the home either.

          This is an analogy, not a formula. And a great aspiration for a company that until recently was at the forefront of its industry.

          • mrbungie 2 days ago ago

            It is not a good analogy for LLMs and the infrastructure needed to run them, not even close. The variability of scale is not there; according to OAI we need massive piles of resources to allow innovation by common people (i.e. scale it, make it a utility, and then wait for emergent use cases).

            Electricity didn't need the scale sama is asking for in order to thrive as a tech (obviously it eventually exploded in use cases with enough scale, but that's not my point). Scientists used to give electromagnetism demonstrations in academies with equipment that was neither that expensive nor inaccessible for the time. You can even produce enough electricity with a potato to light a bulb.

            A better technology for this analogy would be promoting on-device/on-site open SLMs, but that's not what OAI promotes, is it?

            • dotancohen 13 hours ago ago

                > Scientists used to give electromagnetism demonstrations in academies with equipment that was neither that expensive nor inaccessible for the time.
              
              I'm not even a scientist, yet I've made experimental demonstrations with LLMs using equipment that is certainly neither expensive nor inaccessible.
              • mrbungie 10 hours ago ago

                At least in the case of researchers, like the ones working on the PlanBench benchmarks, who were kindly asking for increased API rate limits to run the benchmarks on o1 (https://x.com/rao2z/status/1834314021912359393?s=46&t=u6bMGX...).

                Treat it as a personal opinion, but I really think the analogy doesn't hold up to analysis after situations like that. If it is closed (and that's what OAI proposes; they aren't advocating for open models but the contrary), it will never be like electricity.

            • jononor a day ago ago

              You can train a tiny LLM and do experiments and demonstrations with it on your laptop. It is about as accessible as electricity-generating potatoes.
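              For what it's worth, the laptop-scale point is easy to demonstrate. A minimal sketch in pure Python, using a character-bigram model as the simplest possible stand-in for a "tiny LLM" (the corpus and function names here are illustrative, not from any real framework):

```python
# Toy sketch: a character-bigram "language model", trainable on any laptop.
from collections import defaultdict

def train(text):
    """Count how often each character is followed by each other character."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def prob(model, a, b):
    """P(next char = b | current char = a); 0.0 if 'a' was never seen."""
    total = sum(model[a].values())
    return model[a][b] / total if total else 0.0

model = train("intelligence flowing like electricity")
print(prob(model, "l", "e"))  # prints 0.2: 'l' is followed by 'e' 1 time in 5
```

              Scaling the same idea up (longer context, learned weights instead of counts) is what an actual LLM does; the point is just that the experiment itself needs no data center.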

  • Jtsummers 3 days ago ago

    > Mr. Altman has since scaled his ambition down to hundreds of billions of dollars, the nine people said, and hatched a new strategy: Court U.S. government officials by first helping to build data centers in the United States.

    When all else fails, pitch your idea to the right officials in the USG and you'll make bank. And if you do it right, you won't even be doing anything at all. There are companies getting 8-10 figures a year to fail at upgrading systems from DOS to literally anything not DOS. It's a very low bar, but a profitable one if you can pull it off.

    • mesh 2 days ago ago

      Do data centers drive a lot of jobs or revenue for local or federal governments?

      • edm0nd 2 days ago ago

        The NSA Utah data center cost $1.5B-$2B to build and then another ~$2B of electronics to stock inside it.

        It also had hundreds of contractors and employs 200+ full time employees. It consumes $40M worth of electricity each year and uses 1.7 million gallons of water each day for cooling.

        So it sure doesn't seem like it unless we want to get into a debate around how much perceived useful data and value the NSA gets out of spying on Americans and foreign countries.

        https://en.wikipedia.org/wiki/Utah_Data_Center

    • citizenpaul 2 days ago ago

      >There are companies getting 8-10 figures a year to fail

      I read some article about how the federal retirement system has spent up to $500M over the last 20 years on a failed outsourcing of the digitization of its paper system. They are apparently not much closer to having a digital version than when they started, some 3-4 contractors ago. Specifically, the storage hub in West Virginia.

      There has to be some sort of grift in that kind of waste. How do that many companies not move the needle and still get to keep the money?

      • jonathanyc 2 days ago ago

        In Canada, there was a lot of controversy over the government awarding ~$200M in contracts to one consultancy, almost always without competitive bidding. They chronically underdelivered, e.g.:

            There has been much scrutiny over how much the ArriveCAN app cost to develop and who was subcontracted for its development. Contracts show that the federal government will spend close to $54 million with 23 separate subcontractors. A Parliamentary committee ordered federal departments to submit contracting documents related to the app but have been told that the names of subcontractors cannot be released citing issues of confidentiality. In October 2022, two developers at two separate IT companies took part in a hackathon where they both developed duplicates of the ArriveCAN app in under two days, for an estimated cost of $250,000.
        
        Surely the actual app was more complicated than the hackathon duplicates. But where in between $250k and $54m should the cost have been? To be fair, I read estimates saying Healthcare.gov cost around $500m, and a friend who I know is a great engineer worked on that (albeit in a rescue capacity). And a single F-35 costs $80m, so maybe we need to triage things.
        • citizenpaul 2 days ago ago

          > cannot be released citing issues of confidentiality

          How the f can this even be an excuse? How can the government be confidential from itself? And if so, why hasn't the person who allowed confidentiality in these contracts been removed? Rhetorical, of course: "The Unaccountability Machine" grift in action.

      • 2 days ago ago
        [deleted]
      • Jtsummers 2 days ago ago

        There are a lot of problems. A key one I saw repeatedly:

        No one in the program offices (the offices responsible for these system developments) has the proper expertise to judge IT/software contracts or the IT/software portion of contracts. And they won't listen to you, because they've got a contractor or two sitting on their shoulder whispering very wrong things into their ear (this got a Colonel I worked with in trouble once), which leads to very biased decision making, away from reality and in favor of the grifters.

        This is a reliable thing. There are some really good people in gov't, but they don't seem to make their way to the program offices. So without expertise and good judgement, you get these 8-10 figure (or maybe worse) boondoggles.

      • ohSidfried 2 days ago ago

        [dead]

  • a13n 3 days ago ago

    > The OpenAI chief told White House officials that A.I. data centers would be a catalyst for the re-industrialization of America, creating as many as half a million jobs

    Would love to hear OpenAI's explanation behind this line of thinking.

    • JumpCrisscross 3 days ago ago

      > Would love to hear OpenAI's explanation behind this line of thinking

      Would have to be energy. Data centres have light human footprints. And Altman wants to fabricate the chips in the Middle East, not America [1].

      [1] https://www.bloomberg.com/news/articles/2024-04-10/openai-s-...

      • segasaturn 3 days ago ago

        Yes, according to The Verge, Microsoft is leasing all the energy from the Three Mile Island nuclear power plant and planning to construct more to power its AI training. People pointed out how much energy Bitcoin was (and still is) wasting; AI looks like it's going to dwarf that amount of energy use for an equally questionable product.

        e: https://www.theverge.com/2024/9/20/24249770/microsoft-three-...

        • ziml77 18 hours ago ago

          People are also concerned about the amount of energy going into AI.

          But still, there is a major difference between AI and Bitcoin. With AI, the more power we put in the more we get out. That might be in the form of more accurate output, longer output, or larger context windows. With Bitcoin, more power going into the calculations just makes the calculations harder so the amount of useful work stays exactly the same. The calculations themselves have no use other than to waste time and power.
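          The Bitcoin half of that contrast is mechanical: the network retargets difficulty so block production stays constant no matter how much hash power joins. A toy sketch of the idea (the real protocol retargets every 2016 blocks; the numbers and model here are simplified for illustration):

```python
# Toy model of proof-of-work difficulty retargeting (illustrative, not consensus code).
TARGET = 600.0  # Bitcoin aims for one block every ~600 seconds

def retarget(difficulty, hashrate):
    """Scale difficulty so the average block time returns to TARGET seconds.

    In this toy model, blocks arrive every (difficulty / hashrate) seconds.
    """
    observed = difficulty / hashrate
    return difficulty * (TARGET / observed)

difficulty, hashrate = 600.0, 1.0
for _ in range(3):
    hashrate *= 100                           # miners pour in 100x more power...
    difficulty = retarget(difficulty, hashrate)
    print(difficulty / hashrate)              # ...and block time is back to 600.0
```

          However much energy is added, throughput is unchanged by design, which is the commenter's point about useful work staying exactly the same.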

        • 2 days ago ago
          [deleted]
        • soulofmischief 2 days ago ago

          "Equally questionable" is doing a lot of heavy lifting there. There are literally massive, paradigm-changing projects in both spaces right now, with real users and communities. If you haven't gotten anything out of these technologies, that's on you and not them.

          There's all manner of insane, ambitious things being worked on in the open right now, if you are able to muster the discernment necessary to ignore the natural, unavoidable pile of grift that follows any new technology or hype cycle.

          • beeflet 2 days ago ago

            In cryptocurrency, mining is not necessary for technical progress. You could have a world in which the top cryptocurrencies are 1/100th of their current speculative value and thus 1/100th of the funding would go to miners and they could still demonstrate the technology and improve the state of the art.

            In many ways, the amount of money being put into cryptocurrency is antithetical to its development

          • jjulius 2 days ago ago

            >There are literally massive, paradigm-changing projects in both spaces right now, with real users and communities.

            And in your opinion, these are...?

            • soulofmischief 2 days ago ago

              Regarding crypto, IC[0] for example is an incredible experiment to make decentralized cloud compute viable, using WASM runtimes. Lots of neat crypto projects like this. People forget the whole point was decentralization and trustless systems. While the grifters settled on centralized systems and gaslighting users, real crypto projects continue to explore the possibilities of decentralization.

              With AI, I don't even know where to start. Have you used the internet recently? We have multimodal transformer models which can input and output images, video and text. We can create music and images using diffusion techniques. We have world-class text generation systems, which of course still need a lot of infrastructural support, but the viability is already being proven. We have cutting-edge translation tools which blow anything previous out of the water. We have code generation and introspection tools, which work well enough to majorly augment my engineering workflow. We are developing ways to "speak" with your data, turning natural language into advanced query, analysis and synthesis tasks. We have neural radiance fields, and lip syncing and voice translation tech to automate localization in powerful new ways.

              And we have open models! Anyone can dive in, fine-tune a model and get viable results today. We can already start creating new pipelines today, so that we aren't wasting our time doing that once models get better at not hallucinating. The individuals and organizations who wait for this new technology to be perfectly reliable will get left in the dust by those who took the chance and did the pioneering work.

              This is still barely scratching the surface. It seems like there's at least one ground-breaking paper a month, often more than that. It's certainly easier to bash these technologies and communities from afar, but that attitude will only hinder you in the future once the tech has caught up. What we have already created through experimentation just since 2015 is stuff that I grew up reading about in science fiction.

              Could we hit an AI winter instead? Will decentralization ultimately be a dead end? Maybe. But it would be considerate to not bash those who are spending time and money to find out, and to not let grifters whom they cannot control co-opt the space and the narrative around it.

              [0] https://internetcomputer.org/

              • FactKnower69 2 days ago ago

                >Experience full stack decentralization: from DAOs and crypto cloud services to games, NFTs, and social media, the Internet Computer has something for everyone.

                hysterical. exact same shit as always, folks

                • soulofmischief 2 days ago ago

                  Would you mind elaborating your criticism into a substantial argument?

                  • namaria 2 days ago ago

                    You're the one copying and pasting 10 year old pitches.

                    • soulofmischief a day ago ago

                      I can't meaningfully engage with substanceless arguments, and I'm not interested in exchanging snarky pot-shots that don't actually address anything being said.

                      • namaria a day ago ago

                        If merely pointing out what you're doing gets to you maybe you should reconsider your choices.

                        • soulofmischief 16 hours ago ago

                          You misunderstand the situation and your comment comes off quite immature. When you're ready to have an adult conversation, with a substantial and clear argument, be my guest. But Hacker News isn't a place for flame bait or low effort posts. Have a nice day.

              • therouwboat 2 days ago ago

                Image generation with Stable Diffusion was useless; most of the time it made images that didn't even make sense, like a car being driven with the driver sitting in the air outside of it.

                Music generation is just mangled music from some popular band with custom lyrics; how can you think you are creating something here?

                • soulofmischief 2 days ago ago

                  You can have that opinion. But for a creative person, any tool can be viable, and constraints often aid the creative decision-making process. Many artists are making use of these tools already across many mediums in order to create new sounds, imagery, 3d models from text, all sorts of things, and the people consuming it consciously are enjoying it.

                  If an artist enjoys using a tool to make something, and their patron enjoys the creative output, do you or I have any right to judge or admonish their process? Visionaries see what is possible, they don't spend their time critiquing things that aren't perfect and instead recognize and engage with novel, paradigm-shifting tools. I understand not wanting to interact with low-effort VC-driven bullshit, but there's a lot of amazing work being done in the open by scientists and enthusiasts.

                  The endgame, after these tools are developed and ironed out, will be a wealth of highly creative content like we've never experienced. We can shit on these tools or grab a shovel and try to improve them, which is what a huge community of people is doing right now despite the criticism.

                  • silver_silver 2 days ago ago

                    The vast majority of artists hate generative models with a passion. Many have stopped publicly sharing their work because they feel violated by it being used to train these models.

                    • spacebacon a day ago ago

                      The reason we hate them is we rarely reveal our true secrets and methods. These stacks of creative building blocks (digital and physical) serve an artist well. Now anyone can be almost as good as a resourceful artist with a little effort.

                      The reason we love them is the same reason we love art… To create something out of nothing with the resources at hand.

                      Most artists love and hate AI, xor love to hate it while simultaneously hating to love it.

                    • HappMacDonald 2 days ago ago

                      [Citation Needed]

                      It is my understanding that artists up in arms over AI are merely a noisy minority.

                      Artists who can't be bothered, or who like AI and perhaps even use it as a tool, are per capita not as likely to scream as loudly about their positions or to try as hard to dominate the narrative.

                      Reading the headlines 200+ years ago it would have been easy to assume that all weavers hated Jacquard looms, as well. Especially in light of the fuming Luddites that had a tendency to break into shops and smash them up.

                      But at the end of the day, a person really has to pick a side: Is AI imagery worthless slop or is it a dangerous force that will replace human artists? I'd suggest that anyone legitimately unable to compete against worthless slop must be vastly overestimating the quality of their own work.

                      • silver_silver 15 hours ago ago

                        According to this survey [1] of 1000 visual artists, 95% believe they should have a say in whether their art is used for training. That much should be obvious. As for hating it with a passion: that's an anecdote from my circle and the discussions I've seen online outside of tech bubbles.

                        You say it's more efficient but I think if artists were fairly compensated for their work, or at the very least had a say in the process, it would be too expensive or not have good enough output to compete. On top of that, it uses A LOT more energy - which should be enough to exclude it as an option considering the current trajectory of climate change.

                        Aside from that we need to consider the more philosophical question of whether we want to make creative dream jobs even more impossible to find. Should we degrade the public space even more with an avalanche of "good enough" imagery simply to improve margins for the executive class? Computers have already made every level of developed economies significantly more productive and yet individuals are worse off financially than they were in the 90s. All this pipedream of replacing workers with AI will do is relegate them to manual labour or lock them into some barely-enough universal basic income scheme.

                        [1] https://www.dacs.org.uk/news-events/artificial-intelligence-...

                        • soulofmischief 9 hours ago ago

                          The anti-compute pro-climate argument is so tiresome and it ignores that compute requirements decay non-linearly over time.

                          Additionally, as an artist myself, I don't much care what "the majority" of artists think; I didn't become an artist to follow trends and outsource my own critical thinking. As a software engineer, I have a level of understanding of these models that far surpasses the average artist's. As a technological visionary, I understand where the technology will lead, as well as its inevitability. The cat's out of the bag. So I don't much value the average uninformed opinion.

                          > Aside from that we need to consider the more philosophical question of whether we want to make creative dream jobs even more impossible to find

                          Technological progress always leads to the destruction and creation of jobs. People similarly bashed the loom, the printing press, calculators, computers, the internet, cars, planes, you name it. There's no benefit in being a Luddite. Intelligent and aware engineers and artists are incorporating these tools into their workflows today, or at least staying informed, so that they find themselves still employed in whatever future is ahead of us.

                          • silver_silver 5 hours ago ago

                            It is a fact that the majority didn't consent to this, not an opinion. The foundation of these tools is, in my opinion, copyright violation. Building a product by transforming a collection of works cannot be fair use by any stretch of the imagination. The artists aren't being compensated because it would cause the business model to collapse.

                            This isn't Luddism. Ethical ML artistic tools do exist, but none of these prompt-based generators fall into that category. None of the tools you mentioned are directly equivalent, because they don't depend on a collection of existing work to produce derivatives.

                            All I will say about the climate is that emissions have to peak next year, and be halved by 2030 to meet the Paris Agreement goal. I have to assume from your casual dismissal that you're not aware of the consequences of missing it.

                • JumpCrisscross 2 days ago ago

                  > most of the time it makes images that don't even make any sense

                  This is good enough for 90% of image use cases, which is mildly relevant filler between and next to text.

              • oefnak 2 days ago ago

                > Talking with our data

                Beautifully put, thanks for writing my thoughts out.

                • soulofmischief 2 days ago ago

                  It's something we'll take for granted one day and wonder how anyone ever got anything done without it.

          • a13n 2 days ago ago

            yeah saying AI is questionable is like saying building the internet or electricity is questionable

            • grugagag 2 days ago ago

              Massively training LLMs in the hope that they will magically turn intelligent is questionable. I hope I am wrong and nothing is wasted. Only time will tell.

              • soulofmischief a day ago ago

                Motivated people still have to spend their time and resources finding out, instead of just complaining like many have done.

              • qgin 2 days ago ago

                Nobody really knows where the limit of scaling is, but we haven’t hit it yet. Models get smarter with emergent abilities as they get bigger.

      • scotty79 2 days ago ago

        > Data centres have light human footprints.

        only after they are already built

    • ben_w 2 days ago ago

      "The factory of the future is a man and a dog. The man's job is to feed the dog. The dog's job is to bite the man if he touches anything."

      There will therefore be a sudden supply of half a million such factories.

      • beeflet 2 days ago ago

        I like how this quote combines the bleak reality of automation with the imagery of a far side comic

        • selimthegrim 2 days ago ago

          Plot twist: it’s the Black Mirror take on Autofac

    • weego 3 days ago ago

      They want huge tax credits for data centers that then won't employ many people or will employ visa'd migrants, but that can of worms can be kicked down the road to another administration.

    • moogly 3 days ago ago

      All the people within creative arts will have to start working in the lithium mines. So that's accurate.

    • BeefWellington 2 days ago ago

      > Would love to hear OpenAI's explanation behind this line of thinking.

      Think the Matrix, only the futuristic artificial intelligence isn't harvesting human brainpower/heat -- instead most people work at power plants.

      • yencabulator 2 days ago ago

        In the poorly-thought-out script of The Matrix, the people work as power plants.

    • _heimdall 2 days ago ago

      Energy, infrastructure, and the potential for new executive branches dedicated only to enforcing AI regulation would go pretty far. The government is pretty damn good at spending too much money and creating jobs, they often don't need much of a reason as long as it justifies spending more money and centralizing more power.

    • throwaway4233 3 days ago ago

      > Would love to hear OpenAI's explanation behind this line of thinking.

      My assumption is that the data centers would mostly be staffed by those who will have to manually audit data going in and out of the LLMs on a daily basis. There would also be a need to generate/curate the test data that the LLMs will have to train on. There is potential for half a million jobs, but whether that is where you would want human effort invested is the real question.

      • yencabulator 2 days ago ago

        Such a job would not be at the data center. And existing data cleanup jobs are off-shored to cheaper labor, there's no indication that would change.

    • namaria 2 days ago ago

      It's snakeoil. It cures all, it solves every problem. Pay the man now and he'll show it later.

      • this_steve_j 2 days ago ago

        Patent medicine is how I like to think about it. Strong placebo effect, occasionally some health benefits, mostly nominal. Strong economic incentives and some harmful effects. Traveling salesmen who claim wonders, then leave town before the miracle is delivered.

        Not working for you? Be the first to try the new elixir.

    • tarikozket 3 days ago ago

      someone gotta plug the GPUs in and run the cables, right?

      • mirekrusin 3 days ago ago

        there should be robot for that

        • Aerroon 2 days ago ago

          There should be and could be a robot for everything. The scale and variability of the tasks just don't make them viable.

          I should have a machine that peels my potatoes. I should have a machine that cooks me my favorite meal at the push of a button. I should have a machine that I can feed fabric into and it will sew me a shirt. All of these things are doable, but they don't make economic sense or they need to handle too many unforeseen cases.

          A machine that sews shirts doesn't make sense to have at home, but if you're selling millions of shirts then it does. Same for all the rest.

          A machine that runs all the cables etc. would need to be so complicated, to handle all the eventualities, that there isn't enough scale (yet) to justify its existence. Basically, we don't build enough data centers.

          • mirekrusin 2 days ago ago

            When it comes to arms, people don't seem to have a problem with so many zeroes in cost and scale. Make fabs, not war.

      • Terr_ 3 days ago ago

        But after they plug in the GPUs and run the cables, Large Language Models will give everyone personal robot servants and jetpacks while allowing {your generous investment nation here} to conquer all the economies. /s

    • xbar 2 days ago ago

      Altman is not a good economist.

    • selimthegrim 2 days ago ago

      You better tell Memphis that.

  • mrbungie 2 days ago ago

    This is a twisted and ironic example of VC thinking taken to the extreme: we just need to grow it more for it to really show its potential.

    What happens if giving it to everyone and their mother ends up in low usage? Are they going to blame scale again and ask for a Dyson sphere?

  • groby_b 2 days ago ago

    Sam's flailing. He's desperate for more money, because OAI is running out. And Sam will say anything and everything to get more money. He's making whatever noise makes the audience feel good, truth be damned.

    And it's kind of funny to see how well MS has played that particular instrument. A partnership with Sam got them 49% ownership and got rid of the doomer faction. The 75% profit share makes any future investor look long and hard at investing in what looks to be a capital-intensive, low-revenue business right now.

    Which makes it likely that OAI will run out of money, and oops, let's see if there aren't suddenly more than 49% ownership. And MS has a technology where they were long behind. (My bet is that the majority ownership will happen this funding round, because Sam is desperate)

    The only way that doesn't happen is if Sam manages a massive investment round and finds a path to profit before the money's burned up. And the beauty of it is that the early 49% ownership means they only need to make a comparatively tiny investment this round to still be a majority owner in the company, with a 75% revenue share until their money is fully recouped. And the other investors get to bear the majority of the risk for the much larger funding round.

    These are all utterly logical plays if you are willing to accept that Sam is somebody who can make things move, loves gambling, and is a narcissist.

    I really do think he's being played by a virtuoso, and it gives me great joy to watch that unfold.

  • mhh__ 2 days ago ago

    In a sense I kind of respect it but the way OpenAI are 100x-ing all their statements ("Universal basic compute"!) like every small startup has to pretend to be the next XYZ (only on a much bigger scale) is hilarious. Are they even in the lead at the moment?

  • DonHopkins 2 days ago ago

    Sam Altman will champion Direct Intelligence, while Elon Musk will take credit for Alternating Intelligence. Then, to prove how dangerous Alternating Intelligence really is, Sam will use it to execute dogs, fry elephants, and even immolate a few Teslas, before finally inventing the deadly Artificially Intelligent Chair to execute prisoners.

  • karaterobot 2 days ago ago

    > In private conversations, Mr. Altman has compared the world’s data centers to electricity, according to three people close to the discussions. As the availability of electricity became more widespread, people found better ways of using it. Mr. Altman hoped to do the same with data centers and eventually make A.I. technologies flow like electricity.

    Reminds me of a quote I saved from Kevin Kelly's book The Inevitable, which was published in 2016. I saved it because it sounded so absurd at the time. To be clear, it sort of still sounds absurd to me; I remain moderately skeptical about AI (though admittedly less so than in 2016, largely because of the following sentence). What's very serious is how much money and old-fashioned brainpower is going into making this future real. I don't know whether the metaphor was arrived at independently, derives from the same source, or is just an obvious way of thinking about the world when you've drunk the right flavor of Kool-Aid.

    > Amid all this activity, a picture of our AI future is coming into view, and it is not the HAL 9000—a discrete machine animated by a charismatic (yet potentially homicidal) humanlike consciousness—or a Singularitan rapture of superintelligence. The AI on the horizon looks more like Amazon Web Services—cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need. You’ll simply plug into the grid and get AI as if it was electricity. It will enliven inert objects, much as electricity did more than a century past. Three generations ago, many a tinkerer struck it rich by taking a tool and making an electric version. Take a manual pump; electrify it. Find a hand-wringer washer; electrify it. The entrepreneurs didn’t need to generate the electricity; they brought it from the grid and used it to automate the previously manual. Now everything that we formerly electrified we will cognify. There is almost nothing we can think of that cannot be made new, different, or more valuable by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. Find something that can be made better by adding online smartness to it.

    • yencabulator 2 days ago ago

      Rarely have I read that much blind faith in AGI happening.

      Reality today seems to be anyone who's getting decent results out of ML is doing something slightly different according to the specifics of their domain. You can't take a chatbot LLM and expect it to predict the weather, architect a building, drive a car, and fold laundry.

  • throwanem 3 days ago ago

    It has to be easier to pitch sovereign wealth funds on something that isn't even nominally nonprofit.

  • throwaway918299 2 days ago ago

    Who is buying this absolute nonsense?! Sam Altman is the biggest conman of the last century.

    • didcoten 2 days ago ago

      And worse than that, he sexually molested his sister.

      https://news.ycombinator.com/item?id=38311509

      Pure evil.

      • disqard 2 days ago ago

        Thank you for sharing that!

        Whatever else he has done, he ranks super-super-low on the empathy scale...

        https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...

        Search for "car payments". It's heartbreaking :'(

        If I ran a company, I would not hire this man.

      • beeflet 2 days ago ago

        Idk if the claim is true, but it does make me question Sam's character. I appreciate how well documented and sourced it is, and how the author attempts to capture the big picture through the accumulation of many small verifiable facts.

        Edit: what is this lesswrong site about?

        • marcosdumay 2 days ago ago

          > what is this lesswrong site about?

          It's about philosophy. Or at least that's the closest I can place it.

          It has also been very interested in futurism, AI, and the Singularity in the past.

          • hiddencost 2 days ago ago

            It's about applying Kahneman and Tversky's Thinking, Fast and Slow as a way of life. It's the core community hub of the Rationalists, who are hugely influential in San Francisco and the cutting-edge AI scene.

      • throwaway918299 2 days ago ago

        Holy

        Shit

    • xbar 2 days ago ago

      He makes my list, but the list is long.

      • Paddywack 2 days ago ago

        Out of interest, who would be in the top X of that list?

  • Ericson2314 3 days ago ago

    > TSMC’s executives found the idea so absurd that they took to calling Mr. Altman a “podcasting bro,” one of these people said.

    This is very heartwarming.

    • benreesman 2 days ago ago

      Never having been an elite semiconductor person myself, I have only the dimmest intuition about the absurd science, technology, engineering, and mathematics that goes into it: but you hear these parts-per-trillion, measured-in-angstroms, quantum-mechanics-type units of account, and it's hard not to imagine it as some world apart.

      I have a fond fantasy that those folks are laughing their asses off whenever we call the software stuff “high technology”.

      • bob1029 2 days ago ago

        The hard part of semiconductor manufacturing is all about the emergent complexity of many systems put together all at once. Any one tool can be isolated and dealt with by a finite team in a finite period of time. It's when you combine all of the tools and facilities together that you get this beast that has incomprehensible levels of complexity.

        Some people like to say that everything is harder in game development, but I'd say that it's even more so in semiconductor systems engineering. Tracing problems through every layer of the factory is like playing Elden Ring without a monitor. All you get is a guy in Korea who speaks very bad English to describe the scene for you. I can guarantee there is nowhere you will find more cursed problems. Imagine enduring a root cause analysis that winds up attributing a yield issue to fertilizer being spread on a field 20 miles away.

      • petre 2 days ago ago

        They're laughing their asses off when some CEO of an AI company comes out and asks investors for $7T for AI chips. Sure bro, here's 1/4 of the US GDP for your AI chips.

      • hiddencost 2 days ago ago

        Keep in mind they're also dealing with being potentially the only thing keeping China from invading Taiwan. So keeping their competitive advantage is very important to them.

    • JumpCrisscross 3 days ago ago

      Is Altman another Ackman/Musk? I thought he tends to stay on topic when doing interviews.

      • mrcwinn 3 days ago ago

        It may just be his public persona. I don’t work with him on a day-to-day basis, of course. I find him to be very uninteresting. He speaks with the air of an aloof tech savant, but it feels quite hollow, faked, forced.

        I can’t think of many examples where he said something in an interview and it really challenged or surprised me. By contrast, any random hour with Lisa Su and you will come away quite impressed and easily understand why she is her company’s leader.

        Plenty of smart money behind Altman, though, so maybe he shows up stronger behind the scenes.

        [Edit: added "aloof" due to perceived aloofness.]

        • botro 2 days ago ago

          "Suddenly, the chat window on Sequoia’s side of the Zoom lights up with partners freaking out.

          “I LOVE THIS FOUNDER,” typed one partner.

          “I am a 10 out of 10,” pinged another.

          “YES!!!” exclaimed a third.

          What Sequoia was reacting to was the scale of SBF’s vision... “We were incredibly impressed,” Bailhe says. “It was one of those your-hair-is-blown-back type of meetings.”"

          This is 'smart money' in reference to Sam Bankman Fried.

          • s1artibartfast 2 days ago ago

            Not really the dig you think it is. They were right but got unlucky that SBF broke the law, which is hard to predict.

        • soneca 3 days ago ago

          Magic Leap also had plenty of smart money behind them. It ceased to be a useful signal for me then.

          • disqard 2 days ago ago

            Indeed, and SoftBank flushed a ton down the toilet named WeWork because of Adam Neumann's charisma.

            I don't think anyone doubts Sam's ability to charm people and tell a good story. The graver concern is "should we trust him"? My gut says NO.

      • drexlspivey 3 days ago ago

        My opinion after hearing him in multiple interviews is that he always just says various generalities; he can talk for 5 minutes and never say anything, all noise. I don't know if he is under an NDA and can't say literally anything about OpenAI and the future, or if it's just the way he talks.

        • surfingdino 2 days ago ago

          I had the same revelation when I read the transcript of one of his interviews with Lex. I found it to be so free of detail or substance that I wanted to make sure it wasn't just one case of him having a bad day, but reading transcripts of a couple more interviews confirmed my suspicion that he has nothing of substance to say.

          • shafyy 2 days ago ago

            That's all of Lex's interviews.

            • mrbungie 2 days ago ago

              That's not my impression from LeCun's or Carmack's (5-hours!) interviews. I would posit that, naturally, it depends on who is being interviewed, and that Lex also tries to keep the topics and their approach as accessible as possible.

            • Vecr 2 days ago ago

              No it's not, for example Yudkowsky gives his standard talk on there. I think it matters if the guest has something ready to go for the allotted time.

              • nirav72 2 days ago ago

                Yudkowsky shouldn’t be taken seriously, especially after he said that any person or country working on banned AI research should have their data centers destroyed via an airstrike.

                • beeflet 2 days ago ago

                  Can you make a case against this? It seems plausible to me that AI could be used as a weapon of mass destruction

                  • FactKnower69 2 days ago ago

                    americans unironically trying to justify air strikes using the phrase "weapon of mass destruction" in 2024 -- to describe a fucking computer program, no less -- is so far beyond parody

                    wonder if this guy considers stuxnet a wmd

                    • beeflet 2 days ago ago

                      I think people should generally be allowed to own weapons and general-purpose computers, but I don't think anyone has a right to build a doomsday device in their garage. If it is impossible to reconcile these, then freedom will be impossible. The only long-term solution is to develop "anti-doomsday" technology.

                      If you have a massive warehouse to train an AI that, IDK folds proteins to develop a bioweapon or something, then you should not expect everyone else to sit on their hands while you plan to hold the entire world hostage.

                      Stuxnet was a program that targeted specifically SCADA controllers used in uranium enrichment facilities. It is not a doomsday device, it is a targeted counter-doomsday device. And while the USA has an imperfect track record with respect to "WMD"s/Proliferation, stuxnet is something that I approve of as an american.

                      • nirav72 2 days ago ago

                        >folds proteins to develop a bioweapon or something

                        That has been possible for a long time. The only thing needed was raw compute power to simulate it. Not sure how AI research suddenly makes itself worthy of an airstrike based on clearly unproven scenarios. What makes you think a rogue nation will sink ridiculous amounts of money into building the power and cooling infrastructure, and acquiring the necessary computing hardware and knowledge, to build a hypothetical tool that will aid them in making a WMD, when they could just create tanks of sarin gas or anthrax for a fraction of the cost, plus manpower that could deliver it to the target in multiple ways? A rogue nation with the capability to build AI datacenters will also have the ability to do a simple risk/benefit analysis.

            • surfingdino 2 days ago ago

              I meant Sam's answers.

          • null0pointer a day ago ago

            I’m glad I’m not the only one who thought his talk is absolutely devoid of content.

        • mrweasel 3 days ago ago

          Maybe he doesn't actually know anything?

          • ben_w 3 days ago ago

            That would be my bet. He's a CEO, I've not once heard it suggested that he's also a researcher (though according to Wikipedia he's a Stanford CompSci dropout). Quite rare to be competent at both CEO-things and the stuff you're hiring people to do.

            • disqard 2 days ago ago

              A Stanford dropout? He must be hot stuff, like Elizabeth Holmes... oops, wrong example.

          • maxwell 3 days ago ago

            But he has such force of will.

        • akomtu 3 days ago ago

          Perhaps he uses an earpiece connected to an LLM.

      • dotnet00 2 days ago ago

        Altman projects more Bankman-Fried vibes to me than Musk vibes.

        A common sentiment about SpaceX and Musk is that the employees also believe in the mission of making space accessible and going to Mars. Musk has been consistent on that being SpaceX's eventual goal since founding it, and he stuck through it even when SpaceX was one failure away from missing payroll. All of his early employees still publicly support this mission, even after retiring or moving on to found their own space companies.

        Altman comes off as a "say anything to keep the money flowing" type guy even to his own employees. The fractured leadership all seems to suggest that he can't even manage to convince the people who work with him the most that he actually believes in his oft-repeated lofty goals regarding AGI.

        • s1artibartfast 2 days ago ago

          I thought the SBF interviews I listened to were excellent, with lots of substance and clear explication, especially the interview with Matt Levine.

          • dotnet00 a day ago ago

            Interesting; my impression of him from his interviews was the opposite. While I didn't follow him closely, I felt from the first time I heard him speak that FTX would turn out to be yet another crypto scam/disaster.

      • blackeyeblitzar 3 days ago ago

        Ackman and Musk stay on topic in interviews as far as I can tell. They may share viewpoints that much of HN disagrees with, but I don’t see them answering questions with completely unrelated responses. Altman however, dodges questions all the time, with vague corporate speak and generalities. To me it looks like he is avoiding hard questions, although maybe he just doesn’t know the answer and is trying to stumble his way through, or he genuinely wants to not give away confidential things like trade secrets.

        • JumpCrisscross 3 days ago ago

          > Ackman and Musk stay on topic in interviews as far as I can tell. They may share viewpoints that much of HN disagrees with

          Makes sense. I'd consider Ackman issuing comments on politics off topic for a hedge fund manager. But I suppose he's technically on topic with a narrow view.

          > Altman however, dodges questions all the time, with vague corporate speak and generalities

          He has a documented history of dishonesty, correct?

          • blackeyeblitzar 2 days ago ago

            I don’t know his history fully enough to say. But it does seem like past claims about not getting equity or about keeping OpenAI’s founding principles may have at least changed. Some claim it may be an honest change - they need capital to compete, attract talent, and complete their mission, and giving equity actually keeps Altman focused on OpenAI instead of side gigs and investments. That could be true. But I’ve found his evasiveness in interviews to be untrustworthy.

      • ohSidfried 3 days ago ago

        [dead]

      • Ericson2314 3 days ago ago

        [flagged]

        • 3 days ago ago
          [deleted]
  • sanp 2 days ago ago

    All these Libertarians asking for government handouts…

  • 3 days ago ago
    [deleted]
  • beezlewax 2 days ago ago

    Honestly the more AI I use the more issues I find with it. Copilot chat is especially useless outside of anything basic.

    Downright harmful in some cases.

  • rldjbpin 2 days ago ago

    call me narrow-minded, but assuming we all need AI like a public utility is like assuming the general public reads books daily, writes essays in daily journals, or makes public speeches. you'd have to drink a special kind of kool-aid to believe it.

    the "muh job" angle is all well and good, but if this really convinces you, i got a bridge to sell you. maybe sam thinks we are all into making podcasts now.

  • pwb25 3 days ago ago

    so tired of all this AI hype and CEO stuff. just do your thing and let people buy it or not, no need to try to be the next Henry Ford or something

    • elliotec 3 days ago ago

      It's all part of the Sam plan to make as much money as possible before whatever happens next.

      • mrbungie 2 days ago ago

        I would also say that the big tech companies (at least MSFT, Apple and Meta) are consciously all-in on this game. There is nothing else in their ovens to drive "disruptive" growth, and many tech stocks were already going down in 2022, pre-ChatGPT.

        If AI fails to deliver its promises or fails to do so in the short timeframes implicitly proposed by the ecosystem, I expect dark times in terms of growth for the tech industry at least for a while.

        Anecdote: As an on-off tech shopper dealing with some big providers from time to time, I can smell the reek of desperation when they try to sell GenAI stuff. And that's weird.

    • TyrianPurple 3 days ago ago

      Just pump and dump, boys. Pump and dump.

      • fnordpiglet 3 days ago ago

        I’d rather we pump and dump new technologies, with the investment in renewables, scaled-out compute, and all the transferable benefits that come with it, than NFTs.

        I’m kind of curious where it all leads to. There’s a non zero chance it’s amazing given what is in my hands today is already amazing compared to what was in my hands 3 years ago.

        I don’t think the choice is like AI or curing cancer. I think it’s more like Doge coin and meme stocks or whatever finance fad.

      • fuzztester 2 days ago ago

        Or pick the right shovels.

        And profit!

    • vedant 2 days ago ago

      There is absolutely no way to achieve the impact of Henry Ford without actively trying extremely hard to be the next Henry Ford.

      • pwb25 2 days ago ago

        why does he need to be? just relax Sam

  • gafferongames 3 days ago ago

    Eyes rolling so hard right now

  • zero-sharp 3 days ago ago

    [dead]

  • 7e 2 days ago ago

    For 7 trillion dollars we could solve aging, make humans immortal, and genetically engineer them to be super-intelligent. Instead we’re going to create super-inefficient AIs? At least use the money to fix climate change and other existential threats, rather than creating a new one.

  • jmakov 3 days ago ago

    There's really no way back. AI is already helping design chips (check Google's report on how they use it for their own designs), drugs, etc. And having an LLM connected to a simplistic enterprise app is a money-printing machine. It really is a new industrial revolution. And it's held back by not enough infra and power.

    • jazzyjackson 3 days ago ago

      I haven't seen a single case study about copilot helping a company make money by rewriting their excel formulas to have named references with type checking to prevent clerical errors, which must be a billion dollar industry on its own. Why not?

      • scotty79 2 days ago ago

        There isn't much evidence for benefits of strongly typed computer languages or object oriented programming, yet here we are.

        • jazzyjackson 2 days ago ago

          Typed vs. untyped is a different matter. Excel is typed, but wrapping everything you do in type checks is annoying, so spreadsheets used as sources of truth are littered with hard-to-find errors (strings become numbers, numbers become dates, etc.)
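          That failure mode is easy to demonstrate. Below is a toy sketch in plain Python (no real spreadsheet engine involved; the coercion rules are simplified stand-ins, not Excel's actual logic): eager guessing silently rewrites cell values, while a declared column type surfaces the mismatch at entry time.

```python
# Toy model of spreadsheet-style cell coercion vs. a typed column.
# The coercion rules here are simplified stand-ins, not Excel's actual logic.

def coerce_like_a_spreadsheet(cell: str):
    """Eagerly guess a type for raw input, the way many tools do."""
    try:
        return int(cell)        # "007" silently becomes 7: leading zeros lost
    except ValueError:
        pass
    try:
        return float(cell)      # "1e3" silently becomes 1000.0
    except ValueError:
        return cell             # everything else stays a string

def typed_cell(cell: str, expected: type):
    """Typed alternative: reject values that don't match the declared type."""
    value = coerce_like_a_spreadsheet(cell)
    if not isinstance(value, expected):
        raise TypeError(
            f"cell {cell!r} parsed as {type(value).__name__}, "
            f"expected {expected.__name__}"
        )
    return value

# A product code entered as "007" quietly turns into the number 7,
# while a typed column flags the same input immediately.
assert coerce_like_a_spreadsheet("007") == 7
try:
    typed_cell("007", str)
except TypeError as exc:
    print(exc)
```

          The silent path is exactly the "strings become numbers" case described above; the checked path is the kind of guard that named references with type checking would buy.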

        • 2 days ago ago
          [deleted]
      • 2OEH8eoCRo0 3 days ago ago

        Rome wasn't built in a day? Humans are also creatures of habit.

    • tmpz22 3 days ago ago

      > And having a LLM connected to a simplistic enterprise app is a money printing machine.

      What would be a concrete example of this? Has Salesforce's revenue increased with their AI offerings? Has Microsoft's?

      • segasaturn 3 days ago ago

        This is what I was wondering reading that comment too. The only people I've seen making money off LLMs are spammers.

        • scotty79 2 days ago ago

          ChatGPT itself

          • segasaturn 2 days ago ago

            ChatGPT loses billions of dollars, $5bn this year alone just on running the service. Even the subscription-based services like Github Copilot are losing money, $20 lost for every $10 subscriber according to MSFT. The only ones making money I can see are NVidia and maybe the oil and gas companies...

            • disqard 2 days ago ago

              The gameplan is simple:

              * Capture the market (like Uber), by doing whatever it takes to be successful (e.g. offer your services at 1/3rd of what it costs) -- flush the equivalent of several countries' GDP down the toilet, if necessary.

              * Wait for everyone to use only your service (because it's like water??)

              * Raise rates (hopefully, everyone is "locked-in" at this point)

              * Profit!

              • marcosdumay 2 days ago ago

                What is amazing is that the LLM market is already commoditized and in a race to the bottom, so step #1 can only be done by dumping.

                Also, it has almost no barriers to entry (AFAIK, closing a deal with NVidia is the largest one right now), so you can't do step #3.

              • mrbungie 2 days ago ago

                Oh yeah, directly from the Softbank ventures playbook. Spoiler: it does not work as well as they hope.

                Plus, we're talking about hyperscalers/big techs here; that would mean a commercial attrition war where OAI would be competing with actors such as Meta giving it away for free from already-existing platforms that have different main income sources (IG, FB, WhatsApp). Good luck with that.

    • iwontberude 3 days ago ago

      I think we are seeing in realtime the resetting of expectations, which means we don’t need to “go back”; rather, we are realizing there isn’t anywhere to go back from. LLMs haven’t made more than a small dent. Non-LLM inference is still more important and will continue to be. The problem with LLMs is that they input/output text or images and are too general.

    • mrweasel 3 days ago ago

      I think the keyword here is "helping". Then you have other industries, like the creative industries, copy-writing, human resources and customer support, where LLMs are making things worse, way worse.

      Writing off LLMs as being useless is wrong and unproductive, but assuming that they are universally applicable, or that they can function without supervision, is also very wrong and potentially dangerous.

    • trash_cat 3 days ago ago

      I can't tell if you are being sarcastic or not.

      • jmakov 2 days ago ago

        Not sarcastic. Imagine an LLM fine-tuned on your country's accounting standards and legal and tax docs. Instead of paying somebody $250/h, you can now just type "eli5 can I apply for tax relief for the R&D I'm doing in my company".
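        To make the idea concrete, here is a hedged sketch of the retrieval half of such a tool. Everything is illustrative: the document names and snippets are invented, the keyword-overlap scoring is a stand-in for embeddings, and the assembled prompt would be handed to whatever LLM backend you actually use.

```python
# Minimal retrieval-then-prompt sketch for a "tax docs" assistant.
# All documents and snippets are invented for illustration.
from collections import Counter

DOCS = {
    "r_and_d_relief.txt": (
        "Companies performing qualifying R&D may claim tax relief "
        "on eligible staff and material costs."
    ),
    "vat_basics.txt": (
        "VAT registration is required once turnover exceeds the "
        "statutory threshold."
    ),
}

def score(query: str, text: str) -> int:
    """Naive keyword overlap; a real system would use embeddings."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values())  # size of the multiset intersection

def build_prompt(query: str) -> str:
    """Pick the best-matching document and wrap it into an LLM prompt."""
    best = max(DOCS, key=lambda name: score(query, DOCS[name]))
    return (
        f"Context from {best}:\n{DOCS[best]}\n\n"
        f"Question: {query}\nAnswer in plain language:"
    )

print(build_prompt("Can I apply for tax relief for R&D in my company?"))
```

        In this sketch the R&D question matches the R&D snippet on "tax", "relief", and "r&d", so the prompt cites the right source; the fine-tuning the comment imagines would replace or complement this retrieval step.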

        • teleforce 2 days ago ago

          Personally I think this is one of the killer applications for LLMs. In some countries you can even get a double tax deduction, instead of only the normal tax deduction, for R&D activities, if your company fulfills the government requirements and follows the correct procedures. But the main problem is that most companies are oblivious to this info, and the majority are ignorant of the tax discount facility available to them with respect to R&D [1].

          [1] Comments on Ask HN: What have you built with LLMs?

          https://news.ycombinator.com/item?id=41508656