The Case Against Generative AI

(wheresyoured.at)

24 points | by speckx 19 hours ago ago

8 comments

  • logicprog 19 hours ago ago

    Whether or not he's right, Zitron just keeps repeating the same points over and over again at greater and greater length. This newsletter is 18,500 words long (with no sections or organization), and none of it is new.

    • jcranmer 14 hours ago ago

      As someone who generally agrees with the thesis, I still find the length of the article quite frustrating, since the text is definitely quantity over quality.

      The core issue is that OpenAI is committing to spending hundreds of billions of dollars on AI data center expansion that it doesn't have and doesn't appear able to acquire, and this basic fact is being obscured by circular money flows and the extremely murky finances of AI [1]. But Zitron muddies this message with excessive detail in trying to provide receipts, and buries all of it behind a more general "AI doesn't work" argument that he seems to want to make but isn't sufficiently well-equipped to make.

      [1] The fact that the Oracle and Nvidia deals with OpenAI may actually be the same thing is the one thing new to me in this article.

    • mortsnort 18 hours ago ago

      He should use AI for that

      • logicprog 17 hours ago ago

        It's all so regurgitated and unoriginal, even between him and other anti-AI critics, that it truly feels like he does. Not that AI hypers are better, but those with more nuanced, middle-of-the-road views are the ones worth reading (such as Simon Willison's work on The Lethal Trifecta, the recent "AI Coding Trap" article, etc), and I think that's interesting. I also feel like he cherry-picks his statistics (both model-performance-wise and economics-wise) as much as his enemies do, so it can be exhausting to read.

        • apercu 17 hours ago ago

          But at least there are _some_ critics that try to apply critical thought against the hype machine and all the stochastic bullshit we deal with every day.

          • logicprog 17 hours ago ago

            Agreed. And I mean, I think the nuanced investigations of what AI is and is not good for or capable of, and how it might or might not be made sustainable going forward both economically and environmentally, are a much more meaningful, interesting, and worthwhile check on the hype than dogmatic rejection. Just as hype won't convince anyone sane, neither will dogmatic rejection built on a biased accounting of what's going on, with no vision of the possible futures available to us. No one who isn't already convinced will be swayed by that, and many will be repelled, especially given how wrong or incomplete their accounts often are (like Gary Marcus talking about how AI can't run searches to retrieve information lol).

            To be clear, I started out as a fan of Gary Marcus and Ed Zitron, and a rabid anti-AI hater, because I tried GPT 3.5 soon after it was released and was extremely unimpressed with any of its capabilities. But after a while, I started to get uncomfortable with my closed-mindedness and decided to give the tools a fair shake. By the time I did, the capabilities had expanded so much that I was genuinely impressed, and the more I stress-tested them, the more nuanced my understanding became: there are serious traps, limits, and serious problems with where the industry is going, but just because a tool is not perfectly reliable does not mean it isn't very useful sometimes.

  • strict9 17 hours ago ago

    Every time one of Zitron's posts comes up I think of bitcoin or algorithmic social media feeds. Like those things, I understand people have strong opinions on whether it's good or bad for society.

    But what's the endgame? Is it to persuade people not to use these things? Make them illegal? Create some other technology that makes them obsolete or non-functional?

  • SoylentGreenGPT 16 hours ago ago

    Ed is insufferable. And for the most part, he is right. LLMs are propping up the economy, but as a technology these models are not transformative but iterative. At the current rate of investment, unless we reach AGI in the next 24 months, the ROI will not materialize. I don't know what I'm supposed to do if Ed is right. Maybe I need to move my retirement accounts out of index funds and into cash. But for now, it does seem the market is in a bit of collective psychosis. Sigh.