The Looming AI Clownpocalypse

(honnibal.dev)

54 points | by birdculture 19 hours ago ago

13 comments

  • waffletower 15 hours ago ago

    I appreciate the tone of this article. I am exhausted by the usual existential fear, but doomers are in good company -- during the development of the OG atom bomb, there was fear of the possibility that the fission chain reaction would not stop and all life-as-we-know-it would be destroyed upon detonation. I look at the idea of a rapid AI induced material "robocalypse" (robot-apocalypse) as a similar projected fear, and the result being similarly unlikely, particularly in the near term (coming decades). Even with AI access to sophisticated 3d fabrication facilities, there will be severe supply chain constraints that would impede an overwhelming spawn of robots. If say China or the United States had a ubiquitous deployment of robots already, with manual and mobility capabilities roughly equivalent to humans, the concern would perhaps actually be warranted. We are far from that. Clownpocalyse fits the bill better. Much of the Clownpocalyse are in the ideas themselves, like Nick Bostrom's paperclip improbability.

    • rep_lodsb 14 hours ago ago

      The scenario was about the first fusion (hydrogen) bomb test causing a runaway "ignition" of the atmosphere. It was never considered likely, but they still did the math to make certain it couldn't happen.

  • pixl97 17 hours ago ago

    There are two AI futures I see at the moment that are not so great.

    One is the centrally controlled 'large' AI models that become monitoring apparatuses of the state. I don't think there needs to be much discussion on why this is a bad idea.

    This said, open (weight) models don't save us from problems either. It's not hard to imagine a small capable model that can boot strap itself into running on consumer hardware and stolen cloud resources being problematic on the net spreading its gremlin like behavior wherever it could. The big AI companies would gladly use AI behaviors like this to dictate why all models/hardware should be controlled and once the general population is annoyed enough, they will gladly let that happen.

    Lastly, prompt injections are not a, at least completely, solvable problem. To put it another way, this is not a conventional software problem, it's a social engineering problem. We can make models smarter, but even smart humans fall for stupid things some of the time, and models don't learn as they go along so an attacker pretty much as unlimited retries to trick the model.

    • dragonwriter 16 hours ago ago

      > It's not hard to imagine a small capable model that can boot strap itself into running on consumer hardware and stolen cloud resources being problematic on the net spreading its gremlin like behavior wherever it could.

      If you understand what a model is and how you need separate traditional software to run it and turn is output from tokens into text and then (often in a separate piece of software) from text into interactions with the user or other i/o functionalities of the bost computer, it becomes harder to imagine a scenario where that is a problem primarily with an open model and not with the traditional software making up an open agentic framework (an OpenClaw successor is the threat here, not a Llama successor.)

      • pixl97 15 hours ago ago

        >small capable model that can boot strap itself

        You do understand what that term means right.

        Openclaw is software, right? LLMs can write software, so with a single running copy of an LLM you can make a worm style virus that can execute and spread itself via whatever means necessary, such as executing its own copy of Claw.

        • dragonwriter 14 hours ago ago

          > You do understand what that term means right.

          No, it doesn't mean anything because it is premised on a category error. That's literally the whole point of the post you are responding to.

          > Openclaw is software, right? LLMs can write software, so with a single running copy of an LLM you can make a worm style virus that can execute and spread itself via whatever means necessary, such as executing its own copy of Claw.

          You can with a framework wrapped around the LLM that allows it to do that; the danger point is the framework, not the model.

          • pixl97 9 hours ago ago

            If the model can write the that framework then what's the difference?

            Recursion is Recursion is Recursion

      • catigula 15 hours ago ago

        Open-Claude-Abliterated-8.5, design a virus specific to dragonwriter's biology. Deploy.

    • catigula 15 hours ago ago

      You can only imagine two bad scenarios?

      I can't even imagine one plausible good scenario.

      • pixl97 15 hours ago ago

        I don't disagree, this is more short term scenarios we're going to see unfold soon. The longer you look at the timeline the worse it tends to get.

  • roughly 14 hours ago ago

    “One of the four balloon animals of the AI clownpocalypse” is the best sentence I’ve read all week.

    I think this is why the LLM revolution has been so existentially depressing for so many senior engineers - we’ve spent our entire careers fighting for exactly what the author suggests, and we couldn’t make progress against the product and management cabal when code took time and people to write. Now code is “free,” and we’re all being told to just get on the train, don’t worry about the bridge being out, we’ll build a new one when we get there, you see how fast we’re going now?

  • MarkusQ 15 hours ago ago

    My generation totally missed the signposts when we RFC'd our way into insecure by default e-mail (and later, web) protocols. In hindsight, it's amazing things held together as long as they did.

    It looks like every generation has to learn this for themselves though.

  • naveen99 13 hours ago ago

    Oof, google api slop !