Recursive LLM prompts

(github.com)

81 points | by vlan121 6 days ago

17 comments

  • mertleee 3 days ago

    "Foundational AI companies love this one trick"

    It's part of why they love agents and tools like cursor -> turns a problem that could've been one prompt and a few hundred tokens into dozens of prompts and thousands of tokens ;)

    • danielbln 3 days ago

      It'd be nice if I could solve any problem by speccing it out in its entirety and then just implementing. In reality, I have to iterate and course correct, as do agentic flows. You're right that the AI labs love it though; iterating like that is expensive.

  • ivape 3 days ago

    The bigger-picture goal here is to explore using prompts to generate new prompts.

    I see this as the same as a reasoning loop. This is the approach I use to quickly code up pseudo reasoning loops on local projects. Someone had asked in another thread "how can I get the LLM to generate a whole book?" Well, just like this. If it can keep prompting itself to ask "what would chapter N be?" until "THE END", then you get your book.
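The chapter-by-chapter loop described above can be sketched as a small driver that feeds each answer back into the next prompt. Everything here is illustrative, not code from the linked repo: `fake_llm` is a deterministic stand-in for a real model call, and the prompt template and "THE END" stop condition are assumptions.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call: returns three canned chapters,
    # then the stop token. Parsing the chapter number back out of the
    # prompt is purely for this demo.
    n = int(prompt.split("chapter ")[1].split()[0])
    return f"Chapter {n}: ..." if n <= 3 else "THE END"

def write_book(llm, max_chapters=100):
    # Keep prompting "what would chapter N be?" until the model
    # answers "THE END", as the comment above describes. The cap on
    # max_chapters guards against a loop that never terminates.
    chapters = []
    for n in range(1, max_chapters + 1):
        out = llm(f"Given the story so far, what would chapter {n} be?")
        if out.strip() == "THE END":
            break
        chapters.append(out)
    return chapters

book = write_book(fake_llm)
```

Swapping `fake_llm` for an actual model client (and threading the accumulated chapters into each prompt) turns this into the real self-prompting loop; the skeleton of the control flow stays the same.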

  • danielbln 3 days ago

    The last commit is from April 2023, should this post maybe have a (2023) tag? Two years is eons in this space.

    • gwintrob 3 days ago

      Crazy that OpenAI only launched o1 in September 2024. Some of these ideas have been swirling for a while but it feels like we're in a special moment where they're getting turned into products.

      • mentalgear a day ago

        Well, I remember Chain of Thought being proposed as early as the GPT-3 release (2 years before ChatGPT).

    • jdnier 3 days ago

      The author is a co-founder of Databricks and the creator of the K Prize, so an early adopter.

  • kordlessagain 3 days ago

    I love this! My take on it for MCP: https://github.com/kordless/EvolveMCP

  • mentalgear a day ago

    Trying to save state in a non-deterministic system is not the best idea. Those things need to be externalised.

  • K0balt 2 days ago

    This is kind of like a self-generating agentic context... cool. I think regular agents, especially adversarial agents, are easier to get focused on most types of problems, though.

    Still clever.

  • James_K 3 days ago

    I feel that getting LLMs to do things like mathematical problems or citations is often much harder than simply writing software to achieve the same task.

  • mentalgear a day ago

    Should definitely get a date tag.

  • seeknotfind 3 days ago

    Excellent fun. Now just to create a prompt to show iterated LLMs are Turing complete.

    • ivape 3 days ago

      Let's see Paul Allen's prompt.

  • NooneAtAll3 3 days ago

    LLM quine when?