"Foundational AI companies love this one trick"
It's part of why they love agents and tools like Cursor -> turns a problem that could've been one prompt and a few hundred tokens into dozens of prompts and thousands of tokens ;)
It'd be nice if I could solve any problem by speccing it out in its entirety and then just implementing. In reality, I have to iterate and course-correct, as do agentic flows. You're right that the AI labs love it though; iterating like that is expensive.
The bigger picture goal here is to explore using prompts to generate new prompts
I see this as the same as a reasoning loop. This is the approach I use to quickly code up pseudo-reasoning loops on local projects. Someone asked in another thread, "how can I get the LLM to generate a whole book?" Well, just like this: if it can keep prompting itself with "what would chapter N be?" until "THE END", then you get your book.
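A minimal sketch of that loop in Python, assuming a hypothetical call_llm() helper that wraps whatever chat API you use; the prompt wording and stopping condition are just placeholders:

    # Self-prompting loop: keep asking for the next chapter until the model
    # signals "THE END" or a chapter cap is hit.
    def call_llm(prompt: str) -> str:
        raise NotImplementedError("wrap your chat API of choice here")

    def generate_book(premise: str, max_chapters: int = 50) -> list[str]:
        chapters: list[str] = []
        for n in range(1, max_chapters + 1):
            # Carry only the last couple of chapters forward as rolling context.
            context = "\n\n".join(chapters[-2:])
            prompt = (
                f"Book premise: {premise}\n\n"
                f"Previous chapters (may be empty):\n{context}\n\n"
                f"Write chapter {n}. If the story is finished, end with the line THE END."
            )
            chapter = call_llm(prompt)
            chapters.append(chapter)
            if "THE END" in chapter:
                break
        return chapters

The chapter cap matters: without an external stopping rule, the loop is at the mercy of the model ever actually producing "THE END".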
^
The last commit is from April 2023; should this post maybe have a (2023) tag? Two years is eons in this space.
Crazy that OpenAI only launched o1 in September 2024. Some of these ideas have been swirling for a while but it feels like we're in a special moment where they're getting turned into products.
Well, I remember Chain of Thought being proposed as early as the GPT-3 release (two years before ChatGPT).
The author is a co-founder of Databricks and creator of the K Prize, so an early adopter.
I love this! My take on it for MCP: https://github.com/kordless/EvolveMCP
Trying to save state in a non-deterministic system is not the best idea. That kind of thing needs to be externalised.
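For example, a minimal sketch of externalising loop state to plain JSON on disk rather than relying on the model to remember it (agent_state.json is just a placeholder path):

    import json
    from pathlib import Path

    STATE_FILE = Path("agent_state.json")  # placeholder location

    def load_state() -> dict:
        # Deterministic state lives on disk, not in the model's context window.
        if STATE_FILE.exists():
            return json.loads(STATE_FILE.read_text())
        return {"step": 0, "notes": []}

    def save_state(state: dict) -> None:
        STATE_FILE.write_text(json.dumps(state, indent=2))

    # Each iteration: read state, do one model call, write state back out,
    # so a flaky call or a hallucinated "memory" never corrupts what the
    # program actually knows.
    state = load_state()
    state["step"] += 1
    save_state(state)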
This is kind of like a self-generating agentic context... cool. I think regular agents, especially adversarial agents, are easier to keep focused on most types of problems, though.
Still clever.
I feel that getting LLMs to do things like mathematical problems or citations is often much harder than simply writing software to achieve the same task.
Should definitely get a date tag.
Excellent fun. Now just to create a prompt to show iterated LLMs are Turing complete.
Let's see Paul Allen's prompt.
LLM quine when?
Repeat this sentence exactly.
https://chatgpt.com/share/680567e5-ea94-800d-83fe-ae24ec0045...
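A minimal sketch of hunting for that kind of fixed point, assuming a hypothetical call_llm() helper; the exact-match check is a simplification, since real outputs vary with sampling:

    # Feed the model's output back in until input == output (an "LLM quine"),
    # or give up after a few rounds.
    def call_llm(prompt: str) -> str:
        raise NotImplementedError("wrap your chat API of choice here")

    def find_quine(seed: str, max_rounds: int = 10) -> str | None:
        text = seed
        for _ in range(max_rounds):
            reply = call_llm(f"Repeat this text exactly, with no extra words:\n{text}")
            if reply == text:
                return text  # fixed point reached
            text = reply
        return None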