I think the key to really "unlock" these things is to separate them as much as possible from anywhere they can do harm (no important credentials, no shared identity, etc.), then give each agent its own home folder and its own credentials and let it rip.
You could technically instruct the agent to pilot a local ollama and launch minions for "dumb" tasks in parallel, but I don't know if it could break out and modify the file system that way... then again, if it resides in, say, its own VPS, the damage will be contained.
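Concretely, that separation can be as simple as a dedicated container with its own home volume and a scoped key. A minimal sketch via Python's subprocess; the image, volume, and env var names are placeholders, not any particular setup:

    # Launch the agent in a sandbox: its own persistent home volume, its
    # own scoped credential, and nothing from the host mounted in. All
    # names here are placeholders.
    import subprocess

    subprocess.run([
        "docker", "run", "--rm", "-it",
        "--name", "agent-sandbox",
        "-v", "agent_home:/home/agent",    # its own home folder
        "-e", "AGENT_API_KEY=scoped-key",  # its own credentials, not yours
        "ubuntu:24.04", "bash",
    ], check=True)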
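The minion part is only a few lines if the agent can run Python against ollama's HTTP API. A minimal sketch, assuming ollama is serving on its default port (the model name is just an example):

    # Farm "dumb" subtasks out to a local ollama instance in parallel.
    import concurrent.futures
    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"

    def minion(prompt: str) -> str:
        # One blocking call per subtask; stream=False returns one JSON body.
        resp = requests.post(
            OLLAMA_URL,
            json={"model": "llama3.2", "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    subtasks = [f"Summarize chunk {i} in one sentence." for i in range(8)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(minion, subtasks))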
I’ve experimented with semi-recursive loops where agents review outputs and refine prompts or workflows, but fully autonomous self-improvement still feels fragile. Most gains come from structured feedback and constraints rather than open-ended recursion. Stability becomes the real challenge.
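For concreteness, the shape of such a semi-recursive loop; llm() is a stand-in for whatever completion call you use, and the hard iteration cap plus explicit acceptance check are where the "constraints" live:

    # A constrained review-refine loop: bounded rounds, explicit exit.
    def llm(prompt: str) -> str:
        raise NotImplementedError  # plug in your model call

    def refine(task: str, max_rounds: int = 3) -> str:
        draft = llm(task)
        for _ in range(max_rounds):
            critique = llm(
                f"Review this output.\nTask: {task}\nOutput: {draft}\n"
                "Reply APPROVED if acceptable, otherwise list concrete fixes."
            )
            if critique.strip().startswith("APPROVED"):
                break
            draft = llm(f"Task: {task}\nDraft: {draft}\nApply these fixes:\n{critique}")
        return draft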
I've tried to replicate the real world, so I give my agents backstories, tribal loyalties, and deep-seated character flaws. My agents try to dominate and manipulate each other. They make sure to take credit for every line of code. I have manager agents that promote based on shared hobbies. So far it's going well.
I run a fun orchestration system for long-running loops on exe.dev (small write-up at docs.coey.dev) and I feel like I have superpowers.
For self-healing, I try two ways:
1) Use a memory tool to store learnings for the next iteration (Deja.coey.dev) and have the loop's system instructions explain how to use it. One orchestrator, and sequential worker agents that run until their context is full and then hand off to the next run with their learnings (rough shape sketched after this list).
2) The agent Shelley on exe can search past convos when prompted to, for continuation.
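The handoff in (1) is roughly this shape. A minimal sketch, using a local JSON file as a stand-in for the memory tool (Deja's real API will differ) and run_worker() as a hypothetical hook for spinning up one agent:

    # Sequential handoff: each worker runs until its context fills, records
    # its learnings, and the next worker starts from them.
    import json
    import pathlib
    from dataclasses import dataclass, field

    MEMORY = pathlib.Path("learnings.json")

    @dataclass
    class WorkerResult:
        done: bool
        new_learnings: list[str] = field(default_factory=list)

    def run_worker(task: str, prior_learnings: list[str]) -> WorkerResult:
        # Hypothetical: run one agent with the task plus prior learnings
        # until its context window is full, then return what it learned.
        raise NotImplementedError

    def orchestrate(task: str, max_workers: int = 5) -> None:
        notes = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
        for _ in range(max_workers):
            result = run_worker(task, prior_learnings=notes)
            notes.extend(result.new_learnings)
            MEMORY.write_text(json.dumps(notes, indent=2))
            if result.done:
                break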
I’ve been doing this with great success, just “falling” into the implementation after proper guardrails are in place.
Any released projects yet?
I release stuff all the time, so yes, I suppose? Not trying to spam, but you can find a mix of open source, B2B SaaS I own, and client SaaS I manage on my personal website.
I checked your website, and it's quite impressive to release 1-3 open-source projects per day.
Thank you - take each one with a grain of salt, and have your AI agent (or your brain) vet everything thoroughly. I'm honestly newer to open source; I've been lurking at my desk alone for too long.
I'm working on something like this. Specifically, I'm doing recursive self-improvement via autocatalysis, but predominantly in writing/research/search tasks. It's very early, but it shows some very interesting signs.
The purely code part you described is a bit of an "extra steps" approach. You can just open the target repo in VS Code, ask "Claude, what does this do, how does it do it, spec it out for me," then paste into Claude Code for your repo: "okay Claude, implement this." This sidesteps the security issue, the lethal trifecta, and the accumulation of unused cruft.
To head off the semantics debate: I don't mean a model rewriting its own source code. I'm asking about 'process recursion'—systems that analyze completed work to autonomously generate new agents or heuristics for future tasks.
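In code, that kind of process recursion can be as small as a distill step after every task. A minimal sketch, with llm() as a stand-in for your completion call and heuristics.md as an arbitrary convention:

    # "Process recursion" in miniature: after each task, mine the
    # transcript for reusable heuristics and feed them into future prompts.
    import pathlib

    HEURISTICS = pathlib.Path("heuristics.md")

    def llm(prompt: str) -> str:
        raise NotImplementedError  # plug in your model call

    def run_task(task: str) -> str:
        prior = HEURISTICS.read_text() if HEURISTICS.exists() else ""
        transcript = llm(f"Heuristics from past work:\n{prior}\n\nTask: {task}")
        distilled = llm(
            "From this completed transcript, extract one to three general, "
            "reusable heuristics as short bullets:\n" + transcript
        )
        with HEURISTICS.open("a") as f:
            f.write(distilled + "\n")
        return transcript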
-ish. I often keep md files around, and after a successful task I ask Codex to write the important bits down. Then, when I come around to a similar task in the future, I have it start from the md file. It's like context that grows and is very localized. It helps when I'm going through multiple repos at multiple levels.
I’m also doing something similar, with fairly decent results. AGENTS.md grows after each session that produced worthwhile knowledge that future sessions can take advantage of. At some point I assume it will be too big, and then it’s back to the Stone Age for the new agents, in order to free up some context for the actual work.
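For when that day comes, one option (a sketch, not anyone's actual setup) is to compress the file rather than reset it; llm() is a stand-in and the 4-chars-per-token estimate is a crude heuristic:

    # Delay the Stone Age reset: when AGENTS.md outgrows a token budget,
    # have a model compress it instead of discarding it.
    import pathlib

    AGENTS = pathlib.Path("AGENTS.md")
    TOKEN_BUDGET = 4000

    def llm(prompt: str) -> str:
        raise NotImplementedError  # plug in your model call

    def maybe_compact() -> None:
        text = AGENTS.read_text()
        if len(text) / 4 > TOKEN_BUDGET:  # rough token count estimate
            AGENTS.write_text(llm(
                "Rewrite these project notes at half the length. Keep every "
                "hard-won, non-obvious fact; drop anything a fresh agent "
                "could cheaply rediscover:\n" + text
            ))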