Interesting. I've been doing a poor man's version of this with multiple git clone folders and 'docker compose -p'. Making that smoother is attractive, especially if it can be kept opaque to our more junior teammates.
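For the curious, a rough sketch of that manual setup; the repo URL and project names are placeholders:

```sh
# Two independent clones, one per agent.
git clone git@github.com:org/app.git app-agent-1
git clone git@github.com:org/app.git app-agent-2

# `-p` gives each checkout its own compose project name, so the
# containers, networks, and volumes of the two copies don't collide.
(cd app-agent-1 && docker compose -p agent1 up -d)
(cd app-agent-2 && docker compose -p agent2 up -d)
```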
On one hand, I've been curious about getting multiple agents to work on the same branch, but I realized I can just wait until they support that natively.
More broadly, all of this feels like a dead end. I think OpenAI and GitHub are right to push toward remote development, so these local setups won't matter. E.g., mark up a PR or branch in GitHub, come back as necessary, and do it all from my phone. If I want an IDE, it can be remote over SSH.
Hi all, we open-sourced this live on stage today at the AI Engineer World's Fair (great event, by the way).
If you're interested, here's the keynote recording: https://www.youtube.com/live/U-fMsbY-kHY?t=3400s
Very cool that this runs as an MCP server; great demo.
Seems odd that the LLM is so clever it can write programs to drive any API, but so dumb that it needs a new special-purpose protocol proxy to access anything behind such an API...
It's about resilience. LLMs are prone to hallucinations. Although they can be very intelligent, their output isn't 100% correct unaided. The protocol increases the resilience of the output, so there's more of a guarantee that the LLM will stay within the lines you've drawn around it.
That's really not true. Context is one strategy for keeping a model's output constrained, and tool calling allows dynamic updates to that context. MCP is just a convenience layer around tool calls and the systems they integrate with.
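To make that concrete, here is a minimal sketch of what MCP traffic looks like under the hood: plain JSON-RPC 2.0 over stdio. The `my-mcp-server` binary and the `run_tests` tool are hypothetical:

```sh
# A client must complete the initialize handshake before calling tools.
{
  echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"demo","version":"0.1"}}}'
  echo '{"jsonrpc":"2.0","method":"notifications/initialized"}'
  echo '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"run_tests","arguments":{"path":"./..."}}}'
} | my-mcp-server
```

The model never invents an endpoint here; it can only choose among the tools the server advertises, which is the "staying within the lines" property mentioned above.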
> LLM is so clever it can write programs to drive any API
It is not. Name one piece of software that has an LLM generating code on the fly to call APIs. Why do people have this delusion?
I'm curious: what do containers add over and above whatever you'd get using worktrees on their own?
They're complementary. Git worktrees isolate file edits; containers isolate execution: building, testing, and running dev instances.
container-use combines both forms of isolation, containers and git worktrees, in a seamless system that agents can use to get work done.
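Doing that pairing by hand looks roughly like this; the branch, path, and image names are placeholders:

```sh
# An isolated checkout that shares the main repo's .git directory:
git worktree add ../feature-x -b feature-x

# Run builds/tests in a container that can only see that worktree:
docker run --rm -v "$(realpath ../feature-x)":/src -w /src \
  golang:1.22 go test ./...
```

Presumably container-use automates this so each agent gets its own worktree plus its own container, rather than everyone sharing one checkout and one host environment.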
I would guess isolation/safety.
The page is crashing mobile Chrome for me.
Freezing for me on desktop Safari too. I think the culprit is the SVG-based demo in the README.md.
Sorry about that! We'll fix it.
On iPad as well.