This seems exactly right. It's pretty common to watch your agent make the same MCP tool call again and again as it works through a list. Cases like these are solved by letting the agent call any of its MCP tools from a script.
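A minimal sketch of the difference (the `mcp_call` bridge and the `issues.close` tool are hypothetical stand-ins, not a real harness API): instead of the model issuing one tool-call turn per item, it writes a short script that loops over the list in a single execution.

```python
def mcp_call(tool: str, **args):
    # Stub so the sketch runs; a real harness would forward this
    # call to the MCP server and return its result.
    return {"tool": tool, "args": args, "status": "ok"}

issues = ["BUG-101", "BUG-102", "BUG-103"]

# One generated script, one round-trip to the sandbox -- rather than
# three separate tool-call turns through the model's context window.
results = [mcp_call("issues.close", id=issue) for issue in issues]
print(len(results))  # 3
```

The win is that intermediate results stay inside the script instead of being echoed back through the model on every iteration.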
Nice! Seems very similar to the concept explored by Google (CaMeL) https://arxiv.org/abs/2503.18813
This is interesting, but it seems like it's only exposed as a Cloudflare service now, rather than as a tool I can use locally (unless I missed something)?
It doesn't seem to me that isolates are sufficiently unique technology to warrant this running only on a server. Surely we can spin up a V8 instance locally, or something similar, and achieve the same thing?
Hopefully the popular MCP clients will start implementing this approach, if it works as well as claimed.
> a tool I can use locally
https://blog.cloudflare.com/code-mode/#or-try-it-locally :-)
Though AFAIK Wrangler is really only intended for development and not local deployment.
Hmm, but that's just a development environment for the remote server, no? It's not a tool meant for mass use.
Having the LLM generate code on top of a regular API instead of calling external functions one by one seems like common sense. Was MCP invented because LLMs were not capable of writing robust code at the time? Was it because of security considerations? What am I missing?
> top of a regular API
What is the regular API? How do you express all the integrations needed in this API? Who provides the integrations? Answering these questions leads you back to something like MCP, which is an API contract that can be as generic or as specific as needed. Wasting context window to understand and re-implement each integration is why MCPs exist.
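To make "API contract" concrete, here is a sketch of an MCP-style tool descriptor: servers publish a name, a human-readable description, and a JSON Schema for the inputs, so a client (or generated code) gets the argument shapes without re-deriving the integration. The `get_weather` tool itself is made up for illustration.

```python
# Hypothetical tool contract in the shape MCP servers advertise:
# name + description + inputSchema (JSON Schema for the arguments).
tool = {
    "name": "get_weather",
    "description": "Return the current weather for a city.",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
```

The contract can stay this generic or grow as specific as the integration needs, which is the point being made above.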
All the security issues are orthogonal and occur regardless of whether this API is invoked via code or natural language.
The security issues are probably orthogonal in the way most people install and use these MCPs, but the article mentions Cloudflare's "code mode" running in V8 isolate sandboxes and calling RPC bindings that are pre-authed: no API keys or open-slather internet access required. See: https://blog.cloudflare.com/code-mode/#running-code-in-a-san... This is at least interesting, possibly even novel.
"Wasting context window to understand and re-implement each integration is why MCPs exist" does seem to be exactly the point. Pointing the LLM at a Swagger/OpenAPI spec and expecting it to write a good integration might work, but it gets old after the first time. Swagger docs mostly cover the what; LLMs work better when they also know the why.
And why not just use a locally installed CLI rather than an MCP? You need to have one, for a start, and use cases (chained calls) are more valuable than atomic tool calls.
There is more behind the motivation for MCP, and "tool calling" ability with LLMs generally. That motivation seems less and less relevant now, but back when reasoning and chain-of-thought were newly being implemented, and people were more wary of yolo modes, the ability for an LLM harness to decide to call a tool and gather just-in-time context was a game changer: a dynamic RAG alternative. MCP may not be the optimal solution long term. For example, the way Claude has been trained to use the gh CLI, and to work with git generally, is much more helpful than either having to set up a git MCP or letting it feel its way around git --help or man pages to, as you said, "re-implement each integration" from scratch every time.
LLMs are able to use OpenAPI definitions to write code.
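As a sketch of why that works: an OpenAPI spec's path templates and parameter schemas map almost mechanically onto client code, so the model mostly fills in a pattern. The spec fragment and endpoint below are invented for illustration, not from any real API.

```python
# Hypothetical OpenAPI fragment: one GET endpoint with a path parameter.
spec_fragment = {
    "paths": {
        "/users/{id}": {
            "get": {"parameters": [{"name": "id", "in": "path", "required": True}]}
        }
    }
}

def build_url(base: str, path_template: str, **path_params) -> str:
    # Substitute {id}-style placeholders from the spec into a concrete URL,
    # the same step an LLM-generated client performs when writing a request.
    for name, value in path_params.items():
        path_template = path_template.replace("{" + name + "}", str(value))
    return base + path_template

url = build_url("https://api.example.com", "/users/{id}", id=42)
print(url)  # https://api.example.com/users/42
```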
LLMs have just entered their Platform Engineering era
smolagents uses a similar mechanism.