OP here. Thanks for all the thoughtful feedback, I really appreciate it.
I mentioned in the README that I leaned heavily on Claude Opus for the port. One of the biggest challenges was tamping down its default “everything is production-ready in a week” optimism. That said, I also discovered along the way which guardrails worked and which didn’t for guiding it toward a compliant implementation. It was good learning, and honestly, fun.
I’m genuinely interested in the Cap’n Web protocol. Writing this in Rust gave me a much clearer sense of where I would and wouldn’t use it. Credit to the Cloudflare team for defining the protocol and shipping a solid reference implementation, this wouldn’t have been possible otherwise.
A good example of why a second client (even one bootstrapped with AI) is useful: during implementation I ran into a spec ambiguity around array escaping (documented here: https://github.com/cloudflare/capnweb/issues/68). My initial Rust serializer emitted unescaped arrays at lower levels, and I even patched the TS reference to accept them. In the end, escaping arrays at all levels turned out to be the cleaner and more consistent approach, so I’ve recommended that clarification in the issue.
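For anyone who hasn't hit this: Cap'n Web uses plain JSON arrays for protocol special forms, so literal array values need an escape. Here's a minimal sketch of what "escaping at all levels" means, assuming the double-bracket convention discussed in the linked issue; the function and the use of serde_json are illustrative, not the crate's actual API.

    use serde_json::Value;

    /// Recursively escape literal arrays so they can't be mistaken for
    /// protocol special forms such as ["pipeline", ...]. Assumes the
    /// double-bracket escape discussed in cloudflare/capnweb#68, applied
    /// at every nesting level rather than only at the top.
    fn escape_arrays(v: &Value) -> Value {
        match v {
            Value::Array(items) => {
                // Escape children first, then wrap this literal array once more.
                let inner: Vec<Value> = items.iter().map(escape_arrays).collect();
                Value::Array(vec![Value::Array(inner)])
            }
            Value::Object(map) => Value::Object(
                map.iter().map(|(k, v)| (k.clone(), escape_arrays(v))).collect(),
            ),
            other => other.clone(),
        }
    }

    fn main() {
        let v = serde_json::json!({"tags": ["a", ["b", "c"]]});
        // Prints {"tags":[["a",[["b","c"]]]]} -- both array levels are escaped.
        println!("{}", escape_arrays(&v));
    }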
On naming: fair point about possible confusion with Cap’n Proto. If the Cloudflare team ever wants the crate names for an official Rust impl, they’re theirs. If there’s interest in pushing this to production-ready quality, I’d also be glad to put in the work.
Thanks again for the constructive input. Happy to dive into technical details if folks are curious.
I'm happy to make the name available to later implementations with committed resources, and I definitely welcome contributions to this implementation if you find it useful. This week I'm going to try building some pet projects on it and will likely make some changes to improve it.
Did you use the latest Claude release? Did you ask Codex to review and bug-fix? How was your time split between prompting, manual fixes, and waiting for the agent to finish its work?
Regarding the concept, it's cool to see you using LLMs to quickly generate protocol versions.
But asking the community to review an AI-generated implementation of a protocol announced a week ago is more or less putting the recently coined AI "workslop" on others. It doesn't really matter whether it happens to be a good implementation or not.
There are two main issues I can think of right now:
1) Work going into the protocol is only useful for your implementation of it. The capnweb-core crate depends on the tokio runtime, and parts of the protocol/definitions are in the client crate:
https://github.com/currentspace/capn-rs/blob/a816bfca5fb6ae5...
What if someone wants to reuse the core parts of the protocol with a different async runtime, or with no runtime at all (no-std)? A rough sketch of that kind of split follows below.
2) The project has namespace squatted/swiped the best name for the official implementation of the project. I understand Rust/Crates-IO allows for this free-for-all, but isn't it entirely possible that Cloudflare already has Rust crates for this that they might open source? Or if someone else wants to make a competing implementation? Maybe it's just me, but I like to put my organization prefix on all my crates in the event I ever open source any of them.
Would you offer to transfer the crate names to Cloudflare if they were going to offer an implementation -- just like what happened with protobuf/Google?
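To make the first point concrete, here is a hedged sketch of what a runtime-agnostic split could look like: protocol framing types in a no-std-friendly core, with I/O abstracted behind a small trait that any executor can implement. The names (Frame, Transport) are illustrative, not the actual capnweb-core API.

    // Crate root of a hypothetical runtime-agnostic core crate: no tokio
    // dependency, so tokio, async-std, or an embedded executor can each
    // supply the transport. Needs Rust 1.75+ for `impl Future` in traits.
    #![no_std]
    extern crate alloc;

    use alloc::string::String;
    use core::future::Future;

    /// One Cap'n Web message, already serialized as JSON text.
    pub struct Frame(pub String);

    /// Transport abstraction the core protocol code would be written against.
    pub trait Transport {
        type Error;
        fn send(&mut self, frame: Frame) -> impl Future<Output = Result<(), Self::Error>>;
        fn recv(&mut self) -> impl Future<Output = Result<Option<Frame>, Self::Error>>;
    }

The payoff is that the wire-format and capability-table code becomes reusable and testable without pulling in a specific executor.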
The protocol is boring, in the best sense of the word. It's simple, straightforward and well specified externally (https://github.com/cloudflare/capnweb/commits/main/protocol....). IMO, there's little value in reusing protocol wrangling code.
I think you meant to link: https://github.com/cloudflare/capnweb/blob/main/protocol.md
I am so pleased that people find it boring! It was quite a puzzle to whittle it down to that point from the original monstrosity: https://github.com/capnproto/capnproto/blob/v2/c%2B%2B/src/c...
That it was boring is what appealed to me. That it was also thought through with rigor was the other part. I wanted something to use in some pet projects, in Rust, so having an implementation of the "wrangling" code I could reuse was valuable to me. Learning how to put guardrails on using an LLM productively was another.
Hey it's super-cool that you've implemented Cap'n Web in Rust! Nice!
But would you mind renaming the packages to make it clear they aren't official, before too many people depend on them? Prefix with your name or "currentspace" or something?
I'd just hate for it to be a big mess later if we ever decide to release an official implementation (and if it's not based on yours).
Thanks for looking, and yes, I can rename it. Will do today. Also, thanks for the excellent protocol. In this entire "effort", I only found one ambiguity in it (and reported it in your repo). Everything about it was well defined and succinct. The pipelining is really useful.
"I built" more like claude-code built.
Also I don't know what the rush is to get a shiny new Rust implementation out asap that you need to vibe code the entire thing.
The Github stars couldn't be that serious.
Also, the disclaimer at the end: "Built with <3 in Rust. Ready for production use."
Ready for production use is a strong strong claim when the entire thing is vibe-coded.
The two big signifiers that something is "ready for production use" are: 1) it has been successfully used in production at some scale for a while, and 2) it has some commitment to API stability. A one-week-old project on version "0.1.0" is neither, whether vibe-coded or not.
OP here. I frankly stopped trying to add evals for the "ready for production" happiness Claude is trained to produce. I agree, it's not ready for production. I did, though, put intention and effort into guarding against slop in the implementation and tests. Sure, I can still find some, but I believe the current state is solid enough to build on, and I will be doing so for some pet projects. I'll let you know what I find when using it in anger.
What do you mean by this?
> I frankly stopped trying to add evals for the "ready for production" happiness Claude is trained to produce.
It's your readme, no? Does Claude just go in and change it behind your back..? What's going on here, are you not in control of what you push to git?
Moreover, what thought process makes you okay with lying in your readme just because the robot wanted you to?
Did you look at the code? There's probably AI slop in the unit tests or other ancillary code, but the API and core code show obvious signs of significant care and attention. I couldn't find evidence of the kind of slop Claude can create in rust in the 2 source files I looked at. Vibe-coded means not looking at the code, this is very obviously not that.
Thank you. Along the way in doing this I spent most of my effort in providing guard rails to eliminate or reduce slop. My goal was something I can use in other pet projects and maintain. I'm satisfied with the outcome, and appreciate you noticing it wasn't just yolo vibe coding.
I would recommend not having an AI write your README. Even if you didn't use AI for coding at all and the code works perfectly, it really doesn't inspire confidence.
Well, most of the commits have "brianathere and claude" as authors. Additionally, the style of comments in some of the files bears a strong resemblance to those typically generated by LLMs.
Moreover, this whole thing has 7 stars on GitHub as of writing, and yet the bottom of the README boldly claims "ready for production use". For me, that overly confident approach is even more discouraging.
Yup, the second I saw the wall-of-text paragraph with all the emojis, I had a sinking feeling the entire thing could be a vibe-coded mess. Too bad, because the project seems cool.
Makes me sad, because I used to put a lot of effort into my READMEs. Not an insane amount of emojis, but some, plus actually useful badges without overkill. And in the end they always had nicely structured layouts.
I feel like I have to make mine sloppier now to pass the AI smell test
Maybe it was also your style that the LLMs learned from (and then pushed more towards the extreme). Oh the irony.
The commit names look very weird too, feel more like marketing speak:
> BREAKTHROUGH: Fix critical ID mismatch in Cap'n Web protocol
> TIER 3 ULTIMATE: Extreme Stress Tests & Advanced Capability Composition
> Achieve Cap'n Web protocol mastery with perfect capability composition (note: this exact commit name is repeated another time)
The commit descriptions are even worse. If they’re so blatantly not written for anyone to read, why bother generating them at all?
OP here. :-) (feel like I should put an emoji here). My focus as I "built" this was getting to a correct implementation, and putting in place guards and context to achieve that. As I moved through the process I added more and more ways to eval progress toward that goal. An area I didn't invest any effort in was commit messages, and some of them caused me to do a spit take as I saw them go by. I think this is something that can definitely be improved, and this conversation is food for thought on how to do it.
Yeah, LLM-created commit messages verge on "Brilliant Paula" for some reason. It's probably because nobody bothers to RLHF commit messages (yet).
https://thedailywtf.com/articles/the_brillant_paula_bean
https://github.com/currentspace/capn-rs/blob/main/capnweb-co...
mmm delicious slop may i have more please sir
Right, for example, there are no comments on why there is something called a PromiseIdAllocator that starts with the magic number "1":
https://github.com/currentspace/capn-rs/blob/a816bfca5fb6ae5...
Yet there is a public interface that allows for initialization with "0":
https://github.com/currentspace/capn-rs/blob/a816bfca5fb6ae5...
It's like the LLM was able to predict that 1 is needed in the protocol, but didn't consider it relevant to check the boilerplate.
I don't have a problem with newtype-all-the-things to ensure correctness in some areas, but no comments/constants does not lead to confidence.
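For illustration, here is a hedged sketch of how that could be made self-documenting with a named constant and an explicit escape hatch; the constant name and the guessed reason for reserving 0 are mine, not taken from the actual code.

    /// Promise IDs are handed out starting at 1. This sketch assumes 0 is
    /// reserved (e.g. for the bootstrap/main capability); whatever the real
    /// reason is, it belongs in a comment like this one.
    const FIRST_ALLOCATED_PROMISE_ID: u64 = 1;

    pub struct PromiseIdAllocator {
        next: u64,
    }

    impl PromiseIdAllocator {
        /// Default allocator: never hands out the reserved ID 0.
        pub fn new() -> Self {
            Self { next: FIRST_ALLOCATED_PROMISE_ID }
        }

        /// Escape hatch for tests or session resumption; callers choosing a
        /// custom start value take responsibility for avoiding reserved IDs.
        pub fn starting_at(first: u64) -> Self {
            Self { next: first }
        }

        pub fn allocate(&mut self) -> u64 {
            let id = self.next;
            self.next += 1;
            id
        }
    }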
So we are allowing Claude to make commits now? Smh
Despite everyone hating on the AI slop (I myself hate AI slop), I am excited about cap'n web and am jonesing to try it out in rust so I will take a look!
I'd love to see (and might try to add) an optional type layer: define contracts as JSON Schema or JTD or something. Does something like this already exist? Maybe you could use protobufs on top of this? It would be nice not to have to share types between client and server out of band; something like an OpenAPI spec lets all clients (not just those you control) have types!
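Roughly what I'm imagining, as a sketch only: the method name and contract shape below are made up, and the published document would be served by whatever layer you add, not by capnweb itself.

    use serde_json::{json, Value};

    /// Hypothetical contract layer: each exposed method carries a JSON
    /// Schema for its params and result, published to clients so they can
    /// generate types without out-of-band sharing. Actual validation is
    /// left to whichever JSON Schema crate you prefer.
    struct MethodContract {
        name: &'static str,
        params_schema: Value,
        result_schema: Value,
    }

    fn contracts() -> Vec<MethodContract> {
        vec![MethodContract {
            name: "getUser",
            params_schema: json!({
                "type": "object",
                "properties": { "id": { "type": "string" } },
                "required": ["id"]
            }),
            result_schema: json!({
                "type": "object",
                "properties": { "id": { "type": "string" }, "name": { "type": "string" } },
                "required": ["id", "name"]
            }),
        }]
    }

    fn main() {
        // A server could publish this next to the RPC endpoint, much like an
        // OpenAPI document.
        for c in contracts() {
            println!("{}: params schema = {}", c.name, c.params_schema);
        }
    }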
To be honest, I don't really understand the point of Cap'n Web.. it's a slightly better JSON-RPC I guess?
I'm a fan of Cap'n Proto, but I don't understand how Cap'n Web is in any way related to it nor what makes it a big deal.
The selling point is promise pipelining: being able to do many chained operations with a single API call.
This includes the freaky way they found to make this work with map.
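To make that concrete, here is a conceptual sketch (this is not the real Cap'n Web message encoding or any actual client API, just the shape of the idea): a later call references the not-yet-resolved result of an earlier one, so both fit in a single round trip.

    use serde_json::json;

    fn main() {
        // Call #1: authenticate(token). The client assigns it result ID 1
        // without waiting for the server to answer.
        let call1 = json!(["call", 1, "authenticate", ["secret-token"]]);

        // Call #2: getUserProfile(user.id). Instead of a resolved value, the
        // argument is a reference to "result 1, field `id`".
        let call2 = json!(["call", 2, "getUserProfile", [["ref", 1, "id"]]]);

        // Both calls travel in one batch; the server resolves the reference
        // locally, so the client pays a single round trip.
        let batch = json!([call1, call2]);
        println!("{}", serde_json::to_string_pretty(&batch).unwrap());
    }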
OP here, and yes, the chained operations are excellent. Having built this, and now using it in some pet projects, that is the part I'm finding valuable.
Okay the name is very unfortunate. "Cap'n-rs" sounds like it's gonna be a Rust implementation of Cap'n Proto, the much older and more widely used and more widely known serialization format.
But I guess this is just to be expected from AI slop.
This is someone's rust port of the new "cap'n web" protocol Kenton just released, which he's calling a spiritual sibling to cap'n proto (which he also built). This might be AI slop but this info is all in the OP, come on man!
I know all that already and none of it goes against what I said, so why the antagonistic tone?
I do think that Kenton shares some of the blame by making their new JSON-based schemaless RPC system so similarly named to their old binary schema-based serialization format... I don't get the reasoning there. But that doesn't change what I said.
Yeah fair enough, I guess the core of it might just be the similar names!