38 comments

  • binalpatel 3 days ago

    Cool to see lots of people independently come to "CLIs are all you need". I'm still not sure if it's a short-term bandaid because agents are so good at terminal use, or if it's part of a longer-term trend, but it's definitely felt much more seamless to me than MCPs.

    (one of my many contributions: https://github.com/caesarnine/binsmith)

    • cosinusalpha 3 days ago

      I am also not sure if MCP will eventually be fixed to allow more control over context, or if the CLI approach really is the future for Agentic AI.

      Nevertheless, I prefer the CLI for other reasons: it is built for humans and is much easier to debug.

    • fudged71 3 days ago

      Thank you for posting binsmith. I've built something similar over the past few days, and you've made some great decisions in here.

    • 0x696C6961 3 days ago

      MCP lets you hide secrets from the LLM

      • pylotlight 3 days ago

        You can do the same thing with a CLI via env vars, no?

        • verdverm 3 days ago

          Yes. I'm using Dagger, and it has great secret support: it obfuscates secrets, so even if the agent, for example, cats the contents of a key file, it will never be able to read or print the secret value itself.

          tl;dr: there are a lot of ways to keep secret contents away from your agent, some without actually having to keep them "physically" separate.
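
          A minimal Python sketch of the env-var approach mentioned above (the variable name and redaction helper are hypothetical, not part of Dagger or any real tool): the secret lives only in the process environment, and any text echoed back to the agent is scrubbed first.

          ```python
          import os

          def redact(text: str, secrets: list) -> str:
              """Scrub secret values out of any text before it reaches the agent."""
              for s in secrets:
                  if s:
                      text = text.replace(s, "[REDACTED]")
              return text

          # The CLI process reads the key from its environment; the agent only
          # composes the command line and never handles the raw value.
          api_key = os.environ.get("MY_API_KEY", "")
          tool_output = "POST /v1/things key=%s -> 200 OK" % api_key
          print(redact(tool_output, [api_key]))
          ```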

    • desireco42 3 days ago

      Hey this looks cool. So each agent or session is one thread. Nice. I like it.

  • the_mitsuhiko 3 days ago

    At this point I'm fully down the path of the agent just maintaining its own tools. I have a browser skill that continues to evolve as I use it. It beats every alternative I have tried so far.

    • dtkav 3 days ago

      Same. Claude Opus 4.5 one-shots the basics of the Chrome debug protocol, and then you can go from there.

      Plus, now it is personal software... just keep asking it to improve the skill based on your usage. Bake in domain knowledge or business logic or whatever you want.

      I'm using this for e2e testing and debugging Obsidian plugins and it is starting to understand Obsidian inside and out.
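
      As a rough sketch of what "the basics of the Chrome debug protocol" looks like on the wire (the helper below only builds the JSON messages; actually sending them assumes Chrome launched with --remote-debugging-port and a websocket connection to the page's webSocketDebuggerUrl):

      ```python
      import itertools
      import json

      _ids = itertools.count(1)

      def cdp_command(method: str, **params) -> str:
          """Build one Chrome DevTools Protocol message: a JSON object with
          a monotonically increasing id, a method name, and params."""
          return json.dumps({"id": next(_ids), "method": method, "params": params})

      # These would be sent over the websocket advertised at
      # http://localhost:9222/json when Chrome runs with --remote-debugging-port=9222:
      print(cdp_command("Page.navigate", url="https://example.com"))
      print(cdp_command("Runtime.evaluate", expression="document.title"))
      ```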

      • chrisweekly 3 days ago

        Cool! Have you written more about this? (EDIT: from your profile, is that what https://relay.md is about?)

        • dtkav 3 hours ago

          Sorry it took me a while. Hopefully this helps:

          https://notes.danielgk.com/Obsidian/Obsidian+E2E+testing+Cla...

        • dtkav 3 days ago

          https://relay.md is a company I'm working on for shared knowledge management / AI context for teams, and the Obsidian plugin is what I am driving with my live-debug and obsidian-e2e skills.

          I can try to write it up (I am a bit behind this week though...), but I basically opened Claude Code and said "write a new skill that uses the chrome debug protocol to drive end to end tests in Obsidian" and then whenever it had problems I said "fix the skill to look up the element at the x,y coordinate before clicking" or whatever.

          Skills are just markdown files, sometimes accompanied by scripts, so they work really naturally with Obsidian.

          • chrisweekly 2 days ago

            Hey FWIW Relay is AWESOME!! The granular sharing of a given dir within a vault (vs the whole thing) finally solves the split-brain problem of personal (private) vault on my own hardware vs mandated use of a company laptop... it's fast, intuitive, and SOLVES this long-time thorn in my side. Thanks for creating it, high five, hope it leads to massive success for you! :)

            • dtkav 2 days ago

              Thank you for the kind words <3

    • cosinusalpha 2 days ago

      Do you experience any context pollution with that approach?

      • dtkav 3 hours ago

        Writing your own skill is actually a lot better for context efficiency.

        Your skill will be tuned to your use case over time, so if there's something that you do a lot you can hide most of the back-and-forth behind the python script / cli tool.

        You can even improve the skill by saying "I want to be more token efficient, please review our chat logs for usage of our skill and factor out common operations into new functions".

        If anything, context waste/rot comes from documentation of features that other people need but you don't. The skill should be a sharp knife, not a multi-tool.

      • the_mitsuhiko a day ago

        Not really. Less bad than the MCPs I used.

    • kinduff 3 days ago

      What's the name of the skill?

      • lgas 3 days ago

        why would that matter?

  • gregpr07 3 days ago

    Creator of Browser Use here. This is cool, a really innovative approach with ARIA roles. One idea we have been playing around with a lot is just giving the LLM raw HTML and a really good way to traverse it: no heuristics, just BS4. It seems to work well, but it's much more expensive than the current prod-ready [index]<div ... notation.

    • cosinusalpha 3 days ago

      Thanks!

      I actually tried a raw HTML approach when I was exploring solutions. It worked for "one-off" tasks, but I ran into major issues with replayability on modern SPAs.

      In React apps, the raw DOM structure and auto-generated IDs shift so frequently that a script generated from "Raw HTML" often breaks 10 minutes later. I found ARIA/semantics to be the only stable contract that persists across re-renders.

      You mentioned the raw HTML approach is "expensive". Did you feed the full HTML into the context, or did you create a BS4 "tool" for the LLM to query the raw HTML dynamically?
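
      For comparison, here is a minimal sketch of the kind of ARIA-first snapshot described above, using only Python's stdlib html.parser (webctl's real implementation presumably reads the browser's accessibility tree instead; the role mapping here is deliberately tiny and the class names are invented):

      ```python
      from html.parser import HTMLParser

      # Minimal tag-to-implicit-ARIA-role mapping (illustrative subset only).
      IMPLICIT_ROLES = {"button": "button", "a": "link", "input": "textbox", "nav": "navigation"}

      class AriaSnapshot(HTMLParser):
          """Collect (role, accessible-name-ish) pairs from raw HTML."""
          def __init__(self):
              super().__init__()
              self.nodes = []
              self._open_role = None

          def handle_starttag(self, tag, attrs):
              attrs = dict(attrs)
              role = attrs.get("role") or IMPLICIT_ROLES.get(tag)
              if role:
                  # Prefer an explicit aria-label; fall back to text content below.
                  self.nodes.append([role, attrs.get("aria-label", "")])
                  self._open_role = len(self.nodes) - 1

          def handle_data(self, data):
              if self._open_role is not None and data.strip():
                  if not self.nodes[self._open_role][1]:
                      self.nodes[self._open_role][1] = data.strip()
                  self._open_role = None

      html = '<div class="x9f"><button>Save</button><a href="/h">Home</a></div>'
      p = AriaSnapshot()
      p.feed(html)
      print(p.nodes)  # roles/names survive even when class names like "x9f" churn
      ```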

  • TheTaytay 3 days ago

    I really like this idea!

    I'd like to see this other browser plugin's API exposed via your same CLI, so I'm not limited to controlling a separate browser instance: https://github.com/remorses/playwriter (I haven't investigated enough to know how feasible it is, but as I was reading about your tool, I immediately wanted to control existing tabs from my main browser, rather than "just" a debug-driven separate browser instance.)

    • cosinusalpha 3 days ago

      Thanks! To clarify: webctl allows you to manually interact with the browser window at any time. It even returns "manual interaction" breakpoints to stdout if it detects an SSO/login wall.

      But I agree, attaching to the OS "daily driver" instance specifically would be a nice addition.

  • Agent_Builder 3 days ago

    Interesting approach. In our experience, most failures weren’t about which interface agents used, but about how much implicit authority they accumulated across steps. Control boundaries mattered more than the abstraction layer.

    • cosinusalpha 3 days ago

      I actually think the CLI approach helps with those boundaries. Because webctl commands are discrete and pipeable (e.g. webctl snapshot | llm | webctl click), the "authority" is reset at every step of the pipeline. It feels easier to audit a text stream of commands than a socket connection that might be accumulating invisible context.
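
      The "discrete and pipeable" idea can be sketched in Python, with each stage as a pure hand-off of text, so every step's input is fully visible and auditable (the stage functions are stand-ins modeled on the pipeline above, not real webctl subcommands):

      ```python
      def snapshot() -> str:
          # What `webctl snapshot` might emit: a text accessibility snapshot.
          return 'role=button name="Save"\nrole=link name="Home"'

      def llm(observation: str) -> str:
          # Stand-in for the model choosing one action from the snapshot text.
          return 'click role=button name="Save"'

      def click(action: str) -> str:
          # Stand-in for `webctl click` executing exactly the text it was handed.
          return f"executed: {action}"

      # The pipeline: every hand-off is plain text you can log and audit,
      # so no hidden authority accumulates between steps.
      print(click(llm(snapshot())))
      ```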

  • randito 3 days ago

    If you look at the Elixir keynote for Phoenix.new -- a cool agentic coding tool -- you'll see some hints about browser control using an API tool call. It's called "web" in the video.

    Video: https://youtu.be/ojL_VHc4gLk?t=2132

    More discussion: https://simonwillison.net/2025/Jun/23/phoenix-new/

  • renegat0x0 3 days ago

    A little bit different, but it also allows you to scrape efficiently. It uses JSON-over-HTTP communication rather than a CLI.

    https://github.com/rumca-js/crawler-buddy

    It's more like a framework for other mechanisms.

  • philipbjorge 3 days ago

    This looks remarkably similar to https://github.com/vercel-labs/agent-browser

    How is it different?

    • cosinusalpha 3 days ago

      To be honest, I hadn't seen that one yet!

      The main difference is likely the targeting philosophy. webctl relies heavily on ARIA roles/semantics (e.g. role=button name="Save") rather than injected IDs or CSS selectors. I find this makes the automation much more robust to UI changes.

      Also, I went with Python for V1 simply for iteration speed and ecosystem integration. I'd love to rewrite in Rust eventually, but Python was the most efficient way to get a stable tool working for my specific use case.

    • hugs 3 days ago

      vibium clicker, too. https://github.com/VibiumDev/vibium/blob/main/CONTRIBUTING.m...

      "browser automation for ai agents" is a popular idea these days.

  • desireco42 3 days ago

    How are you holding a session if every command is issued through the CLI? I assume this is essential for automation.

    • cosinusalpha 3 days ago

      A background daemon holds the session state between different CLI calls. This daemon is started automatically on the first webctl call and auto-closes after a timeout period of inactivity to save resources.
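
      The idle-timeout bookkeeping might look roughly like this (a sketch only; webctl's actual daemon internals aren't shown in this thread):

      ```python
      import time

      class SessionDaemon:
          def __init__(self, idle_timeout: float = 300.0):
              self.idle_timeout = idle_timeout
              self.last_used = time.monotonic()
              self.state = {}  # cookies, open pages, etc. persist here between CLI calls

          def handle(self, command: str) -> str:
              self.last_used = time.monotonic()  # any command resets the idle clock
              return f"ok: {command}"

          def should_shut_down(self, now=None) -> bool:
              now = time.monotonic() if now is None else now
              return now - self.last_used > self.idle_timeout

      d = SessionDaemon(idle_timeout=300)
      d.handle("snapshot")
      print(d.should_shut_down())                            # just used: keeps running
      print(d.should_shut_down(now=time.monotonic() + 600))  # idle past timeout: exits
      ```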

      • desireco42 3 days ago

        I see, nice. Is there a way to run multiple sessions?

        • cosinusalpha 2 days ago

          Yes, you can create isolated environments using the "--session NAME" flag.

          It isolates cookies and local storage for that specific run. Since it's a V1 release, there might be some edge cases in the session isolation - if you hit any, please open an issue!
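
          One plausible way to implement that isolation (a hypothetical layout, not webctl's actual on-disk format) is a dedicated profile directory per session name:

          ```python
          from pathlib import Path

          def session_dir(name: str, root: str = "~/.webctl/sessions") -> Path:
              """Map a --session NAME to its own profile directory, so cookies
              and local storage never leak across sessions. Path-unsafe
              characters are stripped to prevent directory traversal."""
              safe = "".join(c for c in name if c.isalnum() or c in "-_") or "default"
              return Path(root).expanduser() / safe

          print(session_dir("work"))
          print(session_dir("../evil"))  # traversal characters are stripped
          ```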

  • grigio 3 days ago

    Is there a benchmark? There are a lot of scraping agents nowadays...

    • cosinusalpha 3 days ago

      I don't have an objective benchmark yet. I tried several existing solutions, especially the MCP servers for browser automation, and none of them were able to reproducibly solve my specific task.

      An objective benchmark is a great idea, especially to compare webctl against other similar CLI-based tools. I'll definitely look into how to set that up.