Show HN: Sculptor, the Missing UI for Claude Code

(imbue.com)

105 points | by thejash 5 hours ago

51 comments

  • dalejh 5 hours ago

    Congrats on the launch Imbue team!

    I used Sculptor to build most of https://lingolog.app/ (featured in this post).

    It was a blast - I was cooking dinner and blasting out features, coming back to see what Sculptor had cooked up for me in the meantime. I also painted the landing page in Procreate while Sculptor was whirring away.

    Of course, this meant that my time shifted from producing code to reviewing code. I found the diffs, Sculptor's internal to-do list, and summaries all helpful to this end.

    n.b. I'm not affiliated with the team, but I worked with some Imbue team members many years ago, which led to my becoming a beta tester.

    • kanjun 5 hours ago

      I'm so happy to hear this — your experience was what we hoped to enable!

    • bfogelman 5 hours ago

      lfg! excited to see where you take LingoLog :)

  • kveykva 5 hours ago

    Even design-wise, this looks virtually identical to https://terragonlabs.com/

    • itchytoo 5 hours ago

      Imbue team member here. The biggest difference between Sculptor and Terragon is the collaboration model. With Terragon, the agent outputs PRs. That works well for simple tasks that agents can one-shot, but in my opinion it's clunky for more complex tasks that require closer human-agent collaboration. Sculptor, on the other hand, is designed for local collaboration. Our agents run in containers too, but we let you (bidirectionally) sync to the containers, which lets you stream in the agent's uncommitted changes and collaborate in real time. So basically, it feels like you are using Claude Code locally, but you get the safety and parallelism of running Claude in containers. I find this much more usable for real-world engineering tasks!

  • myflash13 5 hours ago

    It's not clear to me what a "container" and "pairing" are in this context. What if my application is not dockerized? Can Claude Code execute tests by itself in the context of the container when not paired? That requires all the dependencies, database, etc. - do they all share the same database? Running full containerized applications with many versions of Postgres at the same time sounds very heavy for a dev laptop. But if you don't isolate the database across parallel agents, you have to worry about database conflicts, which sounds nasty.

    In general I'm not even sure the extra cognitive overhead of agent multiplexing would save me time in the long run. I think I still prefer to work on one task at a time for the sake of quality and thoroughness.

    However, the feature I was most looking forward to is mobile integration, to check agent status from my phone while away from the keyboard.

    • thejash 4 hours ago

      Replying to each piece:

      > What if my application is not dockerized?

      Then Claude runs in a container created from our default image, and any code it executes will run in that container as well.

      > Can Claude Code execute tests by itself in the context of the container when not paired?

      Yup! It can do whatever you tell it. The "pairing" is purely optional -- it's just there in case you want to directly edit the agent's code from your IDE.

      > Do they all share the same database?

      We support custom Docker containers, so you should be able to configure it however you want (e.g., to have separate databases, or to share one, depending on what you need).

      > Running full containerized applications with many versions of Postgres at the same time sounds very heavy for a dev laptop

      Yeah -- it's not quite as bad if you run a single containerized Postgres and they each connect to a different database within that instance, but it's still a good point.
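      For concreteness, a minimal sketch of that "single Postgres instance, one database per agent" setup, using plain docker on the host (the network, container, and database names here are hypothetical, not anything Sculptor creates for you):

      ```shell
      # One shared Postgres instance instead of one per agent.
      docker network create sculptor-net
      docker run -d --name shared-pg --network sculptor-net \
        -e POSTGRES_PASSWORD=dev postgres:16

      # Wait until Postgres is ready to accept connections.
      until docker exec shared-pg pg_isready -U postgres; do sleep 1; done

      # Give each agent its own database inside that single instance,
      # so parallel agents can't clobber each other's data.
      for agent in agent1 agent2 agent3; do
        docker exec shared-pg createdb -U postgres "sculptor_${agent}"
      done
      ```

      Each agent container then connects with its own connection string pointing at shared-pg, which keeps data isolated without the weight of running several Postgres servers.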

      One of the features on our roadmap (that I'm very excited about) is the ability to use fully remote containers (which definitely gets rid of this "heaviness", though it can get a bit expensive if you're not careful)

      > the feature I was most looking forward to is a mobile integration to check the agent status while away from keyboard, from my phone.

      That's definitely on the roadmap!

    • penlu 5 hours ago

      in this context, the container contains the running claude instance, and pairing synchronizes its worktree with your local worktree.

      under sculptor, claude code CAN execute tests by itself when not paired. that will also work for non-dockerized applications.

      sharing a postgres across containers may require a bit of manual tweaking, but we support the devcontainer spec, so if you can configure e.g. your network appropriately that way, you can use a shared database as you like!
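      As an illustration of the devcontainer route, a hypothetical devcontainer.json fragment along those lines (the image, network name, and connection string are made up for this sketch, not Sculptor defaults):

      ```json
      {
        "name": "sculptor-agent",
        "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
        "runArgs": ["--network=sculptor-net"],
        "containerEnv": {
          "DATABASE_URL": "postgresql://postgres:dev@shared-pg:5432/agent_db"
        }
      }
      ```

      runArgs and containerEnv are standard devcontainer-spec properties; the rest is whatever your shared database setup expects.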

      regarding multiplexing: the cognitive overhead is real. we are investigating mechanisms for reducing it. more on that later.

      regarding mobile integration: we also want that! more on that later.

  • thadd3us 5 hours ago

    Really proud to be a part of this team! And really excited for the future of Sculptor -- it has quickly become my favorite agentic coding tool because of the way it lets you safely and locally execute untrusted LLM code in an agentic loop, using a containerized environment that you control!

  • rsyring 5 hours ago

    > Sculptor is free while we're in beta.

    Ok, and then what? Honest question.

    • thejash 5 hours ago

      Our current plan is to make the source code available and make it free for personal use, but we're not quite ready to open-source it.

      Someday we'll probably have paid plans and business / enterprise licenses available as well, but our focus right now is on making it really useful for people.

      To me, the whole point of our company is to make these kinds of systems more open, understandable, and modifiable, so at least as long as I'm here, that's what we'll be doing :)

      • giancarlostoro 5 hours ago

        If Anthropic doesn't buy you guys out before then. This looks a little too nice; I could see them trying to acquihire your efforts.

    • BatteryMountain 4 hours ago

      It's not free though. You pay for it by supplying your email address.

  • lrobinovitch 5 hours ago

    Been fortunate to get to try out Sculptor in pre-release - it's great. Like Claude Code with task parallelism and history built in, all with a clean UI.

  • mentalgear 4 hours ago

    Based on VibeKit (open source)?

    "VibeKit is a safety layer for your coding agent. Run Claude Code, Gemini, Codex — or any coding agent — in a clean, isolated sandbox with sensitive data redaction and observability baked in."

    https://docs.vibekit.sh/cli

    • thejash 4 hours ago

      Nope, not based on vibekit, but it looks like a cool project!

      Our approach is a bit more custom and deeply integrated with the coding agents (e.g., we understand when the turn has finished and can snapshot the Docker container, allowing rollbacks, etc.).

      We do also have a terminal though, so if you really wanted, I suppose you could run any text-based agent in there (although I've never tried that). Maybe we'll add better support for that as a feature someday :)

    • bfogelman 4 hours ago

      nope but vibekit looks interesting -- will take a look

  • kspacewalk2 5 hours ago

    How soon is Sculptor for Mac (Intel) coming? Excited to try it, but still hanging on to my last x86 MBP.

  • bfogelman 5 hours ago

    Member of the team here, happy to answer questions. It took a lot of ups, downs, and work to get here, but we're excited to finally get this out. Even more excited to share other features we've been cooking behind the scenes. Give it a try and let us know what you think; we're hungry for feedback.

    • cgarvis 5 hours ago

      Are you using git worktrees in the backend?

      • penlu 5 hours ago

        no sir. only the fullest featured repositories for our free-range* claudes

        * containerized, but meets free range standards

      • bfogelman 5 hours ago

        right now we're using docker -- we're planning to support modal (https://modal.com/) for remote sandboxes and a "local" mode that might use something like worktrees

  • sawyerjhood 3 hours ago

    Wow this looks just like https://terragonlabs.com

  • nvader 4 hours ago

    Incidentally, a research preview of Sculptor is what I used to build my voice practice app, Vocal Mirror: https://danverbraganza.com/tools/vocal-mirror

  • meowface 5 hours ago

    Looks good. Does the app have a dark theme option?

  • cchance 5 hours ago

    Silly question, but what about GPT? It feels like, with the experimental APIs that most of the clients added for interacting with the CLI agents, it should be possible for something like this to run GPT, Claude, or Gemini, no?

    • bfogelman 5 hours ago

      in the works! we want it to be possible to always have the best models and agents available

  • warthog 5 hours ago

    Wasn't Imbue training models for coding, having raised a huge round? Is this a pivot?

    • thejash 5 hours ago

      Since our launch 2 years ago, we've focused more on the "agents that code" part of our vision (so that everyone can make software) rather than the "training models from scratch" part (because so many good open-source models have been released since then).

      This is from our fundraising post 2 years ago:

      > Our goal remains the same: to build practical AI agents that can accomplish larger goals and safely work for us in the real world. To do this, we train foundation models optimized for reasoning. Today, we apply our models to develop agents that we can find useful internally, starting with agents that code. Ultimately, we hope to release systems that enable anyone to build robust, custom AI agents that put the productive power of AI at everyone’s fingertips.

      - https://imbue.com/company/introducing-imbue/

      We have trained a bunch of our own models since then, and are excited to say more about that in the future (but it's not the focus of this release).

  • kate_dirky 2 hours ago

    Congrats on the launch! Looks awesome

  • handfuloflight 4 hours ago

    How does it compare with https://conductor.build/?

    • kanjun 4 hours ago

      Great question! Agents in Sculptor run in containers vs. locally on your machine, so they can all execute code simultaneously (and won't destroy your machine).

      Containers also unlock a cool agent-switching workflow, Pairing Mode: https://loom.com/share/1b02a925be42431da1721597687f7065

      Ultimately, our roadmaps are pretty different — we're focused on ways to help you easily verify agent code, so that over time you can trust it more and work at a higher level.

      Towards this, today we have a beta feature, Suggestions, that catches issues, bugs, and times when Claude lies to you as you're working. That'll get built out a lot over the next few months.

      • handfuloflight 3 hours ago

        Excellent, I'll be giving it a go this week! All the best.

  • mangonomnom 5 hours ago

    got to try this a bit and really liked the UI! It felt very transparent and understandable even for someone without a coding background

    • thejash 5 hours ago

      Thanks!

      Please feel free to join our Discord if you run into any bugs or have any issues at all; we're happy to help: https://discord.gg/sBAVvHPUTE

      Suggestions welcome too!

  • jMyles 5 hours ago

    So... are we all just working on various ways of using Claude Code in docker with git worktrees? Is that like, the whole world's project this month? :-)

    • nvader 5 hours ago

      Seems like an important project to unlock a whole lot of productivity.

      That said, Sculptor does not use worktrees, but that is an implementation detail.

    • manojlds 5 hours ago

      It's the new TODO app. Anthropic are just going to build one or acquire one of these soon and the rest will be dead.

    • bfogelman 4 hours ago

      haha honestly a little bit, ya. One thing we've learned from working on this is that lowering the barrier to working in parallel is key. Making it easy to merge, switch context, etc. all matters as you try to parallelize things. I'm pretty excited about "pairing mode" for this reason, as it mirrors an agent's branch locally so you can make your own edits quickly and test changes.

      We've also shipped "suggestions" in beta (think CI pipelines for your parallel agents), which might feel a little different. The idea is to use LLMs and your regular coding tools (pytest, pyre, ...) to verify that the code produced by the agents is actually correct.

  • kate_dirky 2 hours ago

    Congrats! Looks awesome

  • ajanuary 5 hours ago

    Unfortunately, the page repeatedly crashes and reloads on my iPhone 13 mini until it gives up.

    • kanjun 5 hours ago

      That's super strange — would you mind trying again on a different device? We can't repro. Appreciate your trying!

    • HellsMaddy 5 hours ago

      I'm also unable to load it in chrome on linux (wayland backend). Seems like some sort of GPU issue.

      • MitziMoto 4 hours ago

        Same. Chrome on Manjaro with Wayland, just crashes.

        • penlu 4 hours ago

          more info on setup, if you can: are you using a non-intel GPU for rendering?

        • bfogelman 4 hours ago

          hmm not ideal -- will take a look and see what's going wrong

  • kuroko 4 hours ago

    Congrats, looks good! It lacks an option to configure the Anthropic base URL though; hope you will add a way to configure env variables.

  • micimize 5 hours ago

    Exciting stuff! A big step towards an accelerated AI-assisted SWE approach that avoids the trap of turning engineers into AI slop janitors.