8 comments

  • Trufa 10 hours ago

    Is this an alternative to https://mastra.ai/docs?

    How would it compare?

    • randall 10 hours ago

      So I look at something like Mastra (or LangChain) as agent orchestration, where you do computing tasks to line up things for an LLM to execute against.

      I look at Gambit as more of an "agent harness", meaning you're building agents that can decide what to do more than you're orchestrating pipelines.

      Basically, if we're successful, you should be able to chain agents together to accomplish things extremely simply (using markdown). Mastra, as far as I'm aware, is focused on helping people use programming languages (typescript) to build pipelines and workflows.

      So yes, it's an alternative, but more an alternative approach than a direct competitor, if that makes sense.

  • Agent_Builder 6 hours ago

    We ran into similar reliability issues while building GTWY. What surprised us was that most failures weren’t about model quality, but about agents being allowed to run too long without clear boundaries.

    What helped was treating agents less like “always-on brains” and more like short-lived executors. Each step had an explicit goal, explicit inputs, and a defined end. Once the step finished, the agent stopped and context was rebuilt deliberately.

    Harnesses like this feel important because they shift the problem from “make the model smarter” to “make the system more predictable.” In our experience, reliability came more from reducing degrees of freedom than from adding intelligence.
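
    For illustration, a minimal sketch of what one of those short-lived steps could look like. This is not GTWY's actual API; every name here (StepSpec, runStep, callModel) is hypothetical, and callModel is a stub standing in for a real LLM client.

    ```typescript
    // Hypothetical sketch of a short-lived executor step: explicit goal,
    // explicit inputs, a hard turn budget, and a defined end.
    interface StepSpec {
      goal: string;                    // what this step must accomplish
      inputs: Record<string, string>;  // the only context the agent sees
      maxTurns: number;                // hard boundary so it can't run forever
    }

    interface StepResult {
      done: boolean;
      output: string;
    }

    // Stand-in for whatever LLM client you actually use.
    async function callModel(prompt: string): Promise<string> {
      return `DONE: stub reply for "${prompt.slice(0, 40)}..."`;
    }

    async function runStep(spec: StepSpec): Promise<StepResult> {
      // Context is rebuilt from scratch for every step, not carried over.
      let transcript = `Goal: ${spec.goal}\nInputs: ${JSON.stringify(spec.inputs)}\n`;
      for (let turn = 0; turn < spec.maxTurns; turn++) {
        const reply = await callModel(transcript);
        transcript += reply + "\n";
        // The model must explicitly declare completion; otherwise it is
        // cut off at the turn budget rather than left running.
        if (reply.startsWith("DONE:")) {
          return { done: true, output: reply.slice("DONE:".length).trim() };
        }
      }
      return { done: false, output: transcript };
    }
    ```

    The point of the shape is the boundaries: the agent can't see anything outside `inputs`, can't run past `maxTurns`, and has to end in a defined state either way.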

    • brap 4 hours ago

      This seems to be where it’s at right now: we can’t seem to make the models significantly more intelligent, so we “inject” our own intelligence into the system in the form of good old-fashioned code.

      My philosophy is to make the LLMs do as little work as possible. Only small, simple steps. Anything that can reasonably be done in code (orchestration, tool calls, etc.) should be done in code. Basically, any time you find yourself instructing an LLM to follow a certain recipe, break it down into multiple agents and do what you can with code.
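
      A rough sketch of that split, purely for illustration: the ticket-triage example and the stub llm() call are made up, standing in for whatever real model client and task you'd have.

      ```typescript
      // Sketch: deterministic code owns the recipe; the LLM only gets
      // small, single-purpose calls it can't wander away from.
      async function llm(prompt: string): Promise<string> {
        // Stand-in for a real model call.
        return `stub answer for: ${prompt.slice(0, 40)}`;
      }

      type TicketKind = "bug" | "billing" | "other";

      async function classifyTicket(ticket: string): Promise<TicketKind> {
        // One small, bounded LLM task: pick a label.
        const answer = await llm(`Answer with exactly one word (bug, billing, other):\n${ticket}`);
        if (answer.includes("bug")) return "bug";
        if (answer.includes("billing")) return "billing";
        return "other";
      }

      async function handleTicket(ticket: string): Promise<string> {
        // The "recipe" lives in ordinary code, not in a prompt.
        const kind = await classifyTicket(ticket);
        switch (kind) {
          case "bug":
            return llm(`Draft a short reply acknowledging this bug report:\n${ticket}`);
          case "billing":
            return "Routed to billing queue."; // no model call needed at all
          default:
            return llm(`Draft a brief, polite reply to:\n${ticket}`);
        }
      }
      ```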

      • randall 4 hours ago

        i have a slightly different but related take. the models actually are getting smarter, and now the challenge becomes successfully communicating intent with them instead of simply getting them to do anything remotely useful.

        Gambit hopefully solves some of that, giving you a set of primitives and principles that make it simpler to communicate intent.

  • tomhow 10 hours ago

    [under-the-rug stub]

    [see https://news.ycombinator.com/item?id=45988611 for explanation]

    • randall 10 hours ago

      thx, i appreciate it, believe it or not. :)

  • salesplay 7 hours ago

    This is an interesting direction for agent frameworks. What stood out to me is the shift from simple tool orchestration to agents that can reason, call other agents, and self-manage workflows. That’s something we’ve been thinking about a lot while building SalesPlay — especially around how autonomous sales agents need clear evaluation, guardrails, and accountability to actually be useful in real GTM teams. The built-in grading/evaluation angle here feels like a practical step toward making agents less brittle and more production-ready. Curious to see how this evolves in real-world use cases.