29 comments

  • schmuhblaster 15 hours ago

    Shameless self-plug: https://github.com/deepclause/deepclause-sdk/

    The idea is to take markdown instructions and "compile" them into a Prolog-based DSL that orchestrates both deterministic and LLM-based components. The (meta-)interpreter of the DSL automatically tracks the entire execution process, so the final output becomes observable and more explainable. Still at an early stage, but I'm having lots of fun with it and would love to explore possible use cases.
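
    Not the actual SDK, but a rough Python sketch of the tracking idea (the real DSL is Prolog-based, and all names here are made up): every step, deterministic or LLM-backed, runs through one interpreter that records a trace.

```python
# Toy version: one interpreter runs every step (deterministic or
# LLM-backed) and records a trace, so the final answer can be walked
# back through every intermediate result.
from dataclasses import dataclass, field

@dataclass
class Interpreter:
    trace: list = field(default_factory=list)

    def run(self, step_name, fn, *args):
        result = fn(*args)
        self.trace.append({"step": step_name, "args": args, "result": result})
        return result

def normalize(text):          # deterministic step
    return text.strip().lower()

def fake_llm_classify(text):  # stand-in for an LLM call
    return "question" if text.endswith("?") else "statement"

interp = Interpreter()
cleaned = interp.run("normalize", normalize, "  What is revenue?  ")
label = interp.run("classify", fake_llm_classify, cleaned)

assert label == "question"
assert [e["step"] for e in interp.trace] == ["normalize", "classify"]
```

    The trace is what makes the output explainable: each entry ties a result to the step and inputs that produced it.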

  • rossjudson a day ago

    From a systems engineering standpoint, the purpose of LLMs is to construct, verify, and "push down" abstractions and deterministic layers. Deterministic layers are able to cope reliably with the law of medium numbers.

  • eddiehammond a day ago

    Anthropic published a profile on what we're building at Kepler. Sharing because the architectural argument (LLM for intent, deterministic code for retrieval and computation, every number traceable to source) is the part I'd actually want HN to push on. Happy to answer questions in the thread.
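
    A minimal sketch of the "every number traceable to source" part, under my own assumptions about how it could work (the field and source names are invented, not Kepler's actual schema): each value carries references to the filings it came from, and derived numbers keep the union of their inputs' sources.

```python
# Sketch: a value that remembers its sources; arithmetic on traced
# values propagates provenance automatically. All names are made up.
from dataclasses import dataclass

@dataclass(frozen=True)
class Traced:
    value: float
    sources: frozenset  # e.g. {"10-Q 2024 Q1"}

    def __add__(self, other):
        return Traced(self.value + other.value,
                      self.sources | other.sources)

q1 = Traced(10.0, frozenset({"10-Q 2024 Q1"}))
q2 = Traced(12.0, frozenset({"10-Q 2024 Q2"}))
total = q1 + q2

assert total.value == 22.0
assert total.sources == {"10-Q 2024 Q1", "10-Q 2024 Q2"}
```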

    • jochem9 a day ago

      I'm on a very similar train. You cannot dump all the data into an LLM (for many reasons) and we also already have clearly defined rules that an LLM doesn't have to figure out.

      So keep organizing data (LLM-powered, of course), so that you can query it as usual (multimodal, so not just graphs, but also time series, relational, etc.). Feed that to deterministic computations. Let an LLM reason about the outcomes.

      Give the LLM the freedom to orchestrate the retrieval and computations. Make sure the way it orchestrates it is auditable.

      The key thing I want to achieve goes beyond this system: I want to uncover hidden things in the system (missing in the ontology, computations, etc.) and propose adding them. That effectively gives you a generic approach to creating ever-evolving systems that align with reality while remaining fully auditable.
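
      A toy sketch of the orchestration part (tool names are invented): the LLM only picks which registered, deterministic computation to run, and every choice and result is logged, so the whole run is auditable.

```python
# Sketch: the LLM proposes a plan over registered deterministic tools;
# execution logs every call, producing an audit trail. Names made up.
TOOLS = {
    "avg_revenue": lambda rows: sum(r["revenue"] for r in rows) / len(rows),
    "max_revenue": lambda rows: max(r["revenue"] for r in rows),
}

def orchestrate(plan, rows, audit_log):
    results = {}
    for tool_name in plan:                      # plan would come from the LLM
        value = TOOLS[tool_name](rows)          # deterministic computation
        audit_log.append((tool_name, value))    # auditable trail
        results[tool_name] = value
    return results

rows = [{"revenue": 100}, {"revenue": 300}]
log = []
out = orchestrate(["avg_revenue", "max_revenue"], rows, log)

assert out == {"avg_revenue": 200.0, "max_revenue": 300}
assert log == [("avg_revenue", 200.0), ("max_revenue", 300)]
```

      Any tool name not in the registry fails loudly instead of letting the LLM improvise a computation.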

      • eddiehammond 21 hours ago

        The last part is what we're very excited by too: using orchestration logs and failure traces to surface gaps in the ontology and propose extensions. Early days, but that's where the architecture compounds: the system gets more complete every time it's used.

    • bjelkeman-again a day ago

      Very interesting. What size team does it take to build this, incl. analysts, project managers, product managers, etc.? How long did you spend on analysis before building, and how long until the first customer was using it?

    • saadatq a day ago

      Could I get a link to the Kepler finance site? Googling "Kepler financial" yields 5-6 other finserv companies.

      • eddiehammond a day ago

        Yep! kepler.ai. We're working on improving SEO here; it's a popular name.

  • Txmm 20 hours ago

    Reassuring to see this approach coming up consistently. I’ve been doing the same for high-volume data pipelines: extracting the deterministic actions from markdown instructions and leaving the LLM to do the analysis and act as the fluid coupling between deterministic parts.

    Over time you can refine this to be more and more codified, handle edge cases with agents/LLMs then turn them into first class deterministic branches too.

    This pattern seems to be emerging everywhere; using chain of thought and intent capture to improve it seems to be the next big thing.
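
    A small sketch of that refinement loop, with invented handler names: route what you can to codified branches, fall back to the LLM for the rest, and promote an edge case to a first-class deterministic branch once it's understood.

```python
# Sketch: deterministic branches handle known cases; the LLM is the
# fallback "fluid coupling"; understood edge cases get promoted.
deterministic_branches = {}

def handle(record):
    kind = record["kind"]
    if kind in deterministic_branches:
        return deterministic_branches[kind](record)   # codified path
    return llm_fallback(record)                       # fluid coupling

def llm_fallback(record):
    # Stand-in for an LLM call; real code would flag this for review.
    return {"status": "needs_review", "kind": record["kind"]}

def promote(kind, fn):
    # Edge case understood -> becomes a first-class deterministic branch.
    deterministic_branches[kind] = fn

assert handle({"kind": "refund"})["status"] == "needs_review"
promote("refund", lambda r: {"status": "refunded"})
assert handle({"kind": "refund"}) == {"status": "refunded"}
```

    Over time the fallback path shrinks and the registry of codified branches grows, which is the refinement Txmm describes.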

  • hweaHG a day ago

    The people who built this were at Palantir before. How is the verifiable targeting of girls' schools in Iran by the Claude-powered Maven system going?

    We are living in an age of hot air.

  • HoyaSaxa a day ago

    The title is misleading. They achieved a 94% accuracy rate, which in financial services is a far cry from acceptable without a human-in-the-loop verifier.

  • eddiehammond 21 hours ago

    Mandatory pitch - if working on this kind of problem is interesting to you, we're hiring! jobs.ashbyhq.com/kepler-ai

  • hbcondo714 a day ago

    > Indexed 26M+ SEC filings

    But the https://kepler.ai website says 10M+

    • eddiehammond 21 hours ago

      Good catch! The site was stale; updated it to reflect the 26M+.

      • hbcondo714 21 hours ago

        Not to be picky, but the careers page still says "Live in production. 10M+ SEC filings"

        https://jobs.ashbyhq.com/kepler-ai

        I just wanted to learn more about the company, but I reside in California and the open roles are in New York.

      • pugio 21 hours ago

        This interaction was a delightful example of life in 2026: the disparity between what AI can do, and what and how we use AI. (Which I like to term for myself "Phenomenal cosmic powers!... Itty bitty living space.")

  • a day ago
    [deleted]
  • hottrends a day ago

    [flagged]

  • Noahxel a day ago

    [flagged]

  • hansmayer a day ago

    > The duo’s answer was to build deterministic infrastructure that serves as a trust and verification layer for AI.

    On the one hand, very encouraging to see plain old deterministic infra w/o using slop machines.

    On the other hand, this is a recognition that LLMs are just additional friction in the system that we would be better off without in the first place!

    • bjelkeman-again a day ago

      Just friction? What do you mean? What would you do instead?

      • hansmayer a day ago

        Well... You have a 'tool' that you cannot trust. Present everywhere due to the unholy alliance between the LLM companies and the exhilarated office worker cretins who "use" them to do "workflows". Now they fuck up stuff. Sounds like friction to me, or do you value the LLMs as a net positive? Why should I do something to fix their problems instead?

    • SpicyLemonZest a day ago

      You're misunderstanding something about the problem space they're describing. The deterministic infra is for an underlying "execution layer"; the LLMs are providing utility by figuring out how to express English language queries in terms of the primitives of that verifiable layer. That way, you can describe your results deterministically even though the process of arriving at them was not necessarily deterministic.
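
      A minimal sketch of that split, with invented primitives: the LLM's only job is to compile an English question into a plan over verifiable primitives, and executing the plan is deterministic, so the same plan always yields the same number.

```python
# Sketch: a tiny "verifiable layer" of deterministic primitives, and a
# plan (which in a real system the LLM would emit) executed over them.
PRIMITIVES = {
    "filter_year": lambda rows, year: [r for r in rows if r["year"] == year],
    "sum_field":   lambda rows, field: sum(r[field] for r in rows),
}

def execute(plan, rows):
    data = rows
    for op, *args in plan:
        data = PRIMITIVES[op](data, *args)   # deterministic, replayable
    return data

rows = [{"year": 2023, "revenue": 5}, {"year": 2024, "revenue": 7}]
# Pretend the LLM turned "total 2024 revenue?" into this plan:
plan = [("filter_year", 2024), ("sum_field", "revenue")]

assert execute(plan, rows) == 7
assert execute(plan, rows) == 7   # same plan, same answer, every time
```

      The plan itself is the auditable artifact: you can inspect it, replay it, and check whether it actually answers the question, independently of the LLM that produced it.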

      • jmogly 7 hours ago

        How do you know the LLM is correctly translating the English queries to the verifiable primitives? It seems like it’s just pushing the problem to another layer?

      • hansmayer a day ago

        Oh. I may have misread indeed. So it's like, still LLM bullshit, but with really strongly worded .md instruction files begging them to please be correct?

        • SpicyLemonZest a day ago

          No. The point of the verification layer is that you don't have to beg the LLM to please be correct.