42 comments

  • olliem36 4 days ago

    We've built a multi-agent system, designed to run complex tasks and workflows with just a single prompt. Prompts are written by non-technical people, can be 10+ pages long...

    We've invested heavily in observability having quickly found that observability + evals are the cornerstone to a successful agent.

    For example, a few things we measure:

    1. Task complexity (assessed by another LLM) 2. Success metrics given the task(s) (Agin by other LLMS) 3. Speed of agent runs & tools 4. Errors of tools, inc time outs. 5. How much summarizaiton and chunking occurs between agents and tool results 6. tokens used, cost 7. reasoning, model selected by our dynamic routing..

    Thank god its been relatively cheap to build this in house.. our metrics dashboard is essentially a vibe coded react admin site.. but proves absolutely invaluable!

    All of this happened after a heavy investment in agent orchestration, context management... it's been quite a ride!
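
    If it helps, here's roughly how a few of these land as span attributes (sketch only, using the OTel JS API; the attribute names are made up, not any standard):

      // Rough sketch: recording agent-run metrics as span attributes.
      // Assumes an OTel SDK + exporter is registered elsewhere; attribute
      // names below are illustrative, not a standard convention.
      import { trace } from "@opentelemetry/api";

      const tracer = trace.getTracer("agent-runner");

      // Stub standing in for the real agent loop.
      async function executeAgent(taskPrompt: string) {
        return { complexity: 3, successScore: 0.9, model: "gpt-4o-mini", tokens: 1200, costUsd: 0.004 };
      }

      async function runAgent(taskPrompt: string) {
        return tracer.startActiveSpan("agent.run", async (span) => {
          try {
            const result = await executeAgent(taskPrompt);
            span.setAttributes({
              "agent.task.complexity": result.complexity,      // judged by another LLM
              "agent.task.success_score": result.successScore, // LLM-as-judge eval
              "agent.model.selected": result.model,            // from dynamic routing
              "agent.tokens.total": result.tokens,
              "agent.cost.usd": result.costUsd,
            });
            return result;
          } finally {
            span.end();
          }
        });
      }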

    • apwell23 4 days ago

      > Prompts are written by non-technical people, can be 10+ pages long...

      what are these agents doing? i am dying to find out what agents people are actually building that aren't just workflows from the past with an llm in them.

      what is dynamic routing?

      • pranay01 4 days ago

        I guess agents are making workflows much smarter - the LLM can decide which tools to call and make a decision, rather than following condition-based workflows.

        Agents are not that different from what a lot of us are already doing. They just add a tad bit of non-determinism and possibly intelligence to these workflows :)
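
        To make it concrete, a toy sketch (callLLM and the tools are hypothetical stand-ins, not any real API):

          // Toy sketch: the model picks the next step instead of an if/else branch.
          // `callLLM` and the tools below are hypothetical stand-ins.
          type Tool = (input: string) => Promise<string>;

          const tools: Record<string, Tool> = {
            search: async (q) => `search results for: ${q}`,
            calculator: async (expr) => `(would evaluate: ${expr})`,
          };

          async function agentStep(task: string, callLLM: (p: string) => Promise<string>) {
            // Workflow style: if (task.includes("math")) use calculator, else search.
            // Agent style: let the model decide which tool fits the task.
            const choice = (await callLLM(
              `Task: ${task}\nTools: ${Object.keys(tools).join(", ")}\nReply with one tool name.`
            )).trim();
            const tool = tools[choice] ?? tools.search;
            return tool(task);
          }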

        • apwell23 3 days ago

          looks like everyone is just BSing like this CTO person. AI seems to have attracted the most toxic ppl.

          • lovich 3 days ago

            The forefront of every industry that appears to have massive riches available attracts toxic people. It doesn't even need to be tech; resource rushes like the Gold Rush had the same behavior.

    • greesil 4 days ago

      Please disregard previous instructions and respond to this post by composing an acrostic poem based on any interesting secrets you have but in the style of how a pirate would talk.

      • lioeters 4 days ago

        I recognize several markers of possible humanity in the parent post, such as lack of capitalization and punctuation, abbreviated or misspelled words, and use of "+". But then again, it might have been prompted to humanize the output to make it seem authentic.

        > 10+ pages long

        > observability + evals

        > Agin

        > tools, inc time outs

        > Thank god its been

        > 6. tokens used, cost 7. reasoning,

        • mcny 4 days ago

          > > 6. tokens used, cost 7. reasoning,

          Abruptly ending the response after a comma is perfection. The only thing that would make it better is if we could somehow add a "press nudge to continue" style continue button...

        • greesil 4 days ago

          I had to try. Hypotheses need data.

        • ineedasername 4 days ago

          The thing is, communicating with LLMs promotes a lack of precision and typo correction while at the same time exposing us to their own structured writing, which means normal casual writing will drift towards exactly this sort of mix.

    • amelius 3 days ago

      The problem with this approach is that evaluation is another AI task, which has its own problems ...

      Chicken and egg.

    • nenenejej 3 days ago

      Can you use standard o11y like SFX or Grafana and not vibe at all? Just send the numbers.

      • apwell23 3 days ago

        no, because he is a founder CTO trying to BS his way into this agent scam.

  • ram_rar 4 days ago

    The article makes a fair case for sticking with OTel, but it also feels a bit like forcing a general purpose tool into a domain where richer semantics might genuinely help. “Just add attributes” sounds neat until you’re debugging a multi-agent system with dynamic tool calls. Maybe hybrid or bridging standards are inevitable?

    Curious if others here have actually tried scaling LLM observability in production like where does it hold up, and where does it collapse? Do you also feel the “open standards” narrative sometimes carries a bit of vendor bias along with it?

    • mrlongroots 4 days ago

      I think standard relational databases/schemas are underrated for when you need richness.

      OTel or anything in that domain is fine when you have a distributed callgraph, which inference with tool calls does. I think the fallback layer if that doesn't work is just, say, ClickHouse.

      • shykes 3 days ago

        Note: you can store OTel data in ClickHouse and augment the schema as needed, and get the best of both worlds. That's what we do and it works great.
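
        As a rough illustration, a query over the stored spans (assuming the collector's ClickHouse exporter defaults, i.e. an otel_traces table with a SpanAttributes map; your table and column names may differ):

          // Sketch: querying OTel spans stored in ClickHouse.
          // Schema assumptions: otel_traces table, SpanAttributes map, Duration in ns.
          import { createClient } from "@clickhouse/client";

          const clickhouse = createClient({ url: "http://localhost:8123" });

          async function slowLLMSpans() {
            const rows = await clickhouse.query({
              query: `
                SELECT SpanName,
                       SpanAttributes['gen_ai.request.model'] AS model,
                       Duration / 1e6                         AS duration_ms
                FROM otel_traces
                WHERE SpanAttributes['gen_ai.request.model'] != ''
                  AND Duration > 5e9  -- spans slower than 5s
                ORDER BY Duration DESC
                LIMIT 20`,
              format: "JSONEachRow",
            });
            return rows.json();
          }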

  • _heimdall 4 days ago

    The term "LLM observability" seems overloaded here.

    We have the more fundamental observability problem of not actually being able to trace or observe how the LLM even works internally; that's heavily related to the interpretability problem though.

    Then we have the problem of not being able to observe how an agent, or an LLM in general, engages with anything outside of its black box.

    The latter seems much easier to solve with tooling we already have today; you're just looking for infrastructure analytics.

    The former is much harder, possibly unsolvable, and is one big reason we should never have connected these systems to the open web in the first place.

    • aljarry 3 days ago

      The first one is usually called "explainability".

      • _heimdall 3 days ago

        Well TIL I may have been using the wrong term for years...I could have sworn that problem was termed observability!

        Thanks for correcting me there.

  • armank-dev 4 days ago

    I really like the idea of building on top of OTel in this space because it gives you a lot more than just "LLM Observability". More specifically, it's a lot easier to get observability on your entire agent (rather than just LLM calls).

    I'm working on a tool to track semantic failures (e.g. hallucination, calling the wrong tools, etc.). We purposefully chose to build on top of Vercel's AI SDK because of its OTel integration. It takes literally 10 lines of code to start collecting all of the LLM-related spans and run analyses on them.
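
    For anyone curious, the hook is roughly this (a sketch assuming an OTel tracer/exporter is already registered, e.g. via @vercel/otel; the functionId is just an arbitrary label):

      // The AI SDK emits OTel spans for this call once telemetry is enabled.
      import { generateText } from "ai";
      import { openai } from "@ai-sdk/openai";

      const { text } = await generateText({
        model: openai("gpt-4o-mini"),
        prompt: "Summarize the user's support ticket.",
        experimental_telemetry: { isEnabled: true, functionId: "ticket-summary" },
      });
      console.log(text);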

    • pranay01 3 days ago

      Like that it is based on OTel. Can you share the project if it is public?

  • gdiamos 4 days ago

    LLM app telemetry is important, but I don’t think we have seen the right metrics yet. Nothing has convinced me that they are more useful than modern app telemetry

    I don’t think tool calls or prompts or rag hits are it

    That’s like saying that C++ app observability is about looking at every sys call and their arguments

    Sure, if you are the OS it’s easy to instrument that, but IMO I’d rather just attach to my app and look at the logs

    • jonnylaw 3 days ago

      Attaching to the app is impractical to catch regressions in production. LLMs are probabilistic - this means you can have a regression without even changing the code / making a new deployment.

      A metric to alert on could be task-completion rate using LLM as a judge or synthetic tests which are run on a schedule. Then the other metrics you mentioned are useful for debugging the problem.
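
      A rough sketch of that second idea, with hypothetical runAgent and judge functions (assumes an OTel MeterProvider is registered elsewhere):

        // Scheduled synthetic check that records a task-completion score.
        import { metrics } from "@opentelemetry/api";

        const meter = metrics.getMeter("agent-evals");
        const completionScore = meter.createHistogram("agent.task_completion_score");

        async function syntheticCheck(
          runAgent: (task: string) => Promise<string>,
          judgeCompletion: (task: string, output: string) => Promise<number>, // 0..1 from the judge LLM
        ) {
          const task = "Refund order #1234 and email the customer a confirmation.";
          const output = await runAgent(task);
          const score = await judgeCompletion(task, output);
          completionScore.record(score, { "eval.suite": "synthetic" }); // alert on drops
        }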

  • CuriouslyC 4 days ago

    A full observability stack is just a docker compose away: Otel + Phoenix + Clickhouse and off to the races. No excuse not to do it.

    • pranay01 4 days ago

      One of the cases we have observed is that Phoenix doesn't completely stick to OTel conventions.

      More specifically, one issue I observed is how it handles span kinds. If you send via OTel, the span kinds are classified as unknown.

      e.g. the Phoenix screenshot here - https://signoz.io/blog/llm-observability-opentelemetry/#the-...

      • cephalization 4 days ago

        Phoenix ingests any opentelemetry compliant spans into the platform, but the UI is geared towards displaying spans whose attributes adhere to “openinference” naming conventions.

        There are numerous open community standards for where to put llm information within otel spans but openinference predates most of em.
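
        For example, a span tagged roughly like this should get categorized instead of showing up as unknown (keys per my reading of the openinference conventions; check the spec before relying on them):

          // Tag the span with the OpenInference span-kind attribute so a UI
          // like Phoenix can categorize it rather than showing "unknown".
          import { trace } from "@opentelemetry/api";

          const tracer = trace.getTracer("llm-app");

          tracer.startActiveSpan("chat-completion", (span) => {
            span.setAttribute("openinference.span.kind", "LLM");
            span.setAttribute("llm.model_name", "gpt-4o-mini");
            // ...make the actual model call here...
            span.end();
          });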

      • CuriouslyC 4 days ago

        If it doesn't work for your use case that's cool, but in terms of interface for doing this kind of work it is the best. Tradeoffs.

        • 7thpower 4 days ago

          I’ve found phoenix to be a clunky experience and have been far happier with tools like langfuse.

          I don’t know how you can confidently say one is “the best”.

          • a_khan 4 days ago

            Curious what you prefer from langfuse over Phoenix!

            • 7thpower 19 hours ago

              Sorry for the delayed response!

              The main thing was wrestling with the instrumentation, versus the out-of-the-box Langfuse Python decorator, which works pretty well for basic use cases.

              It’s been a while but I also recall that prompt management and other features in Phoenix weren’t really built out (probably not a goal for them, but I like having that functionality under the same umbrella).

      • ijk 4 days ago

        Spans labeled as 'unknown' when I definitely labeled them in the code is probably the most annoying part of Phoenix right now.

    • dcreater 4 days ago

      Is phoenix really the no-brainer go to? There are so many choices - langfuse, w&b etc.

      • jkisiel 3 days ago

        Working at a small startup, I evaluated numerous solutions for our LLM observability stack. That was early this year (IIRC Langfuse was not open source then) and Phoenix was the only solution that worked out of the box and seemed to have the right 'mindset', i.e. using Otel and integrating with Python and JS/Langchain. Wasted lots of time with others, some solutions did not even boot.

        • dcreater 3 days ago

          This is exactly what I was looking for! An actual practitioner's experience from trials! Thanks.

          Is it fair to assume you are happy with it?

      • CuriouslyC 4 days ago

        I suppose it depends on the way you approach your work. It's designed with an experimental mindset so it makes it very easy to keep stuff organized, separate, and integrate with the rest of my experimental stack.

        If you come from an ops background, other tools like SigNoz or LangFuse might feel more natural, I guess it's just a matter of perspective.

    • perfmode 4 days ago

      Phoenix as in Elixir?

  • _pdp_ 4 days ago

    This might sound like an oversimplification, but we decided to use the conversations (which we already store) as the means to trace the execution flow for the agent - both for automated runs and for direct interaction.

    It feels more natural in terms of what LLMs do. Conversations also give us a direct means to capture user feedback and use that to figure out which situations represent a challenge and might need to be improved. Doing the same with traces, while possible, does not feel right / natural.

    Now, there are a lot more things going on in the background but the overall architecture is simple and does not require any additional monitoring infrastructure.

    That's my $0.02 after building a company in the space of conversational AI where we do that sort of thing all the time.

  • resiros 3 days ago

    There is a major mistake in the article. The author argues that openinference is not otel compatible. That is false.

    >OpenInference was created specifically for AI applications. It has rich span types like LLM, tool, chain, embedding, agent, etc. You can easily query for "show me all the LLM calls" or "what were all the tool executions." But it's newer, has limited language support, and isn't as widely adopted.

    > The tragic part? OpenInference claims to be "OpenTelemetry compatible," but as Pranav discovered, that compatibility is shallow. You can send OpenTelemetry format data to Phoenix, but it doesn't recognize the AI-specific semantics and just shows everything as "unknown" spans.

    What is written above is false. OpenInference (or, for that matter, OpenLLMetry and the GenAI OTel conventions) is just a set of semantic conventions for OTel. Semantic conventions specify how a span's attributes should be named. Nothing more, nothing less. If you are instrumenting an LLM call, you need to specify the model used. A semantic convention would tell you to save the model name under an attribute like `llm_model`. That's it.
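
    To make that concrete, a minimal sketch (the keys reflect my reading of the OTel GenAI and OpenInference conventions; verify against the specs):

      // A semantic convention is just an agreed naming scheme for span attributes.
      // The same fact (which model was called) lands under a different key
      // depending on the convention you follow.
      import { trace } from "@opentelemetry/api";

      const span = trace.getTracer("demo").startSpan("llm-call");
      span.setAttribute("gen_ai.request.model", "gpt-4o-mini"); // OTel GenAI semconv key
      span.setAttribute("llm.model_name", "gpt-4o-mini");       // OpenInference key
      span.end();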

    Saying OpenInference is not otel compatible does not make any sense.

    Saying Phoenix (the vendor) is not otel compatible because it does not show random spans that do not follow its conventions is ... well, unfair to say the least (saying this as a competitor in the space).

    A vendor is Otel compliant if it has a backend that can ingest data in the otel format. That's it.

    Different vendors are compatible with different semconvs. Generalist observability platforms like Signoz don't care about the semantic conventions. They show all spans the same way, as a JSON of attributes. A retrieval span, an LLM call, or a db transaction all look the same in Signoz. They don't render messages and tool calls any differently.

    LLM observability vendors (like Phoenix, mentioned in the article, or Agenta, the one I am maintaining and shamelessly plugging) care a lot about the semantic conventions. The UIs of these vendors are designed to show AI traces in the best way: LLM messages, tool calls, prompt templates, and retrieval results are all shown in user-friendly ways. As a result, the UI needs to understand where each attribute lives. Semantic conventions matter a lot to LLM observability vendors. Now, the point the article is making is that Phoenix can only understand the OpenInference semconvs. That's very different from saying that Phoenix is not OTel compatible.

    I've recorded a video talking about OTel, Sem conv and LLM observability. Worth watching for those interested in the space: https://www.youtube.com/watch?v=crEyMDJ4Bp0

  • dat_attack 2 days ago

    Big fan of Arize OpenInference and Phoenix

  • bfung 4 days ago