Claude Code may be burning your limits with invisible tokens

(efficienist.com)

54 points | by jenic_ 4 days ago

10 comments

  • marginalia_nu 3 days ago

    This is methodologically flawed, as bytes only weakly correlate with tokens.

    Unless you're sending identical requests, you can't expect the same token count for any given number of bytes, or that a slightly longer (but different) message will produce more tokens than a slightly shorter one, or vice versa.
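    The weak byte-to-token correlation can be illustrated with a toy greedy longest-match tokenizer (a crude stand-in for real BPE; the vocabulary here is entirely made up, not Anthropic's): two strings of identical byte length can tokenize to very different counts.

    ```python
    def tokenize(text: str, vocab: set[str]) -> list[str]:
        """Greedy longest-match tokenizer (crude stand-in for BPE)."""
        tokens, i = [], 0
        while i < len(text):
            for j in range(len(text), i, -1):
                if text[i:j] in vocab:
                    tokens.append(text[i:j])
                    i = j
                    break
            else:
                tokens.append(text[i])  # unknown character becomes its own token
                i += 1
        return tokens

    # Hypothetical vocabulary for illustration only.
    VOCAB = {"hello", "world", " ", "a", "b", "c"}

    s1 = "hello world"   # 11 bytes
    s2 = "abc abc abc"   # 11 bytes
    assert len(s1.encode()) == len(s2.encode()) == 11
    print(len(tokenize(s1, VOCAB)))  # 3 tokens
    print(len(tokenize(s2, VOCAB)))  # 11 tokens
    ```

    Same byte count, nearly a 4x difference in token count, which is why request-size measurements only loosely bound token usage.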

    • Bolwin 2 days ago

      > The numbers came from the same project and the same prompt across versions.

      I'm pretty sure the tester checked. If the request format is the same (which it is, given it uses Anthropic's stable public API) and the prompt/messages are the same, then bytes will correlate pretty well with tokens.

      • marginalia_nu 2 days ago

        The prompt may be the same, but the project context would surely have changed. The user prompt itself is unlikely to be ~200KB.

  • a_c 3 days ago

    I had the same suspicion, so I made this to examine where my tokens went.

    Claude Code caches a big chunk of context (all messages of the current session). So while a lot of data goes over the network, in ccaudit itself 98% of the context comes from the cache.

    Granted, to view the actual system prompt used by Claude, one can only inspect the network requests. Otherwise the best guess is the token use in the first exchange with Claude.

    https://github.com/kmcheung12/ccaudit
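
    For reference, Anthropic's public Messages API exposes prompt caching via `cache_control` markers on content blocks, which is the mechanism behind the cache hits ccaudit observes. A sketch of a request body (field names from the public API; model ID and values illustrative):

    ```json
    {
      "model": "claude-sonnet-4-20250514",
      "system": [
        {
          "type": "text",
          "text": "<large project context, e.g. CLAUDE.md plus file summaries>",
          "cache_control": {"type": "ephemeral"}
        }
      ],
      "messages": [
        {"role": "user", "content": "Refactor the auth module"}
      ]
    }
    ```

    Cache reads are billed at a reduced rate relative to fresh input tokens, so most of the bytes on the wire can correspond to comparatively cheap tokens.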

  • tencentshill 3 days ago

    On the free plan, I hit the limit instantly by uploading one 45 KB PDF and one prompt. Even for a free plan, I'd expect a bit more. Oh well, local models can be pushed to do what I need.

  • 4 days ago
    [deleted]
  • simianwords 3 days ago

    I don’t buy it. The same problem was reported in Claude.ai at the same time, which suggests the same underlying root cause.

  • F7F7F7 3 days ago

    What is the system prompt for $1000, Alex (RIP)?