8 comments

  • Jeremy1026 an hour ago

    I have the $100/mo Claude plan, I've used 5% of my weekly and it resets this evening. I'm not a heavy user, but I also feel like I'm not a slouch either. I don't get how people are rolling through their usage so fast.

    • mattmanser 30 minutes ago

      I can only assume they're either setting it to Opus all the time, or they're using something like Ralph Wiggum.

      • jorisboris 12 minutes ago

        I burned through my weekly quota working on one small repo (with a lot of data files, though) in one working day yesterday. It wasn't like that before.

        Something definitely changed, or it's somehow reading all that data over and over again.

  • MeetingsBrowser 3 hours ago

    Turn off the 1M context that got enabled by default. Long sessions eat through the tokens much faster.

    Your sessions were probably getting auto-compacted much earlier before the context window got larger.

    • alex1sa 3 hours ago

      Also worth checking if you're running long agentic loops — each tool call in a multi-step task counts against the window independently. So before switching providers, disable the extended context and run a day. It's probably not the model.
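      The cost blow-up described above can be sketched with some back-of-the-envelope arithmetic. This is a hypothetical model with made-up numbers, not Anthropic's actual billing logic: it just assumes the whole conversation history is resent as input on every turn, so input tokens grow roughly quadratically with the number of tool calls.

      ```python
      # Sketch (hypothetical numbers): why long agentic sessions burn tokens fast.
      # Assumption: each turn of the loop resends the entire accumulated context
      # as input, and each tool call appends its output to that context.

      def total_input_tokens(system_prompt: int, per_turn: int, turns: int) -> int:
          """Sum the context re-read on every turn of an agentic loop."""
          total = 0
          context = system_prompt
          for _ in range(turns):
              total += context      # the whole history is input again this turn
              context += per_turn   # each tool call appends its output
          return total

      # 10 tool calls vs 50, with ~2k tokens added per turn:
      short = total_input_tokens(system_prompt=5_000, per_turn=2_000, turns=10)
      long_run = total_input_tokens(system_prompt=5_000, per_turn=2_000, turns=50)
      print(short, long_run, long_run / short)  # 5x the turns costs ~19x the tokens
      ```

      Under these assumptions, 5x more tool calls costs roughly 19x more input tokens, which is why a larger context window (fewer auto-compactions) can make quota drain much faster even with the same workload.
      
      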

  • loveparade 4 hours ago

    Codex; it's much more generous. And it doesn't lock you into using their CLI.

    Still, I'm a bit surprised you burn through tokens that quickly. I rarely ever reach my limit.

  • kingkongjaffa 42 minutes ago

    It feels lower.

    For the last 2 weeks I was using it more or less all day on Opus, running skills to write PRDs and then code and tests to implement the PRDs, and never hit the session limit.

    Last 2 days I hit the cap in about an hour of kicking off my skills workflow.

    On the paid enterprise team plan this is really bad.

  • elC0mpa 3 hours ago

    Well, maybe this is an unpopular opinion, but I prefer the Gemini CLI. I paid for Google AI Pro for the year and it's perfect for me, even though it's true that the Pro model sometimes takes like 2-4 minutes to answer.