Changes to GitHub Copilot Individual Plans

(github.blog)

79 points | by zorrn 9 hours ago

25 comments

  • davepeck 8 hours ago

    This thread is pretty quiet for what strikes me as a substantial set of changes with, presumably, more substantial changes still to come for anyone not grandfathered into a Pro plan.

    I get the impression that the intersection of HN posters and Copilot users is quite small in practice; that Claude Code and Codex suck up all the oxygen in this room. But it seems plausible we’ll see similar “true costs greatly exceed our current subscription pricing” from Anthropic and OpenAI someday soon…

    • andromaton 4 hours ago

      The UX of Copilot driving Claude beats Claude Code handily.

      I never understood the low visibility.

      Expensive ram is annoying. I don't look forward to expensive ai.

  • everfrustrated 9 hours ago

    This is quite the rug pull.

    I've been using Pro+ with Opus 4.6 very successfully, and being charged the 3x rate was mostly acceptable.

    But removing Opus 4.6 and replacing it with Opus 4.7 at a 7x rate is just insane!

  • diath 9 hours ago

    I guess it makes more sense for me to just get Claude Pro instead. I was using my Copilot license only for Opus 4.6 access, since all other models seemed crippled in comparison in Copilot. It doesn't even make sense to upgrade to Pro+, which goes from $10/mo to $40/mo and only gives you access to a model with a 7x rate: 5x the limit at 7x the rate for 4x the price does not seem appealing at all.
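    The comparison above can be sketched as a quick back-of-the-envelope calculation. This is a rough sketch, not official pricing: the 300-request Pro allowance is inferred from the "5x the limit" figure, while the 1500-request Pro+ allowance and the 3x/7.5x credit multipliers are numbers quoted elsewhere in this thread.

```python
# Rough cost-per-Opus-prompt comparison using the numbers quoted in the thread.
# Assumed allowances: Pro = 300 premium requests/mo, Pro+ = 1500/mo.

def cost_per_opus_prompt(price_usd, premium_requests, multiplier):
    """Dollars per Opus prompt when each prompt burns `multiplier` credits."""
    opus_prompts = premium_requests / multiplier
    return price_usd / opus_prompts

old = cost_per_opus_prompt(10, 300, 3)     # Pro with Opus 4.6 at 3x
new = cost_per_opus_prompt(40, 1500, 7.5)  # Pro+ with Opus 4.7 at 7.5x

print(f"old: ${old:.2f}/prompt, new: ${new:.2f}/prompt")
# old: $0.10/prompt, new: $0.20/prompt
```

    Under these assumptions the effective cost per Opus prompt roughly doubles, which matches the commenter's conclusion that the upgrade is unappealing.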

  • benwills 3 hours ago

    Yesterday, Opus 4.6 cost three credits. You can no longer use 4.6 or 4.5.

    Opus 4.7 is available today for 7.5 credits per prompt.

    They have also suspended new signups.

    After testing all of the major IDEs/tools that integrate with LLMs over the last four weeks, I was happy to settle on Copilot. I, and others, it seems, are now a lot less confident in that decision. Especially since there seems to be no refund path for people who prepaid for a year.

    In my 30+ years online, I've never seen an industry change so much in terms of pricing, service levels, etc., as I have in the last two months.

    I'm really curious where all of this lands, and if AI coding tools will be something that only a small percentage can genuinely afford at a competitive level.

    • p1necone 2 hours ago

      > In my 30+ years online, I've never seen an industry change so much in terms of pricing, service levels, etc, as I have the last two months.

      Warning: baseless speculation/theorizing ahead.

      This is the consequence of LLM inference being really expensive to run, and LLM inference companies being really attractive to VCs. The VC silly money means their costs are totally decoupled from revenue for a while, but I guess eventually people look at incomings vs outgoings and start asking questions.

      Previous big trends like SaaS apps, NFTs, blockchain etc were similarly attractive to VCs (for a period of time at least for the last two, the first one is still pretty attractive to VCs), but nowhere near as expensive to run so the behaviour of the companies running them wasn't quite the same.

  • rectang 7 hours ago

    Welp. I already added a $20 Claude Pro subscription to complement my $10 Github Copilot Pro subscription and $10 DuckDuckGo Plus. That was partly to show support for Anthropic after the OpenAI/DOD episode, but also because I've been using Opus 4.5 exclusively with Copilot and I figured I should try Claude Code eventually.

    Now it's going to cost me an upgrade to $39 GitHub Pro+ to keep using Opus, and even then it's with much higher multipliers. I don't fully understand the extent to which this reflects actual costs for Opus versus Microsoft leveraging network effects to discourage the usage of a competitor.

    I didn't really want to wander outside of VSCode just yet because I was happy with VSCode/Copilot/Opus-4.5 and I don't want to spend all my time experimenting when stuff is changing so fast. But I guess my hand has been forced.

    • diath 7 hours ago

      > I didn't really want to wander outside of VSCode just yet because I was happy with VSCode/Copilot/Opus-4.5

      This was my first thought too but apparently you can just use Claude Code within VSC: https://code.claude.com/docs/en/vs-code

      • rectang 3 hours ago

        I've started messing with this and the experience seems pretty similar.

  • WhiteDawn 9 hours ago

    I wouldn't mind this change that much if opus-4.7 worked properly in copilot cli. It keeps stopping mid-thought or task and forces me to waste more prompts for no observable reason.

    Looks like I'm ending my subscription; good (likely too good, no way my account was even remotely within profitable range) access to opus-4.6 was the only reason I used this at all.

    • p1necone 2 hours ago

      Are you using it through regular Copilot (the 'local' agent type), or through the separate Claude agent type (which I believe you have to activate in your repository settings on GitHub)?

      I had the exact same issues with the latter - randomly stops working, wipes chat history, just generally seems to be totally broken. But the former works totally fine and still lets you select sonnet/opus. My experience was before this recent 4.6 -> 4.7 change though.

  • hokkos 6 hours ago

    I cannot understand people still using Anthropic models on Copilot when GPT 5.4 is better and 3 to 7 times cheaper. Anthropic quite obviously raised their licensing to the max. You can probably still get a taste of it for a few minutes before being limited on their own subscription.

    • fkarg 5 hours ago

      Simple: for what I'm doing, Opus 4.6 (and before that, Opus 4.5) is just much better at following my instructions and achieves consistently better results.

      From what I've been gathering, this split in success seems to depend a lot on the types of tasks, the domains / programming languages / frameworks used, and style of prompting.

      I couldn't get 5.2 to follow instructions for the life of me, even when repeating multiple times to do / not do something. 5.3-codex was an improvement and 5.4 while _usually_ decent still regularly forgets, goes on unnecessary tangents, or otherwise repeatedly stops just to ask for continuation.

      Sure, I'm paying 3x more per request, but I'm also doing 5x fewer requests.

      Or well, used to. Still bummed about them dropping 4.6.

      • rectang 3 hours ago

        My experience is similar. Opus, especially Opus 4.5, understands my intentions better even when poorly phrased, and more consistently follows my instructions to do only what's necessary and no more.

        As far as I can tell, the distinctive feature of my workflow is that I'm giving it small, contained single-commit-sized tasks and limited context. For instance: "For all controller `output()` functions under `Controller/Edit/` and `Controller/Report/`, ensure that they check `Auth::userCanManage`." Others seem to be taking bigger swings.

    • aleksiy123 4 hours ago

      Anecdotally, I experimented with GPT-5.4 xhigh and something about the code it wrote just didn't vibe with me.

      It felt like I constantly had to go back and either fix things or I just didn't like the results. The forward momentum/progress on my projects overall wasn't there over time. Even though it's cheaper, it just doesn't feel worth it, to the point where I start to feel negative emotions.

      I'm actually a bit worried that I've somehow come to feel more negative emotions with agentic coding. Quicker to feel frustrated somehow when things aren't working.

  • aleksiy123 8 hours ago

    I saw some Reddit rumours going around and locked myself into the yearly Pro+.

    I guess overall it probably was a good decision.

    But the 7.5x rate as well as the quota limits are pretty hard to swallow.

    The annoying thing about the quota limits is they make it really awkward to actually fully utilize the 1500 premium requests you are paying for.

    Like, if you don’t plan around the daily and weekly quotas, you may not actually be able to utilize your full request allocation.

    Claude has the same issue. Single session blows through the quota.

    • 2 hours ago
      [deleted]
  • literallyroy 2 hours ago

    Removing access to opus is pretty funny. At least they recognize it’s unacceptable and tell you to go get a refund.

    The per-request model was pretty insane.

  • walthamstow 5 hours ago

    I'm not surprised at all. This was one of the most generous plans out there, offering frankly ridiculous pricing based on a single prompt regardless of turns taken or tokens used. I was subscribed for a month around Christmas and got a shitload of tokens out of Opus 4.5 for a measly $10.

  • qaz_plm 6 hours ago

    Worst part is them doing this mid-billing cycle and not at the start of the next in 11 days. I cancelled and requested a refund.

  • 9 hours ago
    [deleted]
  • p1necone 4 hours ago

    Damn, it was good while it lasted, but it was obvious the previous per-request pricing scheme was misaligned with their actual costs. MS's product people must be seriously detached from their technical and financial people for it to have even lasted this long (or they're willing to burn a lot of money for the typical "make customers happy and then rug pull" cycle, but hey, Hanlon's razor).

    Given that they've already silently had session + weekly rate limits for at least the past couple of weeks (I've hit them), I wonder if this change is just making them visible to the user, or if it's actually tightening them too.

    If it's the former, then I can say they're still significantly more generous than Claude Pro (on the Pro+ plan), so this might be okay. If it's the latter, and the new limits are similar to Claude Pro's, then Copilot is going to be significantly less useful to me.

  • 9 hours ago
    [deleted]
  • 4 hours ago
    [deleted]
  • sovietmudkipz 3 hours ago

    I cannot describe how disappointing it is to be switching to this insane time-window-based pricing. I absolutely abhor that I'll be subjected to 5-hour chunks of time where I'll be limited at some point in that window and be told I'll have to wait. And then there is a weekly limit on top of that.

    That's not how my creative energy works. When I have time to solve problems, I want to solve them. I don't want a cooldown timer applied to solving a problem. Not to mention the anxiety of realizing that, while I sleep, I could have been burning tokens in that window.

    I'm incredibly disappointed when I sat down to my hobbyist programming time and realized copilot was suddenly and dramatically changed in a way that is incredibly disheartening.

    Meter my token usage, DON'T tell me when I can use them! ARGH.