Same issue here. What was the name of that question and answer site again where you had to manually copy and paste code from? ;-)
Official status is still green: https://status.claude.com/
But downdetector is clear: https://downdetector.com/status/claude-ai/
/edit: there's an official incident now: https://status.claude.com/incidents/jm3b4jjy2jrt
They just updated it some 4 minutes ago.
> Update - We're currently investigating issues with Claude Code and Claude.ai. Some users may be unable to log in, and others may experience slower than usual performance. The Claude API is not affected.
oh about that one... we killed it unfortunately
vibe coded trash....
Does anyone recall how to code manually? I certainly don't :-)
I do - but then I shudder, and go make a sandwich - which still needs me for the moment.
What? You don't just prompt your kitchen?
sudo make me a sandwich
I also don't recall what I'm going to do tomorrow, as busy as I am. That's what calendars are for.
The C-suite doesn't want us to write code by hand anymore, so I think I'm supposed to just drink tea and collect salary whenever Claude is down.
I feel ashamed writing code while Claude does it way better than me... I've found my forte is at designing systems and making project-level decisions, something that Claude still doesn't do very well.
Forte of the gaps :-) I'm exactly the same.
It's really amazing how the stability of these platforms has gone down in the last year or so.
If only this was correlated with something else going on in the industry...
> 100% of our code is written by AI
Yeah we can tell...
Yes, the new normal is crazy. Claude/GitHub et al.
They are dogfooding their own tools and causing so much downtime, all in the spirit of "staying ahead".
The schadenfreude is so fucking palpable
Weird take, will you also look sour at devs who use local LLM's in ~50 years? Or is that different
The mass migration is probably still taking a toll.
Working for me (Desktop + Code) but had a bunch of oauth issues this morning and it's been up and down like a drunken monkey on a trampoline all week, especially around start of biz times in the US.
The Pentagon spat is having a hell of a Streisand Effect on them; DAUs +180% (source: SimilarWeb), paid subs doubling. If past experience is any indicator, it'll settle down as they scale up (or they fall out of the news cycle). Google Trends search interest shows the story: https://trends.google.com/explore?q=claude%2Canthropic&date=...
Can someone that's worked at one of these big companies honestly explain how it happens that when these guys are down, it's never for like 10-15 mins ... it's always 1-2+ hours? Do they not have mechanisms in place to revert their migrations and deployments? What goes on behind the scenes during these "outages"?
Part of it is observability bias: longer, more widespread outages are more likely to draw significant attention. This doesn't mean there aren't also shorter, smaller-scope outages; we're just much less likely to hear about them.
For example, if there's a problem that gets caught at the 1% stage of a staged rollout, we're probably not going to find ourselves discussing it on HN.
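The 1% staged-rollout gate mentioned above is commonly implemented as deterministic hash bucketing, so each user gets a stable yes/no answer as the percentage ramps up. A minimal sketch (function and feature names are hypothetical, not anything Anthropic has documented):

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically assign a user to a rollout bucket in [0, 100).

    Hashing (feature, user) gives a stable bucket, so the same user
    always gets the same answer, and anyone in the 1% stage is still
    included when the stage widens to 5%, 25%, etc.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100.0  # 0.00 .. 99.99
    return bucket < percent
```

At a 1% stage, a bad deploy only reaches roughly 1 in 100 users, which is exactly why those incidents rarely make it to HN.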
Quick fixes tend to break other stuff and just make matters worse. Better to leave it offline a little longer, fix the actual root cause, and make sure it comes back online cleanly. If the issue were just a quirk in a recent deployment, it could probably be reverted easily on the endpoints where it was deployed (I'm sure they use staggered rollouts). These long-downtime incidents are probably not related to a recent release.
You will run into thundering-herd, hotspotting, and cold-cache issues when you have to restart. There's generally not an easy way to switch these sorts of systems on and off, especially a relatively new system that isn't battle-hardened.
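On the client side, the standard mitigation for the thundering herd described above is retry with exponential backoff plus full jitter, so reconnecting clients spread out instead of all hammering the service the instant it comes back. A sketch (parameters are illustrative, not any provider's actual policy):

```python
import random

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 60.0):
    """Yield 'full jitter' retry delays.

    Each delay is drawn uniformly from [0, min(cap, base * 2**n)], so
    the retry window doubles per attempt but clients land at random
    points inside it rather than synchronizing on the same instant.
    """
    for attempt in range(attempts):
        yield random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

A fleet of clients using this reconnects over a spread of seconds to minutes, rather than re-creating the overload the moment the service is healthy again.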
I've got nothing for the GitHub outages this year, though; that seems like incompetence.
Well when the coding agents go down who are they supposed to ask what the problem is?
They should probably buy subscriptions to those Chinese agents.
I ran into this with my own SaaS a while back.
One day I started getting API errors across requests and initially assumed it was something on my side. After digging into it, the provider I was using was getting overloaded and intermittently failing.
That was the moment I realized relying on a single external service was a risk I hadn’t really planned for.
Now I keep two providers configured: a primary and a secondary. If error rates spike or the API stops responding, the system can fail over instead of the whole product going down.
It added a bit of complexity, but the peace of mind is worth it.
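The primary/secondary setup described above can be as simple as trying providers in order and failing over on error. A minimal sketch (the provider names, error type, and call signature are hypothetical):

```python
class ProviderError(Exception):
    """Raised when a provider call fails (timeout, 5xx, overload)."""

def complete_with_failover(prompt, providers):
    """Try each (name, call_fn) pair in order; return the first success.

    'providers' is an ordered list with the primary first. A real
    system would also track error rates and demote a flapping primary
    instead of retrying it on every request.
    """
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except ProviderError as exc:
            errors.append((name, exc))
    raise ProviderError(f"all providers failed: {errors}")
```

The complexity cost is mostly in keeping prompts and output parsing compatible across both providers, not in the failover logic itself.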
I don't know about down but I use the VS Code extension on a Pro plan (that I'm considering upgrading from) and it's been slower than molasses flowing uphill in winter for me this afternoon. I'm (a) feeling unwell, and (b) up against a deadline, so this is starting to damage my calm.
Yes they are down --

  /login
  ──────────────── Login
I was wondering the same. They just updated the status page but it was showing green for a while and I couldn't login.
https://status.claude.com/
The issues have been described as login/logout, but I'm not sure that's all that's happening. In today's outage and the last one, the API stopped responding and kicked the session out.
I only mention this in case someone from Anthropic perhaps isn't aware that it seems to be a wider issue than login/logout (although I'm sure they are!)
Weird because it shows green but obviously things aren't working.
Jesus, that’s a bad-looking status page, lol. It’s almost a rainbow.
Another Atlassian shit show app https://www.atlassian.com/software/statuspage
I mean, the functionality of the page seems acceptable? I more meant that the actual outage history is wild.
Same issue. Getting an "internal server error" message
I use Big-AGI [1] as a self-hosted open-source LLM workspace, and it's quite telling that when adding API keys for Anthropic, it shows an inline note reading "Experiencing Issues? Check Anthropic status" that it doesn't show for any other model provider.
[1] https://github.com/enricoros/big-AGI (no affiliation)
Login failed: Request failed with status code 500
Good times..
Maybe unrelated, but Claude has not only been slow, the quality has also been bad the last 3-4 days.
Everything was fine last week; then suddenly even the smallest tasks couldn't be completed correctly.
https://status.claude.com/incidents/jm3b4jjy2jrt https://news.ycombinator.com/item?id=47336889
Time for all of the cosplaying developers to sit around and twiddle their thumbs.
Are they going to extend my subscription time as a result? It ends today, but I was locked out an hour or so ago, and I'm not sure if that was actually due to this outage.
All the vibe coding is clearly not working out too well.
Experienced the same... it logged me out of Claude Code a few minutes ago. And when I log in, it makes me wait >15000ms for auth (which exceeds their cutoff time), so auth fails!
Can people drop a good LocalLlama setup that I can run on M4?
Not sure about LocalLlama, but have you tried LM Studio? If you use Zed, it will automatically pick up whatever model you enable in LM Studio. I keep meaning to write a blog post about this for people unaware that you can pair the two pretty easily on a Mac. I mostly use CC but like to test offline models now and then to see how far they've come along.
I use Ollama to run local models but have never used one with CC. Curious what models work best for people.
I use Zed with its Claude Code integration, and if I want to use any other LLM I use LM Studio, which is nice on the Mac and hosts the APIs for it; Zed knows which models are available, which is a plus.
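For anyone wanting to script against that setup directly: LM Studio's local server speaks an OpenAI-compatible API (default port 1234), so you can hit it without any SDK. A sketch, assuming the server is running and a model is loaded (the default model name below is a placeholder):

```python
import json
import urllib.request

def build_chat_request(prompt, model="local-model",
                       base="http://localhost:1234/v1"):
    """Build an OpenAI-style chat request for LM Studio's local server.

    The model name must match whatever you've actually loaded in the
    LM Studio app; 'local-model' here is just a stand-in.
    """
    url = f"{base}/chat/completions"
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}]}
    return url, payload

def local_chat(prompt, **kwargs):
    """Send the request and return the assistant's reply text."""
    url, payload = build_chat_request(prompt, **kwargs)
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Because the endpoint shape matches OpenAI's, the same code works as an offline fallback when the hosted providers are down.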
Looks like I'm debugging this issue myself.
Slightly OT, but I've been using OpenAI's GPT 5.4 on Codex and so far I'm finding it more convincing than Claude with Opus 4.6 at maximum thinking for my use cases.
I'm more interested in it helping with design and architecture than in having it author tons of code.
Keep in mind that OpenAI has a much more generous $20 tier than Anthropic's, and I think you can even use Codex for free with the latest models, so give it a shot; you may find it better than you expected and a solid backup to Claude.
I agree it seems better at complex work. However, I find that it often tries to make ALL work complex. I had a simple bug fix where I knew exactly what the 1-2 line fix was. GPT 5.4 added some 200 LOC and started refactoring the whole function of the app. Was the refactor possibly an improvement? Maybe, but I needed the fix quickly, so I stopped it and switched to Claude, which did exactly what I was expecting.
Perfectly mirrors my own experience.
I can't login to my subscription
Auth is failing, session kicked out.
Weird, it all looks good to me. Using both the Chat and Code variants without issues.
Same. I have been using it all morning, no issues.
oauth `redirect_url` points to localhost, so the login redirect hangs
claude code fully vibe coded confirmed
please please please tell me this is real
because if it is, god, I'm done with this industry. Just done. I'd rather sell my toenail clippings for scraps of food than deal with this shitty insanity.
Authentication down for me
Same here... I try to log in (via Google) and it just stops me at the "Authorize" step.
Same here:
OAuth error: timeout of 15000ms exceeded
Press Enter to retry.
"Coding is largely solved" - a soundbite that will live in infamy.
I found it absurdly slow yesterday.
It's down for me
claude looking for that $500k salary, too
Yep same here:
OAuth error: timeout of 15000ms exceeded
Press Enter to retry.
Looks like it is back.
seems to be.
Yup, I get:
And their /login page doesn't work.

claudown
woohoo, break time!
Nope. Me too.
It's down. I hate this