First GitHub, now NPM? Oh no... It's happening, guys. Rise of the machines. I hope Jira is next and Slack follows.
I wonder if this is an underlying infra issue with Azure, given that GitHub was also having issues.
We added a preflight curl against registry.npmjs.org before the install step in CI. Not surprising they went down together.
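In case it helps anyone, a minimal sketch of such a step (the `/-/ping` endpoint is the one `npm ping` itself hits):

```sh
# Fail fast if the registry is unreachable, rather than letting
# `npm ci` hang and time out halfway through an install.
curl --fail --silent --show-error --max-time 10 \
  https://registry.npmjs.org/-/ping > /dev/null \
  || { echo "npm registry unreachable, aborting before install" >&2; exit 1; }
```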
I bet 10 dollars it's DNS.
Nah, can't be, Azure DNS has a 100% SLA after all: https://learn.microsoft.com/en-us/azure/dns/dns-faq#what-is-...
"Always" up, but maybe not going where you expect. [0]
[0] https://arstechnica.com/information-technology/2026/01/odd-a...
To be fair, it feels like the DNS service has been the most reliable part of our Azure infra. Never really had issues with it, whether with traffic or API calls.
It's DNS
If it's not DNS, it's MTU if you're a person and BGP if you're a company.
Just wait and it will be something like "GitHub's internal DNS was down and caused widespread service communication issues."
it might just be *AZURE*
I am waiting for Jeff Geerling's "it's always DNS" t-shirt reference/video about it if that's the case.
Easy there buddy, not everything needs to be a polymarket bet :-)
It's likely someone just ran `npm ls --all`
https://www.ebay.com/ is also down
lots of amazon pages & search seem to be degraded as well
That's one way to fix supply chain vulnerabilities.
Can't have any vulnerabilities if you don't have a supply chain
More seriously, keeping a local cache of external npm packages, and a local artifact storage for internal npm packages looks like a wise thing to have done long ago. Might be cheaper in the long run.
Ironically, both Nandu and Verdaccio are implemented in TypeScript and install via npm.
(Same logic obviously applies to Python packages, Docker images, etc.)
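For the record, a minimal sketch of a Verdaccio config doing exactly that (the `@mycompany` scope is a placeholder): external packages get proxied and cached from npmjs, internal ones never leave the building.

```yaml
# config.yaml for Verdaccio
storage: ./storage        # cached tarballs keep installs working during upstream outages

uplinks:
  npmjs:
    url: https://registry.npmjs.org/

packages:
  '@mycompany/*':         # internal scope: stored locally, never proxied
    access: $all
    publish: $authenticated
  '**':                   # everything else: fetched once from npmjs, then cached
    access: $all
    proxy: npmjs
```

Clients just need `registry=http://localhost:4873/` (Verdaccio's default port) in their `.npmrc`.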
At my former job we had a private registry that mirrored npm’s, with an approval gate for packages devs requested, and it always pinned versions.
I took that for granted back then and just assumed it was standard enterprise policy.
Multiple previous jobs had this too (a local Packagist is one option, Artifactory is another) but my current job got rid of theirs. Seemed a little short-sighted given the risks, but I don't make the decisions.
> a local artifact storage for internal npm packages looks like a wise thing to have done long ago
Deno already does this invisibly by default.
All packages are stored in the global cache.
No need to store multiple versions of the same dependencies across projects.
To the code in your projects: there is no such thing as a global cache. Just import your dependencies like normal and deno maps them to the global cache.
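A minimal sketch of what that looks like from a project's point of view (the package choice is just a thematically fitting example):

```ts
// main.ts -- deno downloads this once into the global cache
// (DENO_DIR, a per-user cache directory by default) and every
// project on the machine reuses that single copy.
import leftPad from "npm:left-pad@1.3.0";

console.log(leftPad("npm is down", 20));
```

`deno run main.ts` fetches it on first use; subsequent runs, in any project, resolve straight from the cache.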
If only we had a turnkey distributed cache, like IPFS
Does IPFS support content eviction now? If not, that could go wrong really fast. You get a compromised package out there and then, I think, literally every node needs to unpin it or it remains.
Presumably, however you mark a version as latest would also be how you mark one as compromised. IPFS files are immutable and keyed by hash. But this seems like overengineering.
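For what it's worth, that is exactly how it works in kubo today: unpinning is a purely local operation, so evicting a bad package really would have to happen node by node (the CID below is a placeholder).

```sh
# Run on EVERY node that pinned the compromised package:
ipfs pin rm <cid-of-compromised-package>   # forget the pin locally
ipfs repo gc                               # reclaim the now-unpinned blocks
```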
Waiting for the BitTorrent package manager
Caching NPM was easier when you could pull from the CouchDB replication API. AFAIK that's gone and now you just have to send a bazillion http requests instead.
Sending a bazillion http requests within your LAN, or at least your VPC, is much easier, faster, and cheaper.
Both yarn and pnpm support HTTP/2, which speeds up the bazillion requests quite a bit.
Hold the jokes until we're sure this isn't an `.unwrap()`
Well, it is owned by GitHub.
which is owned by microslop
...and proudly maintained by Microsoft's AI agents: Tay.ai, Zo, and Copilot.
They seem to be doing a pretty good job at wrecking both GitHub and npm at the same time.
Clippy was too stupid to qualify as an AI.
Whenever NPM is offline, the internet is a little safer.
Keep up the good work Microsoft.
Let's shoot for 100% downtime though. Thanks.
eBay is also down. https://www.isitdownrightnow.com/ebay.com.html
Fixed as of 22:30 UTC. Hope there's a postmortem.
ha, github is down too
https://npmx.dev is not
Works for me, could be region related
Tailscale too: https://status.tailscale.com/
With all the github instability, I wonder if Cloudflare or some other provider is going to look into providing a similar service.
Cloudflare artifacts??
https://developers.cloudflare.com/artifacts/
I mean more like a full Git competitor. GitLab exists, but more competition is generally better for the consumer, and it looks like GitHub's lead is starting to falter with all these incidents.
GitLab is right there. And overall provides a better product than GitHub, if nothing else on these two points:
* You can actually have an organisational structure (folders/namespaces), and projects can be moved around with automatic redirects. Also, inheritance of access controls, variables between the namespaces
* GitLabCI is organised in a way that makes supply chain attacks less of a risk. GitHub Actions takes the NPM/JS approach, where every step is an action, one you usually have to get from someone else, with shoddy versioning, tons of transitive dependencies, etc. In GitLabCI you can have templates, but you don't have to use an external template for every bit. It's shell scripting on top of containers, so you can have custom container images with your stuff, or custom scripts, or templates that bundle it all (see the sketch below).
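To make the contrast concrete, a minimal sketch of that style (the image tag and template path are illustrative): a pinned container, plain shell, and any shared logic included from inside the repo rather than pulled from a third party.

```yaml
# .gitlab-ci.yml
include:
  - local: ci/templates/node-defaults.yml   # hypothetical in-repo template

test:
  image: node:20-bookworm   # an image you control and can mirror locally
  script:                   # plain shell, no external "actions"
    - npm ci
    - npm test
```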
GitLab also limits the size of PRs/MRs, which makes it Unfit for Purpose. :( :( :(
It's a problem they know about, but have no plan to fix before 2027.
I mean, the PR limit is like a million characters. I would also reject a PR of a million characters. That’s bananas.
Not sure about that "million characters", but we've been bitten by it in our production systems. :(
Thus, we're moving off GitLab.
What use case does a million character PR have?
When an automated system creates a PR for merging from an existing dev branch (that's been extensively tested) to "master" (or "main").
The "surprise, you can't review all the files in your PR" using GitLabs standard web based tooling makes it a no-go.
That's interesting because GitHub's web UI craps out at much less than 1 million lines. It refuses to open even low thousand line diffs.
I’ve personally been deeply unappreciative of GitHub’s changes in the last few years that automatically collapse diffs for “large files” until you click to open them - a threshold that seems to keep shrinking. Maybe like 3 screenfuls of content is the limit now per file. It’s crazy.
Yeah, agreed it's not great for that. I'm not real happy with GitHub's worsening UX either, but it'll at least show the _names_ of all the files in the PR.
With GitLab, when you hit the rate limit, any file "past" that limit doesn't even show that it exists in the MR. It just looks like the MR is missing a bunch of stuff, with no workaround available. :( :( :(
I'm sure, I looked it up.
All of those features are supported by GitHub in some form, e.g. Organizations can now belong to Enterprises.
It's not the same, at all.
SSO, access tokens, and secrets are all bound to the Organization level - if you work across multiple Organizations you have to log in separately... You also cannot have nested Organizations.
Tree-based directory structure stuff is available on GitLab's free tier, as is all the permissions inheritance for groups etc.
So, while you're technically right, these features are apparently paywalled heavily on GitHub.
IME you get more features on GitLab for the same price (or less). I switched fully two years ago and I'm not going back.
libc is still working just fine, as is the Linux kernel. Mayhaps having 2000 dependencies on 3000 packages from 4000 unvetted sources was a mistake after all?
microslop slops are down.
Azure is completely dead across multiple resources. Confirming....
https://azure.status.microsoft/en-US/status says "There are currently no active events." - and everything's fine with my day-job's Azure sub right now.
Oh no. At least nothing of value is affected.
:)