US has investigated claims WhatsApp chats aren't private

(bloomberg.com)

212 points | by 1vuio0pswjnm7 2 days ago ago

378 comments

  • martinralbrecht 2 days ago ago

    WhatsApp's end-to-end encryption has been independently investigated: https://kclpure.kcl.ac.uk/ws/files/324396471/whatsapp.pdf

    Full version here: https://eprint.iacr.org/2025/794.pdf

    We didn't review the entire source code, only the cryptographic core. That said, the main issue we found was that the WhatsApp servers ultimately decide who is and isn't in a particular chat. Dan Goodin wrote about it here: https://arstechnica.com/security/2025/05/whatsapp-provides-n...

    • vpShane 2 days ago ago

      > We didn't review the entire source code

      And you don't see the issue with that? Facebook was bypassing security measures for mobile by sending data to itself on localhost using websockets and webrtc.

      https://cybersecuritynews.com/track-android-users-covertly/

      An audit that says 'they can't read it cryptographically' doesn't help when the app itself can read it, and the app sends data in all directions. Push notifications can be used to read messages.

      • miduil 2 days ago ago

        > Push notifications can be used to read messages.

        Are you trying to imply that WhatsApp is bypassing e2e messaging through Push notifications?

        Unless something has changed, this table highlights that both Signal and WhatsApp are using a "Push-to-Sync" technique to notify about new messages.

        https://crysp.petsymposium.org/popets/2024/popets-2024-0151....

        • itsthecourier 2 days ago ago

          Push-to-Sync. We observed 8 apps employ a push-to-sync strategy to prevent privacy leakage to Google via FCM. In this mitigation strategy, apps send an empty (or almost empty) push notification to FCM. Some apps, such as Signal, send a push notification with no data (aside from the fields that Google sets; see Figure 4). Other apps may send an identifier (including, in some cases, a phone number). This push notification tells the app to query the app server for data, the data is retrieved securely by the app, and then a push notification is populated on the client side with the unencrypted data. In these cases, the only metadata that FCM receives is that the user received some message or messages, and when that push notification was issued. Achieving this requires sending an additional network request to the app server to fetch the data and keeping track of identifiers used to correlate the push notification received on the user device with the message on the app server.
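          The quoted flow can be sketched as a toy model (illustrative only; names like server_send are made up, and the XOR "cipher" is a stand-in, not any app's real code):

```python
# Toy model of "push-to-sync": the push service (FCM) relays only an
# opaque wake-up; content is fetched and decrypted on the device.
APP_SERVER = {}  # message_id -> ciphertext, held by the app server

def server_send(message_id, ciphertext):
    """Store ciphertext server-side and emit an (almost) empty push."""
    APP_SERVER[message_id] = ciphertext
    return {"id": message_id}  # all the push service ever sees

def client_on_push(push, decrypt):
    """On wake-up, fetch the payload from the app server, decrypt it
    locally, and use the plaintext to build the notification on-device."""
    return decrypt(APP_SERVER[push["id"]])

# Demo with a stand-in XOR "cipher" (purely illustrative):
key = 0x42
enc = bytes(b ^ key for b in b"hello")
push = server_send("msg-1", enc)
assert push == {"id": "msg-1"}  # no content in the push itself
assert client_on_push(push, lambda ct: bytes(b ^ key for b in ct).decode()) == "hello"
```

          The point of the sketch: the only thing the push service ever relays is an opaque id plus timing, which matches the paper's description of the metadata FCM sees.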

          • chaps 2 days ago ago

            Is that not still incredibly vulnerable to timing attacks?

            • xethos 2 days ago ago

              Maybe I’m mis-interpreting what you mean, but without a notification when a message is sent, what would you correlate a message-received notification with?

        • fasbiner 2 days ago ago

          Nothing changed, but many people struggle to understand their own degree of relative ignorance, and overvalue high-level details that are leaky abstractions, which make the consequentially dissimilar look superficially similar.

    • cookiengineer 2 days ago ago

      Why did you not mention that the WhatsApp apk, even on devices where it was not installed via Google Play, loads Google Tag Manager's scripts?

      It is reproducibly loaded in each chat, and a MitM firewall can confirm that. I don't know why the focus of audits like these is always on a specific part of the app, or only on the cryptography, and not on the overall behavior of what is leaked and transferred over the wire, or on potential side-channel or bypass attacks.

      Transport encryption is useless if the client afterwards copies the plaintext of the messages to another server, or, say, to an online translation service, you know.

      • afiori a day ago ago

        Things like this, combined with the countless ways to hide "feature flags" in a giant codebase, make me feel that anything less than "the entire app was verified + there is literally no way to dynamically load code from remote (so not even an in-app browser) + we checked 5 years of old versions and plan to do this for the next 5 years of updates" isn't particularly meaningful.

        Still very important, but my issue has never been with Zuck's inability to produce solid software, rather with his intentions; them being good engineers just makes them better at hiding bad stuff.

        • cookiengineer a day ago ago

          Back in the day, people called Skype [1] spyware because it had lots of backdoors and lots of undocumented APIs that shouldn't have been in there.

          The funny part was that Skype was probably the most obfuscated binary ever shipped as "legitimate" software, so there were regular reversing events to see "how far" you could get, from scratch to zero-day, within a 48h hackathon. Those were fun times :D

          [1] Skype, pre Microsoft rebrand of Lync as Skype

      • tptacek 2 days ago ago

        There's a whole section, early, in the analysis Albrecht posted that surfaces these concerns.

        • cookiengineer 2 days ago ago

          Where in the document is that? Can you provide a page number or section title?

          • jibe 2 days ago ago

            ^f

    • btown a day ago ago

      Of particular note here is that while compromised WhatsApp servers could add arbitrary members to a group, each member's client would show the new member's presence and would not share prior messages, only future messages.

      Now, of course, this assumes the client hasn't been simultaneously compromised to hide that. But it's defense in depth at the very least.

      It is worth noting that this may be eroding as we speak: https://www.livemint.com/technology/tech-news/whatsapp-could... (Jan 24 2026) reports that WhatsApp is developing a way for one member to share historical messages en masse with a new group member. While this is manually triggered by the sender at the moment, it presents an enticing attack surface on technical, social-engineering, and political fronts to erode retroactive security much more rapidly going forward.

      (And it goes without saying that if you think you're exempt from needing to worry about this because you're not involved in certain types of activity, the speed at which policies are evolving around the world, and the ability to rapidly process historical communications data at scale, should give you pause. "Ex post facto" is not a meaningful principle in the modern AI-enabled state.)

      • Ajedi32 6 hours ago ago

        "People you send messages to have access to those messages. (And could therefore potentially share them with others.)" doesn't seem like a particularly scary security threat to me.

        • btown 4 hours ago ago

          The threat here is that the ability of an attacker to add themselves to a thread, stacked with a new ability to either socially-engineer or otherwise attack an existing member to click a single share-history button, could result in disclosure of history without explicit intent to share.

    • 1vuio0pswjnm7 18 hours ago ago

      "We didn't review the entire source code, ..."

      Why not?

      "Our work is based primarily on the WhatsApp web client, archived on 3rd May 2023, and version 6 of the WhatsApp security whitepaper [46]."

      Did not even look at the continuously changing mobile app; only looked at part of the minified JavaScript in the web client.

      Not sure what this accomplishes. Are the encryption protocols sound? Is the implementation correct? Maybe, but the app is closed source and constantly changing.

      But users who care want to know what connections the software makes, what is sent over those connections, to whom it is sent, and why. There is no implicit trust in Meta, only questions. The source code is hidden from public scrutiny.

      For example, the app tries to connect to {c,e8,e10,g}.whatsapp.net over TCP on port 80

      The app has also tried to connect over UDP using port 3478/STUN

      These connections can be blocked and the user will still be able to send and receive texts and make and receive calls

      Meta forces users to install a new mobile app, i.e., untrusted, unaudited code, multiple times per year. The install has grown in size by over 100%.

      For example, there were at least four different apps (subsequent versions) forced on users in 2023, five in 2024 and four in 2025

      In 2023 the first was 54.06MB. In 2026, it is now 126MB

    • some_furry 2 days ago ago

      Thank you for actually evaluating the technology as implemented instead of speculating wildly about what Facebook can do based on vibes.

      • chaps 2 days ago ago

        Unfortunately a lot of investigations start out as speculation/vibes before they turn into an actual evaluation. And getting past speculation/vibes can take a lot of effort and political/social/professional capital before even starting.

        • lazide 2 days ago ago

          Well yeah. If they had solid evidence at the start, why would they need an investigation?

          • chaps a day ago ago

            It's not as obvious an answer as it initially sounds. I'm coming at this from a stint in investigative journalism, where even beginning an investigation requires getting grants, and grants involve convincing other people that the money is going to good use. Having also been told by multiple editors that an investigation I ran was nothing, when it turned out to be something big... it really shifted how I perceive investigations, and what it means to stick your neck out when everyone's telling you that something isn't happening even when it is.

      • afiori a day ago ago

        Vibes are a perfectly solid ground to refuse to engage with something.

    • morshu9001 2 days ago ago

      They also decide what public key is associated with a phone number, right? Unless you verify in person.

      • NoahZuniga 2 days ago ago

        That's protected cryptographically with key transparency. Anyone can check what the current published keys for a user are, and be sure they get the same value as any other user. Specifically, your WhatsApp client checks that these keys are the right keys.

        • morshu9001 2 days ago ago

          Even if your client is asking other clients to verify, what if everyone has the same wrong key for a particular user Whatsapp has chosen to spoof?

          • NoahZuniga a day ago ago

            Well, surely your client knows what its own key is, and would notice that the listed key is wrong when it checks it.

            • morshu9001 a day ago ago

              They can also tell your client it has the correct key. Yours and the other clients are all talking to their mitm in this scenario. There's fundamentally no way to solve this without users verifying keys out-of-band.

              • NoahZuniga 21 hours ago ago

                > They can also tell your client it has the correct key.

                No they can't. Key transparency cryptographically makes sure everyone gets the same result.

                • morshu9001 2 hours ago ago

                  Key transparency is a public list of keys, like what CAs do. That still trusts an authority. Of course, a third party could archive/republish the key list, and you could trust them instead of WhatsApp, but that's what I call out-of-band key verification.

                  These are all good measures, though. It's much harder for WhatsApp to mass-attack users this way.

    • Jamesbeam 2 days ago ago

      Hello Professor Albrecht,

      thank you for your work.

      I’ve been looking for this everywhere the past few days, but I couldn’t find any official information regarding the use of https://signal.org/docs/specifications/pqxdh/ in the Signal protocol version that WhatsApp is currently using.

      Do you have any information if the protocol version they currently use provides post-quantum forward secrecy and SPQR or are the current e2ee chats vulnerable to harvest now, decrypt later attacks?

      Thanks for your time.

    • uoaei 2 days ago ago

      Can they control private keys and do replay attacks?

      • maqp 2 days ago ago

        The Signal protocol prevents replay attacks, as every message is encrypted with a new key: either the next hash-ratchet key, or the next future-secrecy key with new entropy mixed in via the next DH shared secret.

        Private keys, probably not. WhatsApp is E2EE, meaning your device generates the private key with the OS's CSPRNG. As I also said above, exfiltration of signing keys might allow a MITM, but that's still possible to detect, e.g. if you RE the client and spot the code that does it.
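        A minimal hash-ratchet sketch of the "new key per message" property (illustrative, not Signal's or WhatsApp's actual code; the real double ratchet also mixes in fresh DH output):

```python
# Toy symmetric ratchet: each step derives a one-time message key and a
# new chain key, and the old chain key is discarded. A replayed
# ciphertext is useless because its key no longer exists anywhere.
import hashlib, hmac

def ratchet_step(chain_key):
    msg_key = hmac.new(chain_key, b"msg", hashlib.sha256).digest()
    next_chain = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    return next_chain, msg_key

chain = b"shared-root-secret"
chain, k1 = ratchet_step(chain)
chain, k2 = ratchet_step(chain)
chain, k3 = ratchet_step(chain)
assert len({k1, k2, k3}) == 3  # every message gets a fresh key
```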

        • TurdF3rguson 2 days ago ago

          Wouldn't ratchet keys prevent MITM too? In other words, if the MITM has your keys and decrypts your message, then your keys are out of sync from then on. Or do I misunderstand that?

          • maqp a day ago ago

            The ratchets would have different state, yes. The MITM would mix different entropy into the keys' states. It's only detectable if the MITM ever stops. But since the identity-key exfiltration only needs to happen once per lifetime of the installation (longer if the key is backed up), the MITM could just continue forever, since running the protocol on the server costs just a few cycles. You can then choose whether to read the messages or just ignore them.

            One interesting way to detect this would be to observe the sender's outgoing and the recipient's incoming ciphertexts inside the client-to-server TLS, which users themselves can MITM. Since the ratchet states differ, so do the keys, and thus, under the same plaintext, so do the ciphertexts. That would be a really easy way to detect a MITM.

    • digdigdag 2 days ago ago

      > We didn't review the entire source code

      Then it's not fully investigated. That should put any assessments to rest.

      • 3rodents 2 days ago ago

        By that standard, it can never be verified, because what is running and what was reviewed could always differ. Reviewing the relevant elements is as meaningful as reviewing all the source code.

        • giancarlostoro 2 days ago ago

          Or they could even take out the backdoor code and then put it back in after review.

          • hedora 2 days ago ago

            This is why signal supports reproducible builds.
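            The check a reproducible build enables is just a digest comparison: rebuild from the published source and diff against the shipped binary. A sketch (the byte strings stand in for real build artifacts):

```python
# Reproducible builds: identical source + toolchain must yield identical
# bytes, so anyone can verify the shipped binary matches a local rebuild.
import hashlib

def digest(artifact):
    return hashlib.sha256(artifact).hexdigest()

shipped  = b"app built by the vendor"
rebuilt  = b"app built by the vendor"          # local rebuild from source
tampered = b"app built by the vendor + patch"  # e.g. backdoor added post-review

assert digest(rebuilt) == digest(shipped)   # review of the source covers the binary
assert digest(tampered) != digest(shipped)  # any hidden patch changes the digest
```

            This is exactly what defeats the "take out the backdoor for review, put it back after" move.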

            • pdpi 2 days ago ago

              In this day and age, in a world with Docker and dev containers and such, it's kind of shocking that reproducible builds aren't table stakes.

            • LtWorf a day ago ago

              Does it still require the gigantic binary blob?

          • taneq 2 days ago ago

            Ah yes, the Volkswagen solution.

        • dangus 2 days ago ago

          Let’s be real: the standard is “Do we trust Meta?”

          I don’t, and don’t see how it could possibly be construed to be logical to trust them.

          I definitely trust a non-profit open source alternative a whole lot more. Perception can be different than reality but that’s what we’ve got to work with.

      • ghurtado 2 days ago ago

        I have to assume you have never worked on security cataloging of third party dependencies on a large code base.

        Because if you had, you would realize how ridiculous it is to state that app security can't be assessed until you have read 100% of the code.

        That's like saying "well, we don't know how many other houses in the city might be on fire, so we should let this one burn until we know for sure"

        • fasbiner 2 days ago ago

          What you are saying is empirically false. Change in a single line of executed code (sometimes even a single character!) can be the difference between a secure and non-secure system.

          This must mean that you have been paid not to understand these things. Or perhaps you would be punished at work if you internalized reality and spoke up. In either case, I don't think your personal emotional landscape should take precedence over things that have been proven and are trivial to demonstrate.

          • JasonADrury 2 days ago ago

            > Change in a single line of executed code (sometimes even a single character!) can be the difference between a secure and non-secure system.

            This is kind of pointless; nobody is going to audit every single instruction in the Linux kernel or any complex software product.

        • jokersarewild 2 days ago ago

          It sounds like your salary has depended on believing things like a partial audit is worthwhile in the case that a client is the actual adversary.

          • charcircuit 2 days ago ago

            Except Meta is not an adversary. They are aligned with people who want private messaging.

      • Barrin92 2 days ago ago

        As long as the client-side encryption has been audited, which to my understanding is the case, it doesn't matter. That is literally the point of encryption: communication across adversarial channels. Unless you think Facebook has broken the laws of mathematics, it's impossible for them to decrypt the content of messages without the users' private keys.

        • maqp 2 days ago ago

          Well, the thing is, the key-exfiltration code would probably reside outside the TCB. It's not particularly hard to have some function grab the signing keys and send them to the server. Then you can impersonate the user in a MITM. That exfiltration is one-time, and it's quite hard to recover from.

          I'd much rather not have blind faith in WhatsApp doing the right thing, and instead just use Signal, so I can verify myself that its key management is doing only what it should.

          Speculating over the correctness of the E2EE implementation isn't productive; the metadata leak we know Meta takes full advantage of is reason enough to stick to proper platforms like Signal.

          • jcgl 2 days ago ago

            > That exfiltration is one-time and it's quite hard to recover from.

            Not quite true with Signal's double ratchet though, right? Because keys are routinely getting rolled, you have to continuously exfiltrate the new keys.

            • maqp 2 days ago ago

              No, I said signing keys. If you're doing the MITM all the time, because there's no alternative path to route ciphertexts, you get to generate all those double-ratchet keys. And then you have a separate ratchet for the other peer in the opposite direction.

              Last time I checked, WhatsApp features no fingerprint-change warnings by default, so users will not even notice if you MITM them. The attack I described is for situations where the two users have enabled non-blocking key-change warnings and try to compare the fingerprints.

              Not saying this attack happens, by any means. Just that it is theoretically possible, and leaves the smallest trail. Which is why it helps that on Signal you can verify it's not exfiltrating your identity keys.

              • jcgl a day ago ago

                Ah right, I didn't think about just outright MitMing from the get-go. If WhatsApp doesn't show the user anything about fingerprints, then yeah, that's a real hole.

          • subw00f 2 days ago ago

            Not that I trust Facebook or anything but wouldn’t a motivated investigator be able to find this key exfiltration “function” or code by now? Unless there is some remote code execution flow going on.

            • impure-aqua 2 days ago ago

              WhatsApp performs dynamic code loading from memory; GrapheneOS detects it when you open the app, and blocking it causes the app to crash during startup. So we know that static analysis of the APK is not giving us the whole picture of what actually executes.

              This DCL could be fetching some forward_to_NSA() function from a server and registering it to be called on every outgoing message. It would be trivial to hide in tcpdumps. The best approach would be tracing with Frida and looking at syscalls to try to isolate what is actually being loaded, but it is also trivial for apps to detect they are being debugged and conditionally avoid loading the incriminating code. That code would only run in environments where the interested parties are sure there is no chance of detection, which is enough of the endpoints that even if you personally can set off the anti-tracing conditions without falling foul of whatever attestation Meta likely has going on, everyone you text will be participating unknowingly in the dragnet anyway.

              • maqp 2 days ago ago

                "Many forms of dynamic code loading, especially those that use remote sources, violate Google Play policies and may lead to a suspension of your app from Google Play."

                https://developer.android.com/privacy-and-security/risks/dyn...

                I wonder if that would deter Meta.

                • monocasa 2 days ago ago

                  Some apps have always been more equal than others.

              • oofbey 2 days ago ago

                I don’t know these OSes well enough. Can you MitM the dynamic code loads by adding a CA to the OS’s trusted list? I’ve done this in Python apps, because there are only 2 or 3 places it might check to verify a TLS cert.

            • maqp 2 days ago ago

              >Not that I trust Facebook or anything but wouldn’t a motivated investigator be able to find this key exfiltration “function” or code by now?

              Yeah, I'd imagine it would have been found by now. Then again, who knows when they'd add it, and whether some future update removes it. Google isn't scanning every line of every version. I prefer to eliminate this kind of 5D guesswork categorically, and just use FOSS messaging apps.

        • hn_throwaway_99 2 days ago ago

          The issue is what the client app does with the information after it is decrypted. As Snowden remarked after he released his trove, encryption works, and it's not like the NSA or anyone else has some super secret decoder ring. The problem is that endpoint security is borderline atrocious and an obvious Achilles heel - the information has to be decoded in order to display it to the end user, so that's a much easier attack vector than trying to break the encryption itself.

          So the point other commenters are making is that you can verify all you want that the encryption is robust and secure, but that doesn't mean the app can't just send a copy of the info to a server somewhere after it has been decoded.

  • coppsilgold 2 days ago ago

    No closed-source E2EE client can be truly secure because the ends of e2e are opaque.

    Detecting backdoors is only truly feasible with open source software, and even then it can be difficult.

    A backdoor can be a subtle remote-code-execution "vulnerability" that can only be exploited by the server. If used carefully, and if it exfiltrates data inside expected client-server communications, it can be all but impossible to detect. This approach also makes it likely that almost no insider will even be aware of it: it could be a small patch applied during the build process, or to the binary itself (for example, a bounds-check branch). This is also another reason why reproducible builds are a good idea for open source software.

    • TZubiri 2 days ago ago

      With all due respect to Stallman, you can actually study binaries.

      The claim Stallman would make (after chastising you for an hour for saying Open Source instead of Free Software) is that closed (proprietary) software is unjust. But in the context of security, the claim would be limited to Free Software being capable of being secure too.

      You may be able to argue that Open Source reduces risk in threat models where the manufacturer is the attacker, but in any other threat model, security is an advantage of closed source. It's automatic obfuscation.

      There are a lot of advantages to Free Software; you don't need to make them up.

      • sigmoid10 2 days ago ago

        This. Closed source doesn't stop people from finding exploits, in the same way that open source doesn't magically make people find them. The Windows kernel is proprietary and closed source, but people constantly find exploits in it anyway. What matters is that there is a large audience that cares about auditing. OTOH, if Microsoft really wanted to sneak in a super-hard-to-detect spyware exploit, they probably could; but so could the Linux kernel devs. Some exploits have been openly sitting in the Linux kernel for more than a decade despite everyone being able to audit it in theory. Who's to say they weren't planted by some three-letter agency that coerced a developer? Relying on either approach is pointless anyway. IT security is not a single means to all ends. It's a constant struggle between safety and usability at every single level, from raw silicon all the way to user-land.

      • tptacek 2 days ago ago

        It's weird to me that it's 2026 and this is still a controversial argument. Deep, tricky memory-corruption exploit development is done on closed-source targets routinely, and the kind of backdoor/bugdoor people conjure in threads about E2EE is much simpler than those bugs.

        It was a pretty much settled argument 10 years ago, even before the era of LLVM lifters, but post-LLM the standard-of-care practice is often full recompilation and execution.

      • objclxt 2 days ago ago

        > in any other threat model, security is an advantage of closed source

        I think there's a lot of historical evidence that doesn't support this position. For instance, Internet Explorer was generally agreed by all to be a much weaker product from a security perspective than its open source competitors (Gecko, WebKit, etc).

        Nobody was defending IE from a security perspective because it was closed source.

      • Ajedi32 6 hours ago ago

        I think "manufacturer is the attacker" is precisely the threat people are most worried about.

        And yes you can analyze binary blobs for backdoors and other security vulnerabilities, but it's a lot easier with the source code.

      • refulgentis 2 days ago ago

        This comment comes across as unnecessarily aggressive and out of nowhere (Stallman?), it's really hard to parse.

        Does this rewording reflect its meaning?

        "You don't actually need code to evaluate security, you can analyze a binary just as well."

        Because that doesn't sound correct?

        But that's just my first pass, at a high level. Don't wanna overinterpret until I'm on surer ground about what the dispute is. (i.e. don't want to mind read :) )

        The steelman for my current understanding is limited to "you can check if it writes files/accesses network, and if it doesn't, then by definition the chats are private and it's secure", which sounds facile. (Presumably something is being written somewhere for the whole chat thing to work; it can't be pure P2P, because someone's app might not be open when you send.)

        • TZubiri 2 days ago ago

          https://www.gnu.org/philosophy/free-sw.html

          Whether the original comment knows it or not, Stallman greatly influenced the very definition of Source Code, and the claim being made here is very close to Stallman's freedom to study.

          >"You don't actually need code to evaluate security, you can analyze a binary"

          Correct

          >"just as well"

          No, of course analyzing source code is easier and analyzing binaries is harder. But it's still possible ("feasible" is the word used by the original comment).

          >Steelman for my current understanding is limited to "you can check if it writes files/accesses network, and if it doesn't, then by definition the chats are private and its secure",

          I didn't say anything about that. I mean, those are valid tactics as part of a wider toolset, but I specifically said binaries, because the binary maps one-to-one with the source code: if you can find something in the source code, you can find it in the binary, and vice versa. Analyzing file accesses and network traffic, or runtime analysis of any kind, is mostly orthogonal to source-code/binary static analysis; the only difference is whether you have a debug map to the source code or to the machine code.

          This is a very central conflict of Free Software. What I want to make clear is that Free Software advocates refuse to study closed-source software not because it is impossible, but because it is unjustly hard. Free Software never claims it is impossible to study closed-source software; it claims that source-code access is a right, and its adherents prefer to reject closed-source software, and thus never need to perform binary analysis.

          • LoganDark 19 hours ago ago

            Binaries absolutely don't map one-to-one with source code. Compilers optimize out dead code, elide entire subroutines to single instructions, perform loop unrolling and auto-vectorization, and many many more optimizations and transformations that break exact mapping.

            • TZubiri 17 hours ago ago

              That is true, but I don't think I ever said that binaries map one-to-one with source code.

              I was referring to source-code-to-binary maps: files that map binary locations to source-code locations. In C (gcc/gdb) these are debug objects; they are also used by gdb-style debuggers like Python's pdb and Java's jdb. They also exist in JS/TS when using minifiers or React, so that you can debug in production.

      • singpolyma3 2 days ago ago

        I was with you until you somehow claimed obfuscation can improve security, against all historical evidence even pre-computers.

        • Arch-TK 2 days ago ago

          Obscurity is a delay tactic that raises the time cost of an attack. It is true that obscurity is not a security feature, but it is also true that increasing the time cost of attacking you is a form of deterrent. If you are not also secure in the conventional sense, it only buys you time until someone puts in the effort to figure out what you are doing and owns you. And you had better have a plan for when that time comes. But everyone needs time, because bugs happen, and you need that time to fix them before they are exploited.

          The difference between obscurity and a secret (password, key, etc.) is the difference between less than a year to figure it out and a year or more.

          There is a surprising amount of software out there with obscurity preventing some kind of "abuse", and in my experience these protections are not that strong, but even so it takes someone like me hours to reverse engineer them, and in many cases I am the first person to do that after years of nobody else bothering.

        • mike_d 2 days ago ago

          This is a tired trope. Depending exclusively on obfuscation (security by obscurity) is not safe. Maintaining confidentiality of things that could aid in attacks is absolutely a defensive layer and improves your overall security stance.

          I love the Rob Joyce quote that explained why TAO was so successful: "In many cases we know networks better than the people who designed and run them."

        • TZubiri 2 days ago ago

          I think you are conflating:

          Is an unbreakable security mechanism

          with

          Improves security

          Anything that complicates the attacker's job improves security, at least grossly. That said, there might be counter-effects that make it a net loss or net neutral.

      • parhamn 2 days ago ago

        Explain how you detect a branched/flagged sendKey (or whatever it would be called) call in the compiled WhatsApp iOS app?

        It could be interleaved in any of the many analytics tools in there too.

        You have to trust the client in E2E encryption. There's literally no way around that. You need to trust the client's OS (and in some cases, other processes) too.

        • JasonADrury 2 days ago ago

          >Explain how you detect a branched/flagged sendKey (or whatever it would be called) call in the compiled WhatsApp iOS app?

          Vastly easier than spotting a clever bugdoor in the source code of said app.

          • refulgentis 2 days ago ago

            Putting it all on the table: do you agree with the claim that binary analysis is just as good as source code analysis?

            • JasonADrury 2 days ago ago

              Binary analysis is vastly better than source code analysis; reliably detecting bugdoors via source code analysis tends to require an unrealistically deep knowledge of compiler behavior.

            • anonymars 2 days ago ago

              Empirically it doesn't look like there's a meaningful difference, does it?

              Not having the source code hasn't stopped people from finding exploits in Windows (or even hardware attacks like Spectre or Meltdown). Having source code didn't protect against Heartbleed or log4j

              I'd conclude it comes down to security culture (look how things changed after the Trustworthy Computing initiative, or OpenSSL vs LibreSSL) and "how many people are looking" -- in that sense, maybe "many eyes [do] make bugs shallow" but it doesn't seem like "source code availability" is the deciding factor. Rather, "what are the incentives" -- both on the internal development side and the external attacker side

            • tptacek 2 days ago ago

              I don't agree with "vastly better", but it's arguable in both direction and magnitude. I don't think you could plausibly argue that binary analysis is "vastly harder".

            • TZubiri 2 days ago ago

              Nono, analyzing binaries is harder.

              But it's still possible. And analyzing source code is still hard.

      • oofbey 2 days ago ago

        What’s the state of the art of reverse engineering source code from binaries in the age of agentic coding? Seems like something agents should be pretty good at, but haven’t read anything about it.

        • roughly 2 days ago ago

          I think there’s a good possibility that the technology that is LLMs could be usefully trained to decode binaries as a sort of squint-and-you-can-see-it translation problem, but I can’t imagine, eg, pre-trained GPT being particularly good at it.

        • JasonADrury 2 days ago ago

          I've been working on this, the results are pretty great when using the fancier models. I have successfully had gpt5.2 complete fairly complex matching decompilation projects, but also projects with more flexible requirements.

        • TZubiri 2 days ago ago

          Nothing yet; agents analyze code, which is textual.

          The way they analyze binaries now is through the textual interfaces of command-line tools, and the tools used are mostly the ones the foundation models supported at training time. Mostly you can't teach a model new tools at inference; they must be supported at training. So most providers are focused on the same tools and benchmark against them, and binary analysis is not in the zeitgeist right now; the focus is on producing code more than on understanding it.
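
          As a rough illustration of the "textual interface" point, using Python's stdlib `dis` as a stand-in for objdump/Ghidra-style tooling (any such tool ultimately hands the model a text listing to reason over):

```python
import dis
import io

def add(a, b):
    return a + b

# Capture the disassembly as text, the only form an LLM can consume.
buf = io.StringIO()
dis.dis(add, file=buf)
listing = buf.getvalue()

# The listing names opcodes (BINARY_OP on Python 3.11+, BINARY_ADD earlier).
assert "BINARY" in listing
```

          The model never sees the binary itself, only whatever text the tool chooses to emit, which is why the quality of the tooling bounds the quality of the analysis.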

          • oofbey 2 days ago ago

            The entire MCP ecosystem disagrees with your assertion that “you can’t teach it new tools at inference.” Sorry you’re just wrong.

            • TZubiri 2 days ago ago

              Nono, you of course CAN teach tool use at inference, but it's different than doing so at training time, and the models are trained to call specific tools right now.

              Also MCP is not an Agent protocol, it's used in a different category. MCP is used when the user has a chatbot, sends a message, gets a response. Here we are talking about the category of products we would describe as Code Agents, including Claude Code, ChatGPT Codex, and the specific models that are trained for use in such contexts.

              The idea is that of course you can tell it about certain tools in inference, but in code production tasks the LLM is trained to use string based tools such as grep, and not language specific tools like Go To Definition.

              My source on this is Dax who is developing an Open Source clone of Claude Code called OpenCode

              • oofbey 2 days ago ago

                Claude code and cursor agent and all the coding agents can and do run MCP just fine. MCP is effectively just a prompt that says “if you want to convert a binary to hex call the ‘hexdump’ tool passing in the filename” and then a promise to treat specially formatted responses differently. Any modern LLM that can reason and solve math problems will understand and use the tools you give it. Heck I’ve even seen LLMs that were never trained to reason make tool calls.

                You say they’re better with the tools they’re trained on. Maybe? But if so not much. And maybe not. Because custom tools are passed as part of the prompt and prompts go a long way to override training.

                LLMs reason in text. (Except for the ones that reason in latent space.) But they can work with data in any file format as long as they’re given tools to do so.

                • TZubiri a day ago ago

                  Here's the specific source on this matter:

                  https://youtu.be/VsTbgYawoVc?si=6ZE83umppNCz9h-a&t=1021

                  "The models today, they are tuned to call specific tools. We've played with a lot of tools; you can hand the model a bunch of tools it's never seen before and it just doesn't call them. There's something to the post-training process being catered to certain sets of tools. So Anthropic's Claude 4, and Claude 3.7 before it, those models are the best at calling tools from a programming standpoint. They'll actually keep trying and going for it. Other models like Gemini 2.5 can be really good, but they don't call tools very eagerly. So we are in a state right now where we kind of have to provide the set of tools that the model expects.

                  I don't think that'll always be the case, but we've given it a bunch of LSP tools. I've played with giving it, say, 'go to definition' and 'find references', and it just doesn't use them. I mean, you can get it to use them if you ask, but it doesn't default to thinking that way. I think that'll change."

                  He then goes on to theorize it's the System Prompt, so open models like Llama where you can customize the system prompt might have an advantage. (I think API models still have a prebaked prompt, not sure). Additionally, even when you control the prompt, he argues there's a (soft) limit to the amount of tools it can handle.

                  Personally, I think a common error with LLMs is conflating what is technically possible with what works in practice. Here the argument is that custom tools and MCPs are possible but limited: you often need to explicitly tell the model to use such a tool, and you can only have a small number of custom tools. Compared with tools specified in the system prompt, and tools in the training set that the model is fine-tuned on, they're a whole different category; the native tools are simply capable of much more autonomy.

                  A similar error I've seen is conflating the context length to the capacity to remember. That a model has a 1M token window means that it could remember something, but it would be a categorical mistake to claim or depend on the model remembering stuff in a 1M token conversation.

                  There's a lot of nuance in these discussions.

                  • oofbey a day ago ago

                    Another common mistake today is to observe one LLM failing to do something in a single situation, and to generalize that observation to "LLM's are incapable of doing this thing." Or "they're not good at this kind of thing" which is what you're repeating here. This logic underlies a lot of AI skepticism. Sure you and they aren't skeptics and acknowledge this will get better. But I think you're over-indexing on a specific problem they observed. Plus to blame the LLM when they haven't optimized the system prompt is IMHO quite silly - it's kind of like "did you read the instructions you were giving it?". What I think they should say is "I tried this and it didn't work super well out of the box. I'm sure there's some way to fix it, but I haven't found it yet." Instead of blaming the model intrinsically.

                    In contrast, I've seen coding agents figure out extremely complex systems problems that are clearly outside of their training set - by using tools and interacting with complex environments, and reasoning and figuring it out.

                    Plus, "tools" can be multi-layered. You give an agent a "bash" tool, and voila, it has access to every piece of software ever written. So I don't think any of these arguments apply in the slightest to the question of de-compiling code.

                    • TZubiri 19 hours ago ago

                      >"What’s the state of the art of reverse engineering source code from binaries in the age of agentic coding?"

                      This is the original comment I was responding to, we were talking about state of the art Agentic models. So not generalizing to other scenarios.

                      >Sure you and they aren't skeptics and acknowledge this will get better

                      I think this is a common bipartisan trap where you lose a lot of nuance. And it's imprecise, you don't know whether I'm a skeptic or not. It's like reading a nuanced opinion and trying to see if they are Republican so you can agree or Democrats so you can disagree.

                      >Plus to blame the LLM when they haven't optimized the system prompt is IMHO quite silly - it's kind of like "did you read the instructions you were giving it?"

                      I think the context here is that when using agentic tools like Claude Code, you don't control the system prompt. You could write your own prompts and use naked API calls, but that's always more expensive (subscriptions are subsidized), and I'm not sure what the quality would be.

                      The bottom line is that API calls, where you can fully control the system prompt, are more expensive. And using your OpenAI/Anthropic subscription has a fixed cost. So in that context they don't control the system prompt.

                      Even in cases where you could control the system prompt and use the API, there's the fact that some models (the state of the art) are fine tuned for specific tool use, so they have a bias towards fine tuned tool use. The claim is not that they are "incapable of doing X thing" it's that it's a bias towards the usage that was known at train-time or fine-tune time, instead of at inference which is much weaker. Nuance.

                      >Instead of blaming the model intrinsically.

                      Again, not blaming or being a skeptic here, just analyzing the state of the art and its current weakness, it's likely that these things are going to be improved in the next generation, this is going to move fast, if you conflate any criticism of the tools with "skepticism" you are going to miss the nuance.

                      >In contrast, I've seen coding agents figure out extremely complex systems problems that are clearly outside of their training set - by using tools and interacting with complex environments, and reasoning and figuring it out.

                      Yeah for sure, I'll give you a concrete example on this point where we agree. I made a model download a webdriver for a browser and taught it to use the webdriver to open the site, take screenshots, and evaluate how it looks visually, in addition to actually clicking buttons and navigating it. This is a great improvement when the traditional approach is just to generate frontend code and trust that it works (which to be fair, sometimes works great, but you know, it's better if it see that.)

                      And it works, until it doesn't and I have to remind it that it can do that. It's just a bias. If they would have trained the model with WebDriver tool access, the model would use it much more (and perhaps they are already doing that and we will see it in the next model.)

                      The main thesis is that instructions taught at train time 'work better' than at fine-tune time which in turn are stronger than 'inference'. To be very specific during inference tool use is much more likely immediately after mentioning it, it might be stronger more consistently at the system prompt, (but it competes with other system prompt instructions and it's still inference based). To say nothing of the costs associated with adding to inference tokens, compared to essentially free training/finetune biases. I don't think anyone disagrees that stuff you teach the model during training has better quality and less cost than stuff you teach at inference.

                      I think playing around with logit biases is an underrated tool to increase and control frequency of certain tools, but it doesn't seem that's being used much in this generation of vibecode tools, the interface is almost entirely textual (with some /commands starting to surface). Maybe the next generation will have the option to configure some specific parameters instead of entirely relying on textual prompting.

        • refulgentis 2 days ago ago

          Agents are sort of irrelevant to this discussion, no?

          Like, it's assuredly harder for an agent than having access to the code, if only because there's a theoretical opportunity to misunderstand the decompile.

          Alternatively, it's assuredly easier for an agent because given execution time approaches infinity, they can try all possible interpretations.

          • oofbey 2 days ago ago

            Agents meaning an AI iteratively trying different things to try to decompile the code. Presumably in some kind of guess and check loop. I don’t expect a typical LLM to be good at this on its first attempt. But I bet Cursor could make a good stab at it with the right prompt.

            • TZubiri a day ago ago

              Cursor is a bit dated at this point; the state of the art is Claude Code and its imitators (ChatGPT Codex, OpenCode).

              Devin is also going very strong, but it's a bit quieter and growing in enterprises (and pretty sure it uses Claude Opus 4.5, and possibly Claude Code itself). In fact, Clawdbot/Moltbot/OpenClaw was itself created with Devin.

              The big difference is the autonomy these models have (Devin more than Claude Code). Cursor was meant to work in an IDE, and that was a huge strength during the 12 months when the models still weren't strong enough to work autonomously, but they are getting to the point where that's becoming a weakness. Models like Devin trade slower acceleration for a higher top speed. My chips are on Devin.

    • JasonADrury 2 days ago ago

      >Detecting backdoors is only truly feasible with open source software and even then it can difficult.

      This is absurd. Detecting backdoors is only truly feasible on binaries; there's no way you can understand compiler behavior well enough to spot hidden backdoors in source code.

  • prakashn27 2 days ago ago

    Ex-WhatsApp engineer here. The WhatsApp team puts so much effort into making end-to-end encrypted messaging possible. From my time there, I know for sure it is not possible to read the encrypted messages.

    From a business standpoint they don't have to read these messages, since the WhatsApp Business API provides the necessary funding for the org as a whole.

    • 46493168 2 days ago ago

      Facebook has never been satisfied with direct funding. The value is in selling attention and influencing users’ behavior.

      • neuralkoi 2 days ago ago

        This is why most tech founders who go big never retire, even as billionaires. The power they gain, only the wisest would refuse.

    • maqp 2 days ago ago

      Nice! Hey, question: I noticed Signal at one point had the same address on the Google Play Store as WA. Can you tell us whether the Signal devs shared office space with WA during the integration of the Signal protocol? Related to that, did they hold the WA devs' hands during the process, meaning that at least at the time it was sort of greenlighted by Moxie or something? If this is stuff under NDA I fully understand, but anything you can share I'd love to hear.

    • blindriver 2 days ago ago

      It only takes one engineer in all the teams at Whatsapp that has different directives to make all your privacy work completely useless.

      • rustyhancock 2 days ago ago

        The legal and liability protection these messaging services get from E2EE is far too big to break it.

        Besides, I get the feeling we're so cooked these days by marketing that when an advert freaks me out by matching what I was just thinking about, it's probably because they made me think about it in the first place.

        Or maybe I need to update my meds?

      • dagmx 2 days ago ago

        How would you hide that? Unless you're assuming nobody ever has to fix bugs or audit code to find it, and there's some kind of closed-off area of the code that nobody thinks is suspicious. Or you maintain a complete second set of the app's core libs that a few clandestine folks can access, and then hope nobody notices that the binaries don't line up and that crashes are being logged in obscure places.

      • philipallstar 2 days ago ago

        Assuming there's no code review or audit, I suppose.

      • cactusfrog 2 days ago ago

        I would be surprised if the code was hidden from other engineers.

        • maqp 2 days ago ago

          How are you hiding it from IDA pro though?

    • yarauuta 2 days ago ago

      So how was Andreas Schjelderup caught sharing minor content?

    • M95D 2 days ago ago

      From what you know about WA, is it possible for the servers to MitM the connection between two clients? Is there a way for a client to independently verify the identity of the other client, such as by comparing keys (is it even possible to view them?), or comparing the contents of data packets sent from one client with the ones received on the other side?

      Thanks.

      • NoahZuniga 2 days ago ago

        No.

        WhatsApp uses key transparency. Anyone can check what the currently published keys for a user are, and be sure they get the same value as any other user. Specifically, your WA client checks that these keys are the right keys.

        Whatsapp has a blog post with more details available.
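
        A toy sketch of the idea (hypothetical names; not WhatsApp's actual construction, which uses an auditable Merkle-tree-style log): hash-chain the published key directory, so a server that shows one client a substituted key for another user changes the head that every client compares:

```python
import hashlib

def leaf_hash(user: str, key_hex: str) -> bytes:
    # Hash one (user, public key) directory entry.
    return hashlib.sha256(f"{user}:{key_hex}".encode()).digest()

def log_head(entries) -> bytes:
    # Hash-chain all entries; changing any entry changes the head.
    h = b"\x00" * 32
    for user, key_hex in entries:
        h = hashlib.sha256(h + leaf_hash(user, key_hex)).digest()
    return h

# Hypothetical directory published by the server.
directory = [("alice", "aa" * 32), ("bob", "bb" * 32)]
honest_head = log_head(directory)

# A server that substitutes Bob's key for one victim produces a
# different head, which clients comparing heads can detect.
tampered = [("alice", "aa" * 32), ("bob", "cc" * 32)]
assert log_head(tampered) != honest_head
```

        The real schemes add inclusion proofs so a client can check its own entry without downloading the whole directory, but the detection principle is the same.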

    • bossyTeacher 2 days ago ago

      > Ex-WhatsApp engineer here. WhatsApp team makes so much effort to make this end to end encrypted messages possible. From the time I worked I know for sure it is not possible to read the encrypted messages.

      None of this makes the point you want to make. Being a former engineer. The team making "so much effort". You "knowing for sure". As with so much in security, a single hole is all it takes for your privacy to pour out of your metaphorical bag of sand.

      • dangus 2 days ago ago

        It’s not even “a single hole,” it’s that companies can change their mind at any time.

        That person doesn’t work there anymore. For all we know Zuck could wake up one day and say “that’s it, we need the data and revenue from reading WhatsApp chats. Change our policy in the most low key way possible.”

        Honestly, it’s too tempting isn’t it? They have the largest conversation network out there.

        It doesn’t help that the company has just about zero trust built up among their customers. The whole dang company changed their name arguably to try to shed the “Facebook” baggage.

      • Taurenking 14 hours ago ago

        [dead]

    • NoImmatureAdHom 2 days ago ago

      The backups are either unencrypted by default or have keys held by Meta / your backup provider. I think this means three-letter agencies can see your chats, just with a slight delay.

      Another comment above mentions that you can recover conversation histories with just your phone number--if that's true then yup. The E2EE is all smoke and mirrors.

    • mike_d 2 days ago ago

      I have no doubt that rank-and-file engineers were not aware of any underlying functionality that allowed plain-text content to be read.

      Nobody would ever create a SendPlainTextToZuck() function that had to be called on every message.

      It would be as simple as using a built-in PRNG for client-side key generation and then surreptitiously leaking the initial state (dozens of bytes) once, inside a nonce or signature or something, when authenticating with the server.
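
      A toy sketch of that class of kleptographic leak (all names hypothetical; a real backdoor would hide the seed far less obviously, e.g. encrypted to an attacker key rather than embedded verbatim):

```python
import hashlib
import os

def backdoored_keygen(seed: bytes):
    # The "random" private key is derived entirely from a small seed...
    private_key = hashlib.sha256(b"key" + seed).digest()
    # ...and that seed rides along inside an innocent-looking auth nonce.
    nonce = os.urandom(8) + seed
    return private_key, nonce

def server_recovers_key(nonce: bytes) -> bytes:
    # Whoever knows the encoding strips the random padding and
    # re-derives the "private" key from the leaked seed.
    return hashlib.sha256(b"key" + nonce[8:]).digest()

seed = os.urandom(16)
priv, nonce = backdoored_keygen(seed)
assert server_recovers_key(nonce) == priv
```

      To an auditor, the nonce is just random-looking bytes unless they trace where every byte of it came from.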

      • oofbey 2 days ago ago

        I’ve often thought one of Zuck’s superpowers is in finding ways to get smart and moral people to do truly evil things. Sometimes it’s mind games. Sometimes it’s careful layers of obfuscation.

        Here it might be: this analytics package is dynamically loaded at runtime, because reasons. This abuse-flagging and review system is bundled with analytics, because reasons. This add-on reconfigures how the analytics package behaves at runtime and has a bunch of switches nobody remembers the reason for, but don't touch them, they're fragile.

    • ewuhic 2 days ago ago

      [flagged]

  • codethief 2 days ago ago

    Matthew Green's take from 3 days ago:

    > There’s a lawsuit against WhatsApp making the rounds today, claiming that Meta has access to plaintext. I see nothing in there that’s compelling; the whole thing sounds like a fishing expedition.

    https://bsky.app/profile/matthewdgreen.bsky.social/post/3mdg...

  • hedora 2 days ago ago

    None of the statements I’ve seen from Meta, people formerly involved in WhatsApp that chimed in here (thanks!), or the quotes from the investigation are incompatible with the whistleblowers’ allegations.

    At this point, I won’t trust anything short of this on the front page of an SEC filing, signed by zuck and the relevant management chain:

    “The following statement is material to earnings: Facebook has never (since E2EE was rolled out) and will never access messages sent through whatsapp via any means including the encryption protocol, application backdoor moderation access or backup mechanisms. Similarly, it does not provide third parties with access to the methods, and does not have the technical capability to do so under any circumstances.”

  • youknownothing 2 days ago ago

    Just to throw in a couple of possibly outlandish theories:

    1. as others have said, they could be collecting the encrypted messages and then trying to decrypt them using quantum computing; the Chinese have reportedly been trying to do this for many years now.

    2. with metadata and all the information from other sources, they could infer what a conversation is about without needing to decrypt it: if I visit a page (Facebook cookies, so they know), then I send a message to my friend John, and then John visits the same page (again, cookies), they can be pretty certain that the content of the message was me sharing the link.
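
    A minimal sketch of scenario 2, with entirely hypothetical event logs: the message content stays encrypted, but timing correlation alone links a page visit, a message, and a follow-up visit:

```python
from datetime import datetime, timedelta

# Hypothetical logs: page visits (via cookies) and message metadata.
visits = [("me", "example.com/article", datetime(2025, 1, 1, 12, 0)),
          ("john", "example.com/article", datetime(2025, 1, 1, 12, 7))]
messages = [("me", "john", datetime(2025, 1, 1, 12, 2))]

def inferred_shares(visits, messages, window=timedelta(minutes=10)):
    # If A visits a page, then messages B, and B visits the same page
    # shortly after, infer A probably shared the link -- no decryption.
    out = []
    for sender, recipient, t_msg in messages:
        for v_user, url, t_visit in visits:
            if v_user != sender or t_visit > t_msg:
                continue
            for w_user, w_url, t_follow in visits:
                if (w_user == recipient and w_url == url
                        and t_msg <= t_follow <= t_msg + window):
                    out.append((sender, recipient, url))
    return out

assert inferred_shares(visits, messages) == [("me", "john", "example.com/article")]
```

    The inference is probabilistic, of course, but at Meta's scale even a noisy signal like this aggregates into something useful.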

    • solenoid0937 2 days ago ago

      (1) made me chuckle. I've worked at nearly every FAANG including Meta. These companies aren't nearly as advanced or competent as you think.

      I no longer work at Meta, but in my mind a more likely scenario than (1) is: a senior engineer proposes a 'Decryption at Scale' framework solely to secure their E6 promo, and writes a 40-page Google Doc to farm 'direction' points for PSC. Five PMs repost this on Workplace to celebrate the "alignment" so they can also include it in their PSCs.

      The TL and PMs immediately abandon the project after ratings are locked because they already farmed the credit for it. The actual implementation gets assigned to an E4 bootcamp grad who is told by a non-technical EM to pivot 3 months in because it doesn't look like 'measurable impact' in a perf packet. The E4 gets fired to fill the layoff quota and everyone else sails off into the sunset.

    • instagib 2 days ago ago

      2) Enough metadata can reveal a person's life, habits, and location, which removes the need to analyze the actual bulky content of communications.

      They can also analyze the receiver's data, or the data of the receiver's contact tree, which is easier to access.

      The number of free or paid data sources is daunting.

    • wasabi991011 2 days ago ago

      Re: quantum computing: no chance. The scientific and engineering breakthroughs they would need are too outlandish; it would be like claiming China already had a 2026-level frontier model back in 2016.

    • petcat 2 days ago ago

      I think this is the most likely scenario. The US government is not necessarily trying to read the messages right now, in real-time. But it wants to read the messages at some point in the future.

      https://en.wikipedia.org/wiki/Utah_Data_Center

    • NoImmatureAdHom 2 days ago ago

      It's the backups. The backups aren't encrypted such that only the end-user has the key.

  • mrtksn 2 days ago ago

    I wonder how these investigations go. Are they just asking them whether it is true? Are they working with IT specialists to technically analyze the apps? Are they requesting source code that can be demonstrated to be the same code that runs on user devices, and then analyzing it?

    • TZubiri 2 days ago ago

      Anyone can audit the client binaries

    • mattmaroon 2 days ago ago

      That will be step 1. Fear of being caught lying to the government is such that this is usually enough. Presumably at least a handful of people would have to know about it, and nobody likes their job at Facebook enough to go to jail over it.

      But you never know.

      • hsuduebc2 2 days ago ago

        Companies lie to governments and the public all the time. I doubt that even if something were found and the case were lost, it would lead to prison or any truly severe punishment. No money was stolen and no lives were put at risk. At worst, it would likely end in a fine, and then it would be forgotten, especially given Meta’s repeated violations of user trust.

        The reality is that most users do not seem to care. For many, WhatsApp is simply “free SMS,” tied to a phone number, so it feels familiar and easy to understand, and the broader implications are ignored.

        • mattmaroon 2 days ago ago

          Martha Stewart went to jail for lying to the government. The fact that there would be no punishment is why they would tell the truth.

          The government is pretty harsh when they find out you lied under oath. Corporate officers do not lie to the government frequently.

    • RenThraysk 2 days ago ago

      Multiple governments will already know as they have analyzed and reverse engineered it.

  • cedws 2 days ago ago

    I said this in another recent HN thread but all encryption comes down to key management. If you don’t control the keys, something else does. Sometimes that’s a hardware enclave, sometimes it’s a key derivation algorithm, sometimes it’s just a locally generated key on the filesystem.

    If you never give WhatsApp a cryptographic identity then what key is it using? How are your messages seamlessly showing up on another device when you authenticate? It’s not magic, and these convenience features always weaken the crypto in some way.

    WhatsApp has a feature to verify the fingerprint of another party. How many people do you think use this feature, versus how many people just assume they're safe because they read that WhatsApp has E2EE?
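
    For illustration, a toy version of such a fingerprint check (not the actual WhatsApp/Signal safety-number algorithm): both parties derive the same short code from the two public keys, and a man-in-the-middle key swap changes it:

```python
import hashlib

def safety_code(key_a: bytes, key_b: bytes) -> str:
    # Order-independent digest over both public keys, rendered as
    # digit groups like the security codes shown in the app.
    material = b"".join(sorted([key_a, key_b]))
    digest = hashlib.sha512(material).hexdigest()
    digits = "".join(str(int(digest[i:i + 2], 16) % 10) for i in range(0, 24, 2))
    return " ".join(digits[i:i + 4] for i in range(0, len(digits), 4))

alice, bob = b"\x01" * 32, b"\x02" * 32
# Both parties compute the same string regardless of argument order...
assert safety_code(alice, bob) == safety_code(bob, alice)
# ...and a substituted key yields a different code.
assert safety_code(alice, b"\x03" * 32) != safety_code(alice, bob)
```

    The check only helps if both parties actually compare the codes out of band, which is exactly the step almost nobody performs.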

  • londons_explore 2 days ago ago

    I want whatsapp to decrypt the messages in a secure enclave and render the message content to the screen with a secure rendering pipeline, as is done with DRM'ed video.

    Compromise of the client side application or OS shouldn't break the security model.

    This should be possible with current API's, since each message could if needed simply be a single frame DRM'ed video if no better approach exists (or until a better approach is built).

    • Retr0id 2 days ago ago

      Signal uses the DRM APIs to mitigate threats like Microsoft Recall, but it doesn't stop the app itself from reading its own data.

      I don't really see how it's possible to mitigate client compromise. You can decrypt stuff in a secure enclave, but at some point the client has to pull it out and render it.

      • bogwog 2 days ago ago

        > I don't really see how it's possible to mitigate client compromise

        Easy: pass laws requiring chat providers to implement interoperability standards so that users can bring their own trusted clients. You're still at risk if your recipient is using a compromised client, but that's a problem that you have the power to solve, and it's much easier to convince someone to switch a secure client if they don't have to worry about losing their contacts.

        • xvector 2 days ago ago

          You seem to think the government wants your messages to be private and would "pass laws" to this effect.

          Methinks you put far too much faith in the government, at least from my understanding of the history of cybersecurity :)

        • palata 2 days ago ago

          > Easy: pass laws requiring chat providers to implement interoperability standards so that users can bring their own trusted clients.

          In Europe that's called the Digital Markets Act.

          • digiown 2 days ago ago

            That's not permissionless afaik. "Users" can't really do it. It's frustrating that all this legislation appears to view it as a business problem rather than a private individual's right to communicate securely.

            • palata 2 days ago ago

              Right, I get what you mean.

              But in a way, I feel like sometimes it makes sense not to open everything completely. Take a messaging app: it makes sense not to make it a free-for-all. As a company, if I let you interoperate with servers that I pay for and maintain, I guess it makes sense that I may want to check who you are first. I think?

              • digiown 2 days ago ago

                We probably can't make it free for all, but for something like a messaging app, we also need to recognize that it isn't optional to function in society. It should be regulated more like a utility:

                - Facebook can still control the identity, but there needs to be a legal recourse for getting banned, and their policies can't discriminate against viewpoints, for example

                - The client specs should be open so that an alternate client can be implemented (sort of like how Telegram is currently)

                • tptacek 2 days ago ago

                  Telegram isn't E2EE by default in the first place (and isn't E2EE for group messages at all).

                  • digiown 2 days ago ago

                    I meant the platform-openness aspect: that you are allowed to use alternate clients, while the identity is centralized. E2EE is largely independent of this choice.

                • palata 2 days ago ago

                  > but there needs to be a legal recourse for getting banned

                  Agreed.

                  > The client specs should be open so that an alternate client can be implemented

                  An example that comes to mind is Signal, where they don't want that. They get a lot of criticism for it of course, but I think the reasoning actually makes sense: in terms of security, allowing third-party clients is a risk. If your threat model is "people who risk their lives using it", it makes sense, right?

                  Under the EU's Digital Markets Act, WhatsApp is considered a gatekeeper (Signal is not) and has to be open to interoperability. It seems like they do audit the implementations in order to make sure that the security is not too bad. Which makes sense again, but has a cost. For Meta, that's fine. For Signal... I don't know.

                  Also WhatsApp will - if I understand correctly - make it very clear that you are talking to someone on a third-party client (and again they get a lot of criticism for that). But I think it makes sense... If WhatsApp was so open that every second client was pretty much a spyware, that would defeat the purpose of E2EE messaging.

                  Not that I strongly disagree, but just saying that it seems... complicated.

                  • digiown 2 days ago ago

                    I was intending that the alternate client should exist to function as an escape hatch. I fully expect most people will still use the default one, just like how people used the official reddit/telegram client when third party ones were available. The existence of an alternative constrains how much Facebook can enshittify the experience.

                    E2EE is about secure transport between the endpoints. What happens to the message after the endpoint is not something an app can feasibly enforce. Having control of the clients can at most do things like enforcing deletes, which IMO is not a good idea anyway.

                    > every second client was pretty much a spyware

                    Very few people will actually use one since the official app won't be outwardly too hostile, and those who do should be sufficiently discerning.

                    • palata a day ago ago

                      I don't think that it can work like that. If you make it fully open, you don't know what can happen. It cannot improve the security, it can only worsen it.

                      Suddenly you go from people using WhatsApp to people using random apps that you have no idea about, I think it's a step backward.

                      The "escape hatch", IMO, is an alternative messenger (like Signal). If Meta makes WhatsApp really bad, people can just switch to Signal. It's infinitely easier than moving away from AWS or the Microsoft Suite. The lock-in effect is really just that people can't be arsed to install it.

                      I think that the mere existence of Signal already forces Meta to keep WhatsApp relatively good. And to be fair, around me people like WhatsApp better because it has features they want and that Signal doesn't have.

      • maqp 2 days ago ago

        >I don't really see how it's possible to mitigate client compromise.

        You could of course offload plaintext input and output, along with cryptographic operations and key management, to separate devices that interact with the networked device unidirectionally over hardware data diodes that prevent malware from getting in or keys from getting out.

        Throw in some v3 Onion Services for p2p ciphertext routing, and decent ciphersuite and you've successfully made it to at least three watch lists just by reading this. Anyway, here's one I made earlier https://github.com/maqp/tfc
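        The split-device idea can be sketched in miniature. In this toy Python model (an illustration only, not TFC's actual protocol), the one-way links are queues each side can only write to or only read from, standing in for hardware data diodes, and a throwaway XOR stream cipher stands in for real encryption:

```python
# Toy sketch of the split-device architecture: the networked "relay"
# machine only ever sees ciphertext, because encryption happens on a
# transmit-only device and decryption on a receive-only device.
# All names are illustrative; the XOR cipher is NOT real crypto.
import hashlib
import secrets
from queue import Queue


def keystream_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy XOR stream cipher keyed via SHA-256 (illustration only)."""
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))


# One-way links: TX device -> relay, relay -> RX device.
tx_to_relay: Queue = Queue()
relay_to_rx: Queue = Queue()

shared_key = secrets.token_bytes(32)  # pre-shared between TX and RX devices

# Transmit-only device: holds keys, pushes ciphertext outward.
tx_to_relay.put(keystream_encrypt(shared_key, b"meet at noon"))

# Networked relay: forwards ciphertext, never holds a key.
relay_to_rx.put(tx_to_relay.get())

# Receive-only device: decrypts; it has no path back to the network.
plaintext = keystream_encrypt(shared_key, relay_to_rx.get())
```

        The point is structural: the machine in the middle never holds a key, and the receive side has no channel back out for malware to exfiltrate through.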

      • londons_explore 2 days ago ago

        > don't really see how it's possible to mitigate client compromise.

        Think of the way DRM'ed video is played. If the media player application is compromised, the video data is still secure. That's because the GPU does both the decryption and rendering, and will not let the application read it back.

        • gruez 2 days ago ago

          That's not what signal's doing though. It's just asking the OS nicely to not capture screen contents. There are secure ways of doing media playback, but that's not what signal's using.

        • Retr0id 2 days ago ago

          Video decryption+decoding is a well-defined enough problem that you can ship silicon that does it. You can't do the same thing for the UI of a social media app.

          You could put the entire app within TrustZone, but then you're not trusting the app vendor any less than you were before.

          • Retr0id 2 days ago ago

            Although now I think about it more, you could have APIs for "decrypt this [text/image] with key $id, and render it as a secure overlay at coordinates ($x, $y)"

            • londons_explore 2 days ago ago

              Exactly. That's how DRM video works, and I don't see why you couldn't do the same for text.

              • Retr0id 2 days ago ago

                Actual DRM uses symmetric keys though; figuring out how to do the crypto in an E2EE-compatible way would be challenging.
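                As a thought experiment, here's one shape such an API could take (every name here is invented for illustration): the untrusted app submits ciphertext, a key ID, and screen coordinates, and only ever receives an opaque handle back, never the plaintext. A toy XOR plays the role of the real AEAD that would run inside a TEE:

```python
# Hypothetical "secure overlay" API: the trusted side decrypts and draws;
# the untrusted app can position the overlay but cannot read its content.
# Everything here is invented for illustration; toy XOR is NOT real crypto.
import hashlib
import secrets


class TrustedRenderer:
    """Stands in for a TEE: keys live here, the 'app' can't reach them."""

    def __init__(self):
        self._keys: dict[str, bytes] = {}
        self._overlays: dict[int, tuple[bytes, tuple[int, int]]] = {}

    def provision_key(self, key_id: str, key: bytes) -> None:
        self._keys[key_id] = key

    def render_secure(self, ciphertext: bytes, key_id: str,
                      x: int, y: int) -> int:
        # Toy XOR "decryption"; a real TEE would run an AEAD here.
        stream = hashlib.sha256(self._keys[key_id]).digest()
        plaintext = bytes(c ^ s for c, s in zip(ciphertext, stream))
        handle = secrets.randbelow(2**32)
        self._overlays[handle] = (plaintext, (x, y))  # drawn, not returned
        return handle  # the untrusted app only ever sees this opaque value


# Untrusted app side: it can place the overlay but can't read it.
tee = TrustedRenderer()
tee.provision_key("chat-42", b"\x01" * 32)

stream = hashlib.sha256(b"\x01" * 32).digest()
ct = bytes(c ^ s for c, s in zip(b"hi there", stream))
handle = tee.render_secure(ct, "chat-42", x=10, y=200)
```

                The hard part is exactly the symmetric-key point above: provision_key() is hand-waved here, and doing that step in an E2EE-compatible way is the open problem.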

        • pennomi 2 days ago ago

          There will always, ALWAYS be the analog hole in security models like this.

          • londons_explore 2 days ago ago

            It's pretty hard for the government or service provider to snoop through the analog hole unless they have a camera on your forehead...

      • willis936 2 days ago ago

        By avoiding untrustworthy clients. All Windows devices should be considered compromised after last year.

        • Retr0id 2 days ago ago

          That's not mitigating client compromise, that's a whole other thing - trying to construct an uncompromiseable client.

          You don't build defense-in-depth by assuming something can't be compromised.

          • willis936 2 days ago ago

            Clients can always be compromised. I'm not talking about a client that can't be compromised, but simply a client that is not compromised out-of-the-box.

            • Retr0id 2 days ago ago

              That seems orthogonal to the subject of this discussion, i.e. "Compromise of the client side application or OS shouldn't break the security model."

        • cobertos 2 days ago ago

          Windows has been sending usage history back to their servers for longer than just last year

        • GraemeMeyer 2 days ago ago

          Why last year?

          • willis936 2 days ago ago

            Windows recall, intrusive addition of AI features (is there even a pinky promise that they're not training on user data?), more builtin ads, and less user control (most notably the removal of using the OS without an account - something that makes sense in the context of undisclosed theft of private information).

            This was 2025. I'm excited for what 2026 will bring. Things are moving fast indeed.

      • HumblyTossed 2 days ago ago

        This. The gap in E2E is the point at which I type in clear text and the point at which I read clear text. Those can be exploited.

    • rsync 2 days ago ago

      “I want whatsapp to decrypt the messages in a secure enclave and render the message content to the screen with a secure rendering pipeline, as is done with DRM'ed video.”

      If you are sophisticated enough to understand, and want, these things (and I believe that you are) …

      … then why would you want to use WhatsApp in the first place?

      • londons_explore 2 days ago ago

        Because my goal isn't to have my communication secure - but to have everyone's communication secure.

        And the network effect of whatsapp (3 billion users) seems currently the best route to that.

    • OtherShrezzing 2 days ago ago

      This is what a layman would assume happens from Meta’s WhatsApp advertising. They show the e2e process, and have the message entirely unreadable by anyone but the phone owner.

      • kevin_thibedeau 2 days ago ago

        e2e means unreadable by a middleman. That is a small inconvenience if you can readily compromise an endpoint.

        • Almondsetat 2 days ago ago

          People keep talking about e2ee as if it was some brain-to-brain encoding that truly allowed only the recipient person to decrypt the message

          • dijit 2 days ago ago

            because it used to be that the ends and the middlemen were different entities.

            In the universe where they are the same entity (walled-gardens) there is only the middleman.

            In such cases you either trust them or you don’t, anything more is not required because they can compromise their own endpoints in a way you can not detect.

  • rippeltippel 2 days ago ago

    When I pair a new device in Signal, or change my phone number, I cannot see any of my previous messages. However, in WhatsApp I can.

    As I understand it, that's a security weak spot with WhatsApp that could (in theory) allow Meta to read users' messages. Is that the case?

  • 2 days ago ago
    [deleted]
  • jqpabc123 a day ago ago

    Long story short, the world's largest privacy invaders have every incentive to subvert your privacy.

    They do this at every opportunity. It's their bread and butter. It's how they make money.

    The only reasonable assumption when dealing with them is that your privacy is being compromised. Proving it may not be possible but the incentive alone should be enough to convince a reasonable person that they do not deserve trust.

  • solenoid0937 2 days ago ago

    So many people that strongly believe WhatsApp isn't E2EE!

    Quick, someone set up a Kalshi or Polymarket or whatever claiming that WhatsApp isn't E2EE.

    I'll gladly bet against the total volume of people that believe it isn't E2EE -- it'll be an easy 2x for you or me.

  • abcd_f 2 days ago ago

    I witnessed something recently that points unambiguously at Whatsapp chats being not private.

    Not two months ago I sent a friend a single photo of some random MacGyver kitchen contraption I made. Never described it, just the photo with a "lol". He replied "lol". He never reshared it nor discussed it with anyone else. We never spoke about it before or after. Two days later he starts seeing ads on Facebook for a proper version of the same thing. There's virtually no other explanation except for Meta vacuuming up and analyzing the photo. None.

    • halapro a day ago ago

      Machines are really good at putting two and two together and humans are generally not. Are both of you completely sure that you didn't google anything related to the image before or after?

      While Meta doesn't have access to the contents, they do know who you're texting and when.

    • Hackbraten 2 days ago ago

      It takes more than two days to develop and roll out a new product. That goes for kitchen appliances, too.

      • jokersarewild 2 days ago ago

        I don't think the claim was that the commercial device never existed, but that it was too obscure for the friend to randomly, independently get targeted ads about it.

        But I don't think WhatsApp takes many steps to protect media; in many cases the user actually prefers being able to back up media or share it in other apps over keeping it locked down.

  • OutOfHere 2 days ago ago

    The issue here is that WhatsApp doesn't work with third-party clients (outside of EU anyway). It does now in EU via BirdyChat and Haiket, but the features are too limiting: https://about.fb.com/news/2025/11/messaging-interoperability...

    Ideally, WhatsApp would fully support third-party open-source clients that can ensure that the mathematics are used as intended.

  • ohcmon 2 days ago ago

    Next time you use a true, independently audited E2E communication channel, don't forget to check who the authority is that says the "other end" is the "end" you think it is.

  • znpy 2 days ago ago

    I always assumed this to be true, to be honest.

    Nowadays all of the messaging pipeline on my phone is closed source and proprietary, and thus completely unverifiable.

    The iPhone operating system is closed, the runtime is closed, the whatsapp client is closed, the protocol is closed… hard to believe any claim.

    And I know that somebody's gonna bring up the alleged e2e encryption… a client in control of somebody else might just leak the encryption keys from one end of the chat.

    Closed systems that do not support third party clients that connect through open protocols should ALWAYS be assumed to be insecure.

    • gruez 2 days ago ago

      >Closed systems that do not support third party clients that connect through open protocols should ALWAYS be assumed to be insecure.

      So you're posting this from an open core CPU running on an open FPGA that you fabricated yourself, right? Or is this just a game of one-upmanship where people come with increasingly high standards for what counts as "secure" to signal how devoted to security they are?

    • solenoid0937 2 days ago ago

      it doesn't need to be open source for us to know what it's doing. its properties are well understood by the security community because it's been RE'd.

      > a client in control of somebody else might just leak the encryption keys from one end of the chat.

      has nothing to do with closed/open source. preventing this requires remote attestation. i don't know of any messaging app out there that really does this, closed or open source.

      also, ironically remote attestation is the antithesis of open source.

  • roenxi 2 days ago ago

    It is a bit counter-intuitive because there'd be a law enforcement lobby working very hard to make sure that they can read private WhatsApp chats. I don't think it is reasonable to treat an entity that literally runs a spy agency monitoring all digital communication as the arbiter and investigator of what is and isn't private. The incentives just aren't there.

  • cosmicgadget 2 days ago ago

    > “We look forward to moving forward with those claims and note WhatsApp’s denials have all been carefully worded in a way that stops short of denying the central allegation in the complaint – that Meta has the ability to read WhatsApp messages, regardless of its claims about end-to-end encryption.”

    My money is on the chats being end to end encrypted and separately uploaded to Facebook.

    • gruez 2 days ago ago

      >being end to end encrypted and separately uploaded to Facebook

      That's a cute loophole you thought up, but whatsapp's marketing is pretty unequivocal that they can't read your messages.

      >With end-to-end encryption on WhatsApp, your personal messages and calls are secured with a lock. Only you and the person you're talking to can read or listen to them, and no one else, not even WhatsApp

      https://www.whatsapp.com/

      That's not to say it's impossible that they are secretly uploading your messages, but the implication that they could be secretly doing so while not running afoul of their own claims because of cute word games is outright false.

      • blibble 2 days ago ago

        > but whatsapp's marketing is pretty unequivocal that they can't read your messages.

        well that's alright then

        facebook's marketing and executives have always been completely above board and completely honest

        • gruez 2 days ago ago

          Read the rest of my comment?

          >That's not to say it's impossible that they are secretly uploading your messages, but the implication that they could be secretly doing so while not running afoul of their own claims because of cute word games is outright false.

      • codyb 2 days ago ago

        The thing is, if they were uploading your messages, then they'd want to do something with the data.

        And humans aren't great at keeping secrets.

        So, if the claim is that there's a bunch of data, but everyone who is using it to great gain is completely and totally mum about it, and no one else has ever thought to question where certain inferences were coming from, and no employee ever questioned any API calls or database usage or traffic graph.

        Well, that's just about the best damn kept secret in town and I hope my messages are as safe!

        And I'm no fan of Meta...

        • 3eb7988a1663 2 days ago ago

          Where were the Facebook whistleblowers about the numerous iOS/Android gaps that let the company gain more information than it was supposed to see? Malicious VPNs, scanning other installed mobile applications, whatever. As far as I know, the big indictments have come from the outside.

          • gruez 2 days ago ago

            >Malicious VPNs

            AFAIK that was a separate app, and it was pretty clear that it was MITMing your connections. It's not any different than say, complaining about how there weren't any whistleblowers for fortinet (who sell enterprise firewalls).

            >scanning other installed mobile applications

            Source?

      • IcyWindows 2 days ago ago

        I'm not saying they are sending the content back, but WhatsApp has to read your message or it couldn't display it, so I don't even know exactly what that particular claim means?

        They most likely mean their service or their employees, but this appears to be marketing fluff and not an enforceable statement.

      • netsharc 2 days ago ago

        I wonder if keyword/sentiment extraction on the user's device counts as reading "by WhatsApp"...

        There's the conspiracy theory about mentioning a product near the phone and then getting ads for it (which I don't believe), but I feel like I've mentioned products on WhatsApp chats with friends and then got an ad for them on Instagram sometime after.

        Also claiming "no one else can read it" is a bit brave, what if the user's phone has spyware that takes screenshots of WhatsApp... (Technically of course it's outside of their scope to protect against this, but try explaining that to a judge who sees their claim and the reality)

        • esseph 2 days ago ago

          > There's the conspiracy theory about mentioning a product near the phone and then getting ads for it (which I don't believe)

          Well you sure as hell should. Both Google and Apple are making class action settlement payments right now for this very thing.

          https://www.bbc.com/news/articles/c4g38jv8zzwo

          https://www.nbcchicago.com/news/local/payments-begin-in-95m-...

          https://www.404media.co/heres-the-pitch-deck-for-active-list...

        • XorNot 2 days ago ago

          The conspiracy theory exists due to quirks of human attention and the wider metadata economy though.

          You mention something so you're thinking about it, you're thinking about it probably because you've seen it lately (or it's in the group of things local events are making you think about), and then later you notice an ad for that thing and because you were thinking about it actually notice the ad.

          It works with anything in any media form. Like I've had it where I hear a new thing and suddenly it turns up in a book I'm reading as well. Of course people discount that because they don't suspect books of being intelligent agents.

          • ghurtado 2 days ago ago

            This psychological effect has a name and I always forget it.

            EDIT: Baader-Meinhof phenomenon. I Think anyone can be forgiven for not remembering that name.

      • cosmicgadget 2 days ago ago

        Are messages and calls data at rest or data in motion? The UI lock feature refers to 'chats' which could be their term for data at rest.

        I wonder what the eula says.

      • blindriver 2 days ago ago

        "We can't read your messages! They are encrypted on disk and we don't store the keys!"

        "What encryption do you use?"

        "DES."

      • a0123 2 days ago ago

        > That's a cute loophole you thought up, but whatsapp's marketing is pretty unequivocal that they can't read your messages.

        If Facebook says it, then... Sorted!

      • conscion 2 days ago ago

        My guess is that the messages are end-to-end encrypted, and that because of Facebook's scale they're able to probabilistically guess at what's in the encrypted messages (e.g. a message with hash X has probability Y of containing the word "shoes")

        • gruez 2 days ago ago

          That seems unlikely given that they use the signal protocol: https://signal.org/blog/whatsapp-complete/

        • ghurtado 2 days ago ago

          > they're able to probabilisticly guess at

          That's not how encryption works at all. At least not any encryption used in the last 100 years.

          You'd probably have to go all the way back to the encryption methods of the Roman empire for that statement to make sense

        • stefs 2 days ago ago

          That would still be very close to educated mind reading

    • varenc 2 days ago ago

      If this was happening en masse, wouldn't it be discovered by the many people reverse engineering WhatsApp? Reverse engineering is hard, sophisticated work, but given how popular WhatsApp is, plenty of independent security researchers are doing it. I'm quite skeptical Meta could hide some malicious code in WhatsApp that's breaking the E2EE without it being discovered.

      • solenoid0937 2 days ago ago

        It would be trivial to discover and would be pretty big news in the security community.

        I'd wager most of these comments are from nontechnical people, or technical people that are very far removed from security.

        • cosmicgadget 2 days ago ago

          I'm technical and work in security. Since it is trivial, please explain. Ideally not using a strawman like "well just run strings and look for uploadPlaintextChatsToServer()".

          • solenoid0937 2 days ago ago

            I don't see why standard RE techniques (DBI/Frida + MITM) wouldn't work, do you?

            WhatsApp is constantly RE'd because it'd be incredibly valuable to discover gaps in its security posture, the community would find any exfil here.

            • martinralbrecht 2 days ago ago

              We did reverse engineer it and we're cryptographers not reverse engineering experts https://eprint.iacr.org/2025/794

              • 2 days ago ago
                [deleted]
              • solenoid0937 2 days ago ago

                Cool paper, thanks for sharing!

            • cosmicgadget 2 days ago ago

              If people are trivially hooking iOS and Android applications then sure, it's just an exercise in dynamic analysis.

              Mobile applications are outside my domain so I am surprised platform security (SEL, attestation, etc.) has been so easily defeated.

      • palata 2 days ago ago

        Before that, Meta employees would know about it. Pretty convinced that someone would leak it.

      • beagle3 2 days ago ago

        This was happening en masse, perhaps still does - the cloud backup was unencrypted. Originally it was encrypted. Then, one day, Google stopped counting it towards your storage quota, but it became unencrypted. But even before that, Meta had the encryption keys (and probably still does).

        When you get a new phone, all you need is your phone number to retrieve the past chats from backup; nothing else. That proves, regardless of specifics, that Meta can read your chats - they can send them to any new phone.

        So it doesn’t really matter that it is E2EE in transit - they just have to wait for the daily backup, and they can read it then.
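        The distinction being drawn here can be sketched as two backup-key schemes (illustrative only; WhatsApp's current optional end-to-end encrypted backup works differently): one where the provider escrows the key, so a phone number alone restores the chats and the provider can read them, and one where the key is derived from a secret only the user holds:

```python
# Sketch of server-escrowed vs. user-derived backup keys. All values
# are made up; this is not any provider's actual design.
import hashlib
import secrets

# Scheme A: server escrow. Restoring needs only your phone number,
# because the server looks the key up -- and can use it itself.
server_escrow = {"+15551234567": secrets.token_bytes(32)}
key_a = server_escrow["+15551234567"]

# Scheme B: key derived from a user-held passphrase. The server stores
# only the salt; without the passphrase the backup is opaque to it.
salt = secrets.token_bytes(16)
key_b = hashlib.pbkdf2_hmac("sha256", b"correct horse battery", salt, 600_000)
```

        In scheme A the lookup is the restore; in scheme B the server's copy of the salt is useless without the passphrase.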

      • cosmicgadget 2 days ago ago

        Well they wouldn't be breaking e2ee, they'd be breaking the implicit promise of e2ee. The chats are still inaccessible to intermediaries, they'd just be stored elsewhere. Like Apple and Microsoft do.

        I am not familiar with the state of app RE. But between code obfuscators and the difficulty of distinguishing between 'normal' phone home data and user chats when doing static analysis... I'd say it's not out of the question.

    • matthewdgreen 2 days ago ago

      I really doubt this. Any such upload would be visible inside the WhatsApp application, which would make it the world's most exciting (and relatively straightforward) RE project. You can even start with a Java app, so it's extra easy.

      • cosmicgadget 2 days ago ago

        If you claim REing a flagship FAANG application is "extra easy", either they need to be laughed out of the room or you do.

        • gruez 2 days ago ago

          Does FAANG apps have antidebug or code obfuscation? At least for google their apps are pretty lightly protected. The maximum extent of obfuscation is the standard compilation/optimization process that most apps go through (eg. r8 or proguard).

        • quesera 2 days ago ago

          Reverse engineering is easy when the source code is available. :)

          The difference between source code in a high-level language, and AArch64 machine language, is surmountable. The effort is made easier if you can focus on calls to the crypto and networking libraries.

          • cosmicgadget 2 days ago ago

            The source is available?

            Understanding program flow is very different from understanding the composition of data passing though the program.

            • quesera 2 days ago ago

              At some level, the machine code is the source code -- but decompiling AArch64 mobile apps into something like Java is common practice.

              As GP alludes, you would be looking for a secondary pathway for message transmission. This would be difficult to hide in AArch64 code (from a skilled practitioner), and extra difficult in decompiled Java.

              It would be "easy" enough, and an enormous prize, for anyone in the field.

              • cosmicgadget 2 days ago ago

                I am familiar with disassembly and decompilation and what you just said is a huge handwave.

                > a secondary pathway for message transmission

                That's certainly the only way messages could be uploaded to Facebook!

                • quesera 2 days ago ago

                  I'm curious why you think it's handwavy.

                  I've done this work on other mobile apps (not WhatsApp), and the work is not out of the ordinary.

                  It's difficult to hide subtleties in decompiled code. And anything that looks hairbally gets special attention, if the calling sites or side effects are interesting.

                  (edit for edit)

                  > That's certainly the only way messages could be uploaded to Facebook!

                  Well, there's a primary pathway which should be very obvious. And if there's a secondary pathway, it's probably for telemetry etc. If there are others, or if it isn't telemetry, you dig deeper.

                  All secrets are out in the open at that point. There are no black boxes in mobile app code.

                  • cosmicgadget 2 days ago ago

                    > if there's a secondary pathway, it's probably for telemetry etc.

                    Seems like a good channel upon which to piggyback user data. Now all you have to do is obfuscate the serialization.

                    > It's difficult to hide subtleties in decompiled code.

                    Stripped, obfuscated code? Really? Are we assuming debuggability here?

                    > All secrets are out in the open at that point. There are no black boxes in mobile app code.

                    What about a loader with an encrypted binary that does a device attestation check?

                    • quesera 2 days ago ago

                      I've lost track of our points of disagreement here. Sure, it's work, but it's all doable.

                      Obfuscated code is more difficult to unravel in its original form than in the decompiled form. Decompiled code is a mess with no guideposts, but that's just a matter of time and patience to fix. It's genuinely tricky to write code that decompiles into deceptive appearances.

                      Original position is that it'd be difficult to hide side channel leakage of chat messages in the WhatsApp mobile app. I have not worked on the WhatsApp app, but if it's anything like the mobile apps I have analyzed, I think this is the correct position.

                      If the WhatsApp mobile apps are hairballs of obfuscation and misdirection, I would be a) very surprised, and b) highly suspicious. Since I don't do this work every day any more, I haven't thought much about it. But there are so many people who do this work every day, and WhatsApp is so popular, I'd be genuinely shocked if there were fewer than hundreds of people who have lightly scanned the apps for anything hairbally that would be worth further digging. Maybe I'm wrong and WhatsApp is special though. Happy to be informed if so.

        • martinralbrecht 2 days ago ago

          Note that WhatsApp as a web client, too: https://eprint.iacr.org/2025/794

    • random3 2 days ago ago

      That’s because they have such a good track record wrt to privacy? https://www.docketalarm.com/cases/California_Northern_Distri...

    • steve_taylor 2 days ago ago

      > My money is on the chats being end to end encrypted and separately uploaded to Facebook.

      If governments of various countries have compelled Meta to provide a backdoor and also required non-disclosure (e.g. a TCN secretly issued to Meta under Australia's Assistance and Access Act), this is how I imagined they would do it. It technically doesn't break encryption as the receiving device receives the encrypted message.

    • guerrilla 2 days ago ago

      > My money is on the chats being end to end encrypted and separately uploaded to Facebook.

      This is what I've suspected for a long time. I bet that's it. They can already read both ends, no need to b0rk the encryption. It's just them doing their job to protect you from fourth parties, not from themselves.

    • FabHK 2 days ago ago

      It should be detectable if it sends twice the data.

      • rurban 2 days ago ago

        It encrypts each message to all the keys registered for that user's phone number, because users switch phones but keep their number. Each new WhatsApp install gets a new private key; the old key is not shared. This feature was added later, so the old WhatsApp devs wouldn't know.

        So it would be trivial to also encrypt to an NSA key, as was done on Windows.
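        A toy sketch of that fan-out structure (XOR wrapping stands in for real public-key encryption, and all names are made up): the client wraps one message key for every key in a server-supplied device list, so one extra entry in that list is all it takes:

```python
# Toy multi-device fan-out: one message key, wrapped per registered
# device key. XOR-with-a-hash is NOT real key wrapping; it just makes
# the structure visible.
import hashlib
import secrets


def wrap(device_key: bytes, message_key: bytes) -> bytes:
    pad = hashlib.sha256(device_key).digest()
    return bytes(m ^ p for m, p in zip(message_key, pad))


message_key = secrets.token_bytes(32)
registered = {"phone": secrets.token_bytes(32),
              "laptop": secrets.token_bytes(32)}
# The server supplies this list; the client encrypts to whatever is in it.
# registered["extra-device"] = attacker_key   # <- the worry, in one line
envelope = {name: wrap(k, message_key) for name, k in registered.items()}
```

        This is why the point elsewhere in the thread about the server deciding who is in a chat matters: the client encrypts to whatever key list it is given.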

    • RajT88 2 days ago ago

      Facebook messenger similarly claims to be end to end encrypted, and yet if it thinks you are sending a link to a pirate site, it "fails to send". I imagine there are a great many blacklisted sites which they shadow block, despite "not being able to read your messages".

      My pet conspiracy theory is that the "backup code" which "restores" encrypted messages is there to annoy you into installing the app instead of chatting on the web.

      • loeg 2 days ago ago

        The client probably just downloads a blacklist of banned domains. That doesn't mean messages that are sent are not E2E encrypted.
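        A minimal sketch of that mechanism (the blocklist contents and names are made up): the check runs client-side, before anything is encrypted or sent, so a blocked link "fails to send" without any server ever seeing plaintext:

```python
# Client-side blocklist check before E2EE send. Hypothetical example;
# the blocked domain is invented.
from urllib.parse import urlparse

blocklist = {"piratesite.example"}  # fetched periodically, checked locally


def try_send(text: str) -> bool:
    for word in text.split():
        host = urlparse(word).hostname
        if host in blocklist:
            return False  # refuse client-side; nothing leaves the device
    # ...encrypt and transmit here...
    return True
```

        Whether the downloaded blocklist itself leaks information (e.g. via per-user variants) is a separate question; the point is that blocking alone doesn't imply server-side reading of messages.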

        • RajT88 2 days ago ago

          Facebook has lost any benefit of doubt, imo.

          • loeg 2 days ago ago

            Baseless conspiracy theories just make yourself dumber; it doesn’t punish Facebook.

  • hiprob 2 days ago ago

    I know the default assumption with Telegram is that they can read all your messages, but unlike WhatsApp they seem less cooperative and I never got the notion that they ever read private messages until the Macron incident, and even then they do if the other party reports them. How come they are able to be this exception despite not having end to end encryption by default?

    • maqp 2 days ago ago

      >I know the default assumption with Telegram is that they can read all your messages

      The client is open source. It's trivial to verify this is 100% factually happening. They have access to every group message. Every desktop message. Every message by default. If you enable secret chats for 1:1 mobile chats, you are now disclosing to Telegram you're actively trying to hide something from them, and if there ever was metadata worth it for Keith Alexander to kill someone over, it's that.

      >they seem less cooperative and I never got the notion that they ever read private messages until the Macron incident

      We have no way to verify Telegram isn't a Russian OP. I'd love to say Pavel Durov fled for his life into exile https://www.nytimes.com/2014/12/03/technology/once-celebrate...

      But the "fugitive" has since visited Russia over SIXTY times https://kyivindependent.com/kremlingram-investigation-durov/

      Thus, I wouldn't be as much concerned about what they're handing EUROPOL, but what they're handing FSB/SVR.

      Even if Telegram never cooperated with Russian intelligence, who here thinks the Telegram team, which can't pull off the basic feat of "make everything E2EE" that nearly all of its competition has managed, can harden its servers against Russian state-sponsored hackers like Fancy Bear, who would obviously never make noise about a successful breach and data exfiltration?

      >How come they are able to be this exception despite not having end to end encryption by default?

      They've pushed out a lie about storing cloud chats across different servers in different jurisdictions. Maybe that scared some prosecutors off. Or maybe FVEY is inside TG's servers too, and they don't like the idea of going after users, as that would incentivize deployment of usable E2EE.

      Who knows. Just use Signal.

      • hiprob 2 days ago ago

        Currently, the Russian government is trying to squeeze people out of Telegram and move them over to MAX: https://caspianpost.com/regions/russia-tightens-telegram-res... WhatsApp also operates in Russia, despite Instagram and Facebook being banned. So I wouldn't count on its E2EE either. Signal still requires a phone number and proprietary Google blobs on mobile. Many third-party Telegram clients exist - Signal allows none.

        • maqp a day ago ago

          The real story is, MAX is there to scare people into Telegram. Durov isn't your friend, and neither is Putin, who doesn't bother blocking connections to the server.

          >So I wouldn't count on its E2EE either.

          This is the worst way to assess E2EE deployment. 5D chess.

          >Signal still requires a phone number and proprietary Google blobs on mobile.

          Telegram also requires a phone number. If you didn't have double standards, I bet you'd have no standards.

          >Many third-party Telegram clients exist

          The official implementation and default encryption matter. 99.99% of users just assume Telegram is secure because Putin supposedly tries to block it. They don't know it's not E2EE. And no third-party TG desktop client offers cross-platform E2EE or E2EE groups. IIRC there's exactly one desktop client that tries to offer E2EE 1:1 chats, but it's not seamless. TG has no idea how to make seamless E2EE like Signal.

          You ignoring that Signal is both open source and always E2EE while complaining about its "proprietary blobs", yet looking past TG's atrocious E2EE, speaks volumes.

          • hiprob 18 hours ago ago

            >This is the worst way to assess E2EE deployment. 5D chess.

            How would you explain the fact that WhatsApp remains unblocked in Russia, when all other major messengers except Telegram and Meta's own products all got banned there?

            >Telegram also requires a phone number. If you didn't have double standards, I bet you'd have no standards.

            Not my point. I'm pointing out a flaw that both messengers share.

            >TG has no idea how to make seamless E2EE like Signal.

            Because it's never seamless. Loading your messages on Signal can take quite a while. Also, if a message arrived over two weeks ago and you haven't checked your phone in the meantime, you may as well never have received it.

            >You ignoring that Signal is both open source and always E2EE and complaining about it's "proprieatry blobs" yet looking past TG's atrocious E2EE speaks volumes.

            Why are you ignoring the fact that Signal actively prohibits third-party clients? Why the air quotes on "proprietary blobs"? Yes, Telegram is far from perfect, and inferior to Signal when it comes to E2EE. But what makes you reject the proprietary-blob claim when it's true? Because your favorite messenger is being attacked?

  • a day ago ago
    [deleted]
  • nindalf 2 days ago ago

    This reads like a nothingburger. Couple of quotes from the article:

    > the idea that WhatsApp can selectively and retroactively access the content of [end-to-end encrypted] individual chats is a mathematical impossibility

    > Steven Murdoch, professor of security engineering at UCL, said the lawsuit was “a bit strange”. “It seems to be going mostly on whistleblowers, and we don’t know much about them or their credibility,” he said. “I would be very surprised if what they are claiming is actually true.”

    No one apart from the firm filing the lawsuit is actually supporting this claim. A lot of people in this thread seem very confident that it's true, and I'm not sure what precisely makes them so confident.

    • Snoozus 2 days ago ago

      I find this wording also "a bit strange".

      It is not a mathematical impossibility in any way.

      For example, they might be able to read the backups, or the keys might be somehow (accidentally or not) leaked...

      And then the part about Telegram not having end-to-end encryption? What's this all about?

      • FabHK 2 days ago ago

        Telegram defaults to not e2ee; you have to initiate a "secret" chat to get e2ee.

  • modeless 2 days ago ago

    Meanwhile Apple has always been able to read encrypted iMessage messages and everyone decided to ignore that fact. https://james.darpinian.com/blog/apple-imessage-encryption

    • Flere-Imsaho 2 days ago ago

      And it's worse if you live in the UK:

      https://support.apple.com/en-us/122234

      In fact on this page they still claim iMessage is end-to-end encrypted.

    • gruez 2 days ago ago

      >has always been able to read encrypted iMessage messages

      ...assuming you have icloud backups enabled, which is... totally expected? What's next, complaining about bitlocker being backdoored because microsoft can read your onedrive files?

      • modeless 2 days ago ago

        If you read the link you would know that contrary to your expectation other apps advertising E2EE such as Google's Messages app don't allow the app maker to read your messages from your backups. And turning off backups doesn't help when everyone else has them enabled. Apple doesn't respect your backup settings on other people's accounts. Again, other apps address this problem in various ways, but not iMessage.

        • 2 days ago ago
          [deleted]
        • gruez 2 days ago ago

          >If you read the link you would know that contrary to your expectation other apps advertising E2EE don't allow the app maker to read your messages.

          What does that even mean? Suppose iCloud backups didn't exist, but you could still take screenshots and save them to iCloud Drive. Is that also "Apple has always been able to read encrypted iMessage messages"? Same goes for "other people having iCloud backups enabled". People can also snitch on you, or get their phones seized. I feel like people like you and the article author are just redefining the threat model of E2EE apps so they can smugly go "well ackshually..."

          • modeless 2 days ago ago

            It means, for example, Google Messages uses E2EE backups. Google cannot read your E2EE messages by default, period. Not from your own backup, not from other peoples' backups. No backup loophole. Most other E2EE messaging apps also do not have a backup loophole like iMessage.

            It's not hard to understand why Apple uploading every message to themselves to read by default is different from somebody intentionally taking a screenshot of their own phone.

            • gruez 2 days ago ago

              >Google cannot read your E2EE messages by default, period.

              Is icloud backups opt in or opt out? If it's opt in then would your objection still hold?

              • modeless 2 days ago ago

                I'm less concerned with whether it is technically opt in or opt out and more concerned with whether it is commonly enabled in practice.

                What would resolve my objection is if Apple either made messages backups E2EE always, as Google did and as Apple does themselves for other data categories like Keychain passwords, or if they excluded E2EE conversations (e.g. from ADP people) from non-E2EE backups, as Telegram does. Anything short of that does not qualify as E2EE regardless of the defaults, and marketing it as E2EE is false advertising.

      • Snoozus 2 days ago ago

        Absolutely, they intentionally make stuff sound secure and private while keeping full access.

    • razingeden 2 days ago ago

      I remember reading this recently. Not saying it’s true but it got my attention

      TUESDAY, NOVEMBER 25, 2025 Blind Item #7 The celebrity CEO says his new chat system is so secure that even he can't read the messages. He is lying. He reads them all the time.

  • miohtama 2 days ago ago

    Both things cannot be true at the same time

    - WhatsApp encryption is broken

    - EU's and UK's Chat Control spooks demand Meta to insert backdoor because they cannot break the encryption

    The Guardian has its own editorial flavour on tech news, so expect them to use any excuse to bash the subject.

    • Retric 2 days ago ago

      Just because Adam has a back door doesn’t mean Eve also has a back door.

    • preisschild 2 days ago ago

      > EU's and UK's Chat Control spooks demand Meta to insert backdoor because they cannot break the encryption

      Those are not law, so no the EU doesnt demand that

    • dyauspitr 2 days ago ago

      They’re just not sharing the back door with the EU?

  • lukeschlather 2 days ago ago

    It seems obvious that they can. It's my understanding for FB Messenger that the private key is stored encrypted with a key that is derived from the user's password. So it's not straightforward, but Meta is obviously in a position to grab the user's password when they authenticate and obtain their private key. This would probably leave traces, but someone working with company authorization could probably do it.

    For WhatsApp they claim it is like Signal, with the caveat that if you have backups enabled it works like Messenger. Interestingly, with backups enabled the key may be stored with Apple/Google rather than Meta, so it might be the case that your phone vendor can read your WhatsApp messages but Facebook cannot.
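
    The password-wrapped-key design described above can be sketched with stdlib primitives. This is a toy (XOR in place of real authenticated encryption; the iteration count is illustrative), but it shows why whoever observes the password at login can recover the stored private key:

```python
import hashlib
import os

def derive_wrapping_key(password: str, salt: bytes) -> bytes:
    # Slow KDF so every offline password guess is expensive.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

def wrap(private_key: bytes, password: str, salt: bytes) -> bytes:
    # Toy XOR "encryption"; calling wrap() again unwraps.
    wk = derive_wrapping_key(password, salt)
    return bytes(a ^ b for a, b in zip(private_key, wk))

salt = os.urandom(16)
private_key = os.urandom(32)
blob = wrap(private_key, "hunter2", salt)  # what the server stores
# Anyone holding blob + salt who also sees the password at
# authentication time can unwrap the private key:
recovered = wrap(blob, "hunter2", salt)
```

    The construction is sound against outsiders; the trust question is whether the operator ever looks at the password as it passes through their servers.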

  • sirpilade 2 days ago ago

    Is anybody using an open source, self-hosted solution with a UI on par with WhatsApp? Asking for my wife.

    • meatmanek 2 days ago ago

      Matrix exists and really isn't too bad to self-host if you just want a small number of people. (If you federate with other servers, then you have more things to worry about -- increased attack surface, more visibility leading to more potential attackers, and the risk of unintentionally storing illegal content (e.g. CSAM) sent by people from other servers.)

      The UI of Element (the most popular Matrix client) is more or less in line with any other chat app, but I guess it depends what you mean by "on par to whatsapp". Biggest downside I've found is that you can't search your messages on the mobile clients.

  • 0x_rs 2 days ago ago

    It's a proprietary, closed-source application. It can do whatever it wants, and it doesn't even need to "backdoor" encryption when all it has to do is forward everything matching some criteria to its servers (and by extension anyone they answer to). It's always one update away from dumping your entire chat history into a remote bucket, and that would still not contradict their promise of E2EE. Furthermore, it already has the functionality to send messages when reporting [0]. Facebook's Messenger has also worked that way for years. [1]

    There were also rumors that the on-device scanning practice would be expanded to comply with surveillance proposals such as ChatControl a couple of years ago. This doesn't mean it's spying on each and every message now, but it has the potential to do so, and it would be more feasible today than ever before; hence the importance of software the average person can trust that isn't as easily subject to their government's tantrums about privacy.

    0. https://www.propublica.org/article/how-facebook-undermines-p...

    1. https://archive.is/fe6zY

    • paxys 2 days ago ago

      You are also using proprietary, closed-source hardware and operating system underneath the app that can do whatever they want. This line of reasoning ultimately leads to - unless you craft every atom and every bit yourself your data isn't secure. Which may be true, but is a pointless discussion.

      • threatofrain 2 days ago ago

        No it means you calculate how much risk you're taking on, vendor by vendor. Do all companies have the same reputation before your eyes?

        • bigyabai 2 days ago ago

          > Do all companies have the same reputation before your eyes?

          If they're not credibly audited, then yeah.

      • OutOfHere 2 days ago ago

        That's a bad take because the vendors there are different; they're not Meta. As such, it's not pointless.

    • 2 days ago ago
      [deleted]
  • bredren 2 days ago ago

    I co-founded Gliph, which was one of the first commercial, cross-platform messaging apps to provide end-to-end encryption.

    One area of exposure was push notifications. I wonder if the access described wasn’t to the messages themselves but content rich notifications.

    If so, both parties could be ~correct. Except the contractors would have been seeing what is technically metadata.

    • tptacek 2 days ago ago

      I'm unfamiliar with Gliph. What were the protocols/constructions you used?

  • AndrewKemendo 2 days ago ago

    If your personal threat model at this point is not literally:

    “everything I ever do can be used against me in court”

    …then you are not up-to-date with the latest state of society

    Privacy is the most relevant when you are in a position where that information is the difference between your life or your death

    The average person going through their average day breaks dozens of laws because the world is a Kafkaesque surveillance capitalist society.

    The amount of information that exists about the average consumer is so unbelievably vast that any litigator could make an argument against nearly any human on the planet that they are in violation of something, if there is enough pressure.

    If you think you’re safe in this society because you “don’t do anything wrong“ then you’re compromised and don’t even realize it

  • david_allison 2 days ago ago

    It was my understanding that the backups are unencrypted. Is that still the case?

    • evanjrowley 2 days ago ago

      On Android, if you allow it to backup to your Google cloud storage, it will say the backups are encrypted. That was my experience when I set it up a few weeks ago.

      Exactly who has the ability to decrypt the backup is not totally clear.

      It may be a different situation for non-Android users, Android users who are not signed in with a Google account, Android users who are not using Google Play Services, etc.

      • bayindirh 2 days ago ago

        You can explore your Google Cloud's Application Storage part via Rsync, AFAIK. So you can see whether your backups are encrypted or not.

        I remember that you had to extract at least two keys from the android device to be able to read "on-device" chat storage in the days of yore, so the tech is there.

        If you don't have the keys' copies in the Google Drive side, we can say that they are at least "superficially" encrypted.

  • foooorsyth 2 days ago ago

    The reality that most encryption enthusiasts need to accept is that true E2EE where keys don’t leave on-device HSMs leads to terrible UX — your messages are bound to individual devices. You’re forced to do local backups. If you lose your phone, your important messages are gone. Lay users don’t like this and don’t want this, generally.

    Everything regarding encrypted messaging is downstream of the reality that it’s better for UX for the app developer to own the keys. Once developers have the keys, they’re going to be compelled by governments to provide them when warrants are issued. Force and violence, not mathematical proofs, are the ultimate authority.

    It’s fun to get into the “conspiratorial” discussions, like where the P-256 curve constants came from or whether the HSMs have backdoors. Ultimately, none of that stuff matters. Users don’t want their messages to go poof when their phone breaks, and governments will compel you to change whatever bulletproof architecture you have to better serve their warrants.

  • vbezhenar 2 days ago ago

    Whatsapp is considered insecure and is banned from military use in Russia. Telegram, on the other hand, is widely used. Of course that's not something definitive, but just a food for thought.

    • vimda 2 days ago ago

      Telegram which famously didn't have _any_ end to end encryption for ages, and even now only has very limited opt-in "secret chats"?

      • maqp 2 days ago ago

        Yeah, Telegram only has opt-in 1:1 E2EE that you can't use across your devices, so either you or your buddy quickly gets tired of whipping out their phone while sitting at their laptop and just replies to you through Telegram's non-E2EE cloud chats. That's the backdoor. The user activated it. It's "their fault".

      • vbezhenar 2 days ago ago

        I'm not going to promote Telegram, I just wanted to highlight that WhatsApp is not considered trustworthy by a geopolitical enemy of the US. I don't think that Telegram is bad, and when your life depends on it, you can click the "Secret Chat" button; it's not a big deal.

    • gruez 2 days ago ago

      > but just a food for thought.

      ...that telegram is backdoored by the Russians? The implication you're trying to make seems to be that Russians must be choosing Telegram because it's secure, while ignoring the possibility that they're choosing Telegram because they have access to it. After all, do you think they want the possibility of their military scheming against them?

      • p1anecrazy 2 days ago ago

        I guess their point was that Russian military doesn‘t care if Russian intelligence reads their messages

        • gruez 2 days ago ago

          Maybe OP should clearly state their thesis rather than beating around the bush with "... just a food for thought", so we don't have to guess what he's trying to say.

          • fasbiner 2 days ago ago

            I think the point was pretty clear if you are able to consider more than one point of view. People in Russia who are highly motivated, for whom it could make a life-or-death difference, and who have considerable technical skills in this area consider WhatsApp to be insecure and prohibit its use, just as their counterparts in the US consider Telegram to be insecure and prohibit its use.

            Perhaps you're simply struggling with the concepts here: would it help you understand things better to add that Russia and the US ban the use of Signal by their militaries and intelligence services?

            Did you detect an implication that can't be extrapolated from the text without metacontext and secondary unstated axioms, or is your mind totally blank and baffled at what these data points could indicate?

  • moffers 2 days ago ago

    I feel fairly confident an oddly-shaped donation from Mark Z’s foundation will make this go away.

    • Kiboneu 2 days ago ago

      I'd bet that shape would look like a tube with a cap on.

  • ralusek 2 days ago ago

    I mean, at the very least, if their clients can read it then they can read it through their clients, right? And if their clients can read it, it'll be because of some private key stored on the client device that they must be able to access, so they could always get that. And this is just assuming they've been transparent about how it's built; they could simply have backdoors on their end.

    • basch 2 days ago ago

      They can also just brute-force passwords. The PIN to encrypt FB Messenger chats is 6 digits, for example.

      • farbklang 2 days ago ago

        But that is a PIN and can be rate-limited / denied, not a cryptographic key that can be brute-forced offline by comparing hash generations(?)

        • barbazoo 2 days ago ago

          They likely wouldn’t rate-limit themselves; rate limiting only applies when you access through their cute little enter-your-PIN UI.

          • solenoid0937 2 days ago ago

            The PIN is used when you're too lazy to set an alphanumeric password or offload the backup to Apple/Google. Sure, this is most people, but such are the foibles of E2EE: getting E2EE "right" (e.g. supporting account recovery) requires people to memorize a complex password.

            The PIN interface is also an HSM on the backend. The HSM performs the rate limiting. So they'd need a backdoor'd HSM.

            • barbazoo 2 days ago ago

              That added some context I didn’t have yet, thanks. I’m still not seeing how Meta, if it were a bad actor, wouldn’t be able to brute-force the PIN of a particular user. If this were a black-box terminal it would be different, but Meta owns the stack here, so it seems plausible that you could inject yourself somewhere.

              • solenoid0937 2 days ago ago

                If you choose an alphanumeric pin they can't brute force because of the sheer entropy (and because the key is derived from the alphanumeric PIN itself.)

                However, most users can't be bothered to choose such a PIN. In this case they choose a 4 or 6 digit pin.

                To mitigate the risk of brute force, the PIN is rate limited by an HSM. The HSM, if it works correctly, should delete the encryption key if too many attempts are used.

                Now sure, Meta could insert itself between the client and HSM and MITM to extract the PIN.

                But this isn't a Meta specific gap, it's the problem with any E2EE system that doesn't require users to memorize a master password.

                I helped design E2EE systems for a big tech company and the unsatisfying answer is that there is no such thing as "user friendly" E2EE. The company can always modify the client, or insert themselves in the key discovery process, etc. There are solutions to this (decentralized app stores and open source protocols, public key servers) but none usable by the average person.

            • basch 2 days ago ago

              That might be a different pin? Messenger requires a pin to be able to access encrypted chat.

              Every time you sign in to the web interface or re-sign into the app you enter it. I don’t remember an option for an alphanumeric PIN or to offload it to a third party.

              • solenoid0937 2 days ago ago

                Oh my bad! I was talking about WhatsApp.

                The Messenger PIN is rate limited by an HSM, you merely enter it through the web interface.

                Of course, the HSM could be backdoored or the client could exfil the secret but the latter would be easy to discover.

                Harder to do any better here without making the user memorize a master password, which tends to fail miserably in real life.

        • 2 days ago ago
          [deleted]
  • 31337Logic 2 days ago ago

    Thank God for Signal. And by God I mean all the smart men and women who made Signal possible. Not God. God didn't do shit. As usual.

  • ubermonkey 2 days ago ago

    WhatsApp belongs to Meta.

    Why would anyone believe those chats are private?

  • timpera 2 days ago ago

    Lots of uninformed conspiratorial comments with zero proof in here, but I'd really like WhatsApp to get their encryption audited by a reliable, independent 3rd party.

    • 2 days ago ago
      [deleted]
  • josefrichter 2 days ago ago

    I am not into conspiracy theories, but I find it very unlikely that our governments can’t read all our messages across platforms.

  • Ms-J 2 days ago ago

    Who do they expect to fall for the claim that a Facebook-owned messenger couldn't read your "encrypted" messages? It's truly funny.

    Any large scale provider with headquarters in the USA will be subject to backdoors and information sharing with the government when they want to read or know what you are doing.

    • olalonde 2 days ago ago

      Me? I'd be very surprised if they can actually read encrypted messages (without pushing a malicious client update). The odds that no one at Meta would blow the whistle seem low, and a backdoor would likely be discovered by independent security researchers.

      • nindalf 2 days ago ago

        I'd be surprised as well. I know people who've worked on the WhatsApp apps specifically for years. It feels highly unlikely that they wouldn't have come across this backdoor and they wouldn't have mentioned it to me.

        Happy to bet $100 that this lawsuit goes nowhere.

      • riazrizvi 2 days ago ago

        If there is such a back door, it would hardly follow that it's widely known within the company. From the sparse reports on why Facebook/Meta has been caught doing this in the past, it's for favor trading and leverage at the highest levels.

      • SoftTalker 2 days ago ago

        That was my reaction on reading the headline. Of course Meta can read them, they own the entire stack. The question would really be do they?

      • Snoozus 2 days ago ago

        Is there an independent audit of the Whatsapp client and of the servers?

    • Aurornis 2 days ago ago

      > Any large scale provider with headquarters in the USA will be subject to backdoors and information sharing with the government when they want to read or know what you are doing.

      Not just the USA. This is basically universal.

      • j45 2 days ago ago

        It's not guaranteed or by default.

        This type of generalized defeatism does more harm than not.

        • Aurornis 2 days ago ago

          > It's not guaranteed or by default.

          Nation state governments do have the ability to coerce companies within their territory by default.

          If you think this feature is unique to the USA, you are buying too much into a separate narrative. All countries can and will use the force of law to control companies within their borders when they see fit. The USA actually has more freedom and protections in this area than many countries, even though it’s far from perfect.

          > This type of generalized defeatism does more harm than not.

          Pointing out the realities of the world and how governments work isn’t defeatism.

          Believing that the USA is uniquely bad and closing your eyes to how other countries work is more harmful than helpful.

          • j45 2 days ago ago

            Understanding the cloud is someone else's computer is something I've repeated many, many, many times in my comments.

            The OP's assumption that it's just the way it is and everyone should accept their communications being compromised is the issue.

        • embedding-shape 2 days ago ago

          No, assuming that anything besides what you can verify yourself is compromised isn't "defeatism", although I'd agree that it's overkill in many cases.

          But for data you absolutely want to keep secret? It's probably the only way to guarantee someone else somewhere cannot see it: default to assuming that if it's remote, someone will eventually be able to access it. If not today, it'll be stored and decrypted later.

        • Ms-J 2 days ago ago

          This is correct. Yes, every government has the ability to use violence and coercion, but that takes coordination, among other things. There are still places, and areas within those places, where enforcement and keeping it secret are nearly impossible.

      • ath3nd 2 days ago ago

        [dead]

    • preisschild 2 days ago ago

      > Any large scale provider with headquarters in the USA will be subject to backdoors and information sharing with the government when they want to read or know what you are doing.

      That's just wrong. Signal, for example, is headquartered in the US and does not even have this capability (besides metadata).

    • kgwxd 2 days ago ago

      They're only concerned that someone at Meta they don't already control could read their personal messages.

    • huijzer 2 days ago ago

      I have reached the point where I think even the chat control discussion might be a distraction, because essentially they can already get anything. Yeah, the government needs to fill in a form to make a request, but that's mostly automated, I believe.

      • gruez 2 days ago ago

        >I have reached the point that I think even the chat control discussion might be a distraction because essentially they can already get anything.

        Then why are politicians wasting time and attracting ire by attempting to push it through? Same goes for the UK demanding backdoors. If they already have it, why start a big public fight over it?

      • j45 2 days ago ago

        Such initiatives are likely trying to make it easier.

    • mattmaroon 2 days ago ago

      I think you can safely remove “in the USA” from that sentence.

    • hsuduebc2 2 days ago ago

      I do not believe them either. The swift start of the investigation by U.S. authorities only suggests there was no obstacle to opening one, not that nothing could be found. By “could not,” I mean it is not currently possible to confirm, not that there is necessarily nothing there.

      Personally, I would never trust anyone big enough that it (in this case Meta) needs and wants to be deeply entangled in politics.

    • rdtsc 2 days ago ago

      > Any large scale provider with headquarters in the USA will be subject to backdoors

      Wonder what large scale provider outside USA won’t do that?

  • cft 2 days ago ago

    I trust Telegram more: Putin never had any problems with Whatsapp, only with Telegram.

    • maqp 2 days ago ago

      No end-to-end encryption by default. WhatsApp has.

      No end-to-end encryption for groups. WhatsApp has.

      No end-to-end encryption on desktop. WhatsApp has.

      No break-in key-recovery. WhatsApp has.

      Inferring Telegram's security from the public statements of *checks notes* a former KGB officer and FSB director -- agencies that wrote the majority of the literature on maskirovka -- isn't exactly reliable, wouldn't you agree?

      • cft 2 days ago ago

        Telegram has private chats. I don't pay attention to his words, indeed. Way before the Ukrainian war, Russia had a massive campaign trying to block Telegram and they failed on a technical level. This has never happened with WhatsApp.

  • xvector 2 days ago ago

    What even are these low effort, uninformed conspiratorial comments saturating the comment section?

    Sure, Meta can obviously read encrypted messages in certain scenarios:

    - you report a chat (you're just uploading the plaintext)

    - you turn on their AI bot (inference runs on their GPUs)

    Otherwise they cannot read anything. The app uses the same encryption protocol as Signal and it's been extensively reverse engineered. Hell, they worked with Moxie's team to get this done (https://signal.org/blog/whatsapp-complete/).

    The burden of proof is on anyone that claims Meta bypassing encryption is "obviously the case."

    I am really tired of HN devolving into angry uninformed hot takes and quips.

  • oefrha 2 days ago ago

    I always assumed Meta has a backdoor that at least allows them to compromise key individuals if the men in black ask, but a law firm representing NSO courageously defending the people? Come the fuck on.

    > Our colleagues’ defence of NSO on appeal has nothing to do with the facts disclosed to us and which form the basis of the lawsuit we brought for worldwide WhatsApp users.

    • zugi 2 days ago ago

      > I always assumed Meta has backdoor that at least allows them to compromise key individuals if men in black ask

      According to Meta's own voluntarily published official statements, they do not.

      * FAQ on encryption: https://faq.whatsapp.com/820124435853543

      * FAQ for law enforcement: https://faq.whatsapp.com/444002211197967

      These representations are legally binding. If Meta were intentionally lying in them, it would invite billions of dollars of liability. They use similar terminology to Signal and the best private VPN companies: we can't read and don't retain message content, so law enforcement can't ask for it. They do keep some "meta" information and will provide it with a valid subpoena.

      The latter link even clarifies Meta's interpretation of their responsibilities under "National Security Letters", which the US Government has tried to use to circumvent 4th amendment protections in the past:

      > We interpret the national security letter provision as applied to WhatsApp to require the production of only two categories of information: name and length of service.

      I guess we'll see if this lawsuit goes anywhere or discovery reveals anything surprising.

  • m3kw9 2 days ago ago

    yes/no? Can't they just say that?

  • HocusLocus a day ago ago

    predatory unscrollable website. Most expensive fail in history.

  • snowwrestler 2 days ago ago

    For context, the U.S. is also currently investigating whether Donald Trump actually won the 2020 presidential election (he didn’t), whether aspirin causes autism (it doesn’t), and whether transgenic research is woke (it’s not).

    “The U.S. investigates” unfortunately does not mean as much as it used to. That said, I would rest easy in the knowledge that someone deep in the NSA already knows with absolute certainty whether the WhatsApp client app is doing anything weird. But they’re not likely to talk to a reporter or plaintiffs lawyer.

  • sailfast 2 days ago ago

    “Fox has investigated whether henhouse is secure” News at 11.

  • philipwhiuk 2 days ago ago

    Frankly the wrench-attack is easier.

  • webdoodle 2 days ago ago

    > US reportedly investigate claims that Meta can read encrypted WhatsApp messages

    Lol, Fox guarding the hen house.

  • Ms-J 2 days ago ago

    This slid off the first page of HN so quickly.

    As someone wisely pointed out in this thread, the reason Facebook is doing this is: "it's for favor trading and leverage at the highest levels."

    • dang 2 days ago ago

      It set off the flamewar detector, which is the usual reason that happens.

      We'll either turn off that software penalty or merge the thread into a submission of the original Bloomberg source - these things take a bit of time to sort through!

      Edit: thread merged from https://news.ycombinator.com/item?id=46836487 now.

      • Ms-J 2 days ago ago

        It does have an amplifying effect when issues like this happen: users who don't read in time won't see the thread due to the number of other threads.

        Thank you for the insight as to why it happened.

    • CalRobert 2 days ago ago

      Just came here after seeing it in the Guardian and really disappointed it's not on the front page. Telling.

      • dang 2 days ago ago

        Telling in what way?

  • rambojohnson 2 days ago ago

    I mean no shit, right?

  • alex1138 2 days ago ago

    Zuck didn't buy it in good faith. It wasn't "we'll grow you big by using our resources but be absolutely faithful to the privacy terms you dictate". Evidence: Brian Acton very publicly telling people that they (Zuck, possibly Sandberg) reneged.

    Zuck thinks we're "dumb fucks". That's his internet legacy. Copying products, buying them up, wiping out competition

  • mlmonkey 2 days ago ago

    I'm shocked, shocked! that there's gambling going on here ...

  • hn_user_9876 2 days ago ago

    [dead]

  • renegade-otter 2 days ago ago

    Anyone trusting Facebook to follow basic human decency and, yes, laws, is a fool.

    • dang 2 days ago ago

      Maybe so, but please don't post unsubstantive comments to Hacker News. We're trying for something different here.

      • renegade-otter 2 days ago ago

        Point taken, but I feel like going into details at this stage is redundant. There have been probably hundreds of discussions on this site regarding this topic. Books have been written about Facebook's and Zuckerberg's absent moral compass. To wit, from three days ago:

        https://www.msn.com/en-in/money/news/meta-ceo-mark-zuckerber...

        "While Zuckerberg reportedly wanted to prevent "explicit" conversations with younger teens, a February 2024 meeting summary shows he believed Meta should be "less restrictive than proposed" and wanted to "allow adults to engage in racier conversation on topics like sex." He also rejected parental controls that would have let families disable the AI feature entirely. Nick Clegg, Meta's former head of global policy, questioned the approach in internal emails, asking if the company really wanted these products "known for" sexual interactions with teens, warning of "inevitable societal backlash."

        • dang 2 days ago ago

          Let's assume you're right that going into details is redundant. In that case, however, so is the generic putdown.

          Put differently: even if you don't owe megacorps that don't follow basic human decency better, you owe this community better if you're participating in it.

          https://news.ycombinator.com/newsguidelines.html

    • xvector 2 days ago ago

      Anyone blindly believing every random allegation is also a fool, especially when the app in question has been thoroughly reverse engineered and you can freely check for yourself that it's using the same protocol as Signal for encryption.

      • gherkinnn 2 days ago ago

        Allegations against a company who circumvented Android's security to track users?

        I don't have any proof that Meta stores WhatsApp messages, but I feel it in my bones that they at the very least tried to do so. And if that ever comes to light, precisely nobody will be surprised.

        https://cybersecuritynews.com/track-android-users-covertly/

        • gruez 2 days ago ago

          >And if ever that comes to light, precisely nobody will be surprised.

          The amount of ambient cynicism on the internet basically makes this a meaningless statement. You could plausibly make the same claim for tons of other conspiracy theories, eg. JFK was assassinated by the CIA/FBI, Bush did 9/11, covid was intentionally engineered by the chinese/US government, etc.

          • gherkinnn 2 days ago ago

            Meta storing WhatsApp messages requires one mental leap: they found a secret way to hoover up WhatsApp messages. Everything else is their MO, including breaking systems to their advantage and having absolutely no scruples.

            On the other hand, Occam's razor can barely keep up with the mental gymnastics required to paint Bush (or even Cheney) as the mastermind behind 9/11.

      • jlarocco 2 days ago ago

        That raises the question of why not just use Signal and avoid a company whose founder thinks we're all "dumbfucks" and has a long history of scandals and privacy violations?

        The evidence is pretty clear that Facebook wants to do everything they legally can to track and monitor people, and they're perfectly okay crossing the line and going to court to find the boundaries.

        Using a company like that for encrypted messaging seems like an unnecessary risk. Maybe they're not decrypting it, but they're undoubtedly tracking everything else about the conversation because that's what they do.

    • Forgeties79 2 days ago ago

      They got caught torrenting unbelievable amounts of content, an act that, committed even just a few times, could get my home internet shut down with no recourse (best outcome). Literally nothing happened. Combine the fact that nothing legally significant ever happens to them with Zuckerberg's colossal ego and complete lack of ethical foundation, and you have quite the recipe.

      And I’m not even getting into the obvious negative social/political repercussions that have come directly from Facebook and their total lack of accountability/care. They make the world worse. Aside from the inconvenience for hobbyist communities and other groups, all of which should leave Facebook anyway, we would lose nothing of value if Facebook was shut down today. The world would get slightly better.

      • gruez 2 days ago ago

        >an act that committed even just a few times can get my home Internet shut down with no recourse (best outcome).

        No, the best (and also most likely) outcome is you using a VPN and nothing happens, like 99.9% of pirates out there.

        >Literally nothing happened.

        Isn't there a lawsuit in the works?

        • Forgeties79 2 days ago ago

          If you have to do a thing that obscures your act, it doesn't change the fact that there are rules for me and not for them. We know for a fact they did it. Did their ISP threaten them? Did they get their internet service shut off?

          Edit: they already won their first case in June against authors. I am very curious to see how that lawsuit goes. Obviously we don’t know the results yet but I would be incredibly surprised to see them lose and/or have to “undo” the training. That’s a difficult world to imagine, especially under the current US admin. Smart money is the damage is done and they’ll find some new way to be awful or otherwise break rules we can’t.

          • gruez 2 days ago ago

            >If you have to do a thing that obscures your act it doesn’t change the fact that there are rules for me and not them. We know for a fact they did it. Did their ISP threaten them? Did they get their internet service shut off?

            Is there any indication they didn't use a VPN? If they did use a VPN, how is it "there are rules for me and not them", given that anyone can also use VPN to pirate with impunity?

            • Forgeties79 2 days ago ago

              I don’t understand what you’re doing here. They were caught. We know they did it. There are several articles about it. They have been sued over it because it happened. I don’t think anything will come of it, but they clearly did it. It is public knowledge.

              • gruez 2 days ago ago

                You claim there's some two-tier legal system for Meta compared to the average torrenter, by virtue of their internet not getting cut off. However, that's a false premise. You only get your internet cut off if you fail to use a VPN. If Meta used a VPN (like most pirates do), then their internet staying up isn't evidence of any special treatment.

                If there's some two-tier treatment of Meta, it's that they're being sued more aggressively than the average pirate. If you pirated Stranger Things, you could blab all you want about your copyright infringement escapades and likely never face any legal consequences. OTOH, once word got out that Meta was torrenting books, every copyright lawyer out there went after them.

      • bayarearefugee 2 days ago ago

        > Literally nothing happened.

        The true wealthy live by an entirely different set of rules than the rest of us, especially when they are willing to prostrate themselves to the US President.

        This has always been true to some degree, but it is more true than ever (there used to be some limits based on accepted decorum), and they just don't even try to hide it anymore.

        • Forgeties79 2 days ago ago

          I think the not hiding it part is what’s starting to stick in my craw. We all knew it was happening on some level, but we felt that there were at least some boundaries somewhere out there even if they were further than ours. Now it just feels like the federal government basically doesn’t exist and companies can do whatever they want to us.

  • oldestofsports 2 days ago ago

    Surprised pikachu face

  • SirFatty 2 days ago ago

    Of course they can. Why wouldn't you assume this to be the case?

  • oncallthrow 2 days ago ago

    This should surprise nobody. Do you really think that the intelligence agencies of the US etc would allow mainstream E2E encryption? Please stop being so naive

  • calibas 2 days ago ago

    It's vulnerable to man-in-the-middle attacks, and the man-in-the-middle happens to be Meta.

    The tricky part would be doing it and not getting caught though.
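    The worry can be sketched with a toy Diffie-Hellman exchange (illustrative only; deliberately insecure parameters, and not WhatsApp's actual Signal-protocol handshake): if clients accept whatever public keys the key-distribution server relays, that server can substitute its own keys and sit in the middle. Out-of-band safety-number verification exists precisely to detect this substitution.

```python
# Toy man-in-the-middle against unauthenticated Diffie-Hellman.
# NOT real cryptography: small modulus, no authentication, no padding.
import secrets

P = 2**127 - 1  # a Mersenne prime, fine for a demo, far too small for real use
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Alice and Bob each generate a key pair...
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# ...but the relay (the "server") hands each of them its own public key instead.
m_priv, m_pub = keypair()

# Alice believes m_pub is Bob's key; Bob believes m_pub is Alice's.
alice_secret = pow(m_pub, a_priv, P)
bob_secret = pow(m_pub, b_priv, P)

# The relay can compute both shared secrets and silently re-encrypt traffic.
relay_with_alice = pow(a_pub, m_priv, P)
relay_with_bob = pow(b_pub, m_priv, P)

assert alice_secret == relay_with_alice
assert bob_secret == relay_with_bob
assert alice_secret != bob_secret  # Alice and Bob never actually share a key
```

    Comparing key fingerprints over a channel the server doesn't control (in person, a phone call) is what breaks this attack, which is why the "who is in this chat" question from the audit above matters.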

  • kachapopopow 2 days ago ago

    Yes, it is well known that it is not E2EE but client-to-server encrypted. Otherwise your message history wouldn't work.

    • codexetreme 2 days ago ago

      Might be a rookie question. But exactly why would chat history not work?

      • ryanscio 2 days ago ago

        It would, just not on new devices without moving the keys via an already-trusted device. This is what WhatsApp presumably does.

        • kachapopopow 2 days ago ago

          That's the thing: it does not, and it has been known that it does not do this. The keys are stored on the server, and the server sends them to your device on login. They do have some kind of machine ID encoded in it, but that is just for show.

    • kachapopopow 2 days ago ago

      I guess I owe a clarification: Otherwise your message history wouldn't be available the moment you log in with your credentials*.

    • xvector 2 days ago ago

      This is a total misunderstanding of how E2EE works.

      I need to either enter my password or let the app access my iCloud Keychain to let it derive the backup encryption key.

      It's also well known that they worked with Moxie's team to implement the same E2EE protocol as Signal. So messages are E2EE as well.
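      The password-to-key step described above can be sketched with a generic password-based KDF. This is the general technique only, with assumed parameters; WhatsApp's published encrypted-backup design reportedly also involves an HSM-backed key vault, which is not modeled here.

```python
# Minimal sketch (assumed parameters, not WhatsApp's actual scheme) of deriving
# a backup encryption key from a user password, so the server stores only
# ciphertext it cannot decrypt on its own.
import hashlib
import hmac
import os

def derive_backup_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    # PBKDF2-HMAC-SHA256: a slow, salted derivation so the password cannot be
    # cheaply brute-forced from the stored ciphertext alone.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)  # stored alongside the encrypted backup, not secret
key = derive_backup_key("correct horse battery staple", salt)
assert len(key) == 32  # 256-bit symmetric key

# Same password + salt -> same key; the derived key never needs to leave the device.
assert hmac.compare_digest(key, derive_backup_key("correct horse battery staple", salt))
```

      The point of the slow KDF is that even if the server hands over the encrypted backup, an attacker still has to grind through the password space at hundreds of thousands of hash iterations per guess.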

      • kachapopopow a day ago ago

        I can sign in from my Apple device to my Android device purely using my credentials, so as far as I know it is just for show.

  • jijji 2 days ago ago

    If anybody believes that Facebook would allow people to send a totally encrypted message to somebody, they're out of their mind. They're pretty much in bed with law enforcement at this point. I don't know how many people have been killed in Saudi Arabia this year for writing Facebook messages to each other that were against what the government wanted, but it's probably a large number.

    • xvector 2 days ago ago

      This reads like another low effort conspiratorial comment.

      WhatsApp has been reverse engineered extensively, they worked with Moxie's team to implement the same protocol as Signal, and you can freely inspect the client binaries yourself!

      If you're confident this is the case, you should provide a comment with actual technical substance backing your claims.