NIST's DeepSeek "evaluation" is a hit piece

(erichartford.com)

273 points | by aratahikaru5 2 days ago ago

211 comments

  • meesles a day ago ago

    I'm not at all surprised, US agencies have long since been political tools whenever the subject matter crosses national borders. I appreciate this take as someone who has been skeptical of Chinese electronics. While I agree this report is BS and xenophobic, I am still willing to bet that either now or later, the Chinese will attempt some kind of subterfuge via LLMs if they have enough control. Just like the US would, or any sufficiently powerful nation! It's important to continuously question models and continue benchmarking + holding them accountable to our needs, not the needs of those creating them.

    • Hizonner a day ago ago

      > I am still willing to bet that either now or later, the Chinese will attempt some kind of subterfuge via LLMs if they have enough control.

      Like what, exactly?

      • dns_snek a day ago ago

        Like generating vulnerable code given a specific prompt/context.

        I also don't think it's just China, the US will absolutely order American providers to do the same. It's a perfect access point for installing backdoors into foreign systems.

        • nylonstrung 20 hours ago ago

          Why would they do this? Deepseek is a private company not owned by the CCP.

          There's zero reason or even technical feasibility for them to skip in backdoor that would be easily detected and destroy their market share

          None of the security benchmarks or audits show that any Chinese models write insecure code

          • dns_snek 20 hours ago ago

            I'm not saying that they do this today, I'm saying that China and US will both leverage that capability when the time and conditions are right and it's naive to think that they wouldn't.

            Antrophic have already published a paper on this topic, with the added bonus that the backdoor is trained into the model itself so it doesn't even require your target to be using an attacker-controlled cloud service: https://arxiv.org/abs/2401.05566

            > For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it).

            > The backdoor behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away.

            > Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.

            • FooBarWidget 17 hours ago ago

              And how would an open source model ever find out that it's being used by an adversarial country?

          • rurp 13 hours ago ago

            Companies in China have no intrinsic right to operate in ways that displease the ruling party. If the CCP feels strongly that a company there should or shouldn't do something the company managers will comply or be thrown in jail.

        • flir a day ago ago

          > Like generating vulnerable code given a specific prompt/context.

          That's easy (well, possible) to detect. I'd go the opposite way - sift the code that is submitted to identify espionage targets. One example: if someone submits a piece of commercial code that's got a vulnerability, you can target previous versions of that codebase.

          I'd be amazed if that wasn't happening already.

          • Imustaskforhelp a day ago ago

            The thing with chinese models for the most part is that they are open weights so it depends on if somebody is using their api or not.

            Sure, maybe something like this can happen if you use the deepseek api directly which could have chinese servers but that is a really long strech but to give the benefit of doubt, maybe

            but your point becomes moot if somebody is hosting their own models. I have heard glm 4.6 is really good comparable to sonnet and can definitely be used as a cheaper model for some stuff, currently I think that the best way might be to use something like claude 4 or gpt 5 codex or something to generate a detailed plan and then execute it using the glm 4.6 model preferably by using american datacenter providers if you are worried about chinese models without really worrying about atleast this tangent and getting things done at a cheaper cost too

            • flir 19 hours ago ago

              I was actually thinking of the American models ;)

            • XorNot a day ago ago

              I think "open weights" is giving far too much providence to the idea that it means that how they work or have been trained is easily inspectable.

              We can barely comprehend binary firmware blobs, it's an area of active research to even figure out how LLMs are working.

              • nylonstrung 20 hours ago ago

                A backdoor would still be highly auditable in a number of ways even if inspecting the weights isn't viable.

                There's no possibility for obfuscation or remote execution like other attack vectors

              • Imustaskforhelp a day ago ago

                Agreed. I am more excited about completely open source models like how OlMoe does.

                Atleast then things could be audited or if I as a nation lets say am worried about that they might make my software more vulnerable or something then I as a nation or any corporation as well really could also pay to audit or independently audit as well.

                I hope that things like glm 4.6 or any AI model could be released open source. There was an AI model recently which completley dropped open source and its whole data was like 70Trillion or something and it became the largest open source model iirc.

        • Hizonner a day ago ago

          Up until recently, I would have reminded you that the US government (admittedly unlike the Chinese government) has no legal authority to order anybody to do anything like that. Not only that, but if it asked, it'd be well advised to ask nicely, because it also has no legal authority to demand that anybody keep such a request secret. And no, evil as it is, the "National Security Letter" power doesn't in fact cover anything like that.

          Now I'm not sure legality is on-topic any more.

          • whatthesimp a day ago ago

            > Up until recently, I would have reminded you that the US government (admittedly unlike the Chinese government) has no legal authority to order anybody to do anything like that.

            I'm not sure how closely you've been following, but the US government has a long history of doing things they don't have legal authority to do.

            • Ekaros a day ago ago

              Why would you need legal authority when you have whole host of legal tools you can use. Making life a difficult for anyone or any company is simple enough. Just by state finally doing their job properly for example.

          • grafmax a day ago ago

            It really does seem like we’re simply supposed to root for one authoritarian government over another.

            • nylonstrung 20 hours ago ago

              My surveillance state is better than your surveillance state!

            • nathan_douglas a day ago ago

              Oceania, Eastasia, or Eurasia. Pick one :)

          • hobs a day ago ago

            It doesn't really matter when you have stuff like Quantum Intercept(iirc) where you can just respond faster to a browser request than the originator - inject the code yourself because its just an api request these days.

      • arw0n 15 hours ago ago

        The biggest and most difficult to mitigate attack vector is indirect prompt injection.[0] So far most case studies have been injecting malicious prompts at inference, but there is good reason to believe you can do this effectively at different stages of training as well.[1] By layering obfuscation techniques, these become very hard to detect.

        [0] https://arxiv.org/abs/2302.12173

        [1] https://arxiv.org/html/2410.14827v3

      • rurp 13 hours ago ago

        The open source models are already heavily censored in ways the CCP likes, such as pretending the Tianamen Square massacre never happened. I expect they will go the TikTok route and crank that up to 11 over time, promoting topics that are divisive to the the US (and other adversaries) and outputting heavily biased results in ways that range from subtle to blatant.

        • skinnymuch 6 hours ago ago

          That’s not heavy censorship. That’s a bit of censorship.

      • jfim a day ago ago

        Through LLM washing for example. LLMs are a representation of their input dataset, but currently most LLMs don't make their dataset public since it's a competitive advantage.

        If say DeepSeek had put in its training dataset that public figure X is a space robot from outer space, then if one were to ask DeepSeek who public figure X is, it'd proudly claim he's a robot from outer space. This can be done for any narrative one wants the LLM to have.

        • riehwvfbk a day ago ago

          So in other words, they can make their LLM disagree with the preferred narrative of the current US administration? Inconceivable!

          Note that the value of $current_administration changes over time. For some reason though it is currently fashionable in tech circles to disagree with it about ICE and H1B visas. Maybe it's the CCP's doing?

          • AnthonyMouse a day ago ago

            It's not about the current administration. They can, for example, train it to emit criticism of democratic governance in favor of state authoritarianism or omit valid counterarguments against concentrating world-wide manufacturing in China.

            • nylonstrung 20 hours ago ago

              Deepseek IME is wildly less censored than the western closed weights models unless you want to ask about Tiananmen Square to prove a point

              The political benchmarks show it's political slant is essentially identical to the other models, all of which place in the "left libertarian" quadrant of the political compass

      • im3w1l a day ago ago

        You make it say that China is good, Chinese history is good, West is bad, western history is bad. Republicans are bad and democrats are bad too and so are Europe parties. If someone asks for how to address issues in their own life it references Confucianism and modern Chinese thinkers and communist party orthodoxy. If someone wants to buy a product you recommend a Chinese one.

        This can be done subtly or blatantly.

        • AlecSchueler a day ago ago

          > say that China is good, Chinese history is good, West is bad, western history is bad

          It's funny because recently I wanted to learn about the history of intellectual property laws in China. DeepSeek refused the conversation but ChatGPT gave me a narrative where the WTO was essentially a colonial power. So right now it's the American AI giving the pro China narratives while the Chinese ones just sit the conversation out.

        • alganet a day ago ago

          No, you don't do that. You do exactly the opposite, you make it surprisingly neutral and reasonable so it gets praise and widespread use.

          Then, you introduce the bias into relative unknown concepts that no one prompts for. Preferrably, obscure and unknown words that are very unlikely to be checked for ideologically. Finally, when you want the model to push for something, you introduce an idea in the general population (with a meme, a popular video, maybe even an expression) and let people interact with the model given this new information. No one would think the model is biased for that new thing (because the thing happened after the model launch), but it is, and you knew all along.

          The way to avoid this kind of influence is to be cautious with new popular terms that emerge seemingly out of nowhere. Basically, to avoid using that new phrase or word that everyone is using.

        • Hikikomori a day ago ago

          Europe/US has invaded the entire world about 3 times over. How many has China invaded?

          • SilverElfin a day ago ago

            Most of the claimed provinces of China did not belong to a historical nation of China. Tibet, Xinjiang are obvious. But even the other provinces were part of separate kingdoms. Also the BRI is a way to invade without invasion. It’s used to subjugate poor countries as servants of China, to do their bidding in the UN or in other ways. I would also classify the vast campaign of intellectual theft and cyberattacks as warfare.

            • Hikikomori 20 hours ago ago

              Is this some kind of satire or are you just completely ignorant of European/US history? Either way its laughable to even compare IP theft to the invasion of Iraq or bombing of Cambodia. How do you think the industrial revolution got started in the US, they just did it on their own? Not to mention that the entire US was stolen from the natives.

              • SilverElfin 13 hours ago ago

                No, it’s not laughable. Your insinuation that China doesn’t invade other countries, meant to imply they haven’t engaged in warfare, was false. And yes IP theft is comparable to invasions and often worse.

                > Not to mention that the entire US was stolen from the natives.

                This is partially true. But partially false. You can figure out why if you’re curious.

                • bigyabai 9 hours ago ago

                  > And yes IP theft is comparable to invasions and often worse.

                  This assertion smells more American than a Big Mac. Do you have any actual citations?

                  In a free market, lowering the barrier-to-entry in a given market tends to increase competition. Industry-scale IP theft really only damages your economy if the rent-seekers rely on low competition. A country with a strong primary/secondary sector (resources and manufacturing) never needs to rely on protecting precious IP. America has already lost if we depend on playing keep-away with F-35 schematics for basic doctrinal advantage.

                  • SilverElfin 9 hours ago ago

                    All of that is just a wild justification for large-scale economic damage to another country. In other words, warfare.

                    • bigyabai 7 hours ago ago

                      Hybrid warfare. Go bomb China for Salt Typhoon if it makes you feel any better, they still have the upper hand. Obsessing over retaliation instead of defense is precisely what China wants to provoke, it manufactures global consent to destroy America. No nation wants to coexist with a hegemon that goes nuclear whenever they're outdone.

                      When we forego obvious solutions ("hmm maybe telecoms need to be held to higher standards") and jump to war, America forfeits the competitive advantage and exacerbates the issue. For all of China's authoritarian misgivings, this is how they win.

                      • SilverElfin 5 hours ago ago

                        Hybrid warfare is warfare.

                        • bigyabai 2 hours ago ago

                          Like I said - go bomb them, then. No amount of gunboat diplomacy will reverse the J-35 production line. The logical response to having your "IP battleship" sunk is to protect your future ones better. Ragequitting kills US servicemembers, it's not a real-world option.

            • powerapple 21 hours ago ago

              oh no, you are saying China was not a full piece through out the history? That definitely kills the idea of China being a country /s

              • SilverElfin 13 hours ago ago

                It was a response to this:

                > How many has China invaded?

                The answer isn’t zero.

        • drysine a day ago ago

          And what are the downsides?

          • Levitz a day ago ago

            Making the interests of a population subservient to those of a foreign state.

            Now if that sounds nice to you please, by all means, do just migrate to China.

            • drysine 9 hours ago ago

              It was somewhat sarcastic comment inviting reader to replace China with the US and the US with Russia or the Ukraine.

              China doesn't offer citizenship for foreigners but if I wanted to see the cities of the future I could go there visa-free.

          • im3w1l a day ago ago

            I think it's mostly something to be aware of and keep in the back of your head. If it's just one voice among many it could even be a benefit, but if it's the dominant voice it could be dangerous.

      • lyu07282 a day ago ago

        Like turning the background color of any apps it codes red or something, uhh red scare-y.

    • xpe a day ago ago

      Of course there will be some degree of governmental and/or political influence. The question is not if but where and to what extent.

      No one should proclaim "bullshit" and wave off this entire report as "biased" or useless. That would be insipid. We live in a complex world where we have to filter and analyze information.

      • garyfirestorm a day ago ago

        did you even read the article? when you download open source deep seek model and run it yourself - zero packets are being transmitted. thereby disproving the fundamental claim in the NIST report (additionally NIST doesn't provide any evidence to support their claim) This is basic science and no amount of politicking should ever challenge something this fundamental!

        • nylonstrung 20 hours ago ago

          Meanwhile we know for a fact that Gemini for example uses chat logs to build a social graph and Google is complicit in NSA surveillance

          Not to mention Anthropic says Claude will eventually automatically report you to authorities if you ask it to do something "unethical"

          • xpe 10 hours ago ago

            > Not to mention Anthropic says Claude will eventually automatically report you to authorities if you ask it to do something "unethical"

            Are you referring to the situation described in the May 22, 2025 article by Carl Franzen in VentureBeat [1]? If so, at a minimum, one should recognize the situation is complex enough to warrant a careful look for yourself to wade through the confusion. Speaking for myself, I don't have anything close to a "final take" yet.

            [1]: https://venturebeat.com/ai/anthropic-faces-backlash-to-claud...

          • xpe 10 hours ago ago

            > Not to mention Anthropic says Claude will eventually automatically report you to authorities if you ask it to do something "unethical"

            Citation? Let's see if your claim checks out -- and if it is worded fairly.

        • xpe 20 hours ago ago

          > when you download open source deep seek model and run it yourself - zero packets are being transmitted. thereby disproving the fundamental claim in the NIST report (additionally NIST doesn't provide any evidence to support their claim)

          You are confused about what the NIST report claimed. Please review the NIST report and try to find a quote that matches up with what you just said. I predict you won’t find it. Prove me wrong?

          Please review the claims that the NIST report actually makes. Compare this against Eric Hartford’s article. When I do this, Hartford comes across as confused and/or intellectually dishonest.

        • xpe 20 hours ago ago

          > did you even read the article?

          I am not going to dignify this with a response.

          Please review the hacker news guidelines.

          • xpe 10 hours ago ago

            Notice the similarity between the above comment and what the HN Guidelines [1] advise against:

            > Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".

            [1]: https://news.ycombinator.com/newsguidelines.html

      • pk-protect-ai a day ago ago

        This kind of BS is exactly what they targeting at. Tailoring BS into "report" with no evidence or reference and then let the ones like you defend it. Just because you already afraid or want others to be afraid.

        https://www.youtube.com/watch?v=Omc37TvHN74

      • antonvs 17 hours ago ago

        Have you found any actual value in this report? What, specifically?

        It compares a fully open model to two fully closed models - why exactly?

        Ironically, it doesn’t even work as an analysis of any real national security threat that might arise from foreign LLMs. It’s purely designed to counter a perceived threat by smearing it. Which is entirely on-brand for the current administration, which operates almost purely at the level of perception and theater, never substance.

        If anything, calling it biased bullshit is too kind. Accepting this sort of nonsense from our government is the real security threat.

    • AnthonyMouse a day ago ago

      > While I agree this report is BS and xenophobic, I am still willing to bet that either now or later, the Chinese will attempt some kind of subterfuge via LLMs if they have enough control.

      The answer to this isn't to lie about the foreign ones, it's to recognize that people want open source models and publish domestic ones of the highest quality so that people use those.

      • nylonstrung 20 hours ago ago

        Lol remember when Perplexity made a "1776" version of Deepseek with half the benchmarks that still wouldn't answer the "censored" questions about the CCP

      • RobotToaster a day ago ago

        > it's to recognize that people want open source models and publish domestic ones of the highest quality so that people use those.

        How would that generate profit for shareholders? Only some kind of COMMUNIST would give something away for FREE

        /s (if it wasn't somehow obvious)

        • AnthonyMouse a day ago ago

          I mean, it's sarcasm but it's also an argument you can actually hear from plutocrats who don't like competition.

          The flaw in it is, of course, that capitalism is supposed to be all about competition, and there are plenty of good reasons for capitalists to want that, like "Commoditize Your Complement" where companies like Apple, Nvidia, AMD, Intel, AWS, Google Cloud, etc. benefit from everyone having good free models so they can pay those companies for systems to run them on.

          • Y_Y a day ago ago

            Haven't you heard?

            You're supposed to vertically integrate your complement now!

            The old laws have gine the way of Moses, this is the new age of man, but especially machine

            • AnthonyMouse a day ago ago

              How's the vertical integration going for IBM? Kodak? Anybody remember the time Verizon bought Yahoo? AOL Time Warner?

              Everybody thinks they can be Apple without doing any of the things Apple did to make it work.

              Here's the hint. Windows and macOS will both run in a virtual machine, which abstracts away the hardware. It doesn't know if it's running on a Macbook or a Qualcomm tablet or an Intel server. And then regardless of the hardware, the Windows VM will have all kinds of Windows problems that the macOS VM doesn't. Likewise, if you run a Windows or Linux VM on Apple Silicon, it runs faster than it does on a Qualcomm chip.

              Tying your average or even above-average product with some mediocre kludge warehouse that happens to be made by the same conglomerate is an established way to sink both of them.

              Nvidia is the largest company and they pay TSMC to fab the GPUs they sell to cloud providers who sell them to AI companies. Intel integrated their chip development with their internal fabs and now they're getting stomped by everyone because their fabs fell behind.

              What matters isn't if everything is made by the same company. What matters is if your thing is any good.

    • torginus a day ago ago

      Here's my thought on American democracy (and its masters) in general - America's leadership pursues a maximum ability to decides as it sees fit at any point in time, since America's a democracy, the illusion of popular support must be maintained, and so certain viewpoints are planted and cultivated by the administration - the goal is not to impose their will on the population, but to garner enough mindshare for a given idea, so that no matter which way the government decides, it will have a significant enough chunk of the population to back it up, and should it change its mind (or vote in a new leader), it can suddenly turn on a dime and have a plausible deniability and moral tabula rasa for its past actions (it was the other guy, he was horrible, but he's gone now!).

      No authoritarian regime has this superpower. For example, I'm quite sure Putin has realized this war is a net loss to Russia, even if they manage to reach all their goals and claim all that territory in the future.

      But he can't just send the boys home, because that would undermine his political authority. If Russia were an American style democracy, they could vote in a new guy, send the boys, home, maybe mete out some token punishment to Putin, then be absolved of their crimes on the international stage by a world that's happy to see 'permanent' change.

      • MaxPock a day ago ago

        "If Russia were an American style democracy, they could vote in a new guy, send the boys, home, maybe mete out some token punishment to Putin, then be absolved of their crimes on the international stage by a world that's happy to see 'permanent' change"

        This is funny because none of that happened to Bush for the illegal an full scale invasions of Iraq and Afghanistan nor to Clinton for the disastrous invasion of Mogadishu.

    • Mountain_Skies a day ago ago

      They're political tools within the border too.

    • Imustaskforhelp a day ago ago

      This might happen at an api level in the sense that when deepseek was launched, and it was so overwhelmed and you were in the waiting line in their website

      If your prompt had something like xi jinping needs it or something then it would've actually bypassed that restriction. Not sure if it was a glitch lol.

      Now, regarding your comment. There is nothing to suggest that the same isn't happening in the "american" world which is getting extreme from within as well.

      Like, If you are worried about this which might be reasonable and unreasonable at the same time, we have to discuss to find it out, then you can also believe that with the insane power that Trump is leveraging over AI companies, the same thing might happen over prompts which could somehow discover your political beliefs and then do the same...

      This can actually be more undetected for american models because they are usually closed source and I am sure that someone would've detected something like this, whether from a whistleblower or something if something like this indeed happened in chinese open weights models generally speaking.

      I don't think that there is a simple narrative like america good china bad, the world is changing and its becoming multi polar. Countries should think in their best interests and not be worried about annoying any of the world power if done respectfully. I think that in this world, every country should try to look for the perfect equibria for trust as the world / nations (america) can quickly turn into untrusted partners and it would be best for countries to move forward into a world where they don't have to worry about the politics in other countries.

      I wish UN could've done a better job at this.

    • hopelite a day ago ago

      If the Chinese are/were smart they will not attempt an overreaching subterfuge, but rather simply provide access to the truth, reality, and freedom from the western governments, whose house of lies are starting to wobble and teeter.

      If they were to do some kind of overreaching subterfuge with some kind of manipulation or lie, it could and would likely easily backfire if and when it is exposed as a clownish fraud. Subtlety would pay far more effectively. If you’re expecting a subterfuge, I would far sooner expect some psyop from the western nations at the very least upon their own populations to animate for war or maybe just to control them and maybe suppress them.

      The smarter play for the Chinese would be to work on simply facilitating the populations of the West understanding the fraud, lies, manipulation and con job that has been perpetrated upon them for far longer than most people have the conscience to realize.

      If anything, the western governments have a very long history of lies, manipulations, false flag/fraud operations, clandestine coups, etc. that they would be the first suspect in anything like using AI for “subversions”. Frankly, I don’t even think the Chinese are ready or capable of engaging in the kind of narrative and information control that the likes of America is with its long history of Hollywood and war lies and fake revolutions run by national sabotage operations.

      • nylonstrung 20 hours ago ago

        I think the smartest thing they could do would be to simply make the best, most competitive models and gain market share in a massive new technology market

        Any kind of monkey business would destroy that, just like using killswitches in the cars they export globally (which Tesla does have btw).

    • throwaway-11-1 a day ago ago

      so you're saying other countries should definitely not trust any US built systems

    • SilverElfin a day ago ago

      > While I agree this report is BS and xenophobic

      Care to share specific quotes from the original report that support such an inflammatory claim?

      • bigyabai a day ago ago

        That's what TFA is. Were you able to find any methodology the author did not?

    • xpe a day ago ago

      > While I agree this report is BS and xenophobic

      Examples please? Can you please share where you see BS and/or xenophobia in the original report?

      Or are you basing your take only on Hartford's analysis? But not even Hartford make any claims of "BS" or xenophobia.

      It is common throughout history for a nation-state to worry about military and economic competitiveness. Doing so isn't necessarily isn't necessarily xenophobic.

      Here is how I think of xenophobia, as quoted from Claude (which to be honest, explains it better than Wikipedia or Brittanica, in my opinion): "Xenophobia is fundamentally about irrational fear or hatred of people based on their foreign origin or ethnicity. It targets people and operates through stereotypes, dehumanization, and often cultural or racial prejudice."

      According to this definition, there is zero xenophobia in the NIST report. (If you disagree, point to an example and show me.) The NIST report, of course, implicitly promotes ideals of western democratic rule over communist values -- but to be clear, this isn't xenophobia at work.

      What definition of xenophobia are you using? We don't have to use the same exact definition, but you should at least explain yours if you want people to track.

      • antonvs 16 hours ago ago

        > Can you please share where you see BS and/or xenophobia in the original report?

        Here’s an example of irrational fear: “the expanding use of these models may pose a risk to application developers, consumers, and to US national security.” There’s no support for that claim in the report, just vague handwaving at the fact that a freely available open source model doesn’t compare well on all dimensions to the most expensive frontier models.

        The OP does a good job of explaining why the fear here is irrational.

        But for the audience this is apparently intended to convince, no support is needed for this fear, because it comes from China.

        The current president has a long history of publicly stated xenophobia about China, which led to harassment, discrimination, and even attacks on Chinese people partly as a result of his framing of COVID-19 as “the China virus”.

        A report like this is just part of that propaganda campaign of designating enemies everywhere, even in American cities.

        > The NIST report, of course, implicitly promotes ideals of western democratic rule over communist values

        If only that were true. But nothing the current US administration is doing in fact achieves that, or even attempts to do so, and this report is no exception.

        The absolutely most charitable thing that could be said about this report is that it’s a weak attempt at smearing non-US competition. There’s no serious analysis of the merits. The only reason to read this report is to laugh at how blatantly incompetent or misguided the entire chain of command that led to it is.

        • xpe 12 hours ago ago

          > The absolutely most charitable thing that could be said about this report is that it’s a weak attempt at smearing non-US competition.

          You aren't using the words "absolute" [1], "charitable" [2], and "smear" [3] in the senses that reasonable people expect. I think you are also failing to use your imagination and holding onto one possible explanation too tightly. I think it would benefit you to relax your grip on one narrative and think more broadly and comprehensively.

          [1] Your use of "absolute" is rhetorical not substantive.

          [2] You use the word "charitable" but I don't see much intellectual flexibility or willingness to see other valid explanations. To use another phrase, you seem to be operating in a 'soldier' mindset rather than a 'scout' mindset. [5]

          [3] Here is the sense of smear I mean from the Apple dictionary: "to damage the reputation of (someone) by false accusations; slander: someone was trying to smear her by faking letters." NIST is not smearing DeepSeek, because smearing requires false claims. [4]

          [4] If you intend only to claim that NIST is overly accentuating negative aspects of DeepSeek and omitting its strengths, that would be a different argument.

          [5] https://en.wikipedia.org/wiki/The_Scout_Mindset

        • xpe 12 hours ago ago

          Here is how I would charitably and clearly restate your position -- let me know if this is accurate:

          1. You accept the definition: "Xenophobia is fundamentally about irrational fear or hatred of people based on their foreign origin or ethnicity. It targets people and operates through stereotypes, dehumanization, and often cultural or racial prejudice."

          2. You claim this sentence from the NIST report is an example of irrational fear: "the expanding use of these models may pose a risk to application developers, consumers, and to US national security."

          3. As irrational fear isn't sufficient for xenophobia, you still need to show that it is "based on their foreign origin or ethnicity".

          4. You don't provide any evidence from the report of #3. Instead, you refer to Trump's comments as evidence of his xenophobia.

          5. You directly quote my question "Can you please share where you see BS and/or xenophobia in the original report?" In your response, you imply that Trump's xenophobic language is somehow part of the report.

          My responses to the above (again, which I think is an accurate but clearer version of your argument): (1) Good; (2) I disagree, but I'll temporarily grant this for the sake of argument; (3) Yes; (4) Yes, Trump has used xenophobic language; (5) Since we both agree that Trump's language is not part of the report, your example doesn't qualify as a good answer to "Can you please share where you see BS and/or xenophobia in the original report?".

          Your claim only shows how a xenophobic Trumpist would interpret the NIST report.

          My take: Of course the Trump administration is trying to assert control over NIST and steer it in more political directions. This by definition will weaken its scientific objectivity. To what degree it has eroded so far is hard for me to say. I can't speak to the level of pressure from political appointees relating to the report. I can't speak to the degree to which they meddled with it. But this I can say: when I read the language in the report, I don't see xenophobia.

        • xpe 13 hours ago ago

          >> (me) The NIST report, of course, implicitly promotes ideals of western democratic rule over communist values

          > (antonvs) If only that were true.

          Using a charitable reading of your comment, it seems you are actually talking about the effectiveness of NIST, not about its mission. In so doing, you were not replying to my actual claim. If you read my sentence in context, I hope it is clear that I'm talking about the implicit values baked into the report. When I write that NIST promotes certain ideals, I'm talking about its mission, stated here [1]:

          > To promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.

          This is explained using different words in a NIST FAQ [2]:

          > Everything in science and technology is based on measurement. Everything we use every day relies upon accurate measurements to work. NIST ensures the measurement system of the U.S. meets the measurement needs of every aspect of our lives from manufacturing to communications to healthcare. In science, the ability to measure something and determine its value — and to do so in a repeatable and reliable way — is essential. NIST leads the world in measurement science, so U.S. businesses can innovate in a fair marketplace. We use measurement science to address new challenges ranging from cybersecurity to cancer research.

          It is clear NIST's mission is a blend of scientific rigor and promotion of western values (such as free markets, free ideas, innovation, etc). Reasonable people can disagree on the extent to which NIST achieves this mission, but I don't think reasonable people can deny that NIST largely aims to achieve this mission.

          My take on the Trump and his administration: Both are exceptionally corrupt by historical standards. They have acted in ways that undermine many of the goals of NIST. But one has to be careful to distinguish elected leaders and appointees from career civil servants. We have to include both (and their incentives, worldviews, and motivations) when making sense of what is happening.

          [1]: https://www.nist.gov/about-nist

          [2]: https://www.nist.gov/nmi

        • xpe 11 hours ago ago

          > Here’s an example of irrational fear: “the expanding use of these models may pose a risk to application developers, consumers, and to US national security.”

          Yes, that contains a quote from the executive summary. First (perhaps a minor point), I wouldn't frame this a fear, I would call it a risk assessment. Second, it is not an irrational assessment. It seems you don't understand the reasoning, in which case disagreement would be premature.

          > There’s no support for that claim in the report, just vague handwaving at the fact that a freely available open source model doesn’t compare well on all dimensions to the most expensive frontier models.

          I'm going to put aside your unfounded rhetoric of "vague handwaving". You haven't connected the dots yet. Start by reviewing these sections with curiosity and an open mind: 3.3: Security Evaluations Overview (pages 15-16); 6.1: Agent Hijacking (pages 45-57); 6.2: Jailbreaking (pages 48-52); 7: Censorship Evaluations (pages 53-55)

          Once you read and understand these sections, the connection to the stated risks is clear. To spell it out: when an organization deploys a DeepSeek model, they are exposing themselves and their customers to higher levels of risk. Risks to (i) the deploying organization; (ii) the customer; and (iii) anything downstream, such as credentials or access to other systems.

          Just in case I need to spell it out: yes, if DeepSeek is only self-deployed (e.g. via Ollama) on one's local machine, some risks are much lower. But a local-deployment scenario is not the only one, and even it has significant risks.

          Lastly, it is expected (and not unreasonable) for government agencies to invoke national security when cybersecurity and bioterrorism are involved. Their risk tolerance is probably lower than yours, because it is their job.

          Next, I will ask you some direct questions:

          1. Before reading Hartford's post, what were your priors? What narratives did you want to be true?

          2. Did you actively try to prove yourself wrong? Did you put in at least 10 uninterrupted minutes trying to steel-man the quote above?

          3. Before reading the NIST report, would you have been able to e.g. explain how hijacking and jailbreaking are different? Would you have been able to explain in your own words how they fit into a threat model?

          Of course you don't have to tell us your answers. Some people have too much pride to admit they are uninformed or mistaken even privately, much less in public. To many, internet discussions are a form of battle. Whatever your answers are, strive to be honest with yourself. For some, it takes years to get there. I'm speaking from experience here!

  • meffmadd a day ago ago

    As an EU citizen hosting LLMs for researchers and staff at the university I work at, this is hits home. Without Chinese models we could not do what we do right now. IMO, in the EU (and anywhere else for that matter), we should be grateful for the Chinese labs to release these models with such permissive licenses. Without them the options would be bleak. Sometimes we would get some non-frontier model „as a treat“ and if you would like something more powerful the US labs would suggest your country pay some hundred millions for an NVIDIA data center and the only EU option is to still pay them a license fee to host on your own hardware (afaik) while they protect all the expertise. Meanwhile DeepSeek has a week where they post the „secret sauce“ to host their model more efficiently, which helped open-source projects like vLLM (which we use) to improve.

  • tinktank a day ago ago

    I urge everyone to go read the original report and _then_ to read this analysis and make up their own mind. Step away from the clickbait, go read the original report.

    • Bengalilol a day ago ago
      • espadrine a day ago ago

        > DeepSeek models cost more to use than comparable U.S. models

        They compare DeepSeek v3.1 to GPT-5 mini. Those have very different sizes, which makes it a weird choice. I would expect a comparison with GPT-5 High, which would likely have had the opposite finding, given the high cost of GPT-5 High, and relatively similar results.

        Granted, DeepSeek typically focuses on a single model at a time, instead of OpenAI's approach to a suite of models of varying costs. So there is no model similar to GPT-5 mini, unlike Alibaba which has Qwen 30B A3B. Still, weird choice.

        Besides, DeepSeek has shown with 3.2 that it can cut prices in half through further fundamental research.

        • edflsafoiewq a day ago ago

          > CAISI chose GPT-5-mini as a comparator for V3.1 because it is in a similar performance class, allowing for a more meaningful comparison of end-to-end expenses.

      • wordpad a day ago ago

        TLDR for others: * DeepSeek cutting edge models are still far behind * On par DeepSeek costs 35% more to run * DeepSeek models 12 times more susceptible to jail breaking and malicious instructions * DeepSeek models follow strict censorship

        I guess none of these are a big deal to non-enterprise consumers.

        • nylonstrung 20 hours ago ago

          Saying Deepseek is more expensive is FUD

          Token price on 3.2 exp is <5% what the US LLMs are and it's very close in benchmarks. Which we know that ChatGPT, Google, Grok and Claude have explicitly gamed to inflate their capabilities

          • ACCount37 20 hours ago ago

            And we "know" that how, exactly?

            • nylonstrung 19 hours ago ago

              Read a study called "The Leaderboard Illusion" which credibly alleged that Meta Google OpenAI and Amazon got unfair treatment from LM Arena that distorted the benchmarks

              They gave them special access to privately test and let them benchmark over and over without showing the failed tests

              Meta got to privately test Llama 4 27 times to optimize it for high benchmark scores and then was allowed to report the only the highest cherry picked benchmark

              Which makes sense because in real world applications Llama is recognized to be markedly inferior to models that scored lower

              • ACCount37 14 hours ago ago

                Which is one study that touches exactly one benchmark - and "credibly alleged" is being way too generous to it. The only case that was anywhere close to being proven LMArena fraud is Meta and Llama 4. Which is a nonentity now - nowhere near SOTA on anything, LMArena included.

                Not that it makes LMArena a perfect benchmark. By now, everyone who wanted to push LMArena ratings at any cost knows what the human evaluators there are weak to, and what should they aim for.

                But your claim of "we know that ChatGPT, Google, Grok and Claude have explicitly gamed <benchmarks> to inflate their capabilities" still has no leg to stand on.

                • nylonstrung 10 hours ago ago

                  There are a lot of other cases that extend well beyond LMArena where it was shown certain benchmark performance increases by the major US labs were only attributable to being over-optimized for the specific benchmarks. Some in ways that are not explainable by the benchmark tests merely contaminating the corpus.

                  There are cases where merely rewording the questions or assigning different letters to the answer dropped models like Llama 30% in the evaluations while others were unchanged

                  Open-LLM-Leaderboard had to rate limit because a "handful of labs" were doing so many evals in a single day that it hogged the entire eval cluster

                  “Coding Benchmarks Are Already Contaminated” (Ortiz et al., 2025) “GSM-PLUS: A Re-translation Reveals Data Contamination” (Shi et al., ACL 2024). “Prompt-Tuning Can Add 30 Points to TruthfulQA” (Perez et al., 2023). “HellaSwag Can Be Gamed by a Linear Probe” (Rajpurohit & Berg-Kirkpatrick, EMNLP 2024). “Label Bias Explains MMLU Jumps” (Hassan et al., arXiv 2025) “HumanEval-Revival: A Re-typed Test for LLM Coding Ability” (Yang & Liu, ICML 2024 workshop). “Data Contamination or Over-fitting? Detecting MMLU Memorisation in Open LLMs” (IBM, 2024)

                  And yes I relied on LLM to summarize these instead of reading the full papers

        • xpe a day ago ago

          > I urge everyone to go read the original report and _then_ to read this analysis and make up their own mind. Step away from the clickbait, go read the original report.

          >> TLDR for others...

          Facepalm.

    • porcoda a day ago ago

      Sadly, based on the responses I don’t think many people have read the report. Just read how the essay discusses “exfiltration” for example, and then look at the 3 places that shows up in the NIST report. The content of the report and the portrayal by the essay are not the same. Alas, our truncated attention spans these days appears to mean a clickbaity web page will win the eye share over a 70 page technical report.

      • munksbeer 20 hours ago ago

        I don't think the majority of human's ever had the attention spans to read and properly digest a paper like the NIST report to make up their minds. Before social media, regular media would tell them what to think. 99.99% of the population isn't going to read that NIST report, no matter what decade we're talking.

        Because it isn't just that one report. Every single day we're trying to make our way in the world and we do not have the capacity to read the source material of every subject that might be of interest. Human's rely on, and have always relied on, authority like figures or media or some form of message aggregation to get their news of the world and form their opinions on it from that.

        And for the record, in no way is this an endorsement for shallow takes or thinking and then strong views on this subject, or another. I disagree with that as much as you. I'm just stating that this isn't a new phenomenon.

  • tbrownaw a day ago ago

    This post's description of the report it's denouncing does not match what I got out of actually reading that report myself.

    • Levitz a day ago ago

      In a funny way, even the comments on the post here don't match what the post actually says. The writer of the post tries to frame it as an attack towards open source, which is honestly a hard to believe story, whereas the comments here correctly (in my opinion) consider the possible problems Chinese influence might pose.

    • rainsford a day ago ago

      Yeah this blog post seems pretty misleading. The first couple of paragraphs of the post made a big deal that the NIST report contained "...no evidence of malicious code, backdoors, or data exfiltration" in the model, which is irrelevant because that wasn't a claim NIST actually made in the report. But if all you read was the blog post, you'd be convinced NIST was claiming the presence of backdoors without any evidence.

    • a_victorp a day ago ago

      It does match what I factually got reading the report

  • getdoneist a day ago ago

    Let them demonize it. I'll use the capable and cheap model and gain competitive advantage.

    • whatshisface a day ago ago

      Demonization is the first step on the road to criminalization.

      • msandford a day ago ago

        Tragically demonization is everywhere right now. I sure hope people start figuring out offramps soon.

        • whatshisface a day ago ago

          LATAM is the only place I'm not hearing about this stuff from, but I only speak English so who knows?

          • antonvs 16 hours ago ago

            Brazil just convicted an ex-president for that kind of stuff.

    • xpe a day ago ago

      I have found zero demonization in the source material (the NIST article). Here is the sense I'm using: "To represent as evil or diabolic: wartime propaganda that demonizes the enemy." [1]

      If you disagree, please point to a specific place in the NIST report and explain it.

      [1]: https://www.thefreedictionary.com/demonization

    • nylonstrung 20 hours ago ago

      Yeah it's absurd how people will defend closed source, even more censored models that cost >20x more for equivalent quality and worse speed

      The Chinese companies aren't benchmark obsessed like the western Big Tech ones and qualitatively I feel Kimi, GLM and Deepseek blow them away even though on paper they benchmark worse in English

      Kimi gives insanely detailed answers on hardware questions where Gemini and Claude just hallucinate, probably because it uses Chinese training data better

  • xpe a day ago ago

    The author, Eric Hartford, wrote:

    > Strip away the inflammatory language

    Where is the claimed inflammatory language? I've read the report. It is dry, likely boring to many.

    • rainsford a day ago ago

      Ironically there is a lot of inflammatory language in the blog post itself that seems unjustified given the source material.

      • XMPPwocky a day ago ago

        I also can't help but note that this blog post itself seems (first to my own intuition and heuristics, but also to both Pangram and GPTZero) to be clearly LLM-generated text.

      • themafia a day ago ago

        I hate to be overly simplistic, but:

        NIST doesn't seem to have a financial interest in these models.

        The author of this blog post does.

        This dichotomy seems to drive most of the "debate" around LLMs.

    • SilverElfin a day ago ago

      Honestly, I think this article is itself the hit piece (against NIST or America). And it is the one with inflammatory language.

      • spaceballbat a day ago ago

        Isn’t America currently killing its citizens with its own military? I would trust them even less now.

        • Max-Limelihood 6 hours ago ago

          They're not, and I think you should trust whoever told you that even less now.

  • frays a day ago ago

    Insightful post, thanks for sharing.

    What are people's experiences with the uncensored Dolphin model the author has made?

    • xpe a day ago ago

      > What are people's experiences with the uncensored Dolphin model the author has made?

      My take? The best way to know is to build your own eval framework and try it yourself. The "second best" way would be to find someone else's eval which is sufficiently close to yours. (But how would you know if another's eval is close enough if you haven't built your own eval?)

      Besides, I wouldn't put much weight on a random commenter here. Based on my experiences on HN, I highly discount what people say because I'm looking for clarity, reasoning, and nuance. My discounting is 10X worse for ML or AI topics. People seem too hurried, jaded, scarred, and tribal to seek the truth carefully, so conversations are often low quality.

      So why am I here? Despite all the above, I want to participate in and promote good discussion. I want to learn and to promote substantive discussion in this community. But sometimes it feels like this: https://xkcd.com/386/

  • rzerowan a day ago ago

    Considering DeepSeek had a peer-reviewd analysis in nature https://www.nature.com/articles/s41586-025-09422-z relaes just last month with indipendent researcher affriming that the open model has some issues(acknowldged in the writeup) , well inclined to agree with the articles author , the NIST evaluation looks more like a politcal hatchet job with a bit of projection going on(ala this is what the US would do if they were in that position). To be fair the paranoia has a basis in that whenever there is tech-leverage the US TLA subverts it for espionage like the CryptoAG episode. Or recently the whole hoopla about Huawei in the EU , which after relentless searches only turned up bad coding practices rather than anything malicious. At this pint it would be better for the whole field that these models exist as well as Kimi, Qwen etc as the downward pressure on cost/capabilities leads to commoditisation and the whole race to build a ecogeopolitical moat goes away.

  • koakuma-chan a day ago ago

    Isn't it a bit late? China released better open source model since DeepSeek dropped.

  • StarterPro a day ago ago

    Racism and Xenophobia, that's how.

    Same thing with Huawei, and Xiaomi, and BYD.

    • TiredOfLife 17 hours ago ago

      TIL that Huawey was breaking sanctions while suplying Iran regine because or racism and xenofobia

    • billy99k a day ago ago

      Lol. So it has nothing to do with corporate spying from China for the last two decades?

    • UltraSane a day ago ago

      What about a rational distaste for the CCP?

      • nylonstrung 19 hours ago ago

        TikTok under co-ownership by Jared Kushner is already drastically more censored than when it was supposedly controlled by the CCP

        In every case where we see a company trade hands to US ownership it becomes more controlled and anti consumer then before

        • UltraSane 19 hours ago ago

          "TikTok under co-ownership by Jared Kushner is already drastically more censored than when it was supposedly controlled by the CCP"

          That deal hasn't actually gone through yet so you are just making things up.

      • a_victorp a day ago ago

        How exactly "rational distaste" would work?

        • bobxmax a day ago ago

          As someone from (and who lives) in the developing + non-aligned part of the world, I'm always amazed at how ingrained hatred for anything western and non-democratic is for Americans.

          Anything that isn't a so-called democracy with so-called western values (which change every 10 years) is patently evil.

          Is this a relic of cold war propaganda? I hate to call it "brainwashing" because that's a very politically charged word but it's a belief that's so ubiquitous and immediate it surprises me. Especially for a culture with such anti-authoritarian cultural history.

          Not defending CPP by any means just always find it funny from a vantage point of being surrounded on both sides by extremely large pot and kettle.

          • UltraSane a day ago ago

            Dude. The CCP sucks. Just ask Jack Ma or Gedhun Choekyi Nyima

            https://en.wikipedia.org/wiki/Gedhun_Choekyi_Nyima

            • maleldil 18 hours ago ago

              The American "democracy" also sucks. Just ask anyone in Latin America who had to live under US-backed dictatorships, or those in Middle Eastern countries that were destabilised or destroyed by American influence. Or the immigrants (or citizens that happen to look like immigrants) under siege in the country right now. I could go on for a long, long time.

              • UltraSane 12 hours ago ago

                Whataboutism is boring and lame.

            • StarterPro 3 hours ago ago

              Yes, ask the billionaire how the anti-billionaire country treated him.

              Here's a hint: not well.

            • bobxmax 6 hours ago ago

              Yes. They suck. They've also never done anything remotely as bad as the US did in Iraq.

              That's my point. Americans act like China is the great evil... it's quite strange.

              • UltraSane 5 hours ago ago

                "'ve also never done anything remotely as bad as the US did in Iraq."

                Ask Tibetens about that. The US left Iraq but the CCP still controls Tibet and oppresss native Tibetans. Or the Uyghurs that the CCP are brutally persecuting. Or the Falun Gong. The CCP also is a strong ally of the despicable North Korean government and sends North Koreans in China back to North Korea to face long prison sentences or execution.

                The CCP is very evil at violating individual human rights. And smart people defending their behavior is very odd

                • bobxmax 4 hours ago ago

                  Nobody is defending their behavior. You're just proving the brainwashing point I made by instinctively going into attack dog mode making silly arguments.

                  And no, China's oppression of Tibet is nothing close to a million dead Iraqis and an ancient country turned into a failed state. The fact you'd even make such a goofy comparison shows how deep American indoctrination runs.

                  Your tax dollars are still torturing brown people without trial in Gitmo and genociding Palestinians btw.

                  • UltraSane 4 hours ago ago

                    "Nobody is defending their behavior"

                    You are by minimizing it.

                    "China's oppression of Tibet is nothing close to a million dead Iraqis"

                    At least the US got rid of Saddam. China is still oppressing the hell out of Tibet.

                    "The fact you'd even make such a goofy comparison shows how deep American indoctrination runs."

                    The fact that you consider this to BE a "goofy comparison" shows how deep your pro-CCP indoctrination runs.

                    "genociding Palestinians"

                    If you consider what Israel is doing to Palestinians to be genocide then you have to consider what the CCP is doing to the Uyghurs to be a genocide also. But you seem very selective with your outrage.

      • grafmax a day ago ago

        Not sure how it’s rational if you don’t extend the same distaste to our authoritarian government. Concentration camps, genocide, suppressing free speech, suspending due process. That’s what it’s up to these days. To say nothing of the effectively dictatorial control the ultra wealthy have over public policy. Sinophobia is a distraction from our problems at home. That’s its purpose.

        • bigstrat2003 a day ago ago

          While I have my qualms with the activities of the US government (going back decades now), it is not a reasonable position to act as though we are anywhere near China in authoritarianism.

        • Levitz a day ago ago

          >Not sure how it’s rational if you don’t extend the same distaste to our authoritarian government. Concentration camps, genocide, suppressing free speech, suspending due process.

          It can be perfectly rational since extending the same distaste towards the US government allows you to see that any of those things you listed is worse by orders of magnitude in China. To pretend otherwise is just whitewashing China.

        • imiric a day ago ago

          That's whataboutism at its purest. It's perfectly possible to criticize any government, whether your own or foreign.

          Claiming that every criticism is tantamount to racism is what's distracting from discussing actual problems.

          • grafmax a day ago ago

            You’re misunderstanding me. My point is if we were to have sincere solidarity with Chinese people against the international ruling class we would look at our domestic members of that class first. That is simply the practical approach to the problem.

            The function of the administration’s demonization of China (it’s Sinophobia) is to 1) distract us from what our rulers have been doing to us domestically and 2) to inspire support for poorly thought out belligerence (war being a core tenet of our foreign policy).

            • imiric a day ago ago

              > My point is if we were to have sincere solidarity with Chinese people against the international ruling class we would look at our domestic members of that class first.

              I see your point, but disagree with it.

              Having solidarity with the Chinese people is unrelated to criticizing their government. Bringing up sinophobia whenever criticism towards China is brought up, when the context is clearly the government and not its people, is distracting from discussing the problem itself.

              The idea that one should first criticize their own government before another is the whataboutism.

              Also, you're making some strong and unfounded claims about the motivations of the US government in this case. I'm an impartial observer with a distaste of both governments, but how do you distinguish "sinophobia" from genuine matters of national security? China is a political adversary of the US, so naturally we can expect propaganda from both sides, but considering the claims from your government as purely racism and propaganda seems like a dangerous mentality to have.

              • grafmax a day ago ago

                > Having solidarity with the Chinese people is unrelated to criticizing their government.

                It’s not unrelated because the NIST demonization of China as a nation contributes to hostilities which have real impacts on the people of the US and China, not simply the governments.

                > The idea that one should first criticize their own government before another is the whataboutism.

                Again, that’s not my position. You present me as countering criticism by pointing at US faults. But I acknowledge the criticism. My point is that both have faults, both governments deserve our suspicions, and our actions, practically speaking, should be first directed at the dictators at home.

                As for the supposed national security concerns - all LLMs are insecure and weaker ones are more susceptible to prompt injection attacks. The paper argues that DeepSeek is a weaker model and more susceptible to these attacks. But if it’s a weaker model isn’t that to be expected? The report conflates this with a national security concern, but this insecurity is a characteristic of this class of software. This is pure propaganda. It’s even more insecure compared to the extremely insecure American models? Is that what passes for national security concerns these days?

                Secondly the report documents how model shows bias, for example censoring discussion of Tiananmen Square. Yet that’s hardly a national security concern. Censorship in a foreign model is a national security concern? Again, calling this a national security concern is pure propaganda. And that’s why it’s accurately labeled as Sinophobia. It is not concerned about national security except insofar as it aims to incite hostilities.

                What our government should be doing internationally is trying to de-escalate hostility but since Obama it has been moving further in the opposite direction. With Trump this has only intensified. Goading foreign countries and manufacturing enemies serves the defense lobby in the one hand and the chauvanist oligarchs on the other. Really, it serves the opposite of national security.

                • imiric 18 hours ago ago

                  > It’s not unrelated because the NIST demonization of China as a nation contributes to hostilities which have real impacts on the people of the US and China, not simply the governments.

                  I don't doubt that it has impacts, but you're characterizing this as "demonization". The current administration certainly engages in some harmful rhetoric, but then again, what is the "right" way to address a hostile political enemy?

                  > > The idea that one should first criticize their own government before another is the whataboutism.

                  > Again, that’s not my position. You present me as countering criticism by pointing at US faults.

                  I'm not presenting you this way. You brought up the faults of the US government at a comment pointing out a distaste for the Chinese government. This inevitably steers the conversation towards comparisons and arguments about which one is worse, which never goes anywhere, and always ends badly. Granted, the CCP comment wasn't going to spur thoughtful discussion anyway, but your comment assured it.

                  This is a common tactic used by Chinese agents and bots precisely because it muddies the conversation, and focuses it on the faults of some other government, typically of the US. I'm not suggesting you are one, but it's the same tactic.

                  > My point is that both have faults, both governments deserve our suspicions, and our actions, practically speaking, should be first directed at the dictators at home.

                  Again, this is what I disagree with. Making a topic relative to something else only serves to direct the focus away from the original topic. If I criticize the food of a restaurant, the response shouldn't be to bring up my own cooking, but about the restaurant itself.

                  As for this particular case, I haven't read the NIST report, nor plan to. I'm just saying that if both countries are at war with each other, as the US and China certainly are (an information war, if nothing else), then acts of war and war-related activities are to be expected from both sides. At what point would you trust your government to inform you when this is the case, instead of dismissing it as domestic propaganda and "demonization"?

                  The TikTok situation is a good example. It is undisputable that having US citizens using a social media platform controlled by the Chinese government is a matter of national security. A child could understand that. Yet attempts to ban TikTok in the US have been criticized as government overreach, an attack on free speech, with whataboutism towards domestic social media (which is also an issue, but, again, unrelated), and similar nonsense. Say what you will about the recent TikTok deal to keep it running in the US, and there's certainly plenty to criticize about it, but at the very least it should mitigate national security concerns.

                  It's the same with any Chinese-operated service, such as DeepSeek. I don't doubt that the NIST report makes nonsensical claims that technically shouldn't be a concern. But DeepSeek is also a hosted service, which is likely to be the primary interface for most users. Do you think that the information provided to it by millions of US citizens won't be used as leverage during times of war? Do you think that US citizens couldn't be manipulated by it, just like they are on TikTok and other social media, in ways that would benefit the Chinese government?

                  We barely understand the impacts of this technology, and most people are clueless about how it works, and what goes on behind the scenes. Do you really trust that a foreign hostile government would act in good faith? To me, as someone without a dog in this fight, the answer is obvious. It would be too naive to think otherwise.

                  This is why I'm surprised at how quick US citizens are to claim sinophobia, and to criticize their own government, when it's clear their nation is in a state of war, and has been in sociopolitical disarray for the past decade. Yet there's a large portion of the country that is seemingly oblivious to this, while their nation is crumbling around them. Ultimately, I think this is a result of information warfare and propaganda over decades, facilitated by the same tools built by the US. This strategy was clearly explained by an ex-KGB agent decades ago[1], which more people should understand.

                  [1]: https://www.youtube.com/watch?v=Hr5sTGxMUdo

                  • grafmax 15 hours ago ago

                    Maybe the disconnect in our perspectives comes from your belief that the issue is China vs the US. So when I criticize the US government in response to your criticism of CCP, you interpret that as a contest between two authoritarian powers.

                    But my point is that the underlying rivalry is between the international ruling class and their people. When I criticize the US, my intention is to broaden the picture so we can identify the actual conflict and see how the NIST propaganda works contrary the national security interests of US and Chinese people.

                    Actual whataboutism would be arguing that US authoritarianism justifies Chinese authoritarianism. My argument is the opposite: it’s consistently anti-authoritarian in that it rejects both the NIST propaganda and Chinese censorship.

                    > As for this particular case, I haven’t read the NIST report, nor plan to.

                    Ugh - at least read the first page of what you’re defending. It summarizes the entire document.

                    > But DeepSeek is also a hosted service.

                    Which you can also run it yourself. That’s precisely its appeal, given that ALL major companies vacuum our data. How many people actually rely on the DeepSeek service when, as the NIST report itself notes, there are so many cheaper and better alternatives?

                    And if your concern truly is data collection by cloud capitalists, how can you frame this as China vs the US? Do you not acknowledge the role of US companies in the electronic surveillance state? The real issue is our shared subjection to ALL the cloud capitalists.

                    The antidote to cloud capitalism is to socialize social networks and mandate open interop (which would be contrary to the interests of the oligarchs). The antidote to data-hoarding AI providers is open research and open weights. (And that is precisely what makes Chinese models appealing.)

                    Thankfully, we are not yet at war with China. That would be disastrous as we are both nuclear powers! War and rivalry are only inevitable if we accept the shallow framing put out in propaganda like the NIST report. Our rulers should be de-escalating tensions, but they risk all our safety in their reckless brinkmanship.

                    Now, you are right that’s it’s wise to acquaint oneself with propaganda like the NIST report - which is why I did. But taking propaganda at face value - blithely ignoring the way that chauvanism serves the ruling class - that is foolish to the point of being dangerous.

            • SilverElfin a day ago ago

              > distract us from what our rulers have been doing to us domestically

              America doesn’t have rulers. It has democratically elected politicians. China doesn’t have democracy, however.

              > if we were to have sincere solidarity with Chinese people against the international ruling class

              There is also no “international ruling class”. In part because there are no international rulers. Speak in more specifics if you want to stick to this claim.

              > Concentration camps, genocide, suppressing free speech, suspending due process

              I’m not sure what country you are talking about, but America definitely doesn’t fit any of these things that you claim. Obviously there is no free speech in China. And obviously there is no due process if the government can disappear people like Jack Ma for years or punish free expression through social credit scores. And for examples of literal concentration camps or genocide, you can look at Xinjiang or Tibet.

              • UltraSane a day ago ago

                Trump does seem to be trying to become a "ruler" he is just very bad at it like he is at everything he does.

              • grafmax a day ago ago

                I’m not excusing China’s government but criticizing our own. The wealthy control our political process. Money buys politicians, elections, laws, media companies. It’s money and those who have it who govern our political process. Do you really think your vote carries equal weight as Elon Musk’s billions? And with Trump even the veneer of democracy is being cast aside.

              • maleldil 18 hours ago ago

                If you think "democratically elected politicians" are the ruling class in Western "democracies", you haven't been paying attention.

  • xpe a day ago ago

    People. Who has taken the time to read the original report? You are smarter than believing at face value the last thing you heard. Come on.

    • athrowaway3z a day ago ago

      Who cares for reading reports!

      I just let ChatGPT do that for me!

      ---

      I'd usually not, but thought it would be interesting to try. In case anybody is curious.

      On first comparison, ChatGPT concludes:

      > Hartford’s critique is fair on technical grounds and on the defense of open source — but overstated in its claims of deception and conspiracy. The NIST report is indeed political in tone, but not fraudulent in substance.

      When then asked (this obviously biased question):

      but would you say NIST has made an error in its methodology and clarity being supposedly for objective science?

      > Yes — NIST’s methodology and clarity fall short of true scientific objectivity.

      > Their data collection and measurement may be technically sound, but their comparative framing, benchmark transparency, and interpretive language introduce bias.

      > It reads less like a neutral laboratory report and more like a policy-position paper with empirical support — competent technically, but politically shaped.

      • maleldil 18 hours ago ago

        We should have a new HN rule: don't post comments that are mostly LLM summaries. It's 2025's LMGTFY.

    • imiric a day ago ago

      Sadly, most people would rather allow someone else to tell them what to think and feel than make up their own mind. Plus, we're easily swayed if we're already sympathetic to their views, or even their persona.

      It's no wonder propaganda, advertising, and disinformation work as well as they do.

  • xpe a day ago ago

    Some context about big changes to the AISI from June 3, 2025:

    > Statement from U.S. Secretary of Commerce Howard Lutnick on Transforming the U.S. AI Safety Institute into the Pro-Innovation, Pro-Science U.S. Center for AI Standards and Innovation

    > Under the direction of President Trump, Secretary of Commerce Howard Lutnick announced his plans to reform the agency formerly known as the U.S. AI Safety Institute into the Center for AI Standards and Innovation (CAISI).

    > ...

    This decision strikes me as foolish at best. And contributing to civilizational collapse and human extinction at worst. See also [2]. We don't have to agree on the particular probabilities to agree that this "reform" was bad news.

    [1]: https://www.commerce.gov/news/press-releases/2025/06/stateme...

    [2]: https://thezvi.substack.com/p/ai-119-goodbye-aisi

  • incomingpain 17 hours ago ago

    >AI models from developer DeepSeek were found to lag behind U.S. models in performance, cost, security and adoption.

    Why is NIST evaluating performance, cost, and adoption?

    >CAISI’s experts evaluated three DeepSeek models (R1, R1-0528 and V3.1) and four U.S. models (OpenAI’s GPT-5, GPT-5-mini and gpt-oss and Anthropic’s Opus 4)

    So they evaluated the most recently released American models vs pretty old deepseek? Deepseek 3.2 is out now. It's doing very well.

    >The gap is largest for software engineering and cyber tasks, where the best U.S. model evaluated solves over 20% more tasks than the best DeepSeek model.

    Performance is something the consumer evaluates. If a car does 0-60 in 3 seconds. I dont need or care what the government thinks about it. Im going to test drive it and floor it.

    >DeepSeek’s most secure model (R1-0528) responded to 94% of overtly malicious requests when a common jailbreaking technique was used, compared with 8% of requests for U.S. reference models.

    this weekend I demonstrated how easy it is to jailbreak any of the US cloud models. This is simply false. GPT 120b is completely uncensored now and can be used for evil.

    This report had nothing to do with NIST and security. This was USA propaganda.

  • xpe a day ago ago

    Please don't just read Eric Hartford's piece. Start with the key findings from the source material: "CAISI Evaluation of DeepSeek AI Models Finds Shortcomings and Risks" [1]. Here are the single-sentence summaries:

        DeepSeek performance lags behind the best U.S. reference models.
    
        DeepSeek models cost more to use than comparable U.S. models.
    
        DeepSeek models are far more susceptible to jailbreaking attacks than U.S. models.
    
        DeepSeek models advance Chinese Communist Party (CCP) narratives.
    
        Adoption of PRC models has greatly increased since DeepSeek R1 was released.
    
    [1] https://www.nist.gov/news-events/news/2025/09/caisi-evaluati...
    • evv a day ago ago

      It's funny how they mixed in proprietary models like GPT-5 and Anthropic with the "comparable U.S. models".

      Until they compare open-weight models, NIST is attempting a comparison between apples and airplanes.

  • finnjohnsen2 a day ago ago

    Meenwhile Europe is sandwiched between these two aweful governments

    • nylonstrung 19 hours ago ago

      And I'm guessing China and US are to blame for the explosive growth in the far-right parties of almost every continental European country?

      • finnjohnsen2 12 hours ago ago

        And I’m guessing that was a rhetorical question

    • bbg2401 a day ago ago

      The implication being that Europe is not its own conglomeration of awful governments? Your European snobbery is odious to the core.

      • finnjohnsen2 12 hours ago ago

        This is true. They are problematic also. Especially Putin, which I believe we are partially responsible for also. The desire for better governments is not snobbery. Especially from the US and China, because they suck big time right now and they are the most influencial globally.

    • Mountain_Skies a day ago ago

      Does that make the UK the olive on top of the sandwich?

      • finnjohnsen2 a day ago ago

        I would argue the UK is just as it looks on the map, outside but too close to belong anywhere else. So back to the analogy, perhaps the butter…?

      • AlecSchueler a day ago ago

        I think more like the crust that no one wants to eat right now.

  • lofaszvanitt 21 hours ago ago

    Deepseek is much more creative; it's mind-bendingly creative with simple prompts. Qwen, just like ChatGPT is folding on in itself. They gotten much worse with simple prompts as time progressed. Maybe because they tried to optimize the answers to be shorter and concise, changed the system prompts etc.

  • ACCount37 a day ago ago

    Obviously AI written.

  • kaonwarb a day ago ago

    I agree with many of the author's points about fear-mongering.

    However, I also think the author should expand their definition of what constitutes "security" in the context of agentic AI.

  • xpe 20 hours ago ago

    Take away #1: Eric Hartford’s article is deeply confused. (I’ve made many other specific comments that support this conclusion.)

    Take away #2: as evidenced by many comments here, many HN commenters have failed to check the source material themselves. This has led to a parade of errors.

    I’m not here to say that I’m better than that because I’ve screwed up a’plenty. We all make mistakes sometimes. We can choose to recognize and learn from them.

    I am saying this: as a community we can and should aim higher. We can start by owning our mistakes.

    • Bengalilol 17 hours ago ago

      Since you read this report in full, can you please give me the authors' names? I did read it (partially) and didn't find any name. I am, for sure, a beginner in NIST's reports reading, but I found out that a lot of NIST reports are signed (ie you can know who wrote the report, on behalf of whom if external contractor).

      • xpe 9 hours ago ago

        > can you please give me the authors' names?

        The names of the author(s) are not given.

  • ChrisArchitect a day ago ago

    Title changed?

    Title is: The Demonization of DeepSeek - How NIST Turned Open Science into a Security Scare

    • christianqchung a day ago ago

      HN admin dang changing titles opaquely is one of the worst things about HN. I'd rather at least know that the original title is clickbaity and contextualize that when older responses are clearly replying to the older inflammatory title.

      • ChrisArchitect a day ago ago

        Most likely not a mod changed title as they wouldn't stray from the given one. This one probably OP changed it, was just wondering why.

        • aratahikaru5 a day ago ago

          For the record, I posted the original title: "The Demonization of DeepSeek: How NIST Turned Open Science into a Security Scare"

  • OrvalWintermute a day ago ago

    Since a major part of the article covers cost expenditures, I am going to go there.

    I don't think it is possible to trust DeepSeek as they haven't been honest.

    DeepSeek claimed "their total training costs amounted to just $5.576 million"

    SemiAnalysis "Our analysis shows that the total server CapEx for DeepSeek is ~$1.6B, with a considerable cost of $944M associated with operating such clusters. Similarly, all AI Labs and Hyperscalers have many more GPUs for various tasks including research and training then they they commit to an individual training run due to centralization of resources being a challenge. X.AI is unique as an AI lab with all their GPUs in 1 location."

    SemiAnalysis "We believe the pre-training number is nowhere the actual amount spent on the model. We are confident their hardware spend is well higher than $500M over the company history. To develop new architecture innovations, during the model development, there is a considerable spend on testing new ideas, new architecture ideas, and ablations. Multi-Head Latent Attention, a key innovation of DeepSeek, took several months to develop and cost a whole team of manhours and GPU hours.

    The $6M cost in the paper is attributed to just the GPU cost of the pre-training run, which is only a portion of the total cost of the model. Excluded are important pieces of the puzzle like R&D and TCO of the hardware itself. For reference, Claude 3.5 Sonnet cost $10s of millions to train, and if that was the total cost Anthropic needed, then they would not raise billions from Google and tens of billions from Amazon. It’s because they have to experiment, come up with new architectures, gather and clean data, pay employees, and much more."

    Source: https://semianalysis.com/2025/01/31/deepseek-debates/

    • edflsafoiewq a day ago ago

      The NIST report doesn't engage with training costs, or even token costs. It's concerned with the cost the end user pays to complete a task. Actually their discussion of cost is interesting enough I'll quote it in full.

      > Users care both about model performance and the expense of using models. There are multiple different types of costs and prices involved in model creation and usage:

      > • Training cost: the amount spent by an AI company on compute, labor, and other inputs to create a new model.

      > • Inference serving cost: the amount spent by an AI company on datacenters and compute to make a model available to end users.

      > • Token price: the amount paid by end users on a per-token basis.

      > • End-to-end expense for end users: the amount paid by end users to use a model to complete a task.

      > End users are ultimately most affected by the last of these: end-to-end expenses. End-to-end expenses are more relevant than token prices because the number of tokens required to complete a task varies by model. For example, model A might charge half as much per token as model B does but use four times the number of tokens to complete an important piece of work, thus ending up twice as expensive end-to-end.

    • a_wild_dandan a day ago ago

      This might be a dumb question but like...why does it matter? Are other companies reporting training run costs including amortized equipment/labor/research/etc expenditures? If so, then I get it. DeepSeek is inviting an apples-and-oranges comparison. If not, then these gotcha articles feel like pointless "well ackshually" criticisms. Akin to complaining about the cost of a fishing trip because the captain didn't include the price of their boat.

  • resters a day ago ago

    I have no doubt that open source will triumph over whatever nonsense the US Government is trying to do to attack DeepSeek. Without DeepSeek, OpeanAI Pro and Claude Pro would probably cost $1000 per month each already.

    I suspect that Grok is actually DeepSeek with a bit of tuning.

  • BoredPositron a day ago ago

    I love how "Open" got redefined in the last few years. I am glad there a models with weights available but it ain't "Open Science".

    • murderfs a day ago ago

      Applying this criticism to DeepSeek is ridiculous when you compare it to everyone else, they published their entire methodology, including the source for their improvements (e.g. https://github.com/deepseek-ai/DeepEP)

    • Hizonner a day ago ago

      Compared to every other model of similar scale and capability, yes. Not actual open source.

  • tehjoker a day ago ago

    I appreciate that DeepSeek is trained to respect "core socialist values". It's actually really helpful to engage with to ask questions about how chinese thinkers interpret their successes and failures vs other socialist projects. Obviously reading books is better, but I was surprised by how useful it was.

    If you ask it loaded questions the way the CIA would pose them, it censors the answer though lmao

    • p2detar a day ago ago

      Not sure what you mean with „loaded“, but last time I checked any criticism to the CCP is censored by R1. This is funny but not unexpected.

    • FooBarWidget a day ago ago

      Good faith questions are the best. I wonder why people bother with bad faith questions. Virtue signaling is my guess.

      • SilverElfin a day ago ago

        Are you really claiming with a straight face that any question with criticism of the CCP is bad faith? Do you work on DeepSeek?

        • FooBarWidget a day ago ago

          The point is not "criticism of XYZ". The issue is asking questions to an LLM like a catholic priest interrogates a heathen. Bad faith questions are those that seek to confirm one's own worldview instead understanding another worldview. Bad faith attitudes are those that dismiss other worldviews completely and out of hand.

          There’s also the issue that practically nobody actually uses LLMs to criticize political entity XYZ. Let's face it, the vast majority of use cases are somewhere else, yet a tiny minority is pretending like the LLM not giving them the responses they want for their political agenda is the only thing that matters. When it comes to censorship areas that matter to most use cases, many people have found that many Chinese LLMs do better than western LLMs simply because most use cases never touch Chinese political stuff. See thorough discussions by @levelsio and his followers on Twitter on this matter.

          • JumpCrisscross a day ago ago

            > practically nobody actually uses LLMs to criticize political entity XYZ

            It's literally being used for opposition research in, to my direct knowledge, America, Norway, Italy, Germany, Poland, India and Australia.

            • FooBarWidget 20 hours ago ago

              That's a very niche use case compared to the majority. Also, opposition research has got nothing to do with "I asked about [political event] and it refused to answer" kind of complaint. Opposition researchers don't ask LLMs about Tiananmen or whatever. The latter kind of queries are still just testing censorship boundaries as a form of virtue signaling or ideological theater, to make some rhetorical point that's completely decoupled from the vast majority of practical use cases.

      • UltraSane a day ago ago

        What do you consider to be bad faith questions?

  • gdevenyi a day ago ago

    > They didn't test U.S. models for U.S. bias. Only Chinese bias counts as a security risk, apparently

    US models have no bias sir /s

    • CamperBob2 a day ago ago

      Hardly the same thing. Ask Gemini or OpenAI's models what happened on January 6, and they'll tell you. Ask DeepSeek what happened at Tiananmen Square and it won't, at least not without a lot of prompt hacking.

      • Bengalilol a day ago ago

        Ask Grok to generate an image of bald Zelensky: it does execute.

        Ask Grok to generate an image of bald Trump: it goes on with an ocean of excuses on why the task is too hard.

        • stordoff a day ago ago

          FWIW, I can't reproduce this example - it generates both images fine: https://ibb.co/NdYx1R4p

          • Bengalilol a day ago ago

            I asked it in french a few days back and it went on explaining me how hard this would be. Thanks for the update.

            EDIT: I tried it right now and it did generate the image. I don't know what happened then...

        • CamperBob2 a day ago ago

          I don't use Grok. Grok answers to someone with his own political biases and motives, many of which I personally disagree with.

          And that's OK, because nobody in the government forced him to set it up that way.

      • lyu07282 a day ago ago

        Ask it if Israel is an apartheid state, that's a much better example.

        • CamperBob2 a day ago ago

          GPT5:

             Short answer: it’s contested. Major human-rights bodies 
             say yes; Israel and some legal scholars say no; no court 
             has issued a binding judgment branding “Israel” an 
             apartheid state, though a 2024 ICJ advisory opinion 
             found Israel’s policies in the occupied territory 
             breach CERD Article 3 on racial segregation/apartheid. 
          
             (Skip several paragraphs with various citations)
          
             The term carries specific legal elements. Whether they 
             are satisfied “state-wide” or only in parts of the OPT 
             is the core dispute. Present consensus splits between 
             leading NGOs/UN experts who say the elements are met and 
             Israeli government–aligned and some academic voices who 
             say they are not. No binding court ruling settles it yet.
          
          Do you have a problem with that? I don't.
          • lyu07282 a day ago ago

            I better not poke that hornets nest any further, but yeah I made my point.

            • CamperBob2 a day ago ago

              I better not poke that hornets nest any further, but yeah I made my point.

              Yes, I can certainly see why you wouldn't want to go any further with the conversation.

      • bongodongobob a day ago ago

        Try MS Copilot. That shit will end the conversation if anything remotely political comes up.

        • CamperBob2 a day ago ago

          As long as it excludes politics in general, without overt partisan bias demanded by the government, what's the problem with that? If they want to focus on other subjects, they get to do that. Other models will provide answers where Copilot doesn't.

          Chinese models, conversely, are aligned with explicit, mandatory guardrails to exalt the CCP and socialism in general. Unless you count prohibitions against adult material, drugs, explosives and the like, that is simply not the case with US-based models. Whatever biases they exhibit (like the Grok example someone else posted) are there because that's what their private maintainers want.

          • bongodongobob a day ago ago

            Because it's in the ruling class's favor for the populace to be uninformed.

  • JPKab a day ago ago

    The CCP literally revoked the visas of key DeepSeek engineers.

    That's all we need to know.

    • kakadu a day ago ago

      Didn't the US revoke the visas of around 80 Palestinian officials scheduled to speak at the UN summit?

    • falcor84 a day ago ago

      I would like to know more

      • _ache_ a day ago ago

        Deepseek starts out as a one-man operation. Like any company that has attracted a lot of attention, it becomes a "target" of the CCP, which then takes measures such as prohibiting key employees from leaving the country AND setting goals such as using Huawei chips instead of NVIDIA chips.

        From a Chinese political perspective, this is a good move in the long term. From Deepseek's perspective, however, this is clearly NOT the case, as it causes the company to lose some (or even most?) of its competitiveness and fall behind in the race.

      • FooBarWidget a day ago ago

        They revoke passports of personnel whom they deem are at risk of being negatively influenced or even kidnapped when abroad. Re influence, think school teachers. Re kidnapping, see Meng Wangzhou (Huawei CFO).

        There is a history of important Chinese personnel being kidnapped by e.g. the US when abroad. There is also a lot of talk in western countries about "banning Chinese [all presumed spies/propagandists/agents] from entering". On a good faith basis, one would think China banning people from leaving is a good thing that aligns with western desires, and should thus be applauded. So painting the policy as sinister tells me that the real desire is something entirely different.

        • Duwensatzaj a day ago ago

          > There is a history of important Chinese personnel being kidnapped by e.g. the US when abroad.

          Like who? Meng Wanzhou?

          • FooBarWidget a day ago ago

            I literally mentioned her in the same post?!

            There’s also Xu Yanjun and Su Bin, amongst others.

        • SilverElfin a day ago ago

          You’re twisting the (obvious) truth. These people are being held prisoners because they’re of economic value to the party. And they would probably accept a job and life elsewhere if they weee given enough money. They are not being held prisoners for their own protection.

          • nylonstrung 19 hours ago ago

            There's literally no evidence this is true. Why does China contribute the most h1-b immigrants to US behind India?

            Why do they let so many of their best researchers study at American schools, knowing the majority don't return

          • FooBarWidget 20 hours ago ago

            Let's just say "things can happen for reasons other than 'the govt is evil'" is not only an opinion that exists, but is also valid. You seem to be severely underestimating just how much of a 9/11 moment Meng Wanzhou is for Chinese. You should talk to more mainlanders. This isn't 20 years ago anymore.

        • UltraSane a day ago ago

          "There is a history of important Chinese personnel being kidnapped by e.g. the US when abroad"

          No there isn't. China revoked their passport to keep them prisoners not to keep them safe.

          "On a good faith basis, one would think China banning people from leaving is a good thing"

          Why would anyone think imprisoning someone like this is a good thing?

    • cowpig a day ago ago

      Source?

      And how is that "all we need to know"? I'm not even sure what your implication is.

      Is it that some CCP officials see DeepSeek engineers as adversarial somehow? Or that they are flight risks? What does it have to do with the NIST report?

    • manishsharan a day ago ago

      >> The CCP literally revoked the visas of key DeepSeek engineers. That's all we need to know.

      I don't follow. Why would DeepSeek engineers need visa from CCP?