261 comments

  • tombert a day ago ago

    When my sister and I would play Monopoly as kids, we had lost the manual, so whenever we didn’t like the outcome of whatever happened, we would make up rules about what was right. Technically, then, it was very easy to stay compliant while still being able to do well, because we could rewrite the rules.

    Also, since I was older I feel like I was able to get away with those redefinitions a lot more often…

    • smallmancontrov a day ago ago

      The word "lawful" always seems to get dragged out when people in power are doing some especially heinous rulemaking, like throwing a hissy fit over a single company trying to voluntarily draw a line at domestic surveillance and fully automated killchains.

      • bko a day ago ago

        A private corporation can choose not to sell to the government. A lot of them do exactly this. There are a lot of hoops to jump through.

        However, if they do sell to the government, they shouldn't have some sneaky way to exert control over decision making using their products. We're a country of laws, and for better or for worse, these laws are made by elected officials and those appointed by elected officials.

        Why an American company wouldn't want American defense to have the most capable tools at their disposal is a different matter altogether, but here we are.

        • hvb2 a day ago ago

          Your court system wasn't designed for the Executive branch acting with actual bad intent.

          You're a country of laws, but if enforcing them takes months if not years... Then during that time, you're the wild wild west

          • DennisP a day ago ago

            The system also wasn't designed for presidential immunity. Combine that with unlimited federal pardons and we're the wild west permanently, or at least until that decision is overturned.

            • Nasrudith 17 hours ago ago

              I suspect cynically that as soon as someone not a republican takes power the presidential immunity will magically evaporate in a burst of bad faith jurisprudence.

          • remarkEon 11 hours ago ago

            This comment is hilariously incorrect. Courts stop the Executive branch all the time. You do not know what you're talking about.

        • joshuamorton a day ago ago

          > they shouldn't have some sneaky way to exert control over decision making using their products.

          Why not? Many companies have all sorts of rules you agree to when using their products, including rules against many legal ("lawful") things. Are you saying that the government, as a client, should be unbound by contractual obligations that apply to other clients?

          • throwup238 a day ago ago

            Governments negotiate their own contracts with their own terms of service. That’s one of the hoops government contractors jump through.

            • thayne a day ago ago

              That's fine as long as the company can decide it doesn't like those terms and refuse to do business. But in this case the government threatened to classify Anthropic as a "supply chain threat" if they didn't agree to the government's terms, and carried out that threat.

            • kube-system a day ago ago

              Not only that, but some of the contractual terms are defined by federal acquisition law, et al.

            • joshuamorton a day ago ago

              I want to be clear, I agree. I have no objection to unique government contracts. I'm specifically curious about GP's position that a government contractor should be (ethically?) barred from putting contractual obligations on government use of their service.

              Like how the various AI providers limit lawful uses such as creating AI pornography. I think it would be reasonable to keep a contractual restriction against that even when working with the government.

        • tombert a day ago ago

          This administration has made it very clear that they will do what they can to change laws whenever convenient, without congressional oversight, whether or not they are "allowed" to.

          Trump immediately implemented tariffs he wasn't allowed to, he started a war he probably wasn't allowed to in order to (allegedly) distract from associating with a pedophile, he wrote an executive order trying to undo the Fourteenth Amendment, and he has actively been abducting and imprisoning lawful residents (and even citizens!) while pushing for racial profiling to do so.

          If a company feels like the government will simply rewrite the laws in order to advance any kind of political whim (including to be weaponized against that very company!), it's not wrong or even weird for them to want to add safeguards to their product.

          To be clear, this isn't weird or uncommon. Lots of the stuff you sign in an EULA isn't preventing you from doing things that are "illegal".

      • WarmWash a day ago ago

        Anthropic wanted the ability to verify compliance whereas OAI and Google are fine with "trust us". Which is how it always is, and always has been.

        For better or worse, the government is the one who audits, and it has its own internal systems for self-audits. So no one except them tells them what they can or cannot do. The government would never put itself in a position where civilians died because Amodei didn't like the vibe of the case being worked.

        In a way it's wild that people are upset that the government didn't put a billionaire megacorp CEO in the driver's seat of intelligence.

        • ffsm8 a day ago ago

          It's incredible if you honestly believe that.

          The only reason this blew up at all was the insane overreach by the DoW after Anthropic voiced its concern.

          It was well within Anthropic's rights to do so, as it was part of their contract.

          And it would've been understandable for the DoW to balk at that, though the real issue would be the incompetence of letting the contract get through with that clause in it. But with that contract in place, the only sensible action would've been to terminate it and move on. Frankly, nobody would've cared.

          But the DoW felt it just had to go further... And their chosen action was just an insane overreach - hence the controversy.

        • anticensor a day ago ago

          Anthropic wanted the ability to verify compliance whereas OAI and Google went "OK no verification but then we won't give you the weights".

        • trhway a day ago ago

          >So no one except them tells them what they can or cannot do.

          you're missing "laundering the responsibility" approach - find a lawyer who writes that the thing is legal in his opinion, and voila.

    • bko a day ago ago

      I'd prefer our elected officials own the manual, accepting the fact that [person I don't like] could be in power and rewrite the rules, than a private billion-dollar corporation. Especially when it comes to defense.

      • mc32 a day ago ago

        Ha! If Congress did diddly squat about the eavesdropping on them by organizations that aren’t supposed to spy on citizens back in the Obama days (we also spied on allies’ governments, but that’s kinda what all of them do), there is no hope of them reining things in at all… for mere hoi polloi.

        • bko a day ago ago

          I guess we have to appoint Amodei and Altman as our benevolent dictators to keep Congress in check!

          • vintermann 19 hours ago ago

            They're allowed to say no. The ability I have to say no to you doesn't mean I rule over you.

    • caycep a day ago ago

      I was going to post about whether there were still "laws" in the US, but this post gets the point across much better

    • avaer a day ago ago

      We need to encourage kids to play and make mistakes more so they can prepare themselves for the real world.

      Which is really just a bunch of big children with bank accounts, drugs, and weapons.

    • TZubiri a day ago ago

      The Pentagon is part of the executive branch, not the legislative, and as such it cannot write the 'rules' (laws)

      • citadel_melon 12 hours ago ago

        The executive also can’t declare war, yet we are in Iran. Executive orders have stood in for legislation, even though they shouldn’t.

        What is considered lawful is up to the whims of the Pentagon, and the other two branches have shown little interest in providing sufficient checks and balances. Maybe five years after the Pentagon makes an illegitimate legal justification, a judge will strike it down: that is, if we are lucky.

    • cucumber3732842 a day ago ago

      The big reason it's "obvious" when tech megacorps do it is because big tech is new to the game and doesn't have an existing regulatory capture system already up and running and legitimized like medical, civil engineering, energy, agriculture, chemical, etc, do.

      If this were 3M making nasty stuff for Northrop to put in bombs and drop on brown people or Exxon scheming up something bad in Alaska or bulldozing a national park for solar panels or some other legacy BigCo doing slimy things that are in the interests of them and the government but against the interest of the public they'd have 40yr of preexisting trade group publications, bought and paid for academic and media chatter, etc, etc, that they could point to and say "look, this is fine because the stuff we paid into in advance to legitimize these sorts of things as they come up says it is" though obviously they'd use very different words.

      • GeekyBear a day ago ago

        > big tech is new to the game and doesn't have an existing regulatory capture system already up and running

        The career officials in the Obama FTC started proceedings for an antitrust lawsuit against Google over a decade ago.

        The political appointees (of both parties) shut it down.

        It seems to me that regulatory capture has been working for Google for some time now.

        • tombert a day ago ago

          I mean it's basically an extremely high-stakes version of the (possibly apocryphal) Upton Sinclair quote: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it."

          Most people (at least the people I've talked to, which admittedly is somewhat of a lefty bubble, but I think even more generally) agree that companies getting to or close to "monopoly" status is a pretty bad thing, and that they should be broken up. Political candidates get a lot of social credit for claiming that they're going to do exactly that. The moment they actually get into a position where they could do something about it, they suddenly remember who their campaign contributors are, and can then invent reasons to avoid actually solving any of these problems.

          Very occasionally we have successes in this field, like the breakups of Standard Oil and AT&T, but of course both of these became sort of toothless, since we basically allowed the pieces to re-acquire each other and recreate the same problems.

          There are similar reasons why politicians will occasionally push for regulations barring themselves from investing in companies their policies affect, yet those regulations somehow never manage to get through.

          Politicians are very rarely punished for breaking political promises, but often rewarded for making the promises. They are also rewarded by their corporate overlords for breaking these promises.

          • GeekyBear a day ago ago

            > Political candidates get a lot of social credit for claiming that they're going to do exactly that. The moment that they actually get into a position where they actually could do something about it, they suddenly remember who their campaign contributors are, and can then create reasons to avoid actually solving any of these problems.

            I read a good Google adjacent example of this in yesterday's NYTimes:

            > Mr. Brin, a Google co-founder and one of the world’s richest people, is a longtime friend of Mr. Newsom, the California governor. Both men attended each other’s weddings. But now Mr. Brin pulled Mr. Newsom aside to a different part of the property for a serious talk. Mr. Brin told Mr. Newsom that he could not stand the state’s proposed billionaire tax... Mr. Newsom, who had never seemed inclined to support the tax, came out the next month and pledged to defeat it.

            https://archive.ph/LTkix

          • Aerroon a day ago ago

            >they suddenly remember who their campaign contributors are, and can then create reasons to avoid actually solving any of these problems.

              There are very real concerns when you break up a company, though. Rockefeller's wealth shot up a lot when Standard Oil was broken up. That could easily turn a politician who's "out to get the big companies" into one who's "making billionaires richer."

            • vintermann 19 hours ago ago

              > Rockefeller's wealth shot up a lot when Standard Oil was broken up.

              Most owners (weighted by share) do NOT seem to want their big monopolies broken up, despite the track record of Standard Oil.

            • tombert a day ago ago

              Tough to say for sure, but I think it's probably still better to have more billionaires if there's more competition.

              I wasn't around during the breakup, but my parents told me that phone service got considerably better and cheaper after the AT&T breakup, which makes enough sense to me: if a consumer can drop you for someone else, you have a reason to try and compete on service and/or price.

        • WarmWash a day ago ago

          Google has a monopoly because of the internet's insistence on ad blocking, and its outright indignant refusal to pay a "greedy" company that dares ask for money for a "free" web service.

          It's basically impossible to get off the ground competing against Google when 30-40% of people are just freeloading on your service, and 80-90% think the internet is an ethereal realm that everyone could have ad- and subscription-free access to if we could only agree to starve these greedy middlemen.

          • tombert a day ago ago

            I've heard dozens of people say this (and I've even said it myself) but I don't think it actually holds water. People will pay for things if those things don't suck, and it's not even hard to find examples of that (even with Google products no less!).

            For search, Kagi has had a growing fanbase for a couple years now, but let's take things that have been easy to get for free for decades: Movies.

            People have been able, with relative impunity, to torrent movies for free for a very long time. It's not hard, and the only way you're paying for it is with ads for hot MILFs in your area. And yet, despite this always having been an option, Netflix and Hulu and Disney+ and HBO Max have managed to build fairly successful businesses selling movies that could have been pirated.

            I could get YouTube as ad-free with an ad blocker, but I pay for YouTube Premium. I could get all my music for free with Redacted, but I use YouTube music, or I buy CDs. I could torrent video games but I just buy them off Steam or GOG.

            This isn't new either; there were thousands of free forums on the internet in the late 90's, yet people still bought accounts on Something Awful for quite a while (and indeed still buy accounts, though in much lower numbers).

            We can certainly argue about how much value these companies are providing, and we can argue about how it's annoying how there's a million different streaming services now and how that's really irritating, but my point stands: people do pay for things on the internet.

            We don't have to accept that companies need to sell all our data. We don't have to accept being bombarded with ads. We don't have to accept that people won't pay to use services.

            • WarmWash a day ago ago

              The harsh reality is that conversion from "free" to paid is on the order of 1%. This is true for everything from Patreon, to Wikipedia, to Kagi, to Nebula, to home mailers for charity.

              1% of the people pay, 60% watch ads, 39% are crusaders who conveniently are morally obligated to not pay or compensate for anything (but have their costs covered by the other two groups, who complain about ads/costs but somehow are blind to the dead weight they are dragging around).

              Worst of all, it's impossible to have an honest conversation about it, because the people who haven't seen an ad or paid for a movie in 20 years go absolutely insane when called out. YouTube creators talk about it in private, but they would never dare say anything on their channels. Ad blocking is practically a religion.

              • tombert a day ago ago

                Do we need more than 1% conversion, though? As long as the company is sustainable, that's sufficient to justify its existence. I think it says more about a lot of these services that they're so shitty people will only use them if they're "free"; if Google or Facebook or Instagram or TikTok aren't good enough services to justify people paying for them, then maybe they shouldn't exist?

                You can't use Kagi or Nebula without paying, so I don't really see how they're suffering from the free riders you keep insisting are some horrible epidemic. Almost by definition, if you're using Kagi or Nebula, you're already a conversion...are you saying a 1% conversion from advertising?

                I have a collection of four hundred blu-rays and thousands of CDs. I pay for Netflix and Hulu and Amazon Prime, I pay for YouTube Premium and YouTube Music, and I don't use an ad blocker. I don't know if that falls into your criteria of "someone who can discuss this honestly", and of course I don't really have a means of "proving" this to you, but if you can assume I'm being truthful I don't think I'm speaking out of my ass here.

                • WarmWash a day ago ago

                  Then accept the equilibrium of Google being the god of the internet.

                  If people refuse to pay, refuse to view ads, and are happy to let the suckers like you (and me) carry the cost, then no one should be complaining about the impenetrable giants who reign over us. The internet can reap what it sowed. I'm burning 3GB a month loading ads on my phone so others can view ad free? Maybe I should petition the IRS to let me write it off as a charitable donation.

                  The story of Vid.me is excellent here, because they were actually on track to dethrone YouTube. The hype was real and they genuinely were getting positive traction. Did YouTube fight back? Did Google sound the alarm? Was there any effort to keep creators on YT? No, no, and no. Why?

                  Because Google knew Vid.me would run out of runway, and that at heart, the users were just there for the free lunch. Vid.me went bankrupt and never made even a dollar from their "fans".

                  • tombert a day ago ago

                    But there are companies that charge money and manage to be successful, as I stated.

                    Even before the internet, most businesses failed. Sometimes for good reasons, sometimes for dumb reasons, and that was before people expected everything to be free. Pointing to a company that you liked failing doesn't really prove anything; there are always a billion variables that can contribute to corporate failures, and saying "LOL PEOPLE WON'T PAY FOR THINGS AD BLOCK BAD I HAVE TO PAY BANDWIDTH" doesn't really say anything.

                    Just because you can find some companies that charged money and failed doesn't change my point at all. Netflix has become successful enough to be in the running to buy Warner Bros. Netflix is an internet-first company that doesn't do anything for free and yet it's getting to a point where it's able to buy a very large legacy media company. It has been competing with free YouTube content and ThePirateBay.

                    I don't see at all how this proves that I need to "accept the equilibrium of Google".

              • balamatom a day ago ago

                >it's impossible to have an honest conversation about it, because the people who haven't seen an ad or paid for a movie in 20 years go absolutely insane when called out

                It's generally not possible to have an honest conversation about something when one side sees the other's honest response as "going absolutely insane" :-)

          • vintermann 19 hours ago ago

            No, Google had significant power over "who gets to buy and at what price" long before ad blocking caught on. Don't blame ad blockers for sabotaging your plan to get rich.

      • SecretDreams a day ago ago

        > If this were 3M making nasty stuff for Northrop to put in bombs and drop on brown people or Exxon scheming up something bad in Alaska or bulldozing a national park for solar panels or some other legacy BigCo doing slimy things that are in the interests of them and the government but against the interest of the public they'd have 40yr of preexisting trade group publications, bought and paid for academic and media chatter, etc, etc, they could point to and say "look, this is fine because the stuff we paid into in advance to legitimize these sorts of things as they come up says it is" though obviously they'd use very different words.

        My friend, this paragraph needed some periods. I could not follow what you were trying to say - but it seemed interesting enough to consider retyping.

        • Vachyas a day ago ago

          Good comment, and I agree lol

          I read it twice (admittedly quickly) but couldn't grasp the point even though I felt like it was there.

          • fwip a day ago ago

            It's not really hard to read.

            If this were a traditionally evil company, the work to legalize the evil things would have started forty years ago.

            • SecretDreams a day ago ago

              Ya I roughly understand that the OP wanted to convey this message, but it was absolutely a hard read the way they conveyed it.

  • anematode a day ago ago

    Who could have seen this one coming. From yesterday: https://www.cbsnews.com/news/google-ai-pentagon-classified-u... ("Hundreds of Google workers urge CEO to refuse classified AI work with Pentagon").

    Any AI researcher who continues to work here is morally compromised.

    • orochimaaru a day ago ago

      Why is it morally wrong for a US citizen to work with their government?

      • finghin a day ago ago

        The acts of the government being wrong in an upsetting number of cases would be a big reason.

      • fooker a day ago ago

        Because we have pretty convincing historical precedent that 'just following orders' does not work as a defense when your government does something indefensible.

        • ReptileMan a day ago ago

          Worked out just fine for the Operation Paperclip guys.

          • citadel_melon 12 hours ago ago

            Let’s steel-man the parent comment. Obviously “just following orders” is not generally a morally sufficient argument even if you end up not facing repercussions for your actions.

      • tyre a day ago ago

        It’s not, but legal is not the same as ethical.

        For a long time, and probably still, it was legal for the US to torture enemy combatants. It was never ethical.

        • rob74 a day ago ago

          If you add to that the very broad limits of what the current administration considers "legal" (as in "pretty much anything we want to do"), I can understand feeling uneasy as a Google employee...

        • gigatree a day ago ago

          You’d need some shared ethical/moral framework to make that claim, which doesn’t really seem to exist anymore

          • yibg a day ago ago

            You don't need a shared moral framework to come to a personal moral conclusion.

            • lo_zamoyski a day ago ago

              What does that mean? How does one come to a personal moral conclusion? Vibes?

              (I take "moral framework" to mean a principled stance that gives objective grounding for a moral judgement. I agree that we can come to a moral judgement without putting it through a systematic and discursive defense, and I reject the notion that there are many moralities or that they are arbitrary, but it is also true that diverging conceptions of the basis of morality will frustrate agreement. Stopping at personal moral judgement does not lend itself to fruitful dialogue and understanding, as it constrains the domain of what is intersubjectively knowable.)

              • yibg 9 hours ago ago

                My moral framework can be different from yours. Me the individual can come to the conclusion that something is immoral when the rest of the group doesn’t agree with me. And (at least for my own moral framework) I should take action accordingly.

                So I don’t need a shared framework to make the claim that something is immoral (to me).

      • josefx a day ago ago

        What makes you think that Google's AI experts are US citizens?

        • kube-system a day ago ago

          100% of the Google employees who would be working on "classified AI work" are US citizens by law.

          • mattlondon a day ago ago

            So what, they won't be using any of the existing Google Gemini models or infra then? Because all of Google - from Gemini to the data center infra - has been (and still is) worked on by non-US persons, even - gasp - outside the US. Will they do a complete clean-room, ground-up bootstrap of all the research and infrastructure from zero?

            Seems unlikely.

            • kube-system a day ago ago

              You of course don't have to reinvent science, but it is in fact standard practice to do infrastructure from the literal ground up with US citizens for even unclassified government data.

              https://aws.amazon.com/govcloud-us/

              • AlotOfReading a day ago ago

                Can you provide a different source on that? The GovCloud page you've linked says operated by US citizens, not built by US citizens. I'd be pretty surprised if they did the latter. Standard practice as I understand it is to simply run the standard software in a separate environment. A recent ProPublica report [0] pointed out that Microsoft was hiring citizens to escort the actual engineers who aren't citizens, for example.

                [0] https://www.propublica.org/article/microsoft-digital-escorts...

      • hashmap a day ago ago

        Working to directly advance a product used substantially to oppress people via surveillance or war crimes, when you have many other choices, is immoral. Easy.

      • _vertigo a day ago ago

        It’s not morally wrong per se, but just because you are working with your government does not mean what you’re doing is necessarily moral.

        • cooper_ganglia a day ago ago

          Just because you are working with your government does not mean what you’re doing is necessarily immoral, either.

          • _alternator_ a day ago ago

            Correct. It depends. For example, it might depend on what the collaboration is likely to result in. Perhaps it would be more likely to be moral if there were some boundaries in place, like "no mass domestic surveillance" or "no fully autonomous weapons".

            Because the US government currently believes it is legal to blow up civilian drug traffickers and wage war without congressional approval. So at some point, yes, collaboration is immoral.

            • nradov a day ago ago

              The US military has deployed fully autonomous weapons since at least 1979, and potential adversaries are now doing the same. For better or worse that ship has sailed.

              • _alternator_ a day ago ago

                Look, a dumb bomb is a fully autonomous weapon once it's launched. Let's be real: an LLM making decisions on who to target and when and where to launch munitions represents a meaningful change in our concept of autonomous weapons.

              • Forgeties79 a day ago ago

                So we are wrong to express any opposition or desire to maybe raise the bar here? Aren’t we supposed to be “the good guys”? Or should we just accept a role as the menace of the world, wildly throwing its weight around whenever we have an unscrupulous president?

                • nradov a day ago ago

                  Those questions are moot. There are situations where it's simply impossible to have a human in the loop because reaction time is too slow or the environment is too dangerous or communication links are unreliable. Russia is deploying fully autonomous weapons to attack Ukraine today and they will be selling those weapons (or licensing the technology) to their allies. There is no option to stop. And let's please not have any nonsense suggestions that we can somehow convince Russia / China / Iran / North Korea to sign a binding, enforceable treaty banning such weapons: that's never going to happen.

                  • t-3 a day ago ago

                    There's always an option to stop. We can choose civility over barbarity, stop trying to kill people over 1000+ year old dick waving contests, and stop threatening each other with doomsday weapons because your grandpa shot my grandpa. Just because our leaders are too stupid and cowardly doesn't mean there's no option.

                    • nradov a day ago ago

                      Sounds good! Please convince Vladimir Putin to choose civility over barbarity, then get back to us so we can discuss options.

                      • convolvatron a day ago ago

                        I wasn't aware that the US was throwing away its moral compass for the just cause of frustrating Putin's expansionism. The new story seems to be Putin gets to do what he wants, and so do we.

                        • nradov a day ago ago

                          If you think there's something wrong with giving our warfighters the most effective weapons to carry out their assigned missions with minimum casualties then your moral compass is completely broken. Personally I favor a less interventionist foreign policy but that has to be addressed through the political process. Not by unaccountable individual defense contractor employees making arbitrary policy decisions.

                          • Forgeties79 a day ago ago

                            > warfighters

                            You should know that every single veteran I know ruthlessly mocks Hegseth for trying to use this term non-comedically. It’s a synonym for someone who takes their service way too seriously/makes it their whole identity. It’s almost exclusively used to mock people.

                      • sillyfluke a day ago ago

                        Not sure you're aware, but the joke may be on you. It's apparently Putin who's convinced Trump and the Mullahs (not the band) to choose civility over barbarity, by allowing a superyacht of one of his cronies to pass through the Strait of Hormuz.[0]

                        Russian trolling at its finest, truly. This timeline keeps raising the bar on the absurdity quotient.

                        [0] https://www.bbc.com/news/articles/cm2pn8zdxdjo

                      • Forgeties79 a day ago ago

                        We aren’t Russian and Putin is not our leader. We can choose how we behave and operate. This is like saying we should use chemical weapons if someone else deploys one. You’re speaking as if it’s all so binary. “Do what they do or you lose.”

                        • nradov a day ago ago

                          It's cheap and easy for someone sitting safely behind a computer to pretend to be morally superior when you're not the one who has to make hard decisions, or deal with the consequences. Chemical weapons have seen minimal use after WWI largely because they're not very militarily effective. Autonomous kinetic weapons actually work. Right now Ukrainians are building autonomous weapons to defend themselves against Russian autonomous weapons. For Ukrainians it is binary: do what they do or you lose. Would you prefer that they lose? And don't presume to tell us that the Russians can be persuaded to stop by non-violent means, that would be completely delusional.

                          • Forgeties79 a day ago ago

                            >It's cheap and easy for someone sitting safely behind a computer to pretend to be morally superior when you're not the one who has to make hard decisions, or deal with the consequences.

                            This is a deeply flawed argument that has an obvious application back at you, but either way if you’re going to stoop to personal attacks I think we’re done here.

          • vintermann 18 hours ago ago

            Right, so it was a comically bad defense.

            Like the guy in an old clip saying "What is my crime? Enjoying a meal? A succulent Chinese meal?" while being arrested for trying to pay with a stolen credit card. The succulence of the meal has nothing to do with it, and that it's your own government has nothing to do with it. It's just a sad way to try to distract from what's actually wrong with helping build tools for mass surveillance and autonomous murder.

          • t-3 a day ago ago

            In a logical or mathematical sense, sure, but when it's the US government and a huge surveillance-tech company it's pretty necessarily immoral (at least in an American context where harming liberty is immoral - other cultures disagree).

          • Jtarii a day ago ago

            Hegseth bombed a girls school in Iran last month. I think it's fair to doubt the moral worth of anyone assisting this admin.

            • somenameforme a day ago ago

              I don't think that was intentional, but invading countries while trying to distract them with negotiations, randomly assassinating leaders and hoping everything just turns out well, threatening to "destroy civilizations", targeting bridges and more, all while aiding and abetting Israel which is intentionally destroying pharmaceutical, educational, and other such civilian institutions is all 100% intentional.

              In some ways worse than bombing the school was the effort to implicitly deny it. The school was near a military facility, and itself was a military facility in the past. US intelligence screwed up. They should have simply acknowledged what happened and why. Their response just reeked of cowardice and malice at the highest level.

          • Forgeties79 a day ago ago

            Who said otherwise? Clearly it’s about facilitating specific acts by the government. Why are y’all acting like it was so wildly broad? No one said “working with the government is inherently immoral.”

            • cooper_ganglia a day ago ago

              Literally the parent comment:

              >Any AI researcher who continues to work here is morally compromised.

              • Forgeties79 a day ago ago

                …doing this kind of work with the federal government. That is clearly what they are saying. You stripped all context from the discussion.

                You’re looking for the least defensible, worse interpretation of their comment.

                • cooper_ganglia a day ago ago

                  No. Their comment was: “Any AI researcher who continues to work here is morally compromised.”

                  But, “…doing this kind of work with the federal government.” is added context that was not there and is based on your own interpretation.

                  The language of the parent comment charges that simply working at a company that is engaging in this makes one complicit in an immoral act, and the complicity itself is immoral. I disagree with all of that.

                  • Forgeties79 a day ago ago

                    Yes. Working at a company explicitly profiting off of doing clearly immoral acts is wrong. It doesn’t mean working for a company contracted with the federal government is always wrong.

      • blks a day ago ago

        Besides all the questionable and illegal stuff that the current government does, a lot of people don’t want to work on technologies that kill people.

      • SauciestGNU a day ago ago

        Because the government is comprised of Nazis now and is waging wars of expansionist conquest abroad and murdering domestic dissidents at home. Anyone working toward enabling that deserves to be on the receiving end of the systems they build.

      • unethical_ban a day ago ago

        Are you intentionally lumping in all civic service in one moral bucket? Is working at the post office morally equivalent to developing panopticon technology to suppress protest and track citizens?

      • pigpag a day ago ago

        Weird, why is it morally right for anyone to work with immoral organizations? -- That's what's in the focus, right?

        Whether the current government is immoral, or if government can be philosophically immoral is up to debate. But your question sounds like a deflection to me.

        • Cider9986 a day ago ago

          Heya pigpag. Your account seems to be shadowbanned, even though your comments seem normal. If you want people to be able to see your comments I recommend creating a new account or appealing to hn@ycombinator.com

      • hawk_ a day ago ago

        Sorry to Godwin the thread but the Third Reich would like a word.

      • vintermann 19 hours ago ago

        Same thing that's wrong with enjoying a succulent Chinese meal.

      • mattnewton a day ago ago

        Idk about morality, but it’s certainly a way to stop dystopian mass surveillance nightmares if everyone capable of building one refuses.

        So if you live in the US and don’t want one government agency in the US to have this power (that is ambiguous under current law), one way you can try to avoid it is by refusing to sell it to them and urging others to do the same.

        It’s a long shot, sure, but it certainly seems more effective than hoping the legislature wakes up and reins in the executive these days.

      • psychoslave a day ago ago

        Given most government policies and direct engagement in all kind of monstrosities over the last millennia, there is really no reason to limit the case to USA, indeed.

      • Terr_ a day ago ago

        You're using a strawman. This was never about just being employed by a government in the most tepid and universal sense.

        Ex: "Why is it morally wrong for a US citizen to work with their government?", asked the employee compiling lists of American citizens of Japanese descent to be rounded up into Internment Camps.

      • OkWing99 a day ago ago

        Change the country from 'US' to 'China' or 'Iran'. And ask the question again.

      • catcowcostume 14 hours ago ago

        Because the American government is a criminal organization

      • IshKebab a day ago ago

        Because their current government is immoral.

      • tastyface a day ago ago

        Because the current government is a vindictive, murderous, proto-fascist government. (But you know that already.)

    • tjwebbnorfolk a day ago ago

      Why is it morally compromising to work with the military of the country you live in?

      • plaidthunder a day ago ago

        I'm not anti-military as a rule but... c'mon. Opinions on the US military vary.

        In extremis, were the people working for Pol Pot just good patriots with no moral culpability?

        We could surely at least agree that there are cases where working for the military of your home country doesn't fully excuse you from your actions.

        In fact, I think international tribunals have existed which operated on just those principles.

      • throawayonthe 21 hours ago ago

        because the country you live in is the united states? this is not complicated

      • mrexcess a day ago ago

        We can all agree that working for the Nazi government’s military would be morally compromising, right?

        You propose that other governments' militaries would not be so compromising. Seems reasonable.

        But the question then becomes, what is the operative distinction between the two?

    • declan_roberts a day ago ago

      Thankfully Russia, China, etc have the same qualms as we do in the United States and will refuse to send their brightest engineers to work on weapons so they don't become "morally compromised"!!!

      • titzer a day ago ago

        I don't think the long-term game theory of race to the bottom works out quite how you think.

        "Our enemies would have no qualms building a weapon that will end life on earth! We better build it first because we're the good guys!"

        • declan_roberts a day ago ago

          Послушайте этого парня!

      • yibg a day ago ago

        We also used to point to Russia and China as places we don't want to copy.

        • declan_roberts a day ago ago

          You don't have to copy them. You have to beat them.

      • notJim a day ago ago

        This was the same logic that was used when building nuclear weapons, and many of the scientists involved in that tried to find a different path (most notably Niels Bohr). I think we would be in a much better world if they had been successful. It's good that we're trying again w/ LLMs.

      • genxy a day ago ago

        People in those countries do have qualms, they are people after all and they choose to work in other fields.

      • tensor a day ago ago

        The US is sure becoming an unfree scary place just like Russia. Keep it up following those role models!

      • gambiting a day ago ago

        I don't know if you're being sarcastic(sounds like you are!) but indeed a lot of engineers left Russia after the war in Ukraine started as they didn't want to be drafted and didn't want to contribute to the war effort in some way, even if indirectly. Of course, many stayed or even willingly help. See how many engineers from Iran work abroad too, for moral and other reasons.

        The point is - this happens everywhere, it's not just some weird western thing.

    • boringg a day ago ago

      Why is it that this line item comes up EVERY TIME an article comes out, in a knee-jerk reaction - it's so incredibly absolute:

      "Any AI researcher who continues to work here is morally compromised."

      It feels like a constant campaign and the posters seem so incredibly self righteous and unthoughtful.

      • crumpled a day ago ago

        Probably because the articles are talking about how the AI will be used in immoral ways, and that the people who know that and continue doing the work must be morally compromised.

        I know that there might be $several ways those highly-paid engineers might still rationalize their work. Some of them might have ideological reasons to treat entire classes of people as unworthy of life. Within the model of their ideologies, the most evil things might be perfectly moral.

        I wonder what reasons you have to disagree with people's moral stance against using AI as a weapon.

        • boringg a day ago ago

          In absoluteness you lose your credibility except when rallying people to arms.

          • anematode a day ago ago

            I stand by it. I'm not including all Google employees, ofc – there are some fantastic projects coming out of there – just the people working on their AI systems which will be accessible to the government with (effectively) no oversight.

            I actually don't think it's so nuanced. We know (from its spat with Anthropic) that the government wants the ability to use AI to implement mass surveillance of Americans and fully autonomous killings. We also have ample data that this administration takes the law as a mere suggestion. It's imperative not to make their abuses easier.

            Google's researchers aren't stuck there; their skills are in extraordinary demand and I'm sure Anthropic, for example, would hire them in an instant.

    • devin a day ago ago

      That's what the 7 figure salaries are for.

      • testfrequency a day ago ago

        It’s funny to me how many progressive people I know and am friends with who work at these AI companies which are marginalized demographics (Trans, Gay, Latino, Black).

        Still have faded Bernie stickers on their cars, No Kings organizers, “fuck SF I’m in the east bay for life fuck tech” - and you all make 7 figures Monday - Friday by supporting the death of society and democracy.

        I don’t dare say anything though because “money is money”, the bay is expensive..but I do sure as shit judge every single person I know who joined OAI, Anthropic, Google, and Meta.

        • foobar_______ a day ago ago

          Preach. The hypocrisy is startling. I think people started at these companies maybe years ago with "good intentions" and are willing to turn a blind eye. But now, given just how glaringly clear it is, I don't think it is really excusable anymore. To be clear, people can work wherever they want, including these companies, but what kills me is the hypocrisy. They are pathological liars to themselves if they somehow think they aren't complicit.

        • beernet a day ago ago

          Agreed. Just shows that big money doesn't dilute small character.

        • site-packages1 a day ago ago

          I would suggest looking inwards if this is how you really feel.

          • testfrequency a day ago ago

            I mean no harm in saying what I said, I love my friends. I just can’t stomach the hypocrisy, it’s what the companies are preying and feeding off of.

            My friends are incredibly bright and good at what they do, it’s why they all have the roles they have. It makes me sad (and frustrated) knowing they are lured in by enough money dangling in front of them that makes them swallow their souls and identity, while fuelling the fire in the same breath.

            I have a deep amount of respect and gratitude for my friends (and anyone else) who chooses to work at non-profits, and more ethical - mission based companies for less. I hate how much these AI companies and roles are offering people, it’s completely forced lots of gifted people into a war machine.

            • site-packages1 a day ago ago

              Do you suspect there is any chance they are fully independent adult human beings with full agency, who have looked at the pros and cons, and chosen to make the choices they did with clear eyes? Do you think there's any context that might square their choices with their own internal principles that don't make them hypocrites? I mean these as real questions. For "friends you love" you really seem to take a dim view of their intelligence.

              • somenameforme a day ago ago

                One of humanity's greatest weaknesses is cognitive dissonance. People can convince themselves of just about anything. And in some ways intelligence is a burden here. A fool will just do something with a reason of 'f you, that's why.' It's only the clever man that will even bother rationalizing the villain into the hero, and we're great at it. An interesting thought experiment is to ask people if they'd be willing to push a button that would randomly kill a person somewhere in the world for a million dollars. They'd have no direct accountability themselves and their action would be unknown to anybody else.

                People will rationalize themselves into declaring this moral even though it is obviously one of the most overtly immoral actions possible. One friend I have, a rather intelligent guy otherwise, even tried to construct a utilitarian argument that he'd donate some percentage of his 'earnings' to life-saving charities, meaning he'd be saving more lives on net. The fact that if everybody thought and behaved the same way, the entirety of humanity would cease to exist, was a consideration he didn't have a response for. Let alone the fact that he had just rationalized his way into justifying nearly any deed imaginable, so long as you got paid enough for it.

              • testfrequency a day ago ago

                I’ll be honest and say it’s made me question and reposition some of my friendships with a number of these friends. Some joined well before we knew the fallout of how AI has affected and impacted society negatively, some have joined in recent years because they were offered 2x their currently already high comp package, and others will take any job they can get (who, admittedly, I judge far less as I know they are just needing to survive in a HCOL city).

                My dim view is more on the AI companies being absurdly overvalued, with too much money to know what to do, which feeds downwards into compensation packages, which lure in “innocent” individuals who can’t say no. It’s not been a healthy market to be vulnerable in, most companies outside AI are just not getting the same funding or can compete at all - and it’s a shit storm.

          • gambiting a day ago ago

            I'm curious what is that you're suggesting, exactly.

            • site-packages1 a day ago ago

              I made another comment above. People contain multitudes. Different contexts, different choices, not everyone is in a box defined by the viewer's world view. You can't really know what's going on with someone else, in their heads, in their context, so give them some grace. Instead, this person's "friends" are "hypocrites" who were "lured" into their choices. It's very condescending. I am suggesting the poster re-examine their own views on other people in light of this.

              • foltik a day ago ago

                You're missing the point. They're just lamenting the contrast between what their friends say (fuck tech, no kings) and what they spend their workweek in service of.

                It's not complicated: if these friends would take a non-society-destroying job at equal pay (who wouldn't?) then their values aren't driving the decision, money is. Fine, that's a choice adults get to make. But then own it and actually justify it on its merits, don't just retreat to "who are you to judge."

                • senordevnyc a day ago ago

                  Not everyone sees AI as "society-destroying".

                  • foltik a day ago ago

                    Didn’t say that. The friends in question clearly think it is. My point more generally was about people who publicly talk about $X being society-destroying while materially enabling $X for a paycheck.

                    • senordevnyc 13 hours ago ago

                      It’s really not clear to me that they think that. OP was clearly saying that if you’re progressive, the intellectually honest position is to be anti-AI. I don’t think that necessarily follows.

    • JeremyNT a day ago ago

      Also yesterday, on Brin getting cozy with this administration:

      https://www.nytimes.com/2026/04/27/us/politics/sergey-brin-g...

    • robrenaud a day ago ago

      Is every American tax payer morally compromised?

      • eks391 a day ago ago

        Yes ;)

        I agree with the intent of your rhetorical question, so I'm jesting with you. I'm justifying my "yes" with the hopefully humorous distraction that every person, including American taxpayers, has at some point made a nonsustainable/selfish (my definition of immoral) decision.

    • RobRivera a day ago ago

      That's not a productive stance to take, if you're trying to be good faith and an agent of progress, even assuming morality isn't relative, and the context nuanced.

    • pixel_popping a day ago ago

      Why would they be morally compromised? So the ones building open-source models should be as well because some terrorist will use the model to do nefarious stuff?

    • thisisauserid a day ago ago

      I agree that it is immoral to obey some laws. Which ones are you saying are immoral here?

      • ddtaylor a day ago ago

        An AI researcher can work anywhere they want, can't they? At the minimum they could work in a different field entirely. It seems like a false dichotomy to frame the question around laws.

        • thisisauserid a day ago ago

          Got it. It's immoral because you said so.

          • ddtaylor a day ago ago

            What did I say was immoral?

    • ReptileMan a day ago ago

      Morality is relative and malleable. And usually people are quite good at claiming that whatever suits my agenda is moral.

    • site-packages1 a day ago ago

      > Any AI researcher who continues to work here is morally compromised.

      Arguably it's exactly the opposite. In the same way we ask billionaires to pay their taxes because the regulatory regime is what allowed them the structure to make their billions in the first place, the national security of the country the AI researchers are in is what allows them to make a vast salary to work on interesting, leading edge capabilities like AI. They should feel obligated to help the military.

    • 2OEH8eoCRo0 a day ago ago

      Is it any less moral than surveilling your neighbors and/or turning your neighbors against each other with social media?

    • mvelbaum a day ago ago

      Any AI researcher who refuses to support his own country in a technological arms race is morally bankrupt, foolishly naive and does not deserve to enjoy the way of life created for him by those who sacrificed their lives.

  • sailfast a day ago ago

    This all works if you assume that any action the government takes must be “lawful”. The assumption here is that the Pentagon is obeying the law and any unlawful use would go through normal reporting / violation channels - same as any illegal order or violation or whistleblower report.

    The Pentagon does not want Google or anyone else deciding what they can and cannot use their AI for. They’re saying we won’t break the law, and that should be enough for you - pinky swear!

    And that seems to be enough for Google. Though I might request some auditing capability that is agentic to verify rather than take them at their word.

    Next step: is Google FEDRAMP’d yet for this and for classified enclaves? Or do they also go through Palantir’s AI vehicle?

    • gwbas1c a day ago ago

      I look at this as a case of "pick your battles."

      In war, the civilians can't audit every move of the military. (It's impractical, both for reacting timely, and for keeping secrets from the enemy.)

      If the military doesn't work with Google, they will work with someone else who might not put the same amount of pressure on the military about the practical limits on AI. Or, even worse, our enemy might use a significantly better AI than we do.

      My hope is that "war" shifts to AI vs AI, machine vs machine. Calling people who work on AI for wartime purposes immoral is fundamentally immoral when AI in war replaces the need for human casualties.

      • mitthrowaway2 a day ago ago

        As a private contractor, you can sign a contract to deliver pizza or bandages to US soldiers, but also put into the contract that you won't deliver lethal weapons, if that's your own ethical stance. You don't need to audit every move of the military, just the stuff you're doing at their request.

        And sure, maybe that just means the military decides to take their business elsewhere. But if you have confidence that your service is the best, then you sell based on that.

        • eks391 a day ago ago

          I think you and your parent have great arguments. Your pizza deliverer chose his battle, which was to only deliver pizza, not materiel, and that is commendable. Your parent seems to want to delegate death from humans to AI, which seems to me like a simplification that won't turn out exactly like that, but the premise of deciding whether that is a battle to pick is valid. If you want to start blurring the lines between the analogy and literality: if you choose to pick every battle to fight, there's not enough human bandwidth to do it all, and delegation to AI could be helpful. That last sentence is more loose, so I won't defend it, but I couldn't help making a tie between picking your battles and literal battles. Perhaps a form of dark humor there.

          • mitthrowaway2 a day ago ago

            The broader context of this is that Anthropic did put ethical restrictions into their contract. A bunch of AI employees industry-wide called for solidarity with Anthropic. But then OpenAI, and now Google, defected against this equilibrium and signed contracts agreeing to "any lawful use".

            The GP was arguing that, first of all, it's not practically possible to put limitations on such a contract, because you can't audit everything the military does. But that argument is bunk, because not only do you not have to audit everything the military does (only what you as a contractor are asked to do), Anthropic also signed exactly such a contract, and the DoW did indeed run into those restrictions and got frustrated by it.

            Their second argument, that if Google didn't agree then someone less scrupulous would take their place and exert less pushback, is also bunk. Google's pushback is as low as it gets; you can't sign a contract to do something illegal, so agreeing to any lawful use is the loosest possible contract that anybody can sign. And given that they defected in this prisoner's dilemma, they are already the less scrupulous party doing the work that Anthropic would not.

      • ajam1507 a day ago ago

        It shouldn't be the role of a company to hold their nose and work with the government, it should be the government's role to inspire confidence that what they are doing with the technology is ethical.

        > Calling people who work on AI for wartime purposes immoral is fundamentally immoral when AI in war replaces the need for human casualties.

        This is naive. It will only reduce casualties for the side with the AI, and will very likely embolden countries to fight more wars.

  • ceejayoz a day ago ago

    Who defines "lawful" if Google and the Pentagon disagree?

    > The classified deal apparently doesn’t allow Google to veto how the government will use its AI models.

    Seems concerning?

    • CobrastanJorji a day ago ago

      That's presumably the trick, and it's not a subtle one; it's why the article puts it in quotes in the headline. Google gets to claim that it stood up for principles because it boldly insisted that the government obey the law, and the government will claim that whatever it decides to do is lawful. It's the same as what OpenAI did, except not handled buffoonishly.

    • f33d5173 a day ago ago

      Lawful is presumably defined in the usual, common sense, ie we can do whatever the f we want until a court physically forces us not to.

      • dmd a day ago ago

        And since the court has no way to physically force anything - that's the executive branch's function, (it's right there in the name) - lawful has no meaning whatsoever if it's the executive branch that wants to break the law.

        • muvlon a day ago ago

          And the Pentagon has historically gotten away with damn near everything even in the judicial branch by appealing to national security.

    • impulser_ a day ago ago

      No it doesn't at all. Private corporations shouldn't be telling the government what it can and can't do. That's the job of the people. You want a private corporation overriding your vote?

      • ceejayoz a day ago ago

        > Private corporations shouldn't be telling the government what it can and can't do.

        So Google can't tell the government it needs a warrant to perform a search? Google can't sue over something the government did?

        It's Google's product they want to buy.

        • serial_dev a day ago ago

          Just follow the orders, man!

          • red-iron-pine a day ago ago

            don't worry about the people getting sent to camps. it's lawful so it's okay.

            now follow orders.

        • impulser_ a day ago ago

          I'm talking about lawful, like it's written in the terms.

          • ceejayoz a day ago ago

            But Google isn't, apparently, permitted to object "that's not lawful".

            And again, it's Google's product. Why can't they set conditions? If I pay Google to host my email, I'm still subject to their policies.

      • yibg a day ago ago

        Of course it can. Terms of service and contractual obligation (should) apply to governments as well. Google is perfectly capable of outlining what's acceptable use and what's not, and the government is free to accept or reject and not use the product. Google is choosing not to set the boundaries.

      • xp84 a day ago ago

        Agree. It seems on the surface convenient right now when people think the company (or rank and file employees?) are on their political “team” but they’d get less comfortable when oil companies or other “bad” companies dictate terms to the government. “We’ll provide fuel for the military if and only if you overturn the leader of $COUNTRY”

        (Yes, I recognize that past military entanglements do read as favors for Big Oil, but that’s more because lobbyists directly purchased the corrupt and useless Congress)

        • noelsusman a day ago ago

          In that scenario the President would invoke the Defense Production Act to compel the oil company to supply the oil. They threatened to use that power against Anthropic, though it's unclear how it applies to something like AI. "Claude without guardrails" is not a product Anthropic offers, so they would fight it on grounds similar to how Apple fought against being forced to crack an iPhone.

          The main issue here is that Congress is asleep at the wheel and has refused to implement any sort of guardrails around how the government is and is not allowed to use AI.

        • ceejayoz a day ago ago

          > “We’ll provide fuel for the military if and only if you overturn the leader of $COUNTRY”

          A mechanism to address this exists, though.

          https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950

    • tdb7893 a day ago ago

      Especially concerning with the how creative the executive branch can be when it comes to what laws mean. With little oversight, it seems guaranteed that it will be used for unlawful activities (despite whatever tortured argument some lawyer will have put into a memo somewhere).

      • xp84 a day ago ago

        Yeah, they’re really bad! Seems like it might be time to try convincing people to vote for someone else! Democrats haven’t tried that play since 2012, preferring the “scorn and insult anyone outside your base” strategy that’s worked so well since.

    • cooper_ganglia a day ago ago

      Google should never be determining what is lawful or not.

    • kingleopold a day ago ago

      "who watches the watchmen"

      A question as old as time itself.

    • ApolloFortyNine a day ago ago

      This has to be one of the strangest "debates" in history.

      Congress and the courts obviously.

      If you think there's a hole in the law tell your congressman, don't, for some reason, try and put Google or any Ai company above the government.

      • ceejayoz a day ago ago

        > Congress and the courts obviously.

        The first is fully neutered. The second is far too slow.

        "Nothing unlawful" needing to be in the contract is inherently concerning, as it's typically the default, assumed state of such a thing.

        • deepsun a day ago ago

          "follow the law" in contracts IMO is there to be able to claim a "breach of contract" by one party.

      • calgoo a day ago ago

        Please! That ship sailed a long time ago. Sure, tell your congressman, who is most likely bribed (lobbying is bribing, let's use the real words) by the same companies to accept the deal. The courts can try, but who is going to enforce it when the people above say that it's fine?

    • belzebub a day ago ago

      There's big air quotes energy in their statement

    • ethagnawl a day ago ago

      The classified aspect is probably the most concerning. How can I write my representative (and expect a form letter response six weeks later) if I don't know what I'm objecting to or even if I should be objecting?

      • cooper_ganglia a day ago ago

        Why would you write a letter if you don't know what you're objecting to or even if you should be objecting?

        • ceejayoz a day ago ago

          Can't I object to not knowing?

          • cooper_ganglia a day ago ago

            No, that's what classified means.

            • ceejayoz a day ago ago

              Surely I can complain about overclassification of things that should not be classified?

              • xp84 a day ago ago

                Absolutely. We will file your complaint in the appropriate location.

                The location is classified.

                Ok all jokes aside, if you suspect that there’s wrongdoing in the classified sphere, and it really matters to you, well, you should get involved in politics. We don’t just let everyone everywhere know everything, because we think it would be risky if Putin or the Chinese Communist Party also knew all those things. So we limit it to people who have taken oaths and are accountable and need to know (the military), the civilians who need to know (security clearance holders), and those who hold a high office with the public’s trust (high-ranking politicians). You can be a Senator. You just need a lot of people to trust you enough to vote for you. Or, and this is a bit easier, support politicians you do trust to vet classified things to be elected to high office, and ask them to look into it and give you their word that things are being done properly.

        • ethagnawl a day ago ago

          That's kind of my point? I'm concerned by what has been made available but can't form a complete opinion and decide if I need to take action without knowing the full extent of the agreement.

          • cooper_ganglia a day ago ago

            Nor should you be burdened with that.

            This is why we elect competent (hopefully) leaders to worry about these things for us. Mob rule democracy about every national secret would mean they’re not secrets for very long!

            • impossiblefork a day ago ago

              Why should you not be burdened with that?

              Surely you are responsible for the consequences of what you do, no matter how indirect? After all, we live in physical reality, not in some world of laws.

              If you cause something, you cause that thing. You are responsible, even if it is through some long chain.

              • cooper_ganglia a day ago ago

                > Surely you are responsible for the consequences of what you do, no matter how indirect?

                No, that’s preposterous. You are not responsible for the actions of others simply because your actions put them in a place to perform said actions. That seems like a very stressful way to go through life.

                • impossiblefork a day ago ago

                  You have caused them. You exist in the environment you exist in, not in some other environment.

    • dismalaf a day ago ago

      By definition "the law" is the set of laws that the government passes. So it's a roundabout way of saying the government can pretty much do what they want.

      Also, this is probably the only acceptable arrangement when it comes to industry-government contracts. The government will always have more information than civilians.

    • jonathanstrange a day ago ago

      One thing is sure, they don't have international law in mind...

    • shevy-java a day ago ago

      It kind of reminds me of a mix of Skynet in Terminator and Minority Report. But nowhere near as interesting. More annoying than anything else.

      I am kind of mad at James Cameron here. Skynet was evil but interesting. Real life controlled by Google is evil but not interesting - it is flat out annoying.

  • hgoel a day ago ago

    How well does this hold up in terms of legal scrutiny when previous actions indicate that the Pentagon would retaliate against Google if they didn't accept this "lawful use only" farce?

    Could Google back out of this agreement later by arguing that they were coerced?

    Not trying to suggest that Google would be opposed to doing evil, but curious about how solid this agreement would be in practice.

  • john_strinlai a day ago ago

    there is 0 reason that the definitions of 'lawful' for the purposes of these agreements should be classified.

    • svachalek a day ago ago

      There's a reason, you just won't like it.

  • exabrial a day ago ago

    This whole thing is sorta blown out of context; but it does make for explosive journalism, sell a bunch of ad revenue, and give Anthropic a lot of publicity (which I think is just amoral here).

    The bottom line is that in the past the US government has acquired weapons/tech with deals like "it will never be used for X purpose". However, as far as my research has shown, it has never accepted a deal where said weapons or tech had self-policing features. For instance, a bomb that would refuse to explode over US soil, or something like that.

    And it makes sense why they would refuse such deals: any self-policing mechanism would be a security risk, a system that could be exploited against them by an enemy.

    Of course that does not make great headlines. And given the current administration and how news headlines are written these days, it makes sense this is the way this whole thing went down.

    • neuronexmachina a day ago ago

      > The bottom line is in the past the US government has acquired weapons/tech with deals like "it will never be used for X purpose". However, as far as my research has shown, it has never accepted deal where said weapons or tech has self-policing features.

      Some examples of past US govt use of tech with self-policing features:

      * DJI drones have historically had geofencing around restricted areas (e.g. airports, DC). Back when the US govt used DJI drones, govt users had to get unlock licenses from DJI to be able to operate in those areas: https://support.dji.com/help/content?customId=en-us034000067...

      * cloud computing features like Assured Workloads place firm guardrails on e.g. what regions services can be spun up in: https://cloud.google.com/security/products/assured-workloads

      * for ITAR compliance, software sold to the government will often have IP-based geofencing and lock down if it's run outside an authorized area

      * I'm pretty sure the US govt uses software that has licensing enforcement

      * This is sort of the opposite, but military technology exported by the US to other countries quite frequently has self-policing built in, e.g. geo-blocking of missiles and aircraft

      • exabrial a day ago ago

        Aren't all of your examples exactly the opposite of what I said?

  • jldugger a day ago ago

    "When the president does it, that means it is not illegal" -- a former president

    • mvelbaum a day ago ago

      -

      • jldugger a day ago ago

        I'm not sure what point you're making, or whether it actually contradicts my own.

  • ripvanwinkle a day ago ago

    One observation.

    Having your work being used by the govt in ways you disagree with feels similar to having your taxes used in ways you disagree.

    When you pay taxes you have no say in the bombs acquired with that and where they are dropped. The latter though doesn't seem to provoke the same push back

    • dmit a day ago ago

      > When you pay taxes you have no say in the bombs acquired with that and where they are dropped.

      Vote in elections, local and general.

    • jMyles a day ago ago

      > When you pay taxes you have no say in the bombs acquired with that and where they are dropped. The latter though doesn't seem to provoke the same push back

      Indeed - paying "taxes" to a murderous entity is a horrible affront to morality and humanity. We do it because we're terrified; we are not perfect moral creatures. But we still know it's wrong.

    • Barrin92 a day ago ago

      You answered your own implicit question. You have a choice about who you sell your work to; you don't have a choice about what your taxes do. It seems pretty straightforward why the former elicits more push back. The government forces you to pay taxes; it doesn't force you to build it tools of surveillance or weapons.

      • ripvanwinkle a day ago ago

        If the feds are a sufficiently large market, your viability as a business might depend on keeping them happy.

        btw I am not making a judgement call on the AI usage issue itself, just saying that this and taxes are more equivalent than it might seem

        • Barrin92 a day ago ago

          >IF the feds are a sufficiently large market your viability as a business

          Sure, if you're Lockheed you might be screwed, but that's not the case for Google. Military contracts, or even government contracts as a whole, are a tiny fraction of the King Kong-sized gorilla that is Google.

          The fact that Anthropic puts up a fight but OpenAI/Microsoft and Google don't, I find hard to characterize as anything other than pathetic. These guys could, if they wanted to, afford a lawyer or two to push back on the administration. They do that pretty successfully with their taxes in most places, btw.

  • xbmcuser 20 hours ago ago

    If something can't be profitable, charge the government many times over to make it profitable, and let the taxpayers, i.e. the citizens, carry the burden. I never thought I would see this scam outside a poor, corrupt country, but damn, the US is falling fast.

  • pkilgore a day ago ago

    No remedies, no right.

    What are the consequences of breach? Otherwise, Americans' only use for this is to wipe their ass, and only if they can find a paper version.

  • Havoc a day ago ago

    And in love and war all is fair...

    Reality is this ship sailed once the US/Palantir rolled out AI target selection

  • flufluflufluffy a day ago ago

    > We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.

    And starts the lying to our faces. The public and private (from your own employees!) consensus is that it should not be used for those things at all, regardless of “human oversight.”

    • calgoo a day ago ago

      I hate this part: `domestic mass surveillance`

      So the rest of the world is fine to spy on; it's the domestic part they don't agree with. So go on, destroy lives all around the world, helping the powers that be build the fascist state. It's fine to use Gemini to tell what building to blow up; it's fine for Gemini to wrongly identify people and cause hundreds or thousands of deaths by telling the military who to attack.

  • ctoth a day ago ago

    Huh. I never realized the T-800 runs on Android. Makes sense, I guess.

  • Imnimo a day ago ago

    Unsurprising from Google, but still bad. If Google has no right to object to a particular use, this is equivalent in practice to "any use, lawful or not".

  • ethin a day ago ago

    The fundamental problem with these "agreements" is that they are utterly nonsensical as written. Google has one idea of what "lawful" means; the Pentagon most definitely has a vastly different interpretation, namely "whatever we want". These companies make these agreements because they do not understand (either deliberately or simply through unfamiliarity with the intelligence sector) that when the intelligence community says "we will only use this for lawful purposes," what they are really telling you is something very, very different. With entities like the Pentagon, your agreement should both define what "lawful" really means and leave as few ambiguities as you can manage. Ideally you'd leave zero ambiguities, but I'm not sure that's achievable in practice.

  • mullingitover a day ago ago

    Reminder that this administration has some absolute howler theories about what constitutes lawful behavior[1].

    [1] https://www.nytimes.com/2025/09/20/us/politics/tom-homan-fbi...

  • chabes a day ago ago

    Snakes. All of them

  • anygivnthursday a day ago ago

    Is Iran already a vibe war or those are just coming?

  • manicennui a day ago ago

    This administration has already proven that they don't care about the law and see anything they do as lawful.

  • cdrnsf a day ago ago

    Lawful is meaningless in the context of the Trump administration. Should Google waver (which they won't), they'll be declared a supply chain risk or otherwise bullied into submission.

    • Ritewut a day ago ago

      Google holds immense power in their position. Trump can make their life very difficult but Google can make life for Trump very difficult as well. They have no need to kneel, they are choosing to.

      • threepts a day ago ago

        Google simply cannot justify this power struggle; just because it can doesn't mean it will. It got to the top by kissing the ring, and that's how they stay at the top.

        • Ritewut a day ago ago

          I know. I'm noting that Google could fight this. They just won't.

          • threepts a day ago ago

            I still disagree with your point about Google having enough power to supersede the democratic process. If they show "good will" and fight back, someone else will take their place.

            • Ritewut a day ago ago

              The idea that "if I don't do it, then someone else will" doesn't excuse you from making unethical decisions.

      • f33d5173 a day ago ago

        what immense power?

        • Ritewut a day ago ago

          You don't think Google having control over the most used email, most used browser, most used search engine, most used video website, and most used phone OS gives them immense power?

          • ajam1507 a day ago ago

            And what do you think they could do with those things?

  • kbelder a day ago ago

    That's how I'd like Google to behave in regards to dealing with me.

  • OtomotO a day ago ago

    Lawful means nothing but "according to law", which is a meaningless statement...

    Remember that even the Third Reich had laws!

  • HNisCIS a day ago ago

    Refusing to participate WORKS.

    I've had the unfortunate experience of working at a startup that started courting some autonomous weapons companies and HOLY SHIT were they the bottom of the barrel. Levels of incompetence you wouldn't believe, just good ol' boys who wanted to play with energetics. Then the company I was working for also hemorrhaged all their top engineers because they found the work unsettling.

    The takeaway is that your refusal to assist these shitheads does have an impact, they have to pay more for talent and they have a much harder time courting good talent.

  • franciscator a day ago ago

    Do not get distracted, that technology is used to kill people.

  • qznc a day ago ago

    And that is news-worthy because unlawful use is normal?

  • khazhoux a day ago ago

    That was a typo. The agreement is actually to use it for any awful use.

  • threepts a day ago ago

    and the pentagon determines the law?

  • joering2 a day ago ago

    They sign a contract for any lawful use?? Can you sign a contract with the US government for some unlawful use?

  • jcgrillo a day ago ago

    It's pretty funny how these guys are all becoming some kind of internet version of, like, Halliburton. It seems pretty desperate. B2C and B2B applications didn't pan out I guess?

    • zarzavat a day ago ago

      It's one of the two identified uses for AI that are profitable today: writing code and blowing up schools. They are desperate to show the market that the technology is anything more than a money pit.

    • ctoth a day ago ago

      The thing is we're in a new Cold War, and most of our adversaries have gotten the memo and most of us ... haven't. Yes, becoming a new Halliburton is a rational move if you see the board right now. I don't like it even one tiny bit.

      • a456463 a day ago ago

        I don't like it even a tiny bit. But other people are doing it, so I'mma go full steam ahead.

        This is exactly what got us here.

      • jcgrillo a day ago ago

        If that's the case we ought to be investing in technology that actually delivers results. Patriot missiles work. Javelins work. M777's work. AI? Dunno man.. Instead it seems like what's happening here is Google has found a gullible customer that is willing to pay for something that doesn't necessarily deliver.

  • morkalork a day ago ago

    Will lawful use be determined in secret courts a la NSA and FISA?

    • Sanzig a day ago ago

      Doubtful it will even get that far; the DoJ will simply draft an appropriate fig-leaf memo with a predetermined conclusion and the government will plow on ahead.

      https://en.wikipedia.org/wiki/Torture_Memos

      • stephbook a day ago ago

        They simply say they have that memo. Who knows whether they even drafted it for real? And if anyone starts looking, Gemini can quickly draft one itself. Nice!

    • vrganj a day ago ago

      Don't be silly.

      "When the president does it, that means that it is not illegal." - Richard Nixon

      • kentm a day ago ago

        Also the Supreme Court, half of Congress, and apparently something like 40% of the American populace.

  • CrzyLngPwd a day ago ago

    Meh.

    Lawful didn't stop Project MKUltra, or attacking countless countries, or overthrowing countless governments, or murdering countless people, or kidnapping people and torturing them, or...

    The USA can do anything it wants, to anyone, any time.

  • themafia a day ago ago

    How about: The pentagon can have "AI" once it completes a single successful audit.

  • aaroninsf a day ago ago

    I'm not going to even bother trying it, but I assume that the top Google autocomplete for "lawful" is "evil"

  • psychoslave a day ago ago

    Do no evil. Well don't make anything illegal at least. I mean, let's not do what is different from whatever we wish at the moment.

    • threepts a day ago ago

      Evil to them is not making money. It's pretty subjective.

  • Brian_K_White a day ago ago

    What a handy word "lawful".

  • ChrisArchitect a day ago ago
  • shevy-java a day ago ago

    The beginning of Skynet 6.0.

  • grafmax a day ago ago

    There's a lot of money in genocide.

  • triage8004 a day ago ago

    AI is going to be doing more lawful genocide in the near future

  • mattdeboard a day ago ago

    "don't be evil"

  • jMyles a day ago ago

    These deals / arrangements / affronts / conspiracies will continue as long as there are sums of money too large to say no to.

    It's so unbelievably obvious at this point that the Pentagon, and everything like it across the globe, needs a deprecation plan. We don't need these massive states anymore for security or regularity; we can communicate around the world at the speed of light and bypass their notions of how we're supposed to relate to one another.

    Enough is enough. Spin down the nukes. Bring home the ships. Send the money back.

  • PunchTornado a day ago ago

    very disappointed. if I could, I would sell my google stock and buy anthropic. can't wait for anthropic to ipo. I love them.

  • SpicyLemonZest a day ago ago

    As a big critic of the OpenAI deal, this kinda sounds like a nothingburger to me. Of course Google doesn't get a veto on operational decisions, no customer would ever agree to such a thing. The problem with OpenAI was that they took advantage of Anthropic standing their ground to wedge their way in, which was both bad on its own terms and raises serious concerns about whether they're being honest on the real terms of the deal.

  • vrganj a day ago ago

    See also: https://en.wikipedia.org/wiki/IBM_and_the_Holocaust

    Capital and Big Tech have always been opportunistic enablers, not principled actors. Corporate Values have always been nothing but internal propaganda. "Don't be evil", what a farce.