Molotov cocktail is hurled at home of Sam Altman

(nytimes.com)

118 points | by enraged_camel 4 hours ago

250 comments

  • strongpigeon 3 hours ago

    It is a bit scary how people seem to genuinely be OK with violence (see this reddit thread [0]). Is it just me, or does it feel like the overall "temperature" has gone up?

    [0] https://www.reddit.com/r/ChatGPT/comments/1shugf8/firebomb_t...

    • colingauvin 9 minutes ago

      It is surprising to me that people are surprised by this. Since sometime after COVID, a large number of people seem to be casually discussing violence against the ultrawealthy. I suspect they feel like the bargain has been broken - that a lifetime of honest hard work will get them a home and a family or some other sort of meaningful existence.

      When you don't have a mortgage, or kids to feed, or any future (or at least, when you feel you don't have any future) violence doesn't seem so bad because you have much less to lose.

    • lazyasciiart 29 minutes ago

      Well, dropping bombs and threatening to end a civilization certainly made me think the temperature had gone up. I’m not sure a single attempted act against some guy is worth worrying about against that backdrop.

    • 0dayz 19 minutes ago

      Not defending them, or even Luigi, but I would argue a lot of it is down to the abysmal labour institutions the USA has (lots of union busting, few modern laws against modern exploitation, and classical institutions undermined politically and legally).

      And the growing class divide in the USA is, I think, why folks increasingly see violence against the upper class as the only option.

      Again, that doesn't make it right, but it explains why it is almost exclusively a US phenomenon.

    • scoofy 3 hours ago

      This is exactly the point of part one of Fist, Stick, Knife, Gun: A Personal History of Violence, by Geoffrey Canada. Unequal access (or a lack of access) to the executive branch of government will create a culture of vigilantism and lends itself to organized crime as a replacement for the policing arm of the state.

      https://en.wikipedia.org/wiki/Fist%2C_Stick%2C_Knife%2C_Gun

      People become okay with vigilante justice when they see the executive branch as compromised - just look at the insane plot/ending of the film Singham.

      Many people see this happening in the US. We should expect to see more vigilante justice and organized crime if we see the executive branch as having a significant principal-agent problem.

    • hnthrowaway0315 an hour ago

      I'm not saying that violence is legal -- it definitely is not. But it is part of the "packages" and totally depends on whether the one wants to use. Historically, violence has been a very... effective tool.

      When people feel that law and order do not protect them, some will eventually go "the extra mile" (somehow managers always like this phrase). It's not something we can prevent. It is human nature. I guess the super rich really like AI because it gives them extra protection.

      • Fricken 13 minutes ago

        Of course violence is legal. Laws themselves carry no weight if they aren't backed by a credible threat of violence.

        • krapp 3 minutes ago

          Violence by the state is legal. Violence otherwise tends not to be.

      • mmooss 21 minutes ago

        > it is part of the "packages" and totally depends on whether the one wants to use.

        Could you explain what packages are and what depends on (what?)?

        > Historically violence has been a very...effective tool.

        This is dramatic sci-fi for anarchists of all political stripes.

        The critical reality to understand is that violence is the most ineffective tool, causing catastrophic harm to others and outcomes that the perpetrators rarely control or foresee. Revolutions can overthrow status quo power, but what follows is rarely what the perpetrators aimed for. The same happens in warfare - the outcome is rarely what anyone envisioned at the start, a fundamental lesson that experts try to teach hot-headed amateurs who think warfare will solve their problems.

        It also establishes violence as legitimate - usable by everyone else too - a very bad outcome and the opposite of the rule of law, incompatible with freedom; it elevates violence and destruction over life and liberty. In contrast, the American Revolution was founded on principles of freedom and law, did not embrace violence as desirable, and laid those principles out in, for example, the Declaration of Independence.

        The most successful societies have freedom, the rule of law, and allow violence only as a last necessity to restore freedom and the rule of law.

        • hnthrowaway0315 18 minutes ago

          I don't know, but just look at Iran and the US. Where is the "rule of law"? Who is going to give it magically?

          Packages = ways to "adapt" to the challenges of the world.

    • givemeethekeys 9 minutes ago

      Silent corruption at the top causes rot at the bottom. Obvious corruption at the top causes desperation at the bottom.

    • tptacek 16 minutes ago

      These are message boards. The obvious sentiment, that firebombing attacks are awful (perhaps cut a little bit with "the perpetrator appears to be someone deeply in need of help"), is boring. This is an availability bias issue: the only sentiments that actually spool out into threads are edgy. Once you learn to spot these effects, message boards make a lot more sense and are less jarring.

    • danny_codes 2 minutes ago

      The GINI index in SF is pretty close to Brazil's.

      As income/wealth inequality grows, expect class violence to grow until there is a revolution. We let rich people get too rich, and this is the consequence.

      Sam has so far lost, say, $100B, and he is compensated by already being a billionaire. You can see how this might lead to disillusionment with the system.

    • hungryhobbit 5 minutes ago

      Crazy people have existed since the dawn of time: I see nothing at all new here about a crazy person doing something crazy.

    • layer8 2 hours ago

      It used to be a little less violent: https://www.youtube.com/watch?v=HEMbp6Epfz8

    • AlexCoventry 14 minutes ago

      I don't condone violence, but it's hardly surprising that people would resort to or support it in this case, considering that by stepping in where Anthropic refused to help the US military, sama essentially agreed that OpenAI will serve as the IT Department for Trump's secret police. Either that, or he's willing for OpenAI to endure a similar punishment when he refuses the inevitable demand to assist with domestic mass surveillance.

    • yoyohello13 7 minutes ago

      Get ready for more. If the tech bros are right and millions of people lose their jobs and healthcare, we are in for a rough couple of decades. Millions of angry people, with nothing to lose and a bunch of free time, all with one name in their heads: Sam Altman. He better start working on his robot army.

    • testing22321 an hour ago

      The top comment there mentions the French Revolution.

      You think people will put up with wildly accelerating inequality forever?

      It’s going to explode, the only question is when.

    • JumpCrisscross 39 minutes ago

      It’s a distinct minority. They’re convinced they’re the majority because everyone they talk to is in the same bubble, especially online. I saw the same thing with Mangione and Kirk and Pelosi.

      • kube-system 17 minutes ago

        What I think is different today is -- regardless of how many people organically think this way -- social media is normalizing the idea. We're all being exposed to it.

        It's only a minority of people who are radicalized, but it's a growing minority. Radical ideas are more accessible than ever for people to latch on to.

        Radical views on violence, social relations, science, politics, distrust of institutions, etc are all way more common than they were in the 90s.

      • newspaper1 27 minutes ago

        How about the 190 school girls the US murdered in the very first attack against Iran?

      • 2dfs 15 minutes ago

        I think you're misreading it entirely; that doesn't surprise me given that you're a VC.

        Here's one of the posts on that thread: "I mean one thing is to use AI or even ChatGPT as a product, and another is being aware of how billionaires treat the rest of the people

        As for Sam, he also has pretty controversial views for how this whole thing will pan out and how he doesn't give a shit about the consequences it might have for the rest of us. Also more recently, the whole Pentagon contract thing"

        People can use LLMs whilst having a distasteful view of the leaders of the industry.

    • ZeroGravitas an hour ago

      He switched to supporting Trump after Trump repeatedly joked about someone breaking into a San Francisco home to attack the owners with a hammer.

      So the temperature has been high for a while and he's on board with it.

    • cyanydeez 4 minutes ago

      uh, the president of the united states just threatened to nuke a country.

      What kind of weird world are you living under...

    • newspaper1 28 minutes ago

      After watching children literally be liquified in Gaza for two years, violence directed at Sam Altman doesn’t even move the needle. Our entire human rights framework was obliterated by Israel (with the blessing and support of the US and Europe).

    • therobots927 3 hours ago

      It is scary. You know what’s also scary? Being told a robot is going to take your job and healthcare away.

      There’s a lot of scary shit going on.

      • happytoexplain 3 hours ago

        Also scary: Seeing a comment this ostensibly un-controversial in grey.

        • tptacek 18 minutes ago

          There's nothing "un-controversial" about trying to mitigate a firebombing attack with a broad critique of capitalism. It's an edgy take, just own it.

        • therobots927 3 hours ago

          HN is rigged - downvotes are half fake and explicitly target comments critical of the oligarchy.

          • Ancalagon 33 minutes ago

            and this comment is grey at the time of me upvoting it, ironic

          • whimblepop an hour ago

            Also generally anything critical of capitalism, imperialism, or the military-industrial complex. It doesn't really matter whether it's a measured analysis or shrill shrieking; literally just using any of those words amounts to soliciting downvotes.

            • taberiand 21 minutes ago

              This is true but I don't think the downvotes are "fake" though. There's just a whole lot of people who truly believe they are Making the World a Better Place Through Capitalism

      • pixel_popping 3 hours ago

        I agree it is scary, but why would a robot take healthcare away? Wouldn't that be the contrary?

        • ironman1478 3 hours ago

          There are stories about insurance companies using AI when determining if a claim should be let through or denied.

          https://www.palmbeachpost.com/story/news/healthcare/2026/03/...

          • kube-system 24 minutes ago

            That is scary but the methods traditionally used to deny claims aren't really any better. I've had claims denied after they were explicitly pre-approved because of string literals not matching exactly.

            • ChoGGi a minute ago

              My aunt worked for an insurance company while she was semi-retiring as a doc; she lasted a few months before she was too disgusted to continue.

        • WBrentWilliams 3 hours ago

          The quickest way to rile up an existing mob is to make them fear their livelihood is being reduced or removed. The _robot_ is not taking away healthcare, but the effect of the robot existing hits directly at the livelihood of the masses.

          In the US, health insurance is largely tied to employment. Health insurance, in a personal economic sense, reduces to being able to pay for healthcare. This policy is largely a left-over of World War II era employment policies. No one is taking healthcare _away_ from anyone (strictly speaking), but the ability to be able to _pay_ for healthcare is reduced to zero when employment ceases. Accessing the safety net is a separate skillset. This skill set becomes more difficult to achieve because the political class does not want to provide healthcare for everyone, only the worthy (their loyal voters).

          I grew up in and am still a member of the precariat. I am educated and doing well, but I wear a well-polished pair of golden handcuffs due to how my ability to afford healthcare for myself, and my family, is tied to employment. Politically, I _do not_ like being tied to my employer by such a chain, but my arguments to change the system have been met with quite firm push-back.

          • stvltvs 3 hours ago

            Insurance companies are using AI (whatever that means in this case) to make coverage denial decisions. That can be reasonably summarized as robots are taking away our healthcare.

            • whimblepop an hour ago

              Link, please? I 100% believe this but I'm curious about the reporting by which you discovered this

              • daveguy 42 minutes ago

                Google this and take your pick:

                ai decisions health insurance

                Also, to be clear, I don't think violence is the way to confront the oligarch sociopaths. There is clearly enough momentum to fix a lot of the monopoly / anti-consumer issues over the next 4-8 years. Assuming Trumpty Dumpty doesn't try to put our military at polling places or some other anti-democracy putinesque bullshit like that.

        • whimblepop 3 hours ago

          Because healthcare in the US is tied to employment. For most people here, losing a job means losing access to healthcare (partially or totally).

        • cryptonym 3 hours ago

          Because the robot would take their job and having a job is a precondition to healthcare (may vary by country)?

        • therobots927 3 hours ago

          1. Americans need a job to get healthcare

          2. Robots take away jobs from Americans, and the proceeds go to the owner (investor) class

          3. Americans no longer have healthcare

          Understand?

          • pixel_popping 3 hours ago

            I understand (I'm not from the US); however, wouldn't healthcare in the US get drastically cheaper (even eventually free?) if hospitals/clinics were composed of humanoids instead of humans?

            • lazyasciiart 26 minutes ago

              That’s the logic Keynes used to suggest that we’d all be working 15 hour weeks by now, with computers doing all the work.

              Needless to say, we have discovered that productivity gains are not consistently converted into reduced costs and work hours.

            • GOD_Over_Djinn a minute ago

              No, they wouldn’t get cheaper. The profit margins in the healthcare industry would get bigger.

            • WBrentWilliams 3 hours ago

              Interesting idea. I cannot say that I can answer affirmatively or negatively. There are also human elements to be considered. Humans are status-seeking social creatures. There will always be a stigma attached to humanoid-delivered care, no matter how high-quality, as being not as good as all-human care. That is, status accounts for a lot.

              I can also draw pictures of how dangerous humanoid care can be, as there is the possibility of a break in the chain of responsibility. If a human medical professional messes up, you (or your survivors) can sue and seek damages directly, as well as sue the hospital and insurance system (with mixed results).

              With humanoids? Currently, the bar is higher, as the entity being sued is not the hospital, nor a person, nor even a team. The only entities that can be addressed are the corporation that runs the hospital and the corporation that produced the humanoid. These two entities have an incredibly outsized advantage in terms of sheer delaying tactics, not to mention arbitration clauses and other legal innovations. Most of the injured will simply give up, which is a legal win for the two entities.

              In my opinion, humanoid care will take a large amount of time, damage, and treasure to lower the costs. No actor will willingly give up their cash flow. My view may be too strong.

            • threecheese 2 hours ago

              This is definitely a potential future state, but not one I could imagine happening soon. Given that the robots currently deployed do not benefit people directly (and even the indirect benefits of lower costs or better investment returns appear to be captured by the upper tiers of the economy), we have no confidence that they would be deployed to benefit anyone but their owners.

              More likely near-term states are less rosy, especially if intelligence takes off.

            • fatbird an hour ago

              The price is set by how much providers can extract, not by their costs to provide. It's not at all obvious that a vast reduction in their cost of labour would translate to price reductions.

              It's worth keeping in mind that in the U.S. the health marketplace is extremely complicated and cannot be analyzed with simple demand/supply graphs.

            • wak90 2 hours ago

              Lol no

        • sophacles 3 hours ago

          Well, in the US you get healthcare from a job (either directly in the form of insurance or indirectly in the form of the money to pay for healthcare). If the robot takes your job, it takes your healthcare too.

          You know this, stop pretending otherwise.

      • misiti3780 3 hours ago

        The narrative I'm hearing is that AI breakthroughs will drive the cost of healthcare to zero (i.e., AlphaFold etc.)

    • mghackerlady 3 hours ago

      People are apathetic at this point. When a large number of Americans can barely afford to live, threatened with replacement while the economy booms on the backs of their claimed obsolescence, they don't care that a billionaire could've gotten hurt, especially when that billionaire is working against their interests.

      • strongpigeon 3 hours ago

        I mean, it's also scary because I don't think it works. People should demand a new deal and lobby for that. Throwing molotovs doesn't help with that.

        • eschaton 3 hours ago

          What happens when lobbying for a new deal fails? Do the people just shrug and accept the fate their feudal lords have determined for them?

          • nxm an hour ago

            and what happens when people don't want a new deal? Violence is ok then?

            • lazyasciiart 22 minutes ago

              That's what the Pinkertons were for, yes.

        • pixel_popping 3 hours ago

          It clearly did open a discourse on HN at least :)

    • sophacles 3 hours ago

      You're just a smidge away from asking why they can't just eat cake...

      • ChoGGi 7 minutes ago

        I have some lovely brioche if you'd prefer.

        • rkomorn 5 minutes ago

          It is the more suitable replacement for bread, after all.

          Too bad she never said it, though.

      • strongpigeon 3 hours ago

        I think you're extrapolating a lot from my comment... One can reasonably think something has to be done to address the current (and upcoming) economic situation and think that molotov cocktails won't help. Acts like these will likely make things much worse before settling into a new situation that's probably just slightly worse.

        • GOD_Over_Djinn 6 minutes ago

          The legal system is owned from top to bottom by the ruling class. You will not be able to use it to loosen their death grip on society. They will not allow it.

          • malfist a minute ago

            And if owning the legal system isn't enough, they've also set up a shadow legal system, called arbitration, where they have even more control.

        • sophacles 3 hours ago

          Wondering why people might want to resist their lives becoming worse at all just so some assholes can gloat about how much richer they became is literally the same as asking why they can't just eat cake.

          Thinking something should be done means nothing is being done. The poor in France didn't start with bread riots. They begged and pleaded and asked nicely first, and while lots of people thought something should be done to help them, nothing was.

          Thank you for getting over the line.

          • kbelder a few seconds ago

            >...is literally the same as asking why they can't just eat cake.

            You are unequivocally wrong. You probably mean 'similar' instead of 'literally the same'.

          • bloppe 25 minutes ago

            Maybe this is a silly question, but why can't they just eat cake?

            • lazyasciiart 23 minutes ago

              If you’re genuinely wondering; it’s because cake is not a nutritionally complete food and will also not cure cancer.

              • bloppe 19 minutes ago

                I'm pretty sure it's in the cancer-curing section of the new food pyramid

          • strongpigeon 2 hours ago

            Being worried that people choose to channel their energy into actions that undoubtedly make their situation worse rather than have a chance of finding a solution is not the same. Or I guess it depends on how you decide to view things as being "literally the same".

            • sophacles 2 hours ago

              Worry is not an action that makes anything better.

              People will take actions when the threat is against their livelihood, health and homes, particularly when there is no action being taken on their behalf. Their risk assessment may be different than yours.

            • MiguelX413 2 hours ago

              They don't really have another choice, do they?

    • nothinkjustai 3 hours ago

      I don’t think it’s surprising - some people already consider the actions of AI execs and tech companies to be synonymous with violence. Comparing something like this to destroying the livelihoods of millions of people, a lot of people would consider the latter far worse.

      Temperature is certainly going up, but it definitely hasn’t reached historic levels yet lol.

    • schainks an hour ago

      People are coming to the logical conclusions that:

      - Some if not many jobs are at risk.

      - AI Psychosis is actively tearing apart families and communities, after social media and opioids have already had a pass.

      - Negative social outcomes are in the service of _making money_. Not money to pay taxes to fund a healthy society, but money for the people running these systems.

      Humans that lack community, safety, and purpose will embrace more drastic means of exerting control over their lives at the expense of others, no?

      It is probably safe to say the temperature has been firmly up for a while. And certain subsets of the population have come to trust their Dear Leader's embrace of violence as a solution, for sure.

      • whatever1 an hour ago

        Jobs were already lost because of AI capital investments. None of the hyperscalers had the cash flow to support the target investment levels, and they had to reduce labor.

    • GOD_Over_Djinn 14 minutes ago

      We can’t vote our way towards a better future. The corrupt MAGA and DNC institutions strangle any nascent grassroots movement in the crib. And we cannot make them relinquish their death grip on our country with only bare hands.

      Seriously shocked that this is the aspect of this moment in history that you choose to focus on, and not the absurd levels of violence perpetrated by the ruling classes against common people.

    • outside1234 12 minutes ago

      There was a rumor going around Silicon Valley that if ICE came to San Francisco in force that Mark Zuckerberg's house was going to go up in flames in retaliation. You will be surprised to learn that the oligarchs talked to Trump and they did not come.

    • stackghost 3 minutes ago

      Risking catching some strays for this, but ask yourself: if Mark Zuckerberg, Elon Musk, Sam Altman, Peter Thiel, Donald Trump, etc., all hypothetically got murdered tomorrow, would the world be worse off? Or would it be better off?

      Take a moment and really consider all the first and second-order consequences.

      Personally, I think it'd be better off without these guys working their asses off to fuck the rest of us over. By a long shot.

      The so-called American Dream has always been false, but now people are broadly and increasingly aware that busting your ass driving a delivery van, or working retail being abused by boomers your whole life, to make in a year what some grifter like Sam Altman makes in 37 minutes isn’t a future that gives you hope. We (society) are heading for the French Revolution, not Star Trek, and ironically it’s guys like Sam Altman pushing us there.

    • jmyeet 15 minutes ago

      I'm not saying throwing a Molotov cocktail is OK. It's not. I think most people are analyzing the incident as being indicative of the times we're living in, particularly with the warehouse fire.

      But where people are "OK with violence" is with state violence.

      State violence includes police violence (>1,000 people are killed every year in the US by police), prison violence, violently rounding up immigrants and putting them in concentration camps, criminalizing homelessness, denying people life-saving medical care, evictions while landlords collude to raise rents, genocide, sending random people to a maximum security prison in a foreign country (i.e., CECOT), mass shootings, going with a firearm to a protest to instigate an incident and get a legal kill, intentionally creating the opioid crisis, and so on.

      For a large number of people some or all of these incidents will get a reaction somewhere between "thoughts and prayers" and "no, it's good actually".

      Compare the state's reaction to one healthcare CEO being murdered with its reaction to the perpetrators implicated in the Epstein files. Epstein himself was known to authorities since the 1990s and got an absolutely sweetheart deal in 2008.

      So I'd say the real problem is what people view as violence and who's allowed to do it, seemingly without oversight or consequences of any kind most or all of the time.

    • gravisultra 42 minutes ago

      Here's the head of research at OpenAI saying "MORE. Don't stop." to the genocide of Palestinians. He still works there.

      https://x.com/QudsNen/status/1806729161840476598

    • Analemma_ 3 hours ago

      Altman keeps on telling people he’s going to take away their jobs. He says that because it gets cred in tech circles, but in America this is an existential threat, not much different from telling someone “I’m going to break your kneecaps”. Of course some subset of people are going to respond with violence.

      The sheer tone-deafness of AI marketing is going to come back to bite us very hard. This is probably just the beginning.

      • 2dfs 40 minutes ago

        Yep. Just wait until a large group of people (talking millions of people at once) lose their jobs. They will want someone to blame.

        And I have no sympathy because this joker has been pushing people to the edge with his hyping.

      • xienze 25 minutes ago

        Yeah, part of me thinks the reason we know all their claims are bullshit is that you’d have to be pretty dense to promise to eliminate >50% of jobs in many high-value sectors within 12-18 months and _not_ expect to create more than a few people who’d have nothing to lose…

    • plorkyeran 3 hours ago

      AI company marketing is pretty overwhelmingly "we're going to take away your job and leave you to starve on the streets". People concluding that the public face of this is their enemy who must be stopped is just a really unsurprising outcome.

      • rvz 3 hours ago

        That is what Ilya (and many other employees) (fore)saw.

        They did not want a target painted on their backs or being involved with the company responsible for mass job displacement.

        Let's hope that SF doesn't turn into a free-for-all after the IPOs, since the silliest thing would be for everyone to move to SF and buy up the houses, and then the have-nots realise who got rich.

        I'd donate that money away, or give the employees (who have nothing) a one-time bonus / raise like the Five Guys owner [0], so as not to be a target.

        [0] https://www.theguardian.com/us-news/2026/mar/27/five-guys-ce...

    • outside1234 15 minutes ago

      I don't condone it, but I understand the anger.

      The billionaire class has enabled armed masked police in our streets and endless layoffs, basically doesn't pay taxes at any reasonable percentage, and has rigged politics with Citizens United.

      Given that, I can see how people are resorting to 18th century French tactics.

      • seanlinehan 4 minutes ago

        The top 1% of income earners pay 40% of all the federal taxes collected. The top 25% pay 89% of taxes.

        Net of transfers, 60% of households receive more from government transfers than they pay in taxes.

        The idea that rich people don't pay taxes is just not correct. The entire system is basically rich people subsidizing everybody else through byzantine distributional systems.

    • DrProtic 39 minutes ago

      Maybe because people got used to violence being used against them?

      All this violence against the innocent in various places and levels, and you think it’s weird that people are fine with violence used against a billionaire conman?

    • oatmeal1 32 minutes ago

      People are okay with violence when democratic means (if first past the post even counts) do not solve their problems.

    • gorgoiler 42 minutes ago

      Flip it round: if you have $999,999,999 then would it not be rational to expect random violence against oneself? I’m not saying it’s justifiable, just that it is prudent to expect to be targeted by crazies.

      Flip it again: as a crazy, isn’t it reasonable to enact violence against Johnny Nine Nines? If he’s so innocent, how come his house is behind two security fences?

      To be a little more reductive: my house is made of gold bricks so I hired an extra-legal anti-marauder militia, but now the marauders see me as a fair fight because I chose extra-legal militia instead of cops and judges… game on and QED.

  • MontyCarloHall 3 hours ago

    I don't think most people in tech are quite aware of the level of visceral AI hatred amongst non-techies. I've personally witnessed the worst Thanksgiving dinnertable fight I've ever seen (after someone revealed that their recipe was AI-generated, a couple people literally spat out the food they were enjoying and threw their plates in the trash), and a divorce (a very solid marriage between two people who were once both staunchly anti-AI unraveled within weeks after one of them changed their tune and adopted AI at work).

    • tptacek 17 minutes ago

      I operate in at least one social circle that is heavily not-technical (local politics) and I do not see this at all.

    • lbarrow 3 hours ago

      Spitting your food out because the AI generated the recipe is so clearly irrational that I chuckled a bit on reading that

      • dirkc 3 hours ago ago

        People talk about AI getting things wrong all the time, why is it "so clearly irrational" to be doubtful of a recipe that might include ingredients that can make you sick?

        • VectorLock 3 hours ago ago

          Because I hope that someone whose hands were required to assemble the recipe didn't blindly add ingredients like "bleach" if the AI happened to hallucinate them.

          • stvltvs 3 hours ago ago

            A naive hope perhaps, but this ignores the risk of LLMs just creating a bad recipe based on the blind combination of various recipes in their training data.

            • VectorLock 2 hours ago ago

              As the parent comment said, the people seemed to be enjoying the food otherwise, so the LLM didn't create an unpalatable combination, and I can't think of any combination of edible, unharmful ingredients that would combine into something harmful (when consumed in reasonable amounts).

              • xmprt 5 minutes ago ago

                This is exactly what makes it dangerous. Food can taste ok but actually cause you to get sick. Not all bacteria is going to taste off. I'm assuming you're not a chef because if you were then you'd know how absurd your statement is.

                For a super simple example, if you don't properly handle or cook raw meat, you risk getting sick even though the food might not immediately taste bad. Maybe that's obvious to you, but it might not be to the person preparing the food. Another example: rhubarb pie is supposed to be made with the leaves and not the stalk, because the stalk is poisonous and can cause illness. Just kidding, it's actually the other way around, but if you were just reading a ChatGPT recipe that made that mistake, maybe you wouldn't have caught it.

              • psvv 11 minutes ago ago

                If meat was involved, the cooking time may have been unsafe if other precautions weren't taken by the cook (like checking the internal temperature).

        • defen 3 hours ago ago

          Let's take a second to think about the threat vectors here. The two obvious ones I can think of are "AI hallucinates and tells you to put non-food into the food" and "AI hallucinates and gives you unsafe prep instructions" (e.g. "heat the chicken to an internal temperature of 110 degrees"). For both of those, it's not clear why "random recipe from an internet blog" is safer than something the AI generates. At some level, if someone is preparing your food, you need to trust that they know how to prepare food, no matter where they're getting their instructions from.

          • daveguy 35 minutes ago ago

            Yeah, but I would trust a human writing a blog not to suggest heating chicken to 110°F, because the human writing the blog understands that they are taking responsibility for that recipe... The LLM doesn't have a clue about responsibility, except to regurgitate feel-good snippets about it.

            • newZWhoDis 24 minutes ago ago

              >because the human writing the blog understands

              Bold assumption

        • strongpigeon 3 hours ago ago

          Because it assumes the person actually making the food has no common sense?

          • therouwboat 3 hours ago ago

            We had a billion-dollar AI company install a vending machine that gave stuff away for free, so maybe AI users don't have common sense.

          • wpm 3 hours ago ago

            If they're asking an LLM for a recipe, they don't.

            • baggy_trough 3 minutes ago ago

              That's quite an assertion.

            • pixel_popping 3 hours ago ago

              My wife does it all the time, and it's actually decent.

            • bloody-crow 2 hours ago ago

              That's just pure nonsense. My partner is a very competent cook, and she invents new recipes and experiments all the time. I don't see why she can't use LLM output as inspiration, combining it with her own expertise, sense of taste, and preferences to come up with an excellent dish.

        • steve1977 3 hours ago ago

          People get things wrong all the time as well, so I wouldn't trust them either.

          • happytoexplain 3 hours ago ago

            People get things wrong in a different, more observable/predictable way. Sure, we are easily tricked dummies and we can't know if a human is right or wrong, but our human-trust heuristics are highly developed. Our AI-trust heuristics don't exist.

            • steve1977 3 hours ago ago

              I mean I had people serve me expired food and chicken that was half raw. The latter I could observe, the former I couldn't so easily. Both were things that could have made me sick.

              • happytoexplain 3 hours ago ago

                For sure. I'm not defending human perfection, I'm defending human caution (Disclaimer: The format of the preceding sentence was chosen without AI assistance).

        • mikestew 3 hours ago ago

          Dunno about you, but I like the increased viscosity in my sauces when I use glue:

          https://www.bbc.com/news/articles/cd11gzejgz4o

      • ikkun 3 hours ago ago

        I could see being concerned about food safety; I wouldn't trust an AI recipe to tell me how long/what temperature to cook chicken, and I might not trust someone who uses AI to generate recipes to know either.

        • ctoth 3 hours ago ago

          Hi! I love to cook! I also use AI to brainstorm recipes sometimes! Wanna try asking Claude, ChatGPT, Gemini, or even Grok what temperature chicken needs to be cooked to? I just asked Claude: 165°F (74°C) internal temperature.

          Where does this come from?

          • ikkun 2 hours ago ago

            If you ask that question alone, AI is most likely to get it right, but the usual pitfalls of AI apply: models sometimes randomly get things wrong, people are more likely to miss wrong information when it's surrounded by correct information, and LLMs are specifically good at producing text that seems correct on the surface. And in my experience, people often use AI precisely because they don't have much knowledge in an area. If you already know plenty about cooking, using AI is probably fine; I just see it as a red flag.

            Cooking is also a form of art, with a strong social aspect. Using AI for it has a similar ick factor to using generative AI for pictures. I'm not saying I immediately distrust anyone using it, but I do think it's a sign that the person may care a bit less about what they're doing.

          • miloignis 2 hours ago ago

            Arguably, that's wrong - not because it's unsafe, but because it's not the best temperature for any part of the chicken I know of. I'm a big J. Kenji López-Alt and Serious Eats fan, and 165 is too hot for good chicken breast and too cool for good dark meat: https://www.seriouseats.com/chicken-thigh-temperature-techni...

          • happytoexplain 3 hours ago ago

            I can't tell if you're criticizing the parent or are innocently asking how Claude knows the temperature for chicken.

            To be clear in the case of the former: Harm data points have approximately one trillion times the weight of no-harm data points, as a rule of thumb.

          • stvltvs 3 hours ago ago

            Even if it can give the right answer when asked, will it necessarily account for that in a recipe it generates? A beginning cook may not know enough to ask.

        • lbarrow 3 hours ago ago

          Yeah, I suppose that's fair regarding cooking times.

      • happytoexplain 3 hours ago ago

        I mostly agree that it's an overreaction. However, "irrational" is a really bad choice of word. Every non-technical person understands that sometimes AI says wrong things - like, random, crazy wrong things, not just a little off. It's just a general rule kept in the back of the mind. Food is easily in that realm of "be careful". Did the AI produce a recipe that would be harmful to you and the cook didn't notice? Almost certainly not. So, sure, they were being over-cautious. But "irrational"? No, no, no. It's definitely rational.

        Look at what you're writing.

        "Doing X is so clearly irrational that I chuckled a bit."

        Please don't perpetuate the image of the elitist techie. That is what was just firebombed.

      • pixel_popping 3 hours ago ago

        but was it done with GPT-5.4 xhigh with an adversarial loop?

      • layer8 3 hours ago ago

        I interpret it as an expression of disgust. Similar to how people will stop reading and throw away a good book when they learn the author is a morally reprehensible person.

        • wak90 2 hours ago ago

          Like, I wouldn't spit the food out.

          But I would be disgusted. Someone told me they planned their vacation with an llm and I couldn't help but express disdain for this friend of mine.

          Why are we outsourcing creativity and research and interest in discovery to an llm?

          • thevinter 2 hours ago ago

            Probably because the person wasn't interested in planning their vacation and wanted just to enjoy the end result?

            Let's not assume different people find the same parts of the process enjoyable.

          • bloody-crow 2 hours ago ago

            Really don't get this take. I really hate vacation planning and would outsource that part in a heartbeat. My partner does this for me currently and seems to enjoy it quite a bit, but if she didn't, the LLM-generated plans I've tried out of curiosity were just as good.

          • lostmsu 2 hours ago ago

            > Why are we outsourcing creativity and research and interest in discovery to an llm?

            This is also weird. I hate planning vacations, but I like going on them.

      • dvfjsdhgfv an hour ago ago

        Really? I can think of a few reasons I wouldn't trust AI-generated recipes.

      • misiti3780 3 hours ago ago

        lol. If you're against AI recipes, you have bigger problems.

      • ajross 3 hours ago ago

        The very fact that your takeaway from that story was "look at how dumb my enemies are" is why this is a conflict worth worrying about.

        Are you right? Yeah, basically. Are you going to laugh at your stupid neighbors until they burn your house down in rage? Maybe? You don't treat fear with malice.

    • TehCorwiz 3 hours ago ago

      Well, Sam Altman and Jensen Huang are going around bragging about how many people they're going to push out of employment. Might have something to do with it.

    • snielson 3 hours ago ago

      My wife runs a food blog and sometimes uses AI to come up with recipes she tests on us first. One of the best dishes she’s ever made (and one of the best I’ve ever eaten) was pork with an apricot sauce. The pork was fine, but the sauce was absolutely incredible! I’d put it on any kind of meat. Funny thing is, I don’t even like apricots, but the sauce was amazing. My wife does have one advantage, which is that she knows when the AI has hallucinated something crazy and makes appropriate adjustments. I guess it's like anything. AI can be a big help to those who already have a threshold level of background knowledge in a field but can cause big problems for those who don't.

      • layer8 2 hours ago ago

        You can’t write something like this and not share the recipe.

    • layer8 2 hours ago ago

      From a recent NBC News poll, “the only topics that were less popular than AI were the Democratic Party and Iran”: https://www.nbcnews.com/politics/politics-news/poll-majority...

    • happytoexplain 3 hours ago ago

      There is very strong anti-AI sentiment among "techies" too. It's just not absolute or generalized (AI is a huge umbrella term).

      • metalliqaz 3 hours ago ago

        You might call me a "techie," and I both use AI and have very strong anti-AI sentiment. I don't think this is a contradiction, because I believe that while the technology itself is not bad, the way people use it definitely is.

        People trust AI outputs in ways they should not. They don't understand its sycophantic design and succumb to AI psychosis. They deploy it in antisocial ways, for war, or spam, or scams. They use it to justify layoffs. They use it as a justification to gobble up public funds. They use it to power their winner-take-all late-stage capitalism economy. It goes on and on.

        • whimblepop an hour ago ago

          > I both use AI and have very strong anti-AI sentiment.

          Me, too. The AI hype machine involves some really bad ideas, the amount of money being poured into "AI" right now distorts everything, public understanding of how these tools work is low, and a lot of contemporary uses both by corporations and governments are irresponsible, dangerous, and likely to produce or reproduce harmful biases and reduce the accountability of humans for crucial decisions and outcomes.

          At the same time, it's useful for me at work, and I'm curious about it. I sometimes enjoy using it. It lets me do things I didn't have time for before. It eliminates some procrastination problems for me. I think its use in computing is also likely to be increasingly mandatory for the near-to-moderate term, so it's probably good for me to get used to using it and thinking about it and looking for new useful things it can do for me.

          And my own experiences in using AI are part of what drive my anti-AI sentiment as well! I see it do completely insane and utterly stupid things pretty much every day, both in my personal life and in my professional life. I have a visceral awareness of its unreliability because I use it frequently.

          I should hope that as hackers we can muster some understanding and respect both for LLM users and for people with hard "anti-AI" stances. Even if you're "pro-AI" to the core (whatever that means), it's worth understanding the most serious and well-considered arguments of critics of LLMs and the contemporary "AI" race. You might even find, as someone who uses and enjoys using LLMs, that you agree with many of them.

        • slopinthebag 3 hours ago ago

          I agree completely. The way it's marketed and used is a big part of my distaste, the other part is big tech / AI companies and their actions and ethics. It's why I'm a huge supporter of open source and locally run models, and I am moving most of my workflow to things that I can run on my own machine, or at least on a GPU that I can rent from a plethora of providers.

    • linkage 3 hours ago ago

      Politics really is a substitute for religion in America

      • kelnos 3 hours ago ago

        In secular America at least. Most people in the US are religious, many of them fervently so.

        And quite a few of them like to mix their religion with politics.

        • elephanlemon 3 hours ago ago

          Frankly I think a lot of these people are politics first. How else do you explain the dissonance between Jesus’s teachings and their political opinions?

          • MiguelX413 2 hours ago ago

            Their politics are perfectly in line with their Christian-themed cult.

            • lazyasciiart 18 minutes ago ago

              Yes but when they’re not, they choose politics. See: Catholics right now.

        • misiti3780 3 hours ago ago

          This is true, but thankfully, religion is declining in America. Although if people are replacing it with politics, maybe we need another revival.

      • leosanchez 3 hours ago ago

        Religious people can be anti-AI too.

      • MontyCarloHall 3 hours ago ago

        Indeed, but the rage I've seen during political fights at family gatherings (and another politics-induced divorce) pales in comparison to the rage I saw in these two anecdotes. The worst political debates I've seen involved raised voices and some name calling, not spitting food and smashing plates. The only other political divorce I've seen slowly simmered over a few years after Trump was first elected, not in a literal matter of weeks.

    • LooseMarmoset 3 hours ago ago

      From my own perspective, the "visceral hatred" isn't so much at AI (which I use almost exclusively to generate funny pictures of myself and coworkers) but at the executives that view it as a way to enshittify society.

      Turning myself (an overweight bearded guy) into an animated hula dancer, and turning my coworker into the Terminator sinking into molten steel, don't seem to inspire the same hatred. Unless you don't like hula dancers.

    • Kon5ole 3 hours ago ago

      The remarkable part of your anecdote is the behavior. Seems to me some humans nowadays are less tolerant of any difference in opinion, AI is just the current reason to pick a fight.

      Wonder why that is, and if we'll grow out of it peacefully.

      • lazyasciiart 16 minutes ago ago

        It’ll quiet down once we make having opinions we don’t like illegal, and/or grounds for being committed to an asylum, the way it was in the good old tolerant days.

      • bloody-crow an hour ago ago

        Nowadays? It's always been the case, the only thing that changed is the subject.

    • hnthrowaway0315 38 minutes ago ago

      TBH, people in AI may resent AI too, because they are the first to be impacted by it. They just don't say so openly, because frankly no one wants to lose their job.

    • newZWhoDis 26 minutes ago ago

      Portland?

    • sillyfluke 2 hours ago ago

      I must live in the upside down. If there are any ardent anti-AI people I come across they're techies. Whereas non-techies are either oblivious or completely and comically locked-in as caricatured in that South Park episode.

    • rishabhaiover 3 hours ago ago

      This was obviously a fictional Thanksgiving dinner. Nobody is this geezed up about AI assistance.

      • TripleTree 3 hours ago ago

        I would absolutely stop eating a meal if I learned AI was involved in creating it. I suppose I wouldn't literally spit it out but I wouldn't take another bite.

      • stvltvs 3 hours ago ago

        Nobody in your circle of friends/acquaintances perhaps.

        • rishabhaiover 2 hours ago ago

          You're okay with sitting in the rear seat of a car while it drives you around the city, though.

    • nothinkjustai 3 hours ago ago

      Not just non-techies. Plenty of techies share that same visceral hatred. Some of them even use these tools themselves, because it’s a complicated issue with nuances.

      • lamasery an hour ago ago

        Yep, all of us with a clue are keeping our traps shut at work, or even boosting it or slapping it onto projects that don't need it, because this is clearly one of those things where attempting to offer counsel and advice that's contrary to the way the MBA winds are blowing can only hurt your career.

    • throwanem 3 hours ago ago

      Surely there must have been underlying tensions in that marriage.

      (I don't feel at all confident in that statement; I am requesting reassurance.)

      • MontyCarloHall 3 hours ago ago

        They are pretty good friends of mine and I never sensed any tension. It really was a marriage-ending bolt out of the blue, like discovering an affair or severe financial infidelity.

        • throwanem 3 hours ago ago

          I don't really want to say "thank you." That story, more to the point that I can't find a priori cause to doubt it, makes me glad I'm about to go enjoy a gorgeous spring afternoon full of birdsong and sunshine. But I appreciate your taking the time to follow up.

          • gopher_space an hour ago ago

            I mean the simplest way to look at this is that he's just wrong about the couple being happy.

            • throwanem 39 minutes ago ago

              I was married for a decade. Little of that was happy. (We both made the mistake of marrying each other, then compounded it by both being afraid to be first to admit to having noticed.)

              Everyone noticed - and of course I've seen it from the other side, too, many times. You can't hide when people are together who don't want to be. That always shows.

              • lazyasciiart 15 minutes ago ago

                This is like saying that of course people could tell Ted Bundy was a psychopath, it always shows.

    • alfalfasprout 3 hours ago ago

      It's quite prevalent in tech too-- however, folks tend to be quiet because the "use AI for everything or else" hammer is being used across the industry.

    • lexandstuff 3 hours ago ago

      I've found that most non-tech people are indifferent or, at worst, utterly bored by any mention of AI.

      The tech people are the ones that have the strongest opinions one way or the other.

    • littlestymaar 3 hours ago ago

      > after someone revealed that their recipe was AI-generated, a couple people literally spat out the food they were enjoying and threw their plates in the trash

      Not entirely unwarranted given the track record of LLMs as a chef though:

      https://www.theguardian.com/world/2023/aug/10/pak-n-save-sav...

      https://www.bbc.com/news/articles/cd11gzejgz4o

      Of course it was two years ago and it's unlikely to happen again, but that's the drawback of the “move fast and break things” attitude: sometimes you've broken public perception and it's hard to fix afterwards.

    • therobots927 3 hours ago ago

      Most SV people live in a bubble inside of a bubble. They don’t understand how their words come across to a significant portion of the population. If they did they would shut the fuck up.

      • baal80spam 2 hours ago ago

        Not sure why you were downvoted so heavily. SV is a bubble if I've ever seen one.

    • rvz 3 hours ago ago

      Crypto doesn't get that much hatred because you don't need to participate in the space, even in non-techie circles. It doesn't affect them and can be safely ignored in its own bubble.

      Mentioning "AI" in non-techie circles is a bad idea. Many here are in a massive bubble, unaware of the visceral hate against AI from people it directly affects and who cannot opt out.

      Given that AI takes more than it gives back (jobs, energy, water, housing), of course you will get anti-AI activists.

      • layer8 2 hours ago ago

        Except when you’re the victim of ransomware that extorts you to pay some bitcoin. But it seems that fewer people have encountered that than having AI forced upon them.

    • mandeepj 3 hours ago ago

      > a couple people literally spat out the food they were enjoying and threw their plates in the trash

      That was an unnecessarily extreme reaction, as if the AI had 3D-printed the ingredients.

  • 0cf8612b2e1e 3 hours ago ago

    One thing I have idly wondered is how much the ultra-rich protect themselves from theft or kidnapping. Is it just not a real concern?

    If Taylor Swift owns a dozen homes, does she have full time security guards at each one? Or just accept some amount of burglary may occur? Do they go everywhere with a guard? Only to public events?

    • bombcar 3 hours ago ago

      It varies and they don't talk about it (obviously) but you can glean things from various sources. The more "public" the ultra rich are, the more they'll have security, especially noticeable security.

      The silent or unknown ones will often still have something (usually a requirement of their or their company's insurance).

      Once you graduate from "2, 3, 5 houses" to "mansions" you will have staff at each one, even if relatively bare-bones.

      • 2dfs 33 minutes ago ago

        Yeah, but they're useless if a large organised group shows up.

        • sleepybrett 11 minutes ago ago

          Hell, they'll probably join the mob instantly.

    • hnthrowaway0315 30 minutes ago ago

      For a start, they have bodyguards and rarely go out in public without the right protection. They also go to great lengths setting up security and cybersecurity (I know one who sets up so many hops between endpoints that Microsoft banned his account). Even most of their employees don't know where they are or where they plan to be, unless they choose to share it. Of course there's always a way to probe, but people who do random killings rarely have the skills or the mindset for that.

    • strongpigeon 3 hours ago ago

      I once knew a guy who used to be head of physical security for Bill Gates. Gates has bodyguards with him all the time and a sizable security team at his home in Medina. You wouldn't believe the number of lunatics who show up at his home unannounced and claim he promised them money (or that they're somehow related to him).

      • sleepybrett 4 minutes ago ago

        I once did a little project for the home in Medina. I never went on site, but I did visit the office of his property management company: dozens of people managing the properties, plus on-site staff for each, as well as, I think, bgc3 but not the B&MGF.

        To hear it from my coworkers who did go on site, the security was insane, and the media apparatus was insane (like a DVR for every channel running 24x7, so the family could call up whatever they wanted, wherever they were, at any time). This was back in 2010-ish, before the marriage blew up.

      • lamasery 42 minutes ago ago

        Well, look, they forwarded his email ten times as requested, so it seems pretty clear he does owe them money.

    • ciupicri 3 hours ago ago

      > accept some amount of burglary may occur?

      From https://edition.cnn.com/2025/05/13/entertainment/kim-kardash...

      > Kim Kardashian, testifying in the trial of the burglars accused of tying her up and robbing her at gunpoint nearly nine years ago, told a Paris court on Tuesday that she “absolutely thought” her assailants would kill her.

      > “I have babies, I have to make it home, I have babies,” Kardashian recalled pleading with the armed men, who had broken into her hotel room while she slept during Paris Fashion Week in 2016.

      > Facing her alleged attackers for the first time since the heist, the billionaire reality TV star detailed how she was robbed of nearly $10 million in cash and jewelry, including a $4 million engagement ring – gifted to her by her then-husband Kanye West – that was never recovered.

  • GlibMonkeyDeath 2 hours ago ago
    • tedd4u 21 minutes ago ago

      Trigger warning: AI animation of uncanny-valley Sam Altman "hydra"

  • niemandhier 6 minutes ago ago

    "Respice post te! Hominem te esse memento!"

  • sleepybrett 3 minutes ago ago

    We can only hope that when they reveal this guy's identity, he happens to have a name that overlaps with the Mario Bros. universe.

  • jorgonda 3 hours ago ago

    Putting millions of people out of work comes with consequences. We are going to see more and more of this.

  • therobots927 3 hours ago ago

    Think occupy Wall Street but cranked up significantly.

    That’s what’s coming. Like it or not.

    • linkage 3 hours ago ago

      I hope "cranked up" was a pun

  • rambrrest 3 hours ago ago

    This will only get worse, IMO. Regardless of how Sam is perceived, there is anger against AI growing among the people. I think we as a society need to stop, have the conversation, and be more thoughtful about how we integrate AI into everything.

    • pixel_popping 2 hours ago ago

      I don't think this is possible yet, because many people refuse to believe AI will eventually be better than us at practically anything (at least anything virtual). They keep talking about what's "current," which I think is completely irrelevant to that discussion; people need to assume extreme intelligence and orchestration tools (and robots) will be there, worldwide. It's a *fact*, not just a maybe.

      • toraway 13 minutes ago ago

        It is actually entirely possible to discuss a solution for something that may or may not happen. If a hurricane is approaching, we don't typically require every person to agree the odds of landfall are 100% before we start preparing shelters and stockpiling aid nearby. Not everything in the world is about the "AI skeptics" on the internet being dumb and wrong, unlike you.

      • classified 2 hours ago ago

        Your "fact" is pure vaporware and hallucination.

        • pixel_popping 2 hours ago ago

          Let's talk about it again in 5 years. But 1-2 years from now, at the very least, coding will be over, in the sense that the best models will do it better than the best humans (or better than 99.99% of them). I don't think I'm hallucinating: my own work went from coding, managing, and a bunch of other stuff to just orchestrating, and my output is insanely higher. I literally have a bunch of friends who went from coding 8 hours a day to just "pretending to code," using a bunch of agents and getting paid the same salary for working 30 minutes a day. That's real, not a hallucination.

          • classified 2 hours ago ago

            > in 5 years

            That's literally the same argument that the blockchain gurus made, and each following year it was still 5 years in the future. I'm getting strong Real Soon Now™ vibes.

            • cleversomething 35 minutes ago ago

              Bitcoin was never actually valuable for the average person, except if they got lucky timing the speculation bubbles right, or if they were buying illegal drugs online.

              Lots of AI tools already add real value, and they're only getting better. Every software dev I know uses Claude at some level. Whether it will be the next trillion-dollar unicorn may be overhype, but in terms of demonstrating general utility, it's already there. No need to wait 5 years.

              • pixel_popping 15 minutes ago ago

                It's really two very different things; only the "shilling" might be déjà vu.

            • pixel_popping 2 hours ago ago

              Come on, that's very different. This is something current, with practical use cases that are already being implemented across companies. I don't even know why we compare this with blockchain; blockchain is just some fancy resilient DB with proofs, in the end.

  • josefritzishere 3 hours ago ago

    My first thought was false flag. Is that too cynical?

    • foota 3 hours ago ago

      I would go for out of touch, not cynical. A lot of people really think AI is the devil.

      • risyachka 3 hours ago ago

        It will be hard to convince them otherwise when their jobs are replaced by AI and they are in their late 40s or older, with no time to adjust and learn a new craft.

    • polotics 3 hours ago ago

      Possible, but unlikely. To organise such a stunt and stay undetected, you're going to need a better consigliere than what Sam's got, I presume.

      • josefritzishere 2 hours ago ago

        Like another commenter wrote... anyone can cast a fireball. Sam has been called a sociopath by many who know him personally. So it seems more likely than it might be otherwise.

    • ReptileMan 2 hours ago ago

      Nope. So was mine.

    • stevenwoo an hour ago ago

      It kind of fits with the behavior he exhibited as reported by Farrow in New Yorker article.

  • Teever an hour ago ago

    I'm going to be blunt about this.

    We're going to see the ultrawealthy become targets of drone attacks conducted by people who have terminal illnesses and nothing to lose.

    I predict we'll see a movement start where people diagnosed with a fast-acting terminal illness, one that gives them a few weeks to months of relatively high functionality followed by a quick decline (say, a brain tumour), decide to kamikaze against the people they feel have gravely wronged them and their kin.

    People will use something like this[0] to evade detection but won't really give a shit if they get caught because they'll be dead in a few months.

    Even if they don't have access to such technology, they can always just use a firearm, as we've seen with the attempts on Trump, Charlie Kirk, and that healthcare CEO, with varying degrees of success.

    I'm amazed that Peter Thiel is giving talks about the antichrist at the Vatican. I've seen relatively recent videos of him walking down the street with only a security guard or two[1], and they seem completely unprepared for any sort of attack on them from someone with a firearm or a drone.

    It's like these people genuinely don't understand how destructive their actions look to society, and the bubbling resentment and rage that is growing towards them.

    I'm not sure what the defense against such a movement is. I guess maybe fixing wealth inequality and giving people at least the impression of greater participation in our democratic system?

    This[2] is the vibe right now and it's only growing stronger by the day.

    [0] https://www.youtube.com/watch?v=qrZ1aH5gtMU

    [1] https://www.youtube.com/shorts/pGHIplhJ8Ek

    [2] https://genius.com/25966434

    • stevenwoo an hour ago ago

      Ministry of the Future beat you to the punch with victims of human driven climate change shooting down thousands of private planes with drones as protest.

    • camillomiller 30 minutes ago ago

      >> We're going to see the ultrawealthy become targets of drone attacks conducted by people who have terminal illnesses and nothing to lose.

      oh nooooo. anyway

  • fredgrott 3 hours ago ago

    How to tell it's not AI or AGI... it throws a Molotov cocktail...

    • pixel_popping 3 hours ago ago

      Yeah, Unitrees wouldn't aim that well.

  • SilentM68 3 hours ago ago

    Hmm, that's troubling but predictable.

    The idea that AI will bring an age of abundance may be true, but not in the short term. Companies are letting people go, and AI will be blamed for that, whether true or not. The public perception that most Tech Bros prioritize profits over the wellbeing of the little guy has been well established for decades, and in my view it is, in some cases, well deserved, with no accountability.

    It's looking like AI will generate a modern version of the early-1800s Luddite Rebellion, in which British textile workers destroyed machines that displaced their jobs and prioritized factory owners' profits over workers. They targeted technology and industrialists.

    Tech Bros can avoid this by modifying their priorities: prioritize employee rights and lobby governments to begin implementing some sort of Universal Basic Income, or otherwise provide the means by which people can survive, or the government may start marketing Soylent Green to consumers :(

    • whimblepop 3 hours ago ago

      > It's looking like AI will generate a modern version of the early 1800s Luddite Rebellion where British textile workers destroyed machines that displaced jobs, prioritizing factory owners' profits over workers. They targeted technology and industrialists.

      It's worth remembering that the way that ended was extremely bloody, particularly for the Luddites themselves. There were a handful of extreme participants, there was a murder, and there was a hell of a lot of violence directed at anyone perceived as a Luddite— even though most actual Luddites themselves mostly avoided violence against other humans.

      It would be good if we can somehow avoid such outcomes this time.

      • SilentM68 an hour ago ago

        Greed drives most of the current crop of Tech Bros.

        I once had the chance to be a Bro, far richer than any of the current ones, thanks to the still secretive and anonymous "original-sn-adjacent cryptographic collective". Things, however, did not work out in my favor, thanks to other nefarious third-party actors. So I know whereof I speak.

        Any outcome is in the hands of the Tech Bros but by the looks of it, greed drives their every action, so things are not looking good!

        :(

  • EGreg 3 hours ago ago

    I've been saying for years on here...

    to the people on HN who are against blockchain but bullish on AI

    With blockchain and smart contracts, or even stupid memecoins, you can only lose what you voluntarily put in. You had to jump through a few hoops; then maybe you got rugpulled, maybe you became a millionaire.

    With AI, regardless of whether you consented or not, you can lose your job, and gradually your relationships and sense of purpose. And if some malicious actors want to weaponize it against you, you can lose your reputation and your freedom, get hacked at scale, and much more. The sooner we give biolabs to everyone, the sooner someone can create an advanced-persistent-threat virus online infecting every openclaw machine, or a designer virus with an incubation period of half a year.

    And I know what someone on here will always say. There will always be a comment to the effect of "this has always existed, AI is nothing new". But quantity has a quality all its own. Enjoy your AI slop internet dark forest. Until you don't.

    • Centigonal 25 minutes ago ago

      Is your definition of bullish "believes the technology will be widely adopted across society and accrue significant wealth to its owners?" - if so, I think it's very clear how someone could be bullish on AI and not blockchain. You don't have to like AI to see it as an inexorable transformer (ha!) of society and wealth.

      Is your definition of bullish "believes the technology is a major net good for society?" - if so, you're comparing two technologies with significant social aspirations that come from very different philosophical backgrounds. While both are techno-optimist, Blockchain is a fundamentally libertarian technology, while generative AI comes from a more utilitarian, capital-focused background. People who value individual freedom above all else will get excited about blockchain and feel mixed-to-negative about AI, while people who want to elevate the overall capability of the human race to the exclusion of anything else will get excited by AI and see blockchain as a parlor trick.

  • nickvec 3 hours ago ago

    https://archive.ph/aoXIY

    @dang didn't see this post before posting the archive.ph link at https://news.ycombinator.com/item?id=47722344 - feel free to delete/merge that thread with this one

  • rvz 3 hours ago ago

    The problem here is that there are no viable solutions to what happens when AI eventually replaces (yes replaces) tens of millions of humans in white collar roles.

    All that is being "promised" are vague claims of "abundance". But all I see is this:

    "AGI" is going to bring abundance of lots of very angry people and UBI to no-one (because it can never work at a large sustainable scale).

    Some people are starting to realise that "AGI" was a grift and a scam, and they are not happy about the lie. The insiders knew it, and increased their spending on security and private bodyguards accordingly.

    • operatingthetan 3 hours ago ago

      I don't think LLMs will produce AGI, just based on how context windows work, the prompt cycle, etc. LLMs aren't out there thinking about stuff in their spare time. The way they appear to have thoughts and a psyche is purely an illusion.

      • andsoitis 3 hours ago ago

        > LLMs aren't out there thinking about stuff in their spare time.

        Agentic changes the calculus.

        • operatingthetan 3 hours ago ago

          Explain how? Even if you are using crons or heartbeats to reactivate the model, it is still dependent on context windows that are quite small. With frontier models I still have to remind them how stuff works: stuff they forgot, places they focused on the wrong thing, etc.

          Also every AI company is motivated to have us use their models _just enough_ to want to pay for them, but not more than that.

      • fooqux 3 hours ago ago

        Something I often think about is how we can barely define what AGI, consciousness, etc are. We may be pretty sure that what we have currently is an illusion, but at which point is the illusion good enough that it no longer matters? Especially with regards to my first question.

        It's hard to say it's not X when we can't really define X.

        • ethanrutherford 3 hours ago ago

          I would personally argue that it's a lot easier to say something definitely isn't x, with confidence, than to say it definitely is. I definitely don't know what the surface of jupiter looks like, but I can pretty confidently say it doesn't look like Kansas. I think the better it gets, the easier it will be to spot the shortcomings, because the gap between what it can do well and what it can't will widen. Anything the technology is fundamentally incapable of ever achieving will be made obvious by the fact that it will simply continue to not achieve it. We may not be able to easily define the totality of what exactly it needs to have to count as AGI, but the further it progresses, the easier it will be to point out individual things it's definitely missing.

        • operatingthetan 3 hours ago ago

          I'm not saying we can't build it, but what we have right now certainly is not it. Right now context is just a bunch of text. Surely the human mind's context resembles something more like a graph database. What if we could use a database for context?
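
          The graph idea above can be sketched minimally. Everything here (the class name, the fact IDs, the API) is hypothetical illustration, not any real system; the point is just that retrieval pulls a connected neighborhood of facts rather than a flat window of recent text:

```python
# Hypothetical sketch: conversational memory as a small graph.
# Nodes hold facts; edges link related facts, so recalling one
# fact can also surface its neighbors out to a chosen depth.
from collections import defaultdict

class GraphContext:
    def __init__(self):
        self.facts = {}                # node_id -> fact text
        self.edges = defaultdict(set)  # node_id -> related node_ids

    def remember(self, node_id, text, related=()):
        """Store a fact and link it (bidirectionally) to related facts."""
        self.facts[node_id] = text
        for other in related:
            self.edges[node_id].add(other)
            self.edges[other].add(node_id)

    def recall(self, node_id, depth=1):
        """Collect a fact plus its neighborhood up to `depth` hops away."""
        seen, frontier = {node_id}, {node_id}
        for _ in range(depth):
            frontier = {n for f in frontier for n in self.edges[f]} - seen
            seen |= frontier
        return [self.facts[n] for n in sorted(seen) if n in self.facts]

ctx = GraphContext()
ctx.remember("build", "The project builds with make")
ctx.remember("tests", "Tests live in tests/", related=["build"])
ctx.remember("ci", "CI runs the tests on push", related=["tests"])
print(ctx.recall("ci", depth=1))  # CI fact plus its direct neighbor
```

          A flat context window would instead hand the model the last N tokens verbatim; the tradeoff is that graph retrieval needs some mechanism (here, hand-written `related` links) to decide what connects to what.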

      • booleandilemma 3 hours ago ago

        It doesn't have to produce AGI and it could still ruin the lives of millions of people. Our society isn't ready for that kind of shock. We can't all be instagram influencers.

  • linkage 3 hours ago ago

    It's funny how he has become the face of AI amongst low-information luddites, while Dario and Demis are under the radar.

    • smt88 3 hours ago ago

      > face of AI amongst low-information luddites

      This is condescending and unfair. Altman, OpenAI, and the media have spent years making Altman the face of AI. His company has (by far) the largest market cap, does the most deals, and has the most users.

      I suspect Anthropic/Claude will become as much of a household name as ChatGPT, but it's not even close yet. ChatGPT is almost a generic term for AI chatbots at the moment.

      • linkage 3 hours ago ago

        You're conflating MAU with economic relevance. The overwhelming majority of ChatGPT users are brokies on the free tier who use it for simple questions, like their homework assignments or relationship advice.

        Anthropic, by contrast, is about to release a model so powerful that Scott Bessent and Jay Powell convened an emergency meeting just a few hours ago with the CEOs of America's biggest banks. They are forming contingency plans for the effects Mythos is going to have on the financial markets. Anthropic is also far more consequential to the job market, since it's the biggest and most sophisticated player in the B2B space. And of course, Anthropic has a higher ARR than OpenAI.

        • 2dfs 29 minutes ago ago

          Lol, Scott and Jerome have no idea about the underlying tech. It's a nice hype storm for Anthropic.

        • smt88 an hour ago ago

          > You're conflating MAU with economic relevance.

          No, I'm not. Are you unable to make a point without being condescending and assuming the worst of people you disagree with?

          > The overwhelming majority of ChatGPT users are brokies on the free tier who use it for simple questions, like their homework assignments or relationship advice.

          Yes, I know. That's my whole point.

          The "best" or most advanced product in any tech category is really "the face" of that category. See also: cars, video game consoles, audio equipment, etc.

        • MiguelX413 3 hours ago ago

          I think their point stands.

      • PunchTornado 2 hours ago ago

        An impostor is an impostor, no matter what the media makes of them. Tbh, it's OK that the plates break into his head, since he has done so many bad things previously; he deserves it.

  • boznz 3 hours ago ago

    I guess this is what we get when the media and politicians go all in with their AI populist hate. I don't think I've seen a positive AI headline outside of the tech press, and even then they are pretty thin. Abundance and growing the pie for everyone is also an outcome if this is done right.

    • acdha 21 minutes ago ago

      > Abundance and growing the pie for everyone is also an outcome if this is done right.

      That’s like saying we don’t need minimum wage or unions because companies choosing to treat workers with respect is also a possible outcome. It’s technically true but once you go from “is this theoretically possible?” to “is this likely?” it becomes obvious that the answer is no. Most of the big AI backers are openly salivating at destroying millions of jobs, and they’re already evading taxes now so they’re not going to be funding UBI willingly — and if you have any doubt, look at where their political spending goes, consistently to the people who are doing their best to remove what small taxes they’re still paying and declaring war on the concept of regulated markets.

    • lexicality 3 hours ago ago

      > Abundance and growing the pie for everyone is also an outcome if this is done right.

      Do you genuinely believe there's any chance that's going to happen?

      • boznz 2 hours ago ago

        I do, because the alternative is unthinkable.

        • impossiblefork 3 minutes ago ago

          Why do you think that the fact that the alternative is unthinkable is a reason it won't happen?

        • nickvec an hour ago ago

          I would argue that "abundance and growing the pie for everyone" is even more unfathomable given how things are structured currently. The wealth gap will continue to widen until something gives.

          • DrProtic 34 minutes ago ago

            Can’t believe your comment is being downvoted.

            Covid clearly showed how crisis can only benefit the rich and powerful.

            AI being used to cut the headcount can somehow be, good? It will just fill the pockets of the powerful.

        • array_key_first 23 minutes ago ago

          Well then given that one side is "the situation remains neutral or very slightly improves" and the other side is "unthinkable atrocities", I think it's only rational to focus on the "unthinkable atrocities" part. Ideally, we should be focusing all our energy into making sure that doesn't happen.

      • senordevnyc 2 hours ago ago

        Looking at the last few hundred years of our civilization, absolutely!

        • MiguelX413 2 hours ago ago

          Lol

          • senordevnyc 2 hours ago ago

            Substantive.

            Try this, I'm genuinely curious: if you were going to be born as a random human somewhere on earth, what year would you prefer that to happen?

            • gazebo2 20 minutes ago ago

              oh god are we really doing this? just ignore the accelerated decline of virtually the entire world because we have medicine and Netflix?

            • cleversomething 40 minutes ago ago

              If I can choose the US specifically, then sometime in the 1950s. I think Baby Boomers in the US will go down as the single luckiest generation on earth in terms of socioeconomic opportunity.

    • archagon 28 minutes ago ago

      I think the media and politicians are reflecting popular sentiment, not the other way around.

    • mghackerlady 3 hours ago ago

      Or, hear me out: people are just sick of it? They don't care that their masters are sniffing each other's AI-powered farts to keep the economy afloat on the promise of their obsolescence. Sure, in theory it could be good for them; they could get more work done quickly. But why would they be kept alive if their owners no longer need to rely on them? The ideal business has no expenses, and workers are one of those. Combine that with everything being shit nowadays, and yeah, I can't blame whoever did this.