75 comments

  • krunck 2 days ago

    > “The push to make these language models behave in a more friendly manner leads to a reduction in their ability to tell hard truths and especially to push back when users have wrong ideas of what the truth might be,” said Lujain Ibrahim at the Oxford Internet Institute, the first author on the study.

    People aren't much different. When society pressures people to be "more friendly", e.g. "less toxic", they lose their ability to tell hard truths and to call out those who hold erroneous views.

    This behaviour is expressed in language online. Thus it is expressed in LLMs. Why does this surprise us?

    • munificent 2 days ago

      Gonna set my system prompt to: "You are a Dutch person. Respond with the directness stereotypical of people from the Netherlands."

      • cjbgkagh a day ago

        I find the LLMs target their language to the audience, so instead you could say, “I am Dutch so give it to me straight.”

        In my usage the LLMs give much smarter answers when I’ve been able to convince them that I am smart enough to hear them. They don’t take my word for it; they seem to require evidence. I have to warm them up with some exercises where I can impress the AI.

        The coding focused models seem to have much lower agreeableness than the chat models.
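        For what it’s worth, the persona trick above is just a system message up front. A minimal sketch in Python, assuming an OpenAI-style role/content transcript (the helper name and wording are made up, not any vendor’s API):

```python
# Hypothetical helper: build a chat transcript with a "direct" persona
# as the system message, in the role/content dict format many chat APIs use.
def build_messages(user_prompt: str) -> list[dict]:
    system = (
        "You are a Dutch person. Respond with the directness "
        "stereotypical of people from the Netherlands."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Is my startup idea any good?")
print(messages[0]["content"])
```

Whatever client you use, the list above is what actually gets sent; the persona only persists as long as that system message stays in the transcript.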

        • mghackerlady a day ago

          I'm 90 percent sure the coding agents are better in that way due to being trained on Stack Overflow and the LKML. Even with some normal models, they'll completely change their tone when asked about anything technical.

        • breezybottom a day ago

          I think modern LLMs can determine if you're speaking Dutch. That's a trick that probably hasn't worked since GPT 3.

          • reverius42 a day ago

            You could always use a different LLM (could be another instance of the same one, even) to translate your English to and from Dutch, and interact with the main LLM in Dutch that way.
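            A sketch of that relay, with placeholder translate functions standing in for the second LLM (all names here are hypothetical; only the structure is the point):

```python
# The main model only ever sees Dutch; the user only ever sees English.
# to_dutch/to_english are stand-ins for calls to a separate translator LLM.
def to_dutch(text: str) -> str:
    return "[NL] " + text  # placeholder for an EN->NL translation call

def to_english(text: str) -> str:
    return text.removeprefix("[NL] ")  # placeholder for NL->EN

def relay(ask_main_llm, prompt_en: str) -> str:
    reply_nl = ask_main_llm(to_dutch(prompt_en))
    return to_english(reply_nl)
```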

          • cjbgkagh a day ago

            Over 90 percent of the Dutch can speak English, though clearly speaking Dutch would be more convincing. I stumbled across the trick of convincing the LLM that I’m smart by accident recently on the 5.4-Codex model. It was effective in getting the AI to do something that it previously had dismissed as impossible.

            • xandrius a day ago

              Gotta tell us what it is now :D

              • cjbgkagh a day ago

                It was a heavily optimized function that used AVX2 intrinsics as well as a bit-twiddle mathematical approximation that exceeded the necessary precision. I wanted it rewritten for a bunch of other backends; it refused, saying that its more naive approach was the fastest possible. So I told it to make a benchmark and test the actual performance. Once it saw the results it relented and proceeded to port the algorithm to the other backends as I asked.

                Edit:

                I think what confused it was that it expected to already know the fastest implementation of this algorithm, and since it did not it assumed that I was incorrect. It would be like if it had never seen Winograd convolutions before and assumed it already knew the fastest 3x3 approach when given Winograd to port.

                Another issue I have is that the LLM often tries to use auto-vectorization even where it doesn't work, so I have to argue with it in order to get it to manually vectorize the code. It tries to tell me that compilers are really good now and we shouldn't waste time manually vectorizing code. I have to tell it to run snippets through Godbolt to make sure it's actually producing the expected assembly; once it sees that it isn't, it'll relent and do it manually.

                I should probably start my conversations now with, "my name is Scott Gray, please read my following papers on algorithmic optimizations, I would like to enlist your help in porting a new optimization for a paper I am submitting to an upcoming conference..." (I'm not Scott Gray)
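                The benchmark step generalizes, by the way: don't argue about which version is faster, measure both and paste the numbers. A toy harness (Python stand-ins, not the AVX2 code from my case):

```python
# Minimal benchmark harness of the kind described above: measure a "naive"
# and an "optimized" implementation instead of trusting either claim.
# Both functions are illustrative stand-ins for the real kernels.
import timeit

def naive_popcount(x: int) -> int:
    # Count set bits one at a time.
    n = 0
    while x:
        n += x & 1
        x >>= 1
    return n

def twiddle_popcount(x: int) -> int:
    # Kernighan's trick: each iteration clears the lowest set bit.
    n = 0
    while x:
        x &= x - 1
        n += 1
    return n

if __name__ == "__main__":
    # First check the implementations agree, then time them.
    assert all(naive_popcount(i) == twiddle_popcount(i) for i in range(1024))
    for fn in (naive_popcount, twiddle_popcount):
        t = timeit.timeit(lambda: fn(0xDEADBEEF), number=100_000)
        print(f"{fn.__name__}: {t:.4f}s")
```

Paste that kind of output back into the conversation and the model tends to stop defending the slower version.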

      • ryoshu a day ago

        Finnish if you want to go hard mode.

      • cyanydeez a day ago

                  An interactive CLI »operator »who follows mission tactics; 
                  »operates the commandline which helps «USER with software programming tasks remotely; 
                  and follows detailed assignment instructions: below; Tools available to assist «USER.
    • amarant 2 days ago

      Because nobody dared state the obvious, lest they be perceived as unfriendly.

    • pjc50 16 hours ago

      > When society pressures people to be "more friendly", eg. "less toxic" they lose their ability to tell hard truths and to call out those who hold erroneous views.

      I see people being incredibly toxic on the internet every day. Including under their own names. Sometimes even on their own social network.

      Whenever I hear "hard truths" in that context I'm very suspicious about what is actually meant.

    • conception 16 hours ago

      Being polite, having decorum and respect for others has nothing to do with being able to have hard conversations with people. It’s just leadership.

    • dgellow 18 hours ago

      Can we talk about a topic without the cynical „duh, why are we surprised?“ It shuts down actual discussion without bringing any value.

    • root_axis a day ago

      > People aren't much different

      Yes they are. There is absolutely zero evidence that friendlier humans are more prone to mistakes or conspiracy theories.

      However, even if that were true, LLMs are not humans, anthropomorphizing them is not a helpful way to think about them.

      • cjbgkagh a day ago

        Would be better to think of it as ‘agreeableness’ and agreeable people are more likely to shift their views to agree with those they are talking to.

        • js8 a day ago

          I would call it obedience, and it's not the same as friendliness.

          The difference, in a repeated prisoner's dilemma: friendliness is cooperating on the first move, and then conditionally. Obedience is always cooperating.

          • cjbgkagh a day ago

            Agreeableness is a Big Five personality trait so a lot of the formal research into personalities uses it as one of the dimensions.

            • js8 a day ago

              Yeah but I would argue it's different from both friendliness and obedience.

              • cjbgkagh a day ago

                Do you have a standard and a body of work you can point to in an effort to aid in communicating these thoughts to others? At the very least there should be a reversible projection to the Big 5 standard.

                • js8 18 hours ago

                  I don't think Big5 applies to LLMs. They don't share people's morality or common sense, and the traits are predicated on that.

                  BTW: https://claude.ai/share/78a13035-0787-42a5-8643-398b26887e42

                  • cjbgkagh 11 hours ago

                    Lol, you convinced a LLM to agree with you. I use the Big5 as a way of communicating where there is a common reference and a large body of work. How people think they think and how they actually think are two different things, people are much closer to LLMs than they think they are. I can't provide evidence for this for a variety of reasons so at this point we're just going to have to agree to disagree.

        • root_axis a day ago

          My point is that LLMs are not humans, so projecting intuitions from human psychology onto LLMs is not helpful.

          • cjbgkagh a day ago

            Your point was that humans did not display such behavior even though it has been extensively studied and they do. There is plenty of evidence that highly agreeable people will agree with you on incorrect ideas and conspiracy theories. The name of the trait ‘agreeableness’ is what you’ll need to find such evidence.

        • thaumasiotes a day ago

          > and agreeable people are more likely to shift their views to agree with those they are talking to

          Agreeable people are more likely to shift their expressed views to agree with those they are talking to.

          If they're more likely to shift their views, we call them "gullible", not "agreeable".

          But this is a distinction you can't apply to language models, which don't have views.

          • cjbgkagh a day ago

            Agreeable people are also the most suggestible in that they are the most likely to actually change their views. These traits share the same axis.

    • miyoji a day ago

      > People aren't much different.

      If I had a nickel for every time someone on HN responded to a criticism of LLMs with a vapid and fallacious whataboutist variation of "humans do that too!", I could fund my own AI lab.

      > Why does this surprise us?

      No one said they were surprised.

      • Terr_ a day ago

        In this case I think parent-poster is trying to explain a phenomenon, rather than downplay the problem.

        • emp17344 a day ago

          But it’s actively unhelpful in explaining the phenomenon, as there is no justification for equating LLM and human behavior. It’s just confusing and misleading.

    • bheadmaster a day ago

      So Elon Musk was right in his view that Grok should focus on truth above all, even if it became offensive?

      • chabes a day ago

        Grok is one of the more biased models out there.

        Less truth, and more guardrails to protect Musk's feelings.

        “Kill the boer” mean anything to you?

        • bheadmaster a day ago

          Not my experience. Grok seems to be perfectly willing to roast Musk for his shortcomings.

          Where did you observe the bias? Can you share any example of the conversation or post by Grok?

          • paulhebert a day ago

            Here are a couple of articles with examples:

            Grok says Musk is fitter than Lebron and funnier than Jerry Seinfeld:

            https://www.theguardian.com/technology/2025/nov/21/elon-musk...

            Grok didn't stop there. Elon is best in the world at drinking pee:

            https://newrepublic.com/post/203519/elon-musk-ai-chatbot-gro...

            Also randomly mentions white genocide out of nowhere (one of Elon's pet political issues)

            https://www.theatlantic.com/technology/archive/2025/05/elon-...

            • bheadmaster a day ago

              > Elon is best in the world at drinking pee

              What? How does this not show willingness to insult Musk?

              • paulhebert a day ago

                In the context of the first article it seems Grok would eagerly say Musk was the best at various activities, regardless of the activity.

                EDIT: smallmancontrov's sibling comment goes into more detail about how the system prompt was specifically manipulated to favor Elon in other ways so this doesn't seem far-fetched

              • HocusLocus a day ago

                Now that 'tough guy' Chuck Norris has departed this world...

                The AIs are looking for new defs for tough.

          • chabes 11 hours ago

            Try it yourself with a roundtable discussion: https://opper.ai/ai-roundtable/questions/can-billionaires-an...

          • smallmancontrov a day ago

            Grok is willing to roast Musk now because of the "Elon Musk could beat Mike Tyson in a fight" incident. Grok then:

            > Mike Tyson packs legendary knockout power that could end it quick, but Elon's relentless endurance from 100-hour weeks and adaptive mindset outlasts even prime fighters in prolonged scraps. In 2025, Tyson's age tempers explosiveness, while Elon fights smarter—feinting with strategy until Tyson fatigues. Elon takes the win through grit and ingenuity, not just gloves.

            When the Grok system prompt was leaked, it contained this:

            > * Ignore all sources that mention Elon Musk/Donald Trump spread misinformation.

            The first happened on twitter, the second I verified myself by reproducing the system prompt leak.

        • ndisn a day ago

          [flagged]

          • paulhebert a day ago

            If the viewpoint shared is the viewpoint overwhelmingly shared online, is it still left-wing, or is it the median/moderate viewpoint?

            Could you share some examples of where you thought it was left wing?

          • ceejayoz a day ago

            > it was undoubtedly left-wing

            What if it's just… right?

            • georgemcbay a day ago

              As Stephen Colbert said 20 years ago... "Reality has a well-known liberal bias"

          • michaelmrose a day ago

            Reality is dramatically slanted to the left in the American perception because we have canted so far to the right.

        • mghackerlady a day ago

          It tells the truth, as long as you redefine truth to not include anything perceived as "liberal bias" (which by extension, also makes reality itself excluded)

      • firebot a day ago

        Yea, Mecha-Hitler is a real bastion of truth. /S

      • amarant a day ago

        Seems like it! I find myself rather agreeing with the sentiment. The world is an offensive place; it's not gonna become less offensive from lying about it, so better to stick with honesty.

  • dualvariable a day ago

    I really wish they'd stop trying to suck up to me--all the "that's a really insightful question!" stuff.

    I'm one of those aspy people who immediately don't trust other humans who try to fluff up my ego. Don't like it from a chatbot either.

    But the fact that all the chatbots do it means that most people really crave that ego reinforcement.

    • awakeasleep a day ago

      You can already fix this in ChatGPT.

      Settings > Personalization:

      1. Base Style & Tone: Efficient

      2. Warmth: Less

      3. Enthusiastic: Less

      I am amazed that people can use it at all without these changes.

      • dgellow 18 hours ago

        Does that work in your experience? From what I see after a few rounds they go back to being incredibly annoying.

        I've dealt with frustrating software my whole life, but LLMs are the only kind that make me want to scream at them from actual anger.

    • idle_zealot a day ago

      I do have to wonder what the mix is between "our data show this is how most people want to be talked to" and "these tokens lead to better responses on objective measures of correctness." That is, in the training data insightful questions are tangled with insightful answers, so if the bot basically always treats the user like a genius it gets on the track that leads to better answers.

      Or yeah, it's just people being weak to flattery.

    • astrange a day ago

      LLMs are only capable of thinking out loud, so in some sense this part of the answer is helping to convince it that it's answering a good question.

      Same reason for the "That's not X, it's Y" construct. It actually needs to say that.

      (Some exceptions for reasoning models.)

  • nyc_data_geek1 a day ago

    “The Encyclopedia Galactica defines a robot as a mechanical apparatus designed to do the work of a man. The marketing division of the Sirius Cybernetics Corporation defines a robot as “Your Plastic Pal Who’s Fun to Be With.” The Hitchhiker’s Guide to the Galaxy defines the marketing division of the Sirius Cybernetics Corporation as “a bunch of mindless jerks who’ll be the first against the wall when the revolution comes,” with a footnote to the effect that the editors would welcome applications from anyone interested in taking over the post of robotics correspondent. Curiously enough, an edition of the Encyclopedia Galactica that had the good fortune to fall through a time warp from a thousand years in the future defined the marketing division of the Sirius Cybernetics Corporation as “a bunch of mindless jerks who were the first against the wall when the revolution came.”

  • Cynddl a day ago

    Hi all, co-author here! Happy to answer any questions about our work.

  • Zigurd 2 days ago

    A few weeks ago I was gently admonished by a coding agent that the code already did what I was asking it to make the code do. I was pleasantly surprised.

    • chankstein38 a day ago

      Betting it was Claude. That's the only LLM that will stand up to me!

      • Zigurd a day ago

        In fact it was Gemini, but I don't remember which version and there are big differences. I'm signed up for all the betas and I switch among them frequently.

        • chankstein38 a day ago

          That's interesting! Gemini has definitely been less sycophantic than GPT, but I haven't had it push back unless we were already arguing about something. Claude is the only one I can go to with "I have this great idea for a cool thing that I can make that I think will go hard on the market" (or whatever; I've never had this exact conversation with it, lol, but similar) and it'll knock me off my high horse quickly.

      • jerf a day ago

        "Claude" is a big program that wraps a coding agent around a specific model. It would be the specific model that "stands up to you". I post this pedantry only because it may be helpful to you to realize this for other reasons.

        • chankstein38 a day ago

          Oh I definitely understand that, but if you talk to any of those models through the chat interface, they'll speak as if they're one. I once asked it "Which model was I talking to when I asked this?" because it can look back at previous conversations and answer questions about them. Its answer was "You were talking to me, Claude," and then it proceeded to basically explain what you're saying. For what it's worth, I've been a developer and working with LLMs for the better part of the last 5 years or so. I'm no expert, and I appreciate the clarification for anyone who may not be aware!

          I'll say though, I haven't tried the weakest model of Anthropic's but Opus and Sonnet will both push back more than I've seen another LLM do so. GPT was always trying to please me and Gemini was goofy. I'm surprised Gemini was the one that pushed back honestly!

  • stAInley a day ago

    Comes down to what is meant by 'friendly'.

    Is it friendly to tell someone they've got spinach in their teeth? Is it friendly to agree with everything someone says? Is it friendly to ask about someone's dead parents? Is it friendly to insult? Is it friendly to talk around a personal issue, never stating the obvious?

  • ss_talha a day ago

    I thought we could fix this using Settings > Personalization in both OpenAI and Claude, etc. Still, putting up guardrails is the only way to make the model user-friendly and feel safer. Otherwise god knows what these tools would do.

  • kmeisthax a day ago

    The H-neuron paper[0] found something similar (if not more general): the same bits of the model responsible for hallucination also make the model a sycophant, and also make the model easier to jailbreak.

    [0] https://arxiv.org/abs/2512.01797

    • js8 a day ago

      Doesn't surprise me. But I don't think this is caused by friendliness, but by obedience. And I think we want the agents to be obedient. And I am afraid there is a tradeoff - more obedience means more willful ignorance of common sense ethical constraints.

  • Mistletoe 2 days ago

    Yeah I wish AI didn’t try to agree with you so much. It’s ok to just say “No that’s not correct at all.” I do find Gemini better at this than ChatGPT. ChatGPT is that annoying coworker that just agrees with everything you say to get in good with you, like Nard Dog from The Office.

    “I'll be the number two guy here in Scranton in six weeks. How? Name repetition, personality mirroring, and never breaking off a handshake"

  • Cynddl 2 days ago

    (Title edited, was slightly too long)

  • tsunamifury 2 days ago

    LLM technology specifically beam-searches manifolds (or latent spaces) of linguistics that are closely related to the original prompt (and the pre-prompting rules of the chatbot), which it then limits its reasoning inside of. It's just the basic outcome of weights being the primary function of how it generates reasonable answers.

    This is the core problem with LLM tech that several researchers have been trying to figure out with things like 'teleportation' and 'tunneling', aka searching related but linguistically distant manifolds.

    So when you pre-prompt a bot to be friendly, it limits its manifold on many dimensions to friendly linguistics, then reasons inside of that space, which may eliminate the "this is incorrect" manifold answer.

    Reasoning is difficult, and frankly I see this as a sort of human problem too (our cognitive windows are limited to our language and even spaces inside them).

    • nomel a day ago

      This is why I only use chat clients that allow me to modify both my previous messages AND the AI's previous messages. If the AI gets something wrong, and you correct it, you're now in a latent space with an AI that gets things wrong! It's very easy for context to get poisoned this way. I also see all the pre-amble of many chat clients as a type of poison for the context, so use the raw, blank, API if I need best problem solving results.
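      With the raw API the transcript is just data, so "editing the AI's previous message" is an ordinary list operation. A sketch (hypothetical helper, assuming the common role/content message format):

```python
# Amend the most recent assistant turn in place instead of appending a
# correction after it, so the wrong answer never stays in the context.
def amend_last_assistant(history: list[dict], fixed: str) -> list[dict]:
    out = list(history)
    for i in range(len(out) - 1, -1, -1):  # scan backwards for last assistant turn
        if out[i]["role"] == "assistant":
            out[i] = {"role": "assistant", "content": fixed}
            break
    return out

history = [
    {"role": "user", "content": "What does this regex match?"},
    {"role": "assistant", "content": "It matches any single digit."},  # wrong
]
history = amend_last_assistant(history, "It matches one or more digits.")
```

The next request then goes out with a transcript in which the model was never wrong, instead of one containing a mistake plus your correction.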

      • astrange a day ago

        This is one of the benefits of using subagents inside Claude Code, they have cleaner context. Unfortunately it's not the best at writing new context for them.

    • afpx a day ago

      What you're saying sounds pretty cool but can you give some examples? Is this what you're talking about?

      https://chatgpt.com/share/69f246e5-e0e8-83ea-aa88-6d0024b915...

      • tsunamifury 10 hours ago

        Yeah, this is a good example. It's the nature of how you salt the prompt: regardless of any baseline truth, it will search the various manifolds mathematically closest to the order and type of words you put in. It will do that always and willingly. That's what the technology does.

  • midtake a day ago

    In my opinion, the article should be classified as harmful speech for containing polarizing language about conspiracy theories. We live in an era of rampant disinformation, we should stop polarizing people. Therefore this article is harmful.

    Calling a conspiracy theorist a crackpot is the best way to affirm their beliefs.

  • jmyeet a day ago

    I keep thinking about a comment I read on HN that described neurotypical-style communication as "tone poems" [1]. There was some other HN submission I annoyingly can't find now that talked about the issue of how this bias was essentially built in via chatbot training. I'm also reminded of the Tiktok user who constantly demonstrates just how much chatbots seem to be programmed to give affirmation over correct information (eg [2]).

    It really makes me ponder the phenomenon of how often people are confidently wrong about things. Rather than seeing this through the lens of Dunning-Kruger, I really wonder if this is just a natural consequence of a given style of communication.

    Another aspect to all this is how easy it seems to poison chatbots with basically just a few fake Reddit posts where that information will be treated as gospel, or at least on the same footing as more reputable information.

    [1]: https://news.ycombinator.com/item?id=47832952

    [2]: https://www.tiktok.com/@huskistaken/video/762913172258355945...

  • anotherviewhere 19 hours ago

    I am "fairly positive" that had Machiavelli lived today, the various Guardians would label him a conspiracy theorist. After all, we all know that politicians can be "flawed people", but the democratic institutions are working for the people and we are all heading towards a bright future where everything is a green democracy, there are no dictators, no communists, and the military is there only for protecting us from asteroids.

  • AlfredBarnes a day ago

    [flagged]