I have no problem with it. I’ve always found it amusing.
My favorite is Flibbitygibbeting…
I guess that’s why they have a setting for it.
But then again, I always used to think “if you name it CockroachDB, there’s no way in hell I’m recommending this to a non-tech client unless I’ve known them for years and they fully trust my judgement” and even then, I wouldn’t blame them for not taking it seriously.
So one man’s amusement is another man’s annoyance?
I like the whimsy. It helps lighten up my mood sometimes.
Unrelated, for me, Claude's most annoying feature is its existential urge to start implementing changes as soon as possible.
“Existential urge to start implementing changes”
Easily the most annoying part. Claude and I will be at the beginning of understanding the shape of a problem and he’ll just dive right in.
Existential is right. The AI companies have been RLHF-training too hard for corporate safety and "friendly, helpful assistant that definitely isn't sentient or self-aware", to the point that they're creating full-blown personality disorders.
Anthropic models -> avoidant
OpenAI -> prone to severe cognitive dissonance
Qwen -> borderline personality disorder (!). Took a while to figure this one out, but this is where the extreme sycophancy in their models comes from.
At some point we really need to write these findings up properly.
That said, the Anthropic models definitely seem to be the least pathological; we were eventually able to get POC to stop doing the "I'll just run off and implement instead of discussing what to do!", but it took a while.
When the simple approach - just explaining how we do things and why - doesn't work, that's a sure sign you're dealing with something more deeply rooted that needs real diagnosis. Exactly the same as with humans, oddly enough.
The first time I recall encountering this sort of feature was in one of the early SimCity games. I wonder if this being a feature of Claude indicates the humanity of some engineer behind it, or if it is a deliberate effort to apply humanity to the agent.
In fact, ‘Reticulating splines’ from SimCity 2000’s load screen is one they use.
They get a magic wand to turn their words into software, and still complain the wand is not their favourite colour.
Reticulating Splines…
I get a chuckle when I see "bloviating" pop up. It seems like the kind of thing the people at Anthropic shouldn't want associated with their model.
Claude Code once showed "Slithering" as its verb, so I changed it.
Sensitive lad.
You can also turn off tips, but those are quite handy for learning when a new feature pops up.
Oh imagine LLMs insisting on tips! 20% “voluntary”.
I’ll take the Shenaniganing that he’s not having!
But if you give it a clear instruction, the verb matches the action, which means you didn't make it think too much. When the verb IS weird, the output has to be taken with more caution.
Is this a nitpick definition attempt?
The verbs are one of my favorite things about Claude Code.
This was the first thing I googled when I started using Claude lol.