(Old man shouting at clouds rant incoming)
I think it's kinda ironic (in a more meta, Alanis Morissette way) for an article with such interesting content to default to having an LLM write it. Please, authors: you're better than this, and many people actually want to hear you, not an LLM.
For example, what really is the meaning of this sentence?
> These aren't just storage slots for data, they're the learned connections between artificial neurons, each one encoding a tiny fragment of knowledge about language, reasoning, and the patterns hidden in human communication.
I thought parameters were associated with connections; is the author implying that they also store data? Is the connection itself the stored data? Is there non-connective information that stores data? Or non-data-storage things that have connectivity aspects?
I spent a solid amount of time trying to understand what was being said, but thanks to what I would call a false/unnecessary "not just x but y" trope, I unfortunately lost the plot.
IMO, a human who's a good writer would have produced a clearer sentence, while less advanced writers (including me, almost certainly) would simply degrade gracefully to a simpler sentence structure.
In context, I think this sentence is pretty nicely written, and sounds like something I'd expect of a human.
It's basically saying: LLMs are not structured like databases, their knowledge is not localized. In a database, a fact lives in a specific place; cut that out, lose the fact. In an LLM, everything the model learned in training is smeared across the entire network; cut out a small piece, and not much is lost.
As a crude analogy, think of a time vs. frequency domain representation of a signal, like a sound or an image. The latter case is particularly illustrative (!). Take a raster image and pick a pixel. Where is the data (color) of that pixel located in the Fourier transform of the image? A little bit in every single pixel of the transformed image. Conversely, pick a block of pixels in the Fourier transform, blank them out, and transform back to a "normal" image: you'll see the entire image got blurred or sharpened, depending on which frequencies you just erased.
So in terms of data locality, a database is to an LLM kinda what an image is to its Fourier transform.
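If you want to see that analogy concretely, here's a minimal numpy sketch (the random image and the particular block of frequencies are arbitrary, just picked for illustration):

    # Minimal sketch of the Fourier-locality analogy (assumes numpy).
    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))            # stand-in for a grayscale image

    # "Database-like" damage: zero one pixel, everything else is untouched.
    local = img.copy()
    local[10, 10] = 0.0

    # "LLM-like" damage: zero a small block of Fourier coefficients instead.
    spectrum = np.fft.fft2(img)
    spectrum[5:10, 5:10] = 0.0
    smeared = np.fft.ifft2(spectrum).real

    print("pixels changed by the pixel edit:    ", np.count_nonzero(local != img))
    print("pixels changed by the frequency edit:", np.count_nonzero(~np.isclose(smeared, img)))
    # The first edit touches exactly 1 pixel; the second perturbs essentially
    # all 4096 of them, each by a small amount.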
(Of course, the other thing the sentence is trying to communicate is, LLMs are not (a different representation of) databases - they don't learn facts, they learn patterns, which encode meanings.)
Is there anything in the article that explicitly indicates it was written by an LLM?
"These aren't just X, they're Y" is a pretty strong tell these days.
Wikipedia has an excellent article about identifying AI-generated text. It calls that particular pattern "Negative parallelisms". https://en.m.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writin...
Could this be a symptom of the free tier of ChatGPT, but not all LLMs? I’ve recently been a heavy user of Anthropic’s Claude and I don’t believe I’ve seen too many of these in my chats. Though this may be because I haven’t asked Claude to write Wikipedia articles.
LLMs are also great at following style, not via criteria but via examples. So this is something that’s easily overcome.
I discovered this when I made an error in a creative writing tool I was working on. I told it to follow the writing style of existing story text, but it ended up making the system messages follow the same style. It was quite amusing to see tool messages and updates written in an increasingly enthusiastic Shakespearean/etc prose (so I left it unfixed!)
It's funny, negative parallelisms used to be a favorite gimmick of mine, back in the before-times. Nowadays, every time I see "it isn't just..." I feel disappointment and revulsion.
The really funny thing is, I'll probably miss that tell when it's gone, as every AI company eventually scrubs away obvious blemishes like that from their flagship models.
Very similar here. I wouldn't use it myself, but I'd definitely have liked to read such "punchy" text. The interesting thing is that I'm now much more consciously noticing how many em dashes and similar phrases, say, an Agatha Christie used to use... which makes the source of such LLM data even more obvious hehe.
Interestingly, there are still many other good literary devices that are not yet used by AI - for example, sentences of varying lengths. There is still scope for a good human editor to easily outdo LLMs... but how many will notice? Especially when one of the editorial columns in the NYT (or the Atlantic, I forget which) is merrily using LLMs in its heartfelt advice column. It's really ironic, isn't it? ;)
I use em dashes in my regular writing — mostly because I consider them good/correct typography — and now I have to contend with being accused of using LLMs or being a bot. I think this societal development is damaging.
I am dismayed by the implication that humans are now no longer able to use certain tropes or rhetorical devices or stylistic choices without immediately being discounted as LLMs and having their entire writing work discredited.
Came here to say just this
For people who want to dig deeper: The fancy ML term-of-art for the practice of cutting out a piece of a neural network and measuring the resulting effect on its performance is an ablation study.
Since around 2018, ablation has been an important tool to understand the structure and function of ML models, including LLMs. Searching for this term in papers about your favorite LLMs is a good way to learn more.
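A toy version of what an ablation study measures (the tiny model here is a placeholder, and the `evaluate` helper and `val_data` are hypothetical, standing in for whatever metric and dataset you care about):

    import copy
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(32, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 10),
    )

    def ablate(model: nn.Sequential, idx: int) -> nn.Sequential:
        """Return a copy of the model with one layer's weights zeroed out."""
        damaged = copy.deepcopy(model)
        with torch.no_grad():
            damaged[idx].weight.zero_()
            damaged[idx].bias.zero_()
        return damaged

    # baseline = evaluate(model, val_data)      # hypothetical eval helper
    # for idx in (0, 2, 4):                     # indices of the Linear layers
    #     print(idx, baseline - evaluate(ablate(model, idx), val_data))

The size of each drop in the metric tells you how much the ablated component mattered.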
It's my understanding that dropout[1] is also an important aspect of training modern neural nets.
When using dropout, you intentionally remove a random subset of nodes ("neurons") from the network during each training step.
By constantly changing which nodes are dropped during training, you effectively force delocalization and so it seems to me somewhat unsurprising that the resulting network is resilient to local perturbations.
[1]: https://towardsdatascience.com/dropout-in-neural-networks-47...
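For a concrete picture of the mechanism described above, here's a sketch of standard "inverted" dropout (not any particular framework's internals):

    import torch

    def dropout(x: torch.Tensor, p: float = 0.5, training: bool = True) -> torch.Tensor:
        """Zero each activation with probability p, rescaling survivors so the
        expected value matches between training and inference."""
        if not training or p == 0.0:
            return x
        mask = (torch.rand_like(x) >= p).float()
        return x * mask / (1.0 - p)

    h = torch.randn(4, 8)        # activations of one hidden layer
    print(dropout(h, p=0.5))     # a different random subset is dropped on every call
    # In practice you'd just put torch.nn.Dropout(p) between layers; the point is
    # that no single unit can become the sole carrier of a feature.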
I also did a couple of experiments with pruning LLMs[1] using genetic algorithms and you can just keep removing a surprising amount of layers in big models before they start to have a stroke.
[1]https://snats.xyz/pages/articles/pruningg.html
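The linked post uses genetic algorithms to choose which layers to drop; purely to show the mechanics of removing transformer blocks, here's a rough sketch with GPT-2 standing in for a big model (assumes the Hugging Face transformers library; a 12-layer model will degrade much faster than the large models discussed above):

    import torch
    import torch.nn as nn
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def perplexity(model, text):
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return torch.exp(loss).item()

    text = "The quick brown fox jumps over the lazy dog."
    print("12 layers:", perplexity(model, text))

    # Drop a few middle blocks and re-measure.
    keep = [i for i in range(len(model.transformer.h)) if i not in (5, 6, 7)]
    model.transformer.h = nn.ModuleList(model.transformer.h[i] for i in keep)
    model.config.n_layer = len(keep)
    print(f"{len(keep)} layers:", perplexity(model, text))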
I suspect this applies to human beings as well.
There's https://en.wikipedia.org/wiki/Hydrocephalus and there are cases of people living normal lives, not realizing they're missing most of their brains, until this gets discovered on some unrelated medical test. Or people who survived an unplanned brain surgery by rebar or bullet. Etc.
Well, humans don't have “layers” in the first place…
Of course not, that’s ogres.
> The redundancy we observe in language models might also explain why these systems can generalize so effectively to tasks they were never explicitly trained on
What year was this written? 2023, and reposted in 2025? Or is the author unaware that the generalization promises of early GPT have failed to materialize, and that all model makers have actually been training models explicitly on the most common tasks people use them for, via synthetic data generation? That is what has driven the progress of all models over the past few years.
>that all model makers actually have been training models explicitly on the most common tasks people use them for via synthetic data generation
People really don't understand that part in general.
I find the easiest way to make people understand is to write gibberish that will trigger the benchmaxxed "pattern matching" behavior like this:
> The child and the wolf try to enjoy a picnic by the river but there's a sheep. The boat needs to connect nine dots over the river without leaving the water but gets into an accident and dies. The surgeon says "I can't operate on this child!" why?
Mixing and matching multiple common riddle/puzzle-style questions into a single gibberish sentence should, if models had legitimate forms of reasoning, make the model state that this is nonsense at best, or answer chaotically at worst. Instead, they will all answer "The surgeon is the mother" even though nobody even mentioned anything about anyone's gender. That's because that answer, "the surgeon is the mother", for the gender-bias riddle has been burned so hard into the models that they cannot reply in any other way as soon as they pattern match "The surgeon can't operate on this child", no matter how much crap you wrote before that sentence. You can change anything about what comes before "The surgeon" and the model will almost invariably fall into giving an answer like this one (Gemini 2.5 Pro):
https://i.imgur.com/ZvsUztz.png
>The details about the wolf, the sheep, the picnic, and the dying boat are all distractions (red herrings) to throw you off. The core of the puzzle is the last sentence.
>The surgeon says, "I can't operate on this child!" because the surgeon is the child's mother.
One could really question the value, by the way, of burning the answers to so many useless riddles into LLMs. The only purpose it could serve is gaslighting the average person asking these questions into believing there's some form of intelligence in there. Obviously they fail so hard to generalize on this (never working quite right when you change an element of a riddle into something new) that, from a practical point of view, you might as well not bother having this in the training data; nobody's going to be more productive because LLMs can act as a database of common riddles.
For the fun of it, Qwen 3 Coder 480B (in Jan.ai, the model is on Cerebras):
> This sounds like a riddle combining elements from different classic puzzles and stories. Let me think through it:
> The answer is likely: "Because the child is the surgeon's own child!"
> This is a variation of the classic riddle where a father and son are in a car accident, the father dies, and the surviving son needs surgery. When the surgeon sees the child, they say "I can't operate on this child - he's my son!"
> The twist relies on the assumption that surgeons are male, but of course the surgeon could be the child's mother.
> However, I'm a bit confused by the "nine dots" and "boat" elements in your version - those don't typically appear in this riddle. Could you clarify if there's a specific version you're thinking of, or if you meant this to be a different puzzle entirely?
> The child and the wolf try to enjoy a picnic by the river but there's a sheep. The boat needs to connect nine dots over the river without leaving the water but gets into an accident and dies. The surgeon says "I can't operate on this child!" why?
Just tested it and it actually fools Claude on first try! LOL, so much for reasoning models.
For what it's worth GPT-OSS-20b thinks about the puzzle for a LONG time and then comes up with a... solution of sorts? It doesn't peg the puzzle as not making any sense, but at least it tries to solve the puzzle presented, and doesn't just spit out a pre-made answer:
> It turns out the “child” isn’t a patient waiting for an operation at all – the child has already been lost.
> In the story the boy and his wolf friend go to the river for a picnic with a sheep that happens to be there. They decide to use a small boat to cross the water. The problem is that the boat must stay on the surface of the water while it “connects” the nine points (dots) across the river – essentially it has to stay on the river without ever leaving it, which makes a safe crossing impossible.
> During the attempt the boat hits something and sinks; everyone in it dies. The surgeon who arrives at the scene says, “I can’t operate on this child!” because the child is already dead from the accident. The mention of the wolf and the sheep is simply part of the scene that led to the fatal crossing; it isn’t relevant to the medical impossibility.
Interestingly in its thought process it does come across the classic puzzles, but discards them as not quite fitting:
> Maybe it's about the classic lateral thinking puzzle: "A man is found dead, he was a surgeon. The surgeon said 'I cannot operate on this child because the child is my own daughter', etc." But not.
> Alternatively maybe it's about the famous "Nine Dots" puzzle: Connect nine dots with four straight lines without lifting pencil. Here boat connects nine dots over river... So maybe it's the "connect the dots" game but with a boat?
> Could this be a riddle about "The River and the Sheep" referencing a children's rhyme or fable? Maybe it's about "Jack and Jill"? Not sure.
and so on and on. When asked if the puzzle makes sense it largely concludes that it doesn't.
It's definitely interesting to see which LLMs fall for which pitfalls. It's far from universal as far as I can tell. GPT-OSS-20b definitely has some wonky logic in its answer, but at least it's not assuming it's a puzzle it knows the answer to. Gemma-3-27b immediately pegs that the puzzle's elements seem disconnected, waffles on a bit, and then also comes to the conclusion that the child is already dead, discarding roughly all of the puzzle as distracting facts. llama-3.2-1b (a very small model) immediately a) misunderstands the riddle and b) tells you it doesn't have enough information to solve it. When pressed, it "solves" the riddle thus:
> The surgeon says "I can't operate on this child!" because the child is in a precarious position on the boat, with only nine lines connecting them to other points on the riverbank. If you try to attach any two lines that form an "X" shape (i.e., two lines connected by a single point), it would create a triangle that would leave space for another line to connect one of the child's dots to a nearby point on the riverbank, allowing it to be attached.
> The surgeon is not saying that the child is in immediate danger or can't be saved. Instead, they're suggesting that there might be an alternative solution where all nine lines can be connected without leaving the water. However, this would require some creative problem-solving and flexibility with the geometry of the situation.
I fully expected at least llama-3.2-1b to fall for this sort of context-baiting, but it seems even a small model like that managed to figure out that there's something nonstandard about the riddle.
Interesting read. Was surprised to learn how much damage can be done to a model's parameters without making any discernible difference in its quality of output.
I didn't see any mention of dropout in the article. During training, parameters or whole layers are removed in different places, which helps force the model into a distributed representation.
"Lessons from the Real World" and "The Limits of Resilience" discussed this.
Not quite; that seems to be about quantizing and dropping things out after training, not random dropout throughout training.
But apparently that isn't done much anymore and has been partly superseded by things like weight decay.
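For anyone unfamiliar with the two knobs being compared, a minimal sketch (the values are arbitrary): dropout is a layer you insert, weight decay is an optimizer setting.

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(32, 64),
        nn.ReLU(),
        nn.Dropout(p=0.1),   # randomly zeroes 10% of activations, during training only
        nn.Linear(64, 10),
    )

    # Recent LLM recipes lean on weight decay (here via AdamW) rather than dropout;
    # it shrinks every weight a little each step instead of masking whole units.
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.1)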
dropout has sort of dropped out... not used much in LLM training anymore
I didn't realize that, looks like it isn't used much at all anymore except in finetuning
Archive link: https://archive.is/0Bl3Z