I've noticed a fundamental shift in how I engage with longform text — both in how I use it and how I perceive its purpose.
Longform content used to be something you navigated linearly, even when skimming. It was rich with meaning and nuance — each piece a territory to be explored and inhabited. Reading was a slow burn, a cognitive journey. It required attention, presence, patience.
But now, longform has become iconic — almost like an emoji. I treat it less as a continuous thread to follow, and more as a symbolic object. I copy and paste it across contexts, often without reading it deeply. When I do read, it's only to confirm that it’s the right kind of text — then I hand it off to an LLM-powered app like ChatGPT.
Longform is interactive now. The LLM is a responsive medium, giving tactile feedback with every tweak. I no longer treat text as a finished work, but as raw material — tone, structure, rhythm, vibes — that I shape and reshape until it feels right. Longform is clay, and LLMs are the wheel that lets me mold it.
This shift marks a new cultural paradigm. Why read the book when the LLM can summarize it? Why write a letter when the model can draft it for you? Why manually build a coherent thought when the system can scaffold it in seconds?
The LLM collapses the boundary between form and meaning. Text, as a medium, becomes secondary — even optional. Whether it’s a paragraph, a bullet list, a table, or a poem, the surface format is interchangeable. What matters now is the semantic payload — the idea behind the words. In that sense, the psychology and capability of the LLM become part of the medium itself. Text is no longer the sole conduit for thought — it’s just one of many containers.
And in this way, we begin to inch toward something that feels more telepathic. Writing becomes less about precisely articulating your ideas, and more about transmitting a series of semantic impulses. The model does the rendering. The wheel spins. You mold. The sentence is no longer the unit of meaning — the semantic gesture is.
It’s neither good nor bad. Just different. The ground is unmistakably shifting. I almost titled this page "Writing Longform Is Now Hot. Reading Longform Is Now Cool." because, in McLuhanesque terms, the poles have reversed. Writing now requires less immersion — it’s high-definition, low-participation. Meanwhile, reading longform, in a world of endless summaries and context-pivoting, asks for more. It’s become a cool medium.
There’s a joke: “My boss used ChatGPT to write an email to me. I summarized it and wrote a response using ChatGPT. He summarized my reply and read that.” People say: “See? Humans are now just intermediaries for LLMs to talk to themselves.”
But that’s not quite right.
It’s not that we’re conduits for the machines. It’s that the machines let us bypass the noise of language — and get closer to pure semantic truth. What we’re really doing is offloading the form of communication so we can focus on the content of it.
And that, I suspect, is only the beginning.
Soon, OpenAI, Anthropic, and others will lean into this realization — if they haven’t already — and build tools that let us pivot, summarize, and remix content while preserving its semantic core. We'll get closer and closer to an interface for meaning itself. Language will become translucent. Interpretation will become seamless.
It’s a common trope to say humans are becoming telepathic. But transformer models are perhaps the first real step in that direction. As they evolve, converting raw impulses — even internal thoughtforms — into structured communication will become less of a challenge and more of a given.
Eventually, we’ll realize that text, audio, and video are just skins — just surfaces — wrapped around the same thing: semantic meaning. And once we can capture and convey that directly, we’ll look back and see that this shift wasn’t about losing language, but about transcending it.