I pasted this article into ChatGPT, then challenged it to choose a topic and write a paragraph that carefully takes note of the criticisms. I think ChatGPT failed; the result sounds like LLM writing to me:
> I keep a paper notebook not because it is efficient, but because it wastes my time in a very particular way. The pen skips when I write too fast, the margin fills with crossed-out sentences I decide not to rescue, and every page records how sure or unsure I was when I showed up that day. None of this scales, and none of it generalizes into advice, which is precisely the point: the notebook proves to me that I was there, thinking something that cost a little effort to hold. A clean, fluent summary of my week would be more legible and more shareable, but it would also launder away the hesitation, the bad ideas, the lopsided emphasis that marks the difference between having an experience and reporting one. What I get from the notebook isn’t polish; it’s the quiet assurance that I didn’t outsource the thinking.
> We know the tells of LLM-generated writing—the awkward lists of three, the parallel sentence structure, the “and then it hit me”, the “uncomfortable,” “brutal,” or “honest” assessments—but even without them we know instantly.
Heh, great writing for those paying attention, on multiple levels.
This post is technically incorrect. You can fine-tune, or even just prompt-engineer, an LLM to more or less follow your style. There have been experiments showing that people prefer the output of LLMs fine-tuned on a specific author's works over the author's actual writing (even the authors themselves!).
Some sense of perspective: https://news.ycombinator.com/item?id=46273466
This is one of the best posts I’ve read on this topic in the years since ChatGPT launched. I was hoping it would have gotten more discussion here!
Wonderful, thanks for sharing.
Beautifully said.