This is a very interesting way of saying that by offloading the process of thought we individually go through to reach a conclusion, we become more dull and homogenized as a whole when using AI "shortcuts".
I think this is true of many tools, but it's pronounced here because of how ubiquitous language is. For most of human history most people could not read or write. Given that LLM interaction is almost exclusively written, we may take a small step back, where a mass of people lazily use shortcuts and others do not.
It's going to be depressing if "reading is hard" is the Great Filter
It already is; most of my friends don't read anything beyond social networks anymore. This is already happening and was in progress long before LLMs; they are mostly fast-tracking the process.
Just the other day, someone saw a pile of textbooks in my room and commented incredulously that I ‘still learn things from books?’
It was one of the most jarringly alien things I’ve ever heard, like being told that everyone has moved on from toilet paper to just using their hands, but I missed the memo.
A charitable explanation could be that they meant "why are you still reading paper books and not ebooks?"
I can't imagine. I get so much gratification from reading all kinds of things. I enjoy a bit of stuff like this, but to go without things like science fiction, research papers, fantasy, other novels... I'd feel like there was so much missing.
I get the point that the author is trying to make here and to an extent I think it’s prudent. If only it weren’t bogged down by allusions that are withering in cultural relevance.
AI is not a faithful representation of human intelligence—or the human essence at all, for that matter—and prolonged dependence on its technologies will subdue human expression and the means by which we come to know about ourselves and the life around us.
He is exercising his point through his prose. A cursory glance at his Wikipedia bio and this passage gives insight into the objective of this article:
> As Socrates sees things, the proper use of logos is to work toward, and to be transformed by, an increasingly clear grasp of the good. He regards this as an erotic undertaking. The more clearly we see the good, the more we long to bring ourselves into closer proximity to it. And the most promising path to the apprehension and internalization of the good is prolonged union and thoughtful conversation with a worthy lover.
The problem is that he’s attempting to display the art of literary lovemaking in the age of hook-up culture.
Hint: It may serve this crop of readers to start immediately from the section titled “Another Creation Story”.
Aside: I’d like to see a study that scores contemporary literacy rates using articles from the Hedgehog Review instead of Jane Austen.
Quite an astonishing number of quotes -- I counted Heidegger, Nietzsche, Murdoch, the Bible, and Plato as key players -- to contribute very little of substance. It feels like the same kind of hand-wringing that you might have read about the invention of the typewriter, the telegraph, or recorded music. Humans are complex creatures! We can be "in the process of becoming" about a great many things along a great many axes. Using ChatGPT to draft a letter of complaint to your local government isn't going to change that.
The basilisk consumes the minds of those who worship it.
It's mostly in the tech industry though that you find people who have such a tenuous grasp on the natural order that they honestly believe serving the basilisk is what's best
Other comments mention things like the "Great Filter" and how (further) abandonment of the written word could become a class distinction. These are not trivial speculations: many of us here will have experienced prejudice and bullying for being "overly" literate in school, so the notion of literacy becoming a liability is reasonable, especially considering that the president just finished his first book, and it didn't go so well, so he might just save everyone from having to suffer as he has and ban crayons.
Invoking the cliched comparison of AI to past inventions in a discussion about parroting is brilliant satire.
So is writing an ultimately empty essay mostly composed of other people's quotes, thoughts, and concerns
While I agree, assuming our GP was not in fact using an LLM, that says a lot about the essence of the article’s argument. We were maybe already stochastic parrots and are just building tools to be more efficient at it.
> We were maybe already stochastic parrots and are just building tools to be more efficient at it.
I think we can and should firmly reject this notion
If humans were mere stochastic parrots we would not have the ability to build the Internet we're talking on right now
Not sure about that: we are happy to accept that a stochastic process such as evolution culminated in advanced life forms, but reluctant to admit that our own creations could be the outcome of a stochastic process acting on communication patterns.
Oh hey, it's that shower thought I had a few years ago.
More seriously, I find that people who drill deep into the bedrock of knowledge and expertise discover that tact and conviction in informed opinion are a healthy counterweight to this possible habit.
What happens when/if the machines do everything? We will have no more problems to solve.
I don't believe in digital gods.
But also, The Machine Stops [1] and Idiocracy both address this.
[1] https://www.gutenberg.org/cache/epub/72890/pg72890-images.ht...
People will still keep up class stratification, because power feels good, along with a myriad of other human properties, or failings if you will. So the problems will also never end.
What is life without struggle, though? Reading through a long tech manual, overcoming failure dozens of times, telling myself "just one more try, you're so close" and the ecstatic feeling of success that follows as I see my project come to fruition. Without it, how do we reaffirm ourselves...?
Digital gods can't take away your ability to struggle purely for the enjoyment of it.
Think of all the things we consider fun or worthwhile that are purely unnecessary struggles. Ever play sports? Build a side project just for the fun of it? Video games? Music? Art?
If you've ever used an LLM to try and build something, you know there's still plenty of struggle involved ;)
I think it's even more than affirmation and pride: the process is the most critical part of learning. And now we're indulging people who want shortcuts that are going to significantly dull them.
Paying your bills and finding something worthwhile to do with your life may be problems enough.
We can spend more time on art, music, camaraderie, love, play, and the pursuit of grace.
None of which is possible without the lived experiences of problem solving.
Playing games, creating art, and pursuing love ARE the lived experience of problem solving… for me, at least. Probably not for everyone.
That won't happen, don't kid yourself
The same thing will happen to us that happens to surplus livestock
We will always have a problem to solve. Isn't that the point of Asimov's story, "The Last Question"?
Disease? Mortality? Finding a good partner?
How do you imagine a robot solving those?
I can't. I think of a species that evolved at the earliest possible time after the Big Bang and is still alive today. Maybe they have? I don't know; billions of years is a long time.
Bold of you to assume the machine gods would allocate resources for our survival.
If there is one thing that AI is really good at, it’s me typing in a few words and it coming up with what I want to say.
This highlights an interesting point that I hadn't examined before... the thoughts behind the words, and the experiences linked to those thoughts / the moment for which the words are needed.
Although from a chemical and learning perspective humans may seem like stochastic machines, I think that the capacity for inner thought and emotion does differentiate us from genAI. After all, we would not ascribe sentience to such a "stochastic parrot"; yet in a similar way we see the text output that AI generates and wrongly assume (an assumption that was previously reasonable) that there must have been "thoughts" behind said text.
No different than a book. Maybe the places where the experiences were had were real and maybe they weren’t. Maybe we recall the emotion or it is projected from the story (or resonates in frequency with our own experiences).
I liken LLMs to https://en.wikipedia.org/wiki/Broca%27s_area
It's a functional subunit of the brain responsible for language interpretation and production.
Could be said more succinctly: use it or lose it
Hmm. What would Ian Betteridge say?
"What kind of parrot?"
Norwegian blue. Beautiful plumage.
Very low energy though.
But everyone is so thrilled about not having to think again any more ever!
It would be so sad to disappoint
The text on this site is too small. When you spend so much time on the design of your blog and don’t optimize for viewing on smaller mobile devices, it’s like painting the Mona Lisa and leaving it in your garage.
It's about 49 characters per line, which is maybe a little narrow, but much closer to the measure I normally read my ebooks at than most websites these days. Most recommendations I see online for digital prose put the minimum line length around 55 characters, so I think it's doing okay.
I can’t read the Hedgehog Review on my phone either. It’s the combination of the small letters and font. HN has a similar width of characters, but the font is legible at that size.
Reader mode for the win
It's funny that I had the exact opposite reaction. I was happy to get more information with less eye movement. Large text is everywhere these days and causes me some eye strain after a while.
I find it ironic, if that's the right word, that this essay quotes someone else in almost every paragraph and yet insists that humans are not stochastic parrots and that the essence of human communication is original thought.
> yet insists that humans are not stochastic parrots
It seems trivially obvious to me that you cannot stochastic-parrot yourself from apes with sticks and rocks to spaceflight and the internet
You are a state machine. You have finite internal state that roughly adheres to a particular structure, you take in input, and as a function of internal state and input, you produce output and a new state. Sufficiently large models are a rough approximation. We are perhaps different stochastic parrots than the models we create, but likely stochastic parrots none the less.
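For illustration, a toy stochastic state machine could look something like the sketch below (made-up states, inputs, and transition weights, purely for demonstration; not a claim about how brains or any real model actually work):

    import random

    # Toy Mealy-style stochastic state machine: the output and next state are a
    # (here randomized) function of the current state and the input symbol.
    # States, inputs, and weights are invented for illustration only.
    TRANSITIONS = {
        # (state, input) -> list of (next_state, output, weight)
        ("rested", "question"): [("thinking", "hmm", 0.7), ("rested", "shrug", 0.3)],
        ("thinking", "question"): [("tired", "an answer", 0.6), ("thinking", "hmm", 0.4)],
        ("tired", "question"): [("rested", "ask me later", 1.0)],
    }

    def step(state, symbol):
        """Sample the next (state, output) pair according to the transition weights."""
        options = TRANSITIONS[(state, symbol)]
        pairs = [(nxt, out) for nxt, out, _ in options]
        weights = [w for _, _, w in options]
        return random.choices(pairs, weights=weights, k=1)[0]

    state = "rested"
    for _ in range(5):
        state, output = step(state, "question")
        print(state, output)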
You are a hydraulic system. You are composed of interlinked pipes within which the pressure rises and lowers in order to produce all your thoughts and actions.
You are a chemical soup. Your body is a closed system of proteins and amino acids reacting with each other, driving behavior in order to sustain the reactions.
You are an electric grid. A system of interconnected wires where electric impulses respond to one another in a synchronized manner, from which your life force is derived.
And if you're Stephen Wolfram, all of those are just automata as well.
Probably better to say that we contain multitudes of probabilistic graphs that resemble state machines, but those graphs do not make up our entire system. Further, those graphs interact constantly with stochastic systems (the environment, other graphs, etc) through couplings of varying degree.
A state machine is a very specific thing in Comp Sci, and I’m not sure you have a strong grasp of it.
You’re not a state machine. A state machine does one serial task, which is why the input+state can create a consistent and deterministic output+state. There are no secondary input streams or exogenous factors to consider for calculating a state machine transition.
Humans create output from many streams of input, arriving across many different time horizons. Because of this, you cannot create a deterministic model of a human’s state transition for a given input - a requirement of state machines.
This isn’t philosophical or semantics. Mathematically, you’re not a state machine.
Not all state machines are deterministic.
> You are a state machine.
Humans are analog; state machines are not. And the analogue I will use here is that a model of anything is, by definition, not the thing itself.
> We are perhaps different stochastic parrots than the models we create, but likely stochastic parrots none the less.
To parrot is "to repeat by rote"[0]. Algorithms, such as LLMs, do so because that is all they can do.
I choose to not limit myself to being a parrot. Which is why I am not one.
As Descartes proffers:
0 - https://www.merriam-webster.com/dictionary/parrot
1 - https://www.azquotes.com/quote/1521522
> You have finite internal state
As Terry Pratchett might have said - what about quantum?
What about it? That only exponentiates the state space; it doesn’t make it non-finite.
I believe that the different topologies of the kind of "idea graph" in the human mind are becoming less diverse. That is, as media consolidates and becomes more accessible in all parts of the globe, the diversity of modes of thought decreases as people become, more or less, Americanized.
Perhaps Americanized isn't the right term. Fundamentally, there is something at play where the more that is “known”, the less magical the brain is. That is, it doesn't have to think outside the box because the box is seemingly fully explored.
Why do hard thing when everyone says hard thing is too hard. What if there was no everyone? Is thing that hard?
I don't think original thinking is going to go away, but I do think it will be owned by those who control the all-thing which absorbs it from the mass of information.
Yes, completely agree. Marshall McLuhan used the term "extensions of man" in the subtitle of "Understanding Media" to refer to information technology (in his day, TV). He said that in the same way a car or our clothes become an extension of our bodies, information technology becomes an extension of our central nervous system, and as the world becomes more connected through information, we begin to form a global village, where people on the other side of the country or even the earth begin to share a common memetic understanding. Things far away become immediate and personal, and the layers we filter them through become thinner, giving a kind of sameness to how we react to new information.