What makes me stupid is hearing about "AI" day after day like it's the best thing since sliced bread, and yet 99.9% of the useful things I've seen come from LLMs are low-level programming tasks or fluffed-up nonsense that any manager could spew. I can't even trust what LLMs tell me unless the answer is so simple that the top result of a 2015 Google search would be just as adequate. Except now the top 20 Google results are all AI answers from the same source material, packed full of fluff but stripped entirely of nuance or useful adjacent knowledge. Just changing the question slightly can yield contradictory answers, both given with full confidence.
This is not true at all. I have been using the Pro-tier AIs to automate my $150k-a-year automation engineer job for over two years and have reduced my workload by about 95%, no joke (AI writes great Selenium tests). This is a real, measurable amount of work: it used to be that you had to be pretty smart to write code, and now anybody can vibe-code an automation test framework in literally one afternoon. I know because I did it a few months ago for my new role. It is beyond game-changing for that reason. I can only imagine what actually productive people are doing; this is a 100x productivity multiplier.
It doesn't even make mistakes anymore. The biggest issue is making sure it doesn't get lazy with the number of assertions.
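For context, here's a minimal sketch of the page-object-style Selenium test being described. The page name, field IDs, and credentials are all hypothetical, and the driver is stubbed so the snippet runs without a browser; with real Selenium you'd pass a `webdriver.Chrome()` instance in its place.

```python
class FakeElement:
    """Stand-in for a Selenium WebElement so the sketch runs offline."""
    def __init__(self):
        self.value = ""
        self.clicked = False

    def send_keys(self, text):
        self.value += text

    def click(self):
        self.clicked = True


class FakeDriver:
    """Stand-in for a Selenium WebDriver exposing find_element(by, locator)."""
    def __init__(self):
        self.elements = {}

    def find_element(self, by, locator):
        # Return the same element object for repeated lookups.
        return self.elements.setdefault((by, locator), FakeElement())


class LoginPage:
    """Page object: the pattern LLM-generated frameworks tend to emit."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.find_element("id", "username").send_keys(user)
        self.driver.find_element("id", "password").send_keys(password)
        self.driver.find_element("id", "submit").click()


driver = FakeDriver()
LoginPage(driver).login("qa_user", "hunter2")

# Multiple assertions per step: the "laziness" described above is a
# generated test that checks only one of these and calls it a day.
assert driver.find_element("id", "username").value == "qa_user"
assert driver.find_element("id", "password").value == "hunter2"
assert driver.find_element("id", "submit").clicked
```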
Cannot tell if you're a parody account or not[1], but if so, well done.
[1] https://news.ycombinator.com/item?id=47785627
I cannot believe there is a $200k job to write Selenium tests.
Well, I am the only QA engineer; I do all the CI/CD too, and load testing. The company only had manual testing, and I wrote the framework that we use today. But it is easy work, yes. And it's $150k; um, I guess I thought a bigger number would sound better.
There are 200k jobs fixing the “frameworks” AI-slopped into creation at the very edge of Dunning-Kruger competence, maybe?
Everything you think you know about AI was true until about 6 months ago. Now the frontier models and agentic tools are good at programming—better than most professional programmers would be unguided. And even if Claude Mythos isn't half as good as they say it is, it's changed the calculus of security significantly: use AI to vet your code before deployment... or someone else will, right before they 0wn you.
It's not always the coding; it's keeping straight what you've asked it. I've seen mistakes that almost mimic memory loss. Node A is measuring node B. OK, next step: apply this update to the measurement codebase in node B. Uh, no dude, we are working on node A.
This one anonymous guy may be vibe coding himself to $200k, but there might be bombs in there that won't go off until later.
I genuinely can't tell if this is a parody, because this exact post could have been, and was, written every month for the past few years.
I remember when my school introduced calculators and my parents got upset about it: "They won't learn to do sums in their heads!" Yet it opened us up to working on more interesting, larger problems, at a faster pace. LLMs could atrophy skills if used solely out of laziness (like the cover letters in the post), but they could also help you punch higher, and learn more, and faster, if you're motivated and mentally integrate them properly.
What larger problem can you do in a school setting with a calculator?
When doing algebra you need to be able to effortlessly do sums, multiplications, divisions, factorizations.
Meanwhile if you’re doing a physics or engineering calculation, it’s better to manipulate all the symbols algebraically and only plug in values at the final step.
I don’t see how a calculator is actually useful in driving learning outcomes.
I'll need to engage in conjecture over elementary school lessons from 35 years ago, but one thing that comes to mind is we were calculating circle circumferences and areas quite quickly following the formulas. We still learnt arithmetic techniques by hand (though never logarithms, for whatever reason - I guess calculators replaced the log tables!), but when we moved on to broader things like geometry and statistics, calculator use let us focus on the actual topics and formulas and not repeating the grunt work like generations past.
For anything beyond that, I'd need to take it up with whoever wrote our curriculum! But I know it was mildly contentious at the time, much as the use of even more elaborate technologies are now.
There’s a bunch of answers to this question, but I think the easiest one is that a pocket calculator contains a table of logarithms.
You can do much of that other stuff by yourself, but no one alive carries a table of logarithms around in their head.
Once you accept that you should also accept that it contains Taylor series expansions for sine and cosine, which you also do not carry around in your head.
I recommend telling a physicist that you feel this way and seeing what they think about calculating machines.
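The Taylor-series point is easy to verify yourself. A minimal sketch: ten terms of the series for sine already agree with the library implementation to full double precision near x = 1 (the ten-term cutoff is arbitrary, and real calculators use more refined evaluation schemes, but the principle stands).

```python
import math

def taylor_sin(x, terms=10):
    # sin(x) = sum over k >= 0 of (-1)^k * x^(2k+1) / (2k+1)!
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

# The truncation error after ten terms is about 1/21!, far below
# double precision, so the two values coincide to machine accuracy.
print(taylor_sin(1.0))
print(math.sin(1.0))
```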
Unfortunately, people are inherently lazy. Curious and driven individuals will excel with the availability of LLMs, but the majority will atrophy.
Calculators are cheap commodities. LLMs are owned by rent-seeking Napoleons, with debts bigger than the GDP of Norway. So they won't be cheap for very long.
You are making a grave mistake here of thinking by analogy. Just because parents said something similar about something else a long time ago has no bearing on the current situation.
> if you [...] mentally integrate them properly.
There it is.
Feels like one of those things that's been known for decades in the general form: tools that take cognitive load off your working memory (a calculator, writing) free your brain up for higher-level thinking and make you "smarter", whereas tools that take the higher-level tasks off you and load up your working memory (hypertext, AI) make you "stupider".
i don't understand what you're saying about hypertext.
I think they're saying you don't form semantic links in your own mind because you're mindlessly clicking around.
tracking down the relevant text from a reference is loading your working memory and, i would argue, inhibiting your ability to form semantic links as a consequence.
Using an LLM to handle a task for you seems a lot like letting a car move you. Cars will make you “fat and lazy” if you never move your body otherwise, but it’s fairly clear to see that this is avoidable.
The research seems to always get (intentionally?) misconstrued in headlines as "LLMs are bad for you", as opposed to the more mundane reality: they steal opportunities for exercise and practice of mental activities if you let them.
I like how people come up with some analogy (and all analogies are wrong by definition), attack said analogy, and on that basis make a declaration about the original statement. But what if we use a different analogy: basically using an LLM is like skipping the whole learning process - not learning how to read, not learning how to write and not learning how to think, then what?
> basically using an LLM is like skipping the whole learning process - not learning how to read, not learning how to write and not learning how to think, then what?
I take this same argument and fold it slightly.
Think back to Cliff's Notes. A student has a paper due. They are low on time. They use Cliff's Notes to help them write a paper and get at least a passing grade.
If the student does this one time or for an occasional crunch, there's not a big issue.
If the student does this all the time, and then later complains they didn't get a good education, who should have the accountability for that?
Interestingly enough, the act of writing notes is evidently a very effective learning method.
> Interestingly enough, the act of writing notes is evidently a very effective learning method.
In case you don't know, CliffsNotes isn't you writing notes, it's using someone else's.
https://en.wikipedia.org/wiki/CliffsNotes
Learning to read, learning to write, learning to think, only have value because of the outcomes they produce.
If the outcomes can be reached with just AI, then AI has all the value.
Don't take it personally; you might be right in your extreme position, but this feels like a horrible take on what it means to be human.
Does a human only have value then once they have learned to do those things?
"What it means to be human is to work 16 hours a day for someone else taking home just enough salary to survive another day, because you too, someday could become a millionaire" --HN
A lot of what humanity does seems to be a persistent terrible take.
I hope you count "stimulating our minds for either learning or imaginative purposes" as one of those outcomes because if you only count "work produced and kpis met" as an outcome then that sounds pretty bleak.
"If" is doing an enormous amount of work here.
> Learning to read, learning to write, learning to think, only have value because of the outcomes they produce
Only have economic value maybe.
Humans have more value than just whatever economic crap they produce
Just so long as we don't get something that is to LLMs as car-centric urban design is to cars.
Someone suggests putting all the stuff the average person needs within 15 minutes of the average person's home, and soon after we get a conspiracy theory about 15-minute cities being Soviet control gates you'll need permission to get out of.
LLMs are already capable of inventing their own conspiracy theories, and are already effective persuaders, so if we do get stuck, we're not getting un-stuck.
> Cars will make you “fat and lazy” if you never move your body otherwise, but it’s fairly clear to see that this is avoidable.
Why would you choose to compare AI to cars? You seem to be defending AI, but to compare it to cars... cars have been a horrible development.
I recommend people look at the actual study and think about how representative are the subjects, the tasks involved (SAT essay writing), and the way LLMs are being used.
https://arxiv.org/abs/2506.08872
To be concrete, this is taking a task in isolation that LLMs can do much better than humans (writing garbage essays) and using LLMs to do that task. In the real world, tasks have parts and they exist in a larger context. When we use LLMs for one part of a task, there are other things we're doing that the LLM is not helping with. If you compared people doing arithmetic by hand and with a calculator, you would also see very big differences in how active their brains are. But it's not anyone's job to add up numbers. Adding up numbers is a subtask of a subtask in someone's job.
I like to think back on this scene from Galaxy Quest, when the team sits around the conference table[1].
"I have one job on this lousy ship, it's stupid, but I'm gonna do it! Okay?" -- Sigourney Weaver
[1] - https://www.youtube.com/watch?v=W4CgQMJCpZI
LLMs have absolutely made my mechanical ability to write code much worse day-to-day. I'm still not sure if this is a good thing or not.
> The results haven't been published in a scientific journal yet, but they were nonetheless eye-opening, according to Kosmyna.
"Someone said something about AI"
AI chatbots could be making you smarter though tbh
It's so easy to learn at the same time
You're absolutely right!
The same way using a forklift makes you stronger!
Unironically, yes.
In a forklift, I can softly manipulate a lever to lift thousands of pounds, that will not make my arm muscles grow. It's my responsibility to still go to the gym: but even 16 hours a day at the gym: no one is ever going to lift a literal 'ton'. I don't take a forklift to the gym, but I would use it at work...
And this gym metaphor breaks down quickly if you think about it.
A forklift can lift far more than the average human. Just like a train can carry more, faster, than a couple people carrying those goods. Your comment seems to imply that a forklift replaces the need to be strong or physically fit, which is obvious nonsense, so I'm not really sure what you're trying to say, here.
The usage of AI for everything is analogous to using a forklift to take your groceries home.
The constant whining about them certainly is.
That's purely based on the amount of cognitive effort we expend when achieving a task. Isn't that the same kind of worry people had when the internet became a thing?
In hindsight, we could have listened to the people who warned about how the internet would make our lives worse. Can our society withstand another generation of worsening on par with the effects of social media etc?
Whether the internet has made our lives better or worse depends on the perspective (half full or half empty), and is an excellent water cooler conversation. :-)
The internet has objectively made life worse, and the people who say it hasn't haven't experienced the alternatives. Many people will never know true joy because of the internet.
Oh no. I looked at a screen. There goes all my joy... /s
Objectively worse in some respects, objectively better in others. Being able to get medical advice quickly. Being able to communicate with vastly different people, broadening your horizons. And yes, more comparisons to make (comparison being the thief of joy).
It might be making some people lazier but not more stupid, it's not like you are literally losing brain cells by using it.
My biggest worry isn't that it will make me dumb (it won't), or that it will make me lazy (it will), but that people raised with it won't learn things in the first place. I'm split on whether this is a real issue or an "old man rants about slide rules and the decline of mental math" kind of situation.
I often find AI makes me angry and stressed out, especially when it suggests dumb solutions to problems. Honestly makes me wonder if I'm more likely to die early from chronic AI-induced stress rather than dementia.
Isn't there a saying that you only truly know something when you're able to explain it to someone else? When I get angry at LLMs proposing stupid solutions, I see it as a positive thing: "damn, this is garbage, here is a much better solution ..." I know, not really efficient, but enjoyable :)
It's almost like you have to think for yourself still, wild concept.
More stupid*
The Merriam-Webster dictionary claims 'stupider' is a perfectly valid word
The original title is on point, like The Derek Zoolander Center for Kids Who Can’t Read Good and Who Wanna Learn to Do Other Stuff Good Too.