I Quit Teaching Because of ChatGPT

(time.com)

59 points | by williamstein 4 hours ago

141 comments

  • adamc 3 hours ago

    I read this and it struck a chord. A decade or more ago, a friend of mine teaching English in a major state university told me that he was getting lots of students who couldn't seem to read a novel. They could read the words on the page, but they couldn't focus enough to really understand it.

    My experience is that writing things out always improves my understanding of the subject. It's similar to forcing yourself to write a proof in mathematics, to verify your understanding. The formalism is hugely helpful.

    If the result of chatgpt is even more kids skating through life without really picking up these skills... it may have tragic consequences. Learning to think well is the single most important thing I got from education. I would argue that improvements in human thinking -- via philosophy, logic, mathematics, the scientific method, and yes, literacy -- have made a huge difference in human lives over the past few thousand years.

    Not that chatgpt would be the root cause. Chatgpt would just be a symptom of a much bigger problem.

    • lucianbr 3 hours ago

      > If the result of chatgpt is even more kids skating through life without really picking up these skills... it may have tragic consequences. Learning to think well is the single most important thing I got from education.

      Do teachers have no role here? "If many kids make a certain choice, there will be big trouble". Maybe teach them to do better? Is the role of educator reduced to giving assignments and grading them in a mindless way? Seems like this would be exactly the point.

      After all, without schools, people learned less. So now we have schools, and people learn more. We didn't just lament "well if people don't learn, we're all screwed". We did something about it. Is this AI thing really beyond any solution? What has been tried so far?

      • adamc 3 hours ago

        Most teachers have large classes. Their ability to monitor each kid and prevent them from using chatgpt or similar tools may be limited.

        I come from a family with many teachers. The harsh reality is that teachers can provide a conducive environment, but they just don't have enough time per kid to make nearly the impact that people expect, which is why your home environment is so critical.

        Teachers don't really teach. They create an environment to help you teach yourself. They can provide lessons, but you always have to do the work of internalizing it. Watching youtube videos is not the same as struggling to write programs. Etc.

        • lucianbr 3 hours ago

          > Most teachers have large classes. Their ability to monitor each kid and prevent them from using chatgpt or similar tools may be limited.

          I'm not a teacher, and I very well may be missing your point due to my own shortcomings. But I have to ask, would this sentence not apply to people not wanting to learn math because it's hard? (or learn anything at all)

          > Most teachers have large classes. Their ability to monitor each kid and prevent them from cheating or simply not learning may be limited.

          I have been in school, and have learned a few things, not many. Next to me were people who learned even less, because they didn't want to. ChatGPT didn't exist then, nor smartphones, and we didn't even have internet access.

          That just points to schools being imperfect, and seems to have zero connection to ChatGPT.

          • adamc 2 hours ago

            Math is usually assessed via quizzes and tests where you have to show your work, which makes it easier for the teacher to tell if you are getting it. It's a lot harder to assess whether a student has read a novel -- you basically have to make them write a paper on it (and be aware of what they would get from CliffsNotes). Grading papers is a ton of work, and that limits that technique.

            Also, it's easy to see if someone is using a calculator. But needing to show your work makes that less of an issue anyhow.

          • vel0city 3 hours ago

            > But I have to ask, would this sentence not apply to people not wanting to learn math because it's hard?

            It is pretty easy for a teacher to find a student not learning math; they'll fail the test.

            I wouldn't fault a teacher for not reliably detecting every student using a lot of LLMs to write their longer papers.

      • chx 3 hours ago

        You can lead a horse to water, but you can't make it drink.

        We as educators can show the interesting parts, the wonders of a field, but more than that we cannot do. The author of the article seems to have not managed to get through. Truth be told, such failures happen every day in schools, because most schools use completely outdated ways to teach -- frontal pedagogy itself is such -- and so ChatGPT is not their biggest problem, but it certainly makes a bad situation worse. Recommended watch: Ken Robinson's TED talk "Do Schools Kill Creativity?". Recommended reading: Carl Rogers' Freedom to Learn: A View of What Education Might Become. For a shorter read, Aronson's The Social Animal has a chapter on it.

        • lucianbr 3 hours ago

          > You can lead a horse to water, but you can't make it drink.

          This must have been a problem before the invention of LLMs. How was it solved before? Why didn't teaching go out of fashion when TV was invented, or why didn't math disappear when calculators were built?

          You yourself highlight problems that existed before LLMs, different problems in teaching. Obviously not all teachers quit because of those, and we still have schools. People still learn. Perhaps fewer, and less, but they still do. And perhaps more do. I'm not sure how this can be accurately measured for the entire planet. After all, science still progresses. We've invented LLMs.

          Once again, as almost all discussions about AI, I feel the main point of this post is "ChatGPT is different than anything that has come before, and will change everything", without actual supporting arguments. Sounds a lot like advertising.

          • gopher_space 2 hours ago

            > This must have been a problem before the invention of LLMs. How was it solved before?

            - Encouraging the people who were bored to tears by 8th grade to go work in a lumber mill about it.
            - Encouraging parents to read to their children just all the fucking time.
            - Kicking out the troublemakers.

            But the main solution has always been parents caring about education enough to get involved and spend time with their kids on it. It's something parents need to prioritize right when things are getting complicated and interesting in their own lives, and it's way easier to see schools as a community creche rather than a de facto cooperative at that point.

          • falcolas 2 hours ago

            > why didn't math disappear when calculators were built?

            Because you can see a child using a calculator.

            > Why didn't teaching go out of fashion when TV was invented

            Laws (which put students in classrooms) and a lack of TV use in classrooms. TV use in classrooms was generally driven by the teachers.

            > "ChatGPT is different than anything that has come before, and will change everything"

            It's effectively undetectable when used, unless you have students write essays in your presence and on paper. That's what makes it so much harder to counter than previous aids.

            WRT writing large essays on paper - it's a skill that's not useful and is thus not emphasized in any class - a trend that's existed longer than ChatGPT. Reversing it now would impact all grades, and face major pushback from parents.

          • adamc 2 hours ago

            I don't think chatgpt created the problem, although it may contribute to it.

            Here's a hypothesis: The problem is that thinking analytically, rigorously, is hard work. People do not often seem to come to it by accident. It's something we have learned to do, and schools/teachers are one of many mechanisms we have used to culturally transmit these skills. Until (pick a date -- 1950, say) it was assisted by the fact that one of the most widespread forms of entertainment was reading. Yes, even by then movies were a thing, but most people couldn't afford to go to a movie every night, and they were limited to what was currently showing. So most people in western countries, where literacy was high, spent a fair amount of time reading. Reading, by its nature, tends to focus the mind. You learn to spend long periods ingesting and contemplating information.

            We also taught writing -- and we still do -- but it benefits hugely from all that time spent reading. It is much easier to learn to write if you read a lot. And writing further develops your thinking skills.

            The modern era has given us wonderful technologies, but there is a lot of evidence that people are struggling to focus in the way we once did. People read less, write less, and spend more time on other media. I'm not slagging on film or games or cell phones -- conceivably, other useful skills are imparted, and people enjoy them. But our ability to focus seems to be declining. Chatgpt is just another time-saver that makes it easier to avoid learning critical thinking skills.

            It's not impossible that teachers will find a way to compensate. I think there are some things working against that: 1) we don't fund schools in a way that is going to allow them to spend additional time per student. 2) societal expectations seem to be that teachers magically pour knowledge into children, rather than learning, fundamentally, being something the child has to do, and which requires a large parental investment. 3) a lot of what led to focus in the first place didn't come from the schools, but from the social environment -- i.e., reading was the most practical source of entertainment -- and so we are expecting schools to compensate for a problem they didn't create and for which we may have no totally satisfactory solution.

            I suspect that's only part of it. The pace of life is very different than it once was. Even an illiterate in 1800 probably worked on fewer things per day and perhaps had more chances to focus. But that's sheer speculation.

          • chx 2 hours ago

            Of course it has been a problem; I provided you with books from the '60s and the '70s that show a better way to educate people.

            See https://www.youtube.com/watch?v=GEmuEWjHr5c for why TV or anything else didn't obsolete teaching.

      • from-nibly an hour ago

        It's the parents' job to reinforce a love of learning and an understanding of what the heck we are all here for.

      • eesmith 2 hours ago

        > Maybe teach them to do better? Is the role of educator reduced to giving assignments and grading them in a mindless way?

        Please don't present a strawman when the linked article is written by a teacher who is clearly not doing that.

        "As an experienced teacher, I am familiar with pedagogical best practices. I scaffolded assignments. I researched ways to incorporate generative AI in my lesson plans, and I designed activities to draw attention to its limitations. I reminded students that ChatGPT may alter the meaning of a text when prompted to revise, that it can yield biased and inaccurate information, that it does not generate stylistically strong writing and, for those grade-oriented students, that it does not result in A-level work. It did not matter. The students still used it."

    • chx 3 hours ago

      How many dystopian sci-fi stories did we read where the majority of the populace can no longer read or write, and just press icons?

      If we add "the result of a consistent and total substitution of lies for factual truth is not that the lie will now be accepted as truth and truth be defamed as a lie, but that the sense by which we take our bearings in the real world—and the category of truth versus falsehood is among the mental means to this end—is being destroyed" as Hannah Arendt put it, the consequences are indeed catastrophic.

  • kibwen 4 hours ago

    As long as LLM companies can hang on for another 15 years or so, there will be an entire generation of humans as utterly incapable of living without LLMs as most people in developed countries are of growing their own food. Intellectual lock-in will be their moat.

    • adamc 3 hours ago

      You can already see it in people who cannot navigate their automobile without google maps or some equivalent.

      • kennethrc 3 hours ago

        Define "cannot navigate"? I've been driving for decades, but where I live traffic is everywhere so I don't go more than a few miles without using Waze and its real-time traffic monitoring.

        • mrbungie 3 hours ago

          People who can't guide themselves without relying entirely on the app.

          Normally, I just glance at what action I should take next (i.e. continue straight for 2km and then take a left turn at X) and then follow the road signs to do so. But I've noticed some Uber drivers and other people keep their eyes almost permanently on the app to locate themselves, even when there's a clear GPS or network disconnection, like in an underground pass. This often results in wasted time when the app doesn't catch up with reality and the driver misses a turn.

        • Version467 3 hours ago

          I can confidently say that a car would be useless to me without gps. I can navigate the small town (10k people) I grew up in, but that’s it. I wouldn’t try to navigate the city I live in now (~350k people) in a car, beyond the street I live on. If I had to and knew gps was broken (hypothetically), then I’d try to reschedule, instead of attempting to navigate with signs and a paper map.

          I could probably work with printed turn by turn instructions, but that’s about it.

          And I think that probably comes close to what the other post had in mind with „cannot navigate“.

          • banku_brougham 3 hours ago

            You are in for a fun adventure, if you have an afternoon to yourself and no particular destination. My grandmother always began our day trips that way, in her shiny Buick. Remember to stop to rest.

          • kingofthehill98 3 hours ago

            That's unfathomable to me. I live in a metropolitan area of about a million people and I can go basically anywhere without a navigation system; in fact I'll only pull out the GPS if I'm going to some remote neighborhood.

            P.S: I was born here and lived here all my life.

          • SoftTalker 3 hours ago

            GPS is super-convenient when you're in an unfamiliar area. I sometimes catch myself thinking how did we ever get along without it. But we did. You just looked at a map before you set off, noted street names and turns, and paid attention. You would do the same thing and manage pretty well if you had to.

            • adamc 3 hours ago

              Of course it's convenient. But I learned to drive before we had it, and learned to find my way around without it. Maps helped, but you can do a lot with logic and by understanding direction.

          • nicolas_t 3 hours ago

            You don't ever walk in your city without gps? Wouldn't you know the streets after a while?

            • vel0city 3 hours ago

              Lots of people don't live in walkable cities. There's no realistic way they'd get to where they're going by walking. They might walk their neighborhood and know that, but these days lots of people don't even bother walking around their neighborhood.

              So they'll end up driving everywhere they go. Work, groceries, restaurants, etc. Always driving. Many won't go down paths that weren't previously suggested by their GPS. And often those destinations aren't designed to be walkable either. Massive parking lots separating the various storefronts. Corporate campuses completely surrounded by a sea of parking lots and garages. Nowhere to walk.

              That said, there's still exploring possible in a car-centric place. I tend to take alternate paths to get places, purposefully "get lost" driving around, and explore places I've never been before. But that has costs and lots of people don't bother doing that.

        • adamc 3 hours ago

          Find your way around an unfamiliar city or area without google maps or similar.

      • Workaccount2 3 hours ago

        I think what is more readily apparent is that younger folks cannot reverse their car without looking at the camera. Older folks still turn around and look.

        • vel0city 3 hours ago

          I cannot reverse my car without looking at the camera but that's because cars these days are so high off the ground, the rear windows are so tiny, and the pillars are so huge that I can't see much turning around.

          Someone about four feet tall can easily walk past the rear of my car and, aside from a half second in the side mirrors (which I can't see if I'm looking backwards anyway), there's no way I'd spot them looking backwards. But with the ultra-wide angle camera mounted low on the gate, I'd see them easily.

          Plus, with the position of the camera at the end of the vehicle and its FOV, there are angles I wouldn't possibly be able to see with a car parked on either side of me. Coupled with radar sensors able to detect cross traffic before it's even visible in that situation, the sensors greatly enhance my ability to read the scene compared to relying entirely on turning around.

          I do turn around while reversing, but more of just a double check of the overall scene. Same with double checking side mirrors while reversing. But a large amount of what I'm looking at is the screen.

          • Yizahi 13 minutes ago

            Exactly. I learned to drive in an ancient Fiat from the '60s; it was like sitting in an aquarium, every direction was super visible. Next I had a Daewoo Lanos, which had reduced visibility in comparison, but still good. Now in a big modern sedan I wouldn't see anything when turning my head, and the mirrors have dead zones. The camera is the second-best car upgrade I've had in the past decades, after the automatic gearbox.

        • inerte 2 hours ago

          For 13 years I drove stick and for 18 years I drove without a backup camera. Nowadays I don't turn around and look back. Automatic and camera are so much better.

        • 6gvONxR4sf7o 3 hours ago

          I wouldn't call those the same. My wife's car has a camera and mine doesn't. Since having children, I prefer backing up in her car because I'm so much more aware of the blind spots I have without it. We also live somewhere with lots more kids now, which adds to it. I can back up my own (camera-less) car, obviously, but it made me realize that I can't back it up without a bit of "cross my fingers and hope no kid darts into a blind spot from a blind spot."

        • scblock 3 hours ago

          I wish I had a camera in my Jeep. As I get older it gets harder to just turn around, and new cars have pretty bad sight lines. I think this is just using the tools available to you.

          • SoftTalker 3 hours ago

            > new cars have pretty bad sight lines

            This is really a big part of the reason for them. Newer cars are all aerodynamically contoured and this results in a high rear deck and a shallow angle on the rear window. It's much harder to see behind the car.

            I still turn my head and also use the wing mirrors to reverse. It's habit and I find it easier than looking at a screen.

        • adamc 3 hours ago

          I'm old. I like the camera. It gives a better view for many things.

        • kennethrc 3 hours ago

          Nah, this "older folk" loves his cameras; they're an excellent enhancement.

        • umbra07 3 hours ago

          What is this based off of? Most young people (who have a car) have an old car, without backup cameras.

          • vel0city 3 hours ago

            Backup cameras were mandated in the US in 2018 and were already getting to be pretty popular on even midrange trim vehicles by that time. Every six-year-old car has a backup camera, and a large percentage of 7–10-year-old cars do as well.

      • malfist 3 hours ago

        I don't really see that as a bad thing. Einstein famously didn't memorize phone numbers because you could look them up in a book.

        Quick external lookups are a huge productivity boost since you don't have to spend all the time memorizing something.

        • kevlened 3 hours ago

          This is true to a point. You can get huge performance gains using L1 cache rather than accessing network storage.

          • malfist 3 hours ago

            I like your analogy. I agree. This doesn't fit all cases, and it really depends on how often and how long it takes you to look something up.

            Memorizing directions though? Nah. Takes a while to memorize and is quick to lookup and not used frequently.

            Knowing basic syntax for a programming language? That needs to be in RAM.

        • ChrisMarshallNY 3 hours ago

          That’s a good description of my programming.

          I use the documentation panel of the inspector in Xcode all the time, and write my code to generate this documentation.

          I’m too grizzled to care much about being sneered at by insecure folks. I am able to get the job done quickly, and at extremely high quality.

        • rectang 3 hours ago

          A conversation between Sherlock Holmes and Dr. Watson from A Study in Scarlet, by Arthur Conan Doyle:

          > That any civilized human being in this nineteenth century should not be aware that the earth travelled round the sun appeared to be to me such an extraordinary fact that I could hardly realize it.

          > "You appear to be astonished," he said, smiling at my expression of surprise. "Now that I do know it I shall do my best to forget it."

          > "To forget it!"

          > "You see," he explained, "I consider that a man's brain originally is like a little empty attic, and you have to stock it with such furniture as you choose..."

          > "But the Solar System!" I protested.

          > "What the deuce is it to me?" he interrupted impatiently; "you say that we go round the sun. If we went round the moon it would not make a pennyworth of difference to me or to my work."

          • kibwen 2 hours ago

            This is a very inapposite quote, because A Study In Scarlet was the first Holmes story, and Conan Doyle later retconned this, because it turns out to be very important for a detective of Holmes' sort to be able to draw on seemingly-useless knowledge in order to make serendipitous leaps of logic.

            • rectang 25 minutes ago

              I maintain that it's relevant because it illustrates how unreliably people assess just what knowledge is likely to be useful. If anything, the fact that it was retconned reinforces that such assessments can be wrong.

              For example, there's a notion in this thread that relying on GPS makes you soft, and that unless you learn to navigate the way your forebears did you'll be unprepared or something. I find this proposition just as dubious as the notion that actively forgetting that the earth goes around the sun helps you solve murder mysteries.

              It seems to me as though such assessments are often driven by self-interest and the desire to maximize the social utility of one's own knowledge and experience. Of course, those who believe that "GPS makes you soft" is unquestionably true may find fault with my perspective.

              I also just find that passage hilarious. It's simultaneously so misguided and yet so compellingly crafted, which is what makes for great satire whether intentional or not. My Dad read Sherlock Holmes stories aloud to me when I was a kid and that passage has always stuck with me. But perhaps this wasn't the right audience to share it with.

            • 39 minutes ago
              [deleted]
          • thfuran 3 hours ago

            It can be pretty hard to tell in advance which knowledge will eventually make a difference (or would have, if you had it).

        • bamboozled 3 hours ago

          It's not a bad thing so long as GPS works forever. I also think driving is much more enjoyable when you don't need to be thinking about GPS; it's nice to just focus on where you're going.

          I have an awesome sense of direction, which I attribute to almost never using navigation apps unless I'm really worried about traffic, and even then, I can usually find a less crowded, non-recommended route, because I know my way around.

    • djcooley 3 hours ago

      This is terrifying. Our ability to think has been our biggest differentiator as a species. LLMs threaten this, and I'll die on that hill.

      • ericskiff 3 hours ago

        I’ll share my experience and the experience of my kids so far.

        Aside from blindly copying and pasting a response, in which case the learner wasn’t interested in learning and probably would have plagiarized from somewhere else anyway, I have found LLMs to be incredible, endlessly patient teachers that I’m never afraid to ask a question of.

        My kids, who are in the tween and teenage years, are incredibly skeptical and dismissive of AI. They regard AI art as taking creative initiative away from artists, and they treat LLMs much the way we treated Google growing up, if they use them at all. It’s a tool which can be helpful for answering questions, part of the landscape of their knowledge building.

        That knowledge acquisition includes school, YouTube and other short videos, their peers (online and off), internet searches, and asking AI. Generally, I regard asking AI as one of the least problematic sources of info in that environment.

        While I tend to be optimistic as a default, I truly do think that the ability to become less ignorant by asking questions is a net positive for humanity.

        The only thing I truly lean on AI for right now is as an editor, helping me turn my detailed bullet points into decently crafted prose, and for generating clear and concise transcripts and takeaways from long meetings. To me that doesn’t seem like the downfall of human knowledge.

      • ReverseCold 3 hours ago

        > [Writing] will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

        - Socrates (written down by Plato)

        • Der_Einzige 2 hours ago

          Thank you. Luddite tendencies are as deep within humans as the desire to kill is. They are a personification of the death-drive and all self-destructive tendencies within humans.

          It's also the tendency towards the "precautionary principle", AKA Nietzschean "last-man" style thinking, applied to the world infinitely.

          We should root this kind of thinking out aggressively, at least from the academy.

      • blindhippo 3 hours ago

        I think this is overestimating the impact of LLMs.

        Fact is, even if they are capable of fully replicating and even replacing actual human thought, at best they regurgitate what has come before. They are, effectively, a tutor (as another commentator pointed out).

        A human still needs to consume their output and act on it intelligently. We already do this, except with other tools/mechanisms (i.e. other humans). Nothing really changes here...

        I personally still don't see the actual value of LLMs being realized vs their cost to build anytime soon. I'll be shocked if any of this AI investment pays off beyond some minor curiosities - in ten years we're going to look back at this period in the same way we look at cryptocurrency now - a waste of resources.

        • 6gvONxR4sf7o 3 hours ago

          > A human still needs to consume their output and act on it intelligently. We already do this, except with other tools/mechanisms (i.e. other humans). Nothing really changes here...

          What changes is the educational history of those humans. It's like how the world is getting obese: there are areas where, empirically, we don't choose our long-term good over our short-term comfort. Apparently homework is one of those things, according to teachers like the one in TFA. Instead of doing their own homework, they're having their "tutor" do it.

          Hopefully the impact of this will be like the impact of calculators, but I also fear that the impact will be like having tutors do your homework and take your tests until you hit a certain grade and suddenly the tools you're reliant on don't work, but you don't have practice doing things any other way.

        • kalinkochnev 3 hours ago

          I appreciate your faith in humanity. However, you would be surprised at the lengths people will go to avoid thinking for themselves. Ex: a person I sit next to in class types every single group discussion question into ChatGPT. When the teacher calls on him, he reads the answer word for word. When the teacher follows up with another question, you hear "erh, uhm, I don't know" and he fumbles out an answer. Especially in the context of learning, people who have self-control and use AI deliberately will benefit. But those who use AI as a crutch to keep up with everyone else are ill-prepared. The difference now is that shoddy work/understanding from AI is passable enough that somebody who doesn't put in the effort to understand can get a degree like everybody else.

          • blindhippo 3 minutes ago

            I'd suggest this is a sign that most "education" or "work" is basically pointless busy work with no recognizable value.

            Perpetuating a broken system isn't an argument about the threat of AI. It's just highlighting a system that needs revitalization (and AI/LLMs is not that tool).

        • Workaccount2 3 hours ago

          >at best they regurgitate what has come before

          I keep seeing this repeated, but it seems people either take it as self-evident or have a false assumption about how transformers work.

      • golergka 3 hours ago

        What's your opinion on calculators?

        Update: I meant to compare calculators to something like a slide rule for logarithms. I'm not from the US and I tend to forget that some people use calculators to take 20% of 500.

        • helboi4 3 hours ago

          I will occasionally do long multiplication in my mind's eye just to make sure I can, lol. Anything more complicated than that, most people will not be doing anyway. University students, however, almost universally do need to write sometimes. Similarly, if I had decided to do something maths-heavy at uni, I would have been expected to be able to do some pretty complex maths without a calculator first, even if I didn't need to do that all the time. It's pretty standard that higher education requires a level of intellectual rigour that is totally unnecessary for day-to-day life. In the case of ChatGPT, it's allowing people to completely bypass that process even in those settings. Meaning you NEVER learn to do it, not just that you don't do it day to day.

          • thfuran 3 hours ago ago

            >Similarly if I had decided to do something maths heavy at uni I would be expected to be able to do some pretty complex maths without a calculator first, even if I don't need to do that all the time

            I got an engineering degree and don't remember ever being required to do math without a calculator. Of course, some things are easier if you don't need to bust out a calculator for everything.

        • atwrk 3 hours ago ago

          Where I live these are not allowed in the classroom until 7th grade or so, i.e. when the kids have learned the skills and can then employ calculators mindfully.

          • SoftTalker 3 hours ago ago

            This seems reasonable. When I was in school we started using calculators and other technology in 9th grade. That was in 1980 though.

        • JasserInicide an hour ago ago

          False equivalence.

          With a calculator, the end result is still the same: a (typically numerical) answer of some kind. Writing one's own essay vs. getting an LLM to regurgitate it results in vastly different outcomes.

        • mlyle 3 hours ago ago

          Not a super big fan, honestly. I'm a bit horrified when I see high school seniors who are smart, and have been through the entire HS math sequence... dig around in their backpack for a calculator to find 5 times 1.5 or 20% of 11.

          I'm glad that we have calculators and computing devices, but I'm not glad that they have made teens with basic numeracy into an endangered species. Many tools we use expand our understanding, but the calculator causes our arithmetic skills to atrophy.

          • ARandumGuy 3 hours ago ago

            From my experience, the more advanced math you learn, the worse you become at arithmetic. I knew a lot of math majors in college, and all of them used calculators all the time.

            • mlyle 3 hours ago ago

              Yes, I've gotten worse at arithmetic, too.

              The point is, one is hard pressed to find anyone who can do much arithmetic-- even trivial things.

        • m4tthumphrey 3 hours ago ago

          Does calculating numbers based on concrete rules require "thinking" in the same way OP talks about? I think not.

        • Ekaros 3 hours ago ago

          Depends on how far in basic arithmetic they get.

          But if the symbolic manipulation is done by hand, and the numbers are just plopped in to get the final result plus a sanity check that the answer is realistic, well, I think that is fair enough.

          And spreadsheets are also useful when you need to add up bunch of things or multiply them.

        • foobazgt 3 hours ago ago

          You don't allow students to use calculators for operations they haven't personally mastered. If you don't learn how to add two numbers on your own, the rest of your learning is in serious jeopardy.

          This is the author's lament. These students are skipping over personal mastery.

        • 6gvONxR4sf7o 2 hours ago ago

          Calculators are reliable and predictable, so losing skill at that kind of calculation is a safe, compartmentalized offloading. We offload an extremely clearly defined set of tasks, and it gets executed cheaply, immediately, and perfectly.

          LLMs are different.

          A closer analogy would be something like computer algebra systems, especially integration. We can offload differentiation cheaply, immediately, and perfectly, but integration will frequently have an "unable to evaluate" result. I genuinely wonder whether integral-requiring workers are better or worse at it as a result of growing up with CAS tools. People on the periphery (a biologist, for example) are undoubtedly better off, since they get answers they couldn't get before, but people on the interior (maybe a physicist) might be worse at some things they wish they could do better, relative to those who came up without those tools.

        • Workaccount2 3 hours ago ago

          A more apt comparison might be asking the abacus what it thinks of the calculator.

    • fulafel 3 hours ago ago

      Maybe in the future the "alien intelligences" for next generations will be people who had classic educations before social media and LLMs.

    • golergka 3 hours ago ago

      8 billion people growing their own food wouldn't be able to live on our planet even if they had all the necessary skills. Industrial agriculture and related technologies have radically increased the amount of food we can grow compared to individual small farmers.

      • ghusto an hour ago ago

        > 8 billion people growing their own food wouldn't be able to live in our planet even if they had all the necessary skills

        In the way we currently live, no they wouldn't. But it sounds like you're saying "wouldn't be able to", full stop, which I don't believe is true. There is no reason that all 8 billion people (with the "necessary skills") couldn't grow their own food, theoretically. It would require the west giving up the way we live, but it's worth noting that this doesn't make it impossible.

        • golergka 2 minutes ago ago

          Well, if by "the way we live" you mean "not having widespread malnutrition problems and hunger periodically killing significant parts of the population", then sure thing.

      • atwrk 3 hours ago ago

        Both of these points can be true at the same time (and IMO are). Tech changes society, but society should also reflect on these changes - and decide whether this is the direction we want to go, the tradeoff we want to have.

    • gorbachev 3 hours ago ago

      But think of 8 billion people paying $40 / month to OpenAI.

      Put yourself on the OpenAI IPO waiting list right now!

      /s

    • MisterBastahrd 3 hours ago ago

      Sooner or later, think tanks are going to start coming up with price charts showing how much your bills would be with and without crypto and AI. And that's going to be a huge political hot potato for these companies to manage. I hope they're ready to invest in massive solar farms.

  • acbart 3 hours ago ago

    I wonder what can be done. It's terrifying to realize how dependent students are going to become on these tools. Too many people are just not willing to live with the discomfort that comes with learning something difficult, when the alternative is so readily accessible. Short term gain over long term gain, exemplified.

    • pton_xd 3 hours ago ago

      I like the use of the word discomfort here. It does take an acceptance of some level of discomfort to engage with material you don't yet understand. Similar to how you experience some physical discomfort / strain when pushing your limits exercising. As you engage with discomfort your tolerance builds and what was once a difficult exercise becomes routine. The reward is worth the effort but I worry what the future looks like with many just opting out.

      • windexh8er 3 hours ago ago

        > I like the use of the word discomfort here.

        It really is spot on. I've been reading "The Coddling of the American Mind" [0] by Greg Lukianoff and Jonathan Haidt. The book's premise feeds directly into this idea and has been a fun read thus far. It seems that LLMs will feed into that "coddling", described in the book, in a very negative way, as they provide discomfort avoidance.

        [0] https://www.thecoddling.com/

    • Ekaros 3 hours ago ago

      I'm starting to think that maybe we need to start failing students again. However brutal and harmful it seems.

      And go back to in-person examinations. Sucks for those who have issues with them, but we don't need to limit time too much.

  • AndyNemmity 3 hours ago ago

    Technology has always brought fear of dumbing us down, but it rarely does. When the internet came along, people worried we'd stop remembering things. When Google Maps appeared, people thought we'd forget how to read maps.

    Wait... I'm not sure anyone can read a map... maybe you're right.

    • xnx 2 hours ago ago

      Socrates warned against the spread of writing and the subsequent loss of the ability to memorize.

      • kennethrc an hour ago ago

        Some things never change, apparently- one wonders if "Get off my lawn, you damn kids!" has been around since the First Grandfather

  • light_triad 3 hours ago ago

    > With the easy temptation of AI, many—possibly most—of my students were no longer willing to push through discomfort.

    There's been an important shift in education in the last 20 years: a push for lessons to be short, entertaining and unchallenging. When students struggle with tasks, in many cases they are just told not to do them.

    LLMs are the next wave of this shift - what could be awesome tools for research and writing will become a crutch. It's not so much an issue with LLMs as it is a shunning of any discomfort while learning in favour of amusement and enjoyment.

    • throwawayffffas 3 hours ago ago

      > There's been an important shift in education in the last 20 years: a push for lessons to be short, entertaining and unchallenging. When students struggle with tasks, in many cases they are just told not to do them.

      I was in school 20 years ago. That is not my recollection. Lessons were long, focused and challenging if you engaged.

      • thfuran 3 hours ago ago

        That's what they said.

    • 3 hours ago ago
      [deleted]
  • azhenley 3 hours ago ago

    I just went back to teaching! I’m hopeful that AI makes the classroom experience better for both students and instructors.

  • tonetegeatinst 3 hours ago ago

    I don't trust LLMs to create novel ideas, and I'd never copy-paste any AI output. Would I rework the output or maybe use certain thought processes? Maybe. But I'm not trusting an AI blindly; it's a statistical capture of its training data, not a magic wand.

    That said, most professors still force students to write code on paper with no autocorrect, Google, or even access to the reference documentation, which seems counterproductive. I remember when people were calling auto-completing text editors cheating, yet despite how much we accept autocomplete or how many professional developers use AI tools, we're still forcing college students to use pen and paper and to memorize syntax and all the functions of a library, all because professors can't be decent enough to even allow access to notes or the basic language or library docs.

    The AI or the documentation isn't going to solve your problems. It's enabling you to not memorize an entire language that you might not even work with in your career. Do I think there are people who can and will misuse the technology? Yep, you bet ya. But we need to stop forcing people to suffer because of the people who disregard the rules.

    You can also tell a major difference between someone who just uses AI to write the essay or paper, and someone who uses it to develop better ideas or arguments and to develop multiple ways of phrasing something. LLMs are nothing new, just like how computer vision has been good for a long time (see OpenCV, YOLO, and Darknet).

    • vel0city 3 hours ago ago

      I agree with this. I'll use AI as a starting point to begin to understand a new topic or help flesh out definitions of things, but it's a starting point of knowledge not the final destination. Kind of like looking up something in an encyclopedia or Wikipedia; I'm using it for a quick essence of the knowledge, so I know more of what to dig into later.

    • SoftTalker 3 hours ago ago

      I'm probably in a small minority but I don't use auto-completion in code editors. I dislike it, and find it disrupts my thinking.

  • 6gvONxR4sf7o 3 hours ago ago

    > The best educators will adapt to AI. In some ways, the changes will be positive. Teachers must move away from mechanical activities or assigning simple summaries.

    So much of education always comes down to efficiency, scale, and standardization. The best way to teach or test someone is by having a one-on-one conversation with them, coincidentally the least standardized. The most scalable and standardized way to do it is to publish some material and then asynchronously collect answers to a predefined set of multiple choice questions.

    It seems like everything you can fake or cheat on is a byproduct of choosing to be further along the efficiency/standardization/mechanization scale. Which is a byproduct of many many systemic factors, not least of which is funding.

    It's frustrating as hell that so many of our teaching problems have obvious answers that are in many ways better for the students anyways, but those answers just aren't systemically or economically feasible. Maybe they'll have to become more viable as our current, ultra-mechanical system breaks.

    • gigatree 3 hours ago ago

      > It's frustrating as hell that so many of our teaching problems have obvious answers that are in many ways better for the students anyways, but those answers just aren't systemically or economically feasible. Maybe they'll have to become more viable as our current, ultra-mechanical system breaks.

      I think it’s a lie that they aren’t feasible, just like it’s a lie that sustainable, small-scale agriculture isn’t feasible. It’s more that the powerful are only interested in developing centralized systems because those are the ones they can control and profit from. You can’t indoctrinate without a standardized education system, you can’t price-fix without a monopoly/oligopoly, you can’t brainwash an entire population unless everyone’s watching the same thing controlled by the same entity. That said, it’s true that decentralized, sustainable systems aren’t feasible so long as there are people on earth vying for power over others.

  • righthand 3 hours ago ago

    This kind of thing is going to be the split between the technically inclined and the non-technically inclined. Those who believe thinking and understanding are too hard, or that school is not cool, or whatever, will have technology to slide by on. They will be a part of the non-technical society and they will languish there. Comfortable in their lower pay and social media and grocery store tomatoes. Truly one of the herd of corporate loyalty.

    Fine, I say, we can’t elevate everyone. It’s just a bit amusing though that for a long time the youth was smarter than the previous generation due to embrace of tech. Now the youth is going to be more and more clever and not necessarily more intelligent instead because that is what is valued, taking a shortcut that technology enables.

    Oddly it might end up being a rift between the lower class and the middle class. Or perhaps more of a reshuffling, as those who are willing to put in the effort to understand will be a part of the techno-literate class, and those who “don't care” will be a part of the “don't care” class.

    Digital welfare for the non-nerdy.

    • windexh8er 3 hours ago ago

      My concern is the assessment divide. In the district I live I happen to be on a parent board that's providing input to the leadership with respect to technology. Privacy and security are a huge component of LLMs currently, but beyond that I think the biggest area of interest is the assessment divide.

      Currently the district is looking at it through the lens of having students still "test" in traditional ways so that if, and when, assessment doesn't align with daily work they can start to understand where this divide exists.

      I really like the Ted Chiang quote from the article: "Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way". I can already see this divide in some of the surrounding friend circle, wherein a lot of young kids (under the age of 12) are leveraging LLMs very heavily to direct them. I fear that these kids will lose the confidence, at a very early age, to even start something. It's widely discussed that getting started is often the most cumbersome part of a task timeline, and so unless things are made uncomfortable enough for this young generation to work at this skill, I feel as though we're going to see the start of a significantly handicapped generation, because they will over-rely on these tools. And, really, this is just one of many issues of over-reliance.

      > Now the youth is going to be more and more clever and not necessarily more intelligent instead because that is what is valued, taking a shortcut that technology enables.

      I 100% agree with this. A lot of students will fool their parents, and even a number of educators, by leveraging this new "cleverness". But it's going to hit a dead end as soon as they're forced to do the work themselves. I also think you're right that this may become a class divide that further segments the population, and one that provides facilities for unfortunate control over the lower class through trained and targeted LLM responses.

      • righthand 3 hours ago ago

        They will also be passed over by others who put in the work and can use logic, reasoning, arguments to defend or attack work. When the time comes to review work or output, what will the clever LLM kids say without an LLM?

        We’re basically asking people to stop being interested and stop having agency because a computer might have some incorrect but accessible summary of whatever topic.

        A solution might be to eliminate homework, since anything outside the classroom can be cleverly gamed. What happens when students can only work on their paper during class time? Or maybe handing out paper assignments again. They can cheat all they want, but filling out the answers with a pen would go a little way toward instilling the non-researched answer.

  • MarcScott 3 hours ago ago

    I'm thinking of going back into teaching because of ChatGPT.

    I created an entire scheme of work for my wife today, including all the lesson plans, and next I'll work on some student resources and quizzes. It took about thirty minutes. She'll need to check them over to make sure they're okay, but still a massive time saver.

    I've taken photos of my son's revision activities and had ChatGPT mark them. It's surprisingly accurate given his awful handwriting.

    Report writing becomes a thing of the past, as I can upload a CSV of grades along with a sentence or two of description, and have it generate unique reports for each student.

    This would all allow me to do what I used to love. I can just spend my time with students in the classroom, engaging them, teaching them, discussing with them. I won't be bringing home mounds of paperwork that eat into my evenings and weekends. I'll go into work each day feeling fresh and ready to actually educate kids.

    ChatGPT takes away the busy work from both teachers and students.

    • SoftTalker 3 hours ago ago

      Why would anyone pay you to be a teacher when they could just use ChatGPT directly. Prompt it to develop a lesson plan to learn whatever, create the learning materials, and then evaluate their mastery of it. And be endlessly available, nights and weekends, for further discussion and help with any difficulties?

    • righthand 3 hours ago ago

      When I “scan for correctness” a code change (e.g. in code review), I do my best to look for code correctness. Often that change also has to go through product review for visual correctness. A lot of my scanning is brief, determining the logic with the variables I know. However, I often exclude the cases that have already been tested through e2e and unit tests. Those tests are valuable to ensure regressions don't occur.

      Please tell me what validation and regression testing you can guarantee by having an LLM generate a lesson plan. Why is it important to have your own unique generated lesson plan, even if that lesson plan is just a common template with synonyms swapped out?

      You've eliminated a bunch of extra work for yourself but have no long-standing regression check on the output of this generator.

      These “actually LLMs are great for X topic” comments are just here for evangelism, then? What do students gain from having you generate partially random lesson plans? Please don't tell me “time savings”.

  • gigatree 3 hours ago ago

    > However, these types of comparative analyses failed because most of my students were not developed enough as writers to analyze the subtleties of meaning or evaluate style. “It makes my writing look fancy,” one PhD student protested when I pointed to weaknesses in AI-revised text.

    IMO teachers, and academia specifically, are to blame for what AI is doing to students, because they're the ones who have defined good writing as “fancy writing”. The hardest part of learning to be a good writer has been undoing all the years of style over substance ingrained in high school and college. If we're taught to write like robots, is it any surprise we just have robots write for us given the opportunity? And is it any surprise we don't see the point of learning to write if robots seemingly do it so much better?

  • mjevans 3 hours ago ago

    IMO most of the reason this teacher quit is students using ChatGPT to get around the puffery and wasteful production assigned.

    It is important to have the skills to be able to do that kind of task, and transforming information rather than just transcribing it is a key way of training our biological supercomputers. However I can't recall even one example of that being the case when I was in school at any level. I think if it ever did happen it was as an accidental side effect rather than an intended part of the process.

    It would be much harder for such generative tools to regurgitate accurate content for novel things. As an example, a report about what the student was presently working on or had just completed in labs, or a design for something they'd like to do. Maybe a focus on the hypothesis or 'request for funding' for a project would better model real world writing and have sufficient local focus to require human writing.

    • SoftTalker 3 hours ago ago

      > I think if it ever did happen it was as an accidental side effect rather than an intended part of the process.

      It's a side-effect but it's not accidental.

      The objective of most school work (below PhD-level research) is knowledge mastery and skill development. This comes from repeated "practice" assignments. Look at any code you wrote as an undergrad compared to what you write today. You weren't inventing anything new, and you will probably laugh at your coding style and quality, but that puffy and wasteful production was what developed the skills to think and the mastery of the information and techniques necessary to move on to solving novel problems and perhaps creating new approaches.

      • mjevans 2 hours ago ago

        The rote repetition for math is more obvious. Drilling in the repeated proof that yes, this is how things proceeded from start to finish and it is known.

        However at least my History classes wanted to instill trivia, rather than a general overview and mastery of the general course of events and knowledge. Precise dates and figures, rather than knowing what to lookup if I ever did need __specific__ data. Which is exactly what I'd want to do if I did have a need for that data, to make sure I had the correct date, spelling, and maybe refresh my context years or decades later.

  • smokel 3 hours ago ago

    The real problem seems to be with the testing and grading system, and the fact that some people game them to receive some title or other credentials.

    If one is really interested in learning to write well, or to understand what good writing is, then this particular teacher might still have liked their job.

    That approach probably doesn't make much money though.

    • doctorpangloss 3 hours ago ago

      I don't know. When have you heard a single student or educator in a class suggest, "All submitted papers should be available to be read by everyone in the class"? Think deeply about the impacts of that. Other than a creative writing class, I can't recall a single time when I could read someone else's papers without their express permission. I'm bringing this up because you're talking about a tired trope, that making money or credentials are the conspiratorial force deciding everything you don't like in education. That trope isn't specific to writing, and thus doesn't really engage with the article; when really, there are much simpler and more toxic aspects of writing education that educators could reverse in an afternoon.

      • SoftTalker 2 hours ago ago

        In 10th grade English, once a week we as a class reviewed one student's writing (without being told who the student was). By the end of the year everyone had a "peer review" of his or her writing at least once.

      • smokel 3 hours ago ago

        I'm not sure I understand your argument, except for the needless stab.

        All master's theses at my university were publicly available at the library, apart from some written at companies which did not allow this.

        Do you suggest that group reading would be a better approach for grading texts?

  • JasserInicide an hour ago ago

    Solution: Go back to in-person pen/paper tests while having those bags you put your phones into that you get back at the end of the test.

  • indigo0086 2 hours ago ago

    Would he have this problem with smaller classrooms of kids who elect to be in the class rather than attending out of academic obligation? There will always be kids who skate by; it just seems like AI is exposing a limitation of choices for students in higher education.

  • gmaster1440 3 hours ago ago

    I imagine it's difficult to be a good teacher and find effective ways to encourage students to rigorously think about things they care about in spite of the discomfort it might cause.

    I also believe increasingly capable and sophisticated AI systems will play a formative role in transforming education, not as the current chatbots that are disrupting education as mentioned in the article, but as active participants in the reimagined classrooms of the future. The transition will probably be rough, but it has the potential to bring about a better future and more fruitful learning and writing.

  • baudpunk 3 hours ago ago

    I'm excited for these kids, to be honest. My experience in the education system in the 90's was a goddamn nightmare. I didn't make it to the 9th grade. It just wasn't designed for someone with my ADHD and chaotic situation at home. I didn't care about most of the subjects they were teaching me, and I would get beaten regularly for doing poorly. I get hyper focused on things I care about, and that system provided very few things that I cared about. Today, I'm a senior DevOps engineer. Guess what I do care about?

    And it's not just that I only care about computers. I became an autodidact after I left school, and learned about the things that interested me, and only those things. I still got a great education and know a lot of things that provide value to society, and enrich others. It was just that the education system packaged my value as a human being into one big bundle that was graded in aggregate.

    I have high hopes that our world's societies can have such an amazing tool at their disposal that kids don't feel like they have to cram the entirety of human existence into their brains for 12/16/18/20 years or suffer the consequences of a failed life; that they can be productive through a creative use of the tools at their disposal, and feel accomplished even if their brains don't work the same way as others'.

    Not to mention the social benefits of having nearly instantaneous fact checking available, and building their opinions around it. Then they can also be good people, instead of allowing lazy idiot talking heads to convince them that their situation is an immalleable doom spiral, locking them into an ecosystem of fear and idolatry whose only return is manifest destiny.

  • ChrisArchitect 3 hours ago ago

    Related:

    The Elite College Students Who Can't Read Books

    https://news.ycombinator.com/item?id=41707605

  • lucianbr 3 hours ago ago

    > I noted where arguments were unsound. I pointed to weaknesses such as stylistic quirks that I knew to be common to ChatGPT (I noticed a sudden surge of phrases such as “delves into”). That is, I found myself spending more time giving feedback to AI than to my students.

    > So I quit.

    This strikes me as a non-sequitur. Her students were making a certain class of mistakes, so... she quit? Don't students always make mistakes of one kind or another? Teach them to do better, in this case by not using AI, or revising manually the AI output, or some other way. Isn't that the job?

    • raphael_kimmig 3 hours ago ago

      Can’t you empathise? The thought of having to grade and give advice on AI generated content makes me want to run away and live in a remote forest.

      • lucianbr 3 hours ago ago

        Well I am not very good at empathy, indeed. Sorry.

        But really. Teachers have complained about stupid or uninterested kids forever. This does not make schools pointless. I am not good at teaching, but some people are, and they manage to do it despite setbacks. Is this AI thing really insurmountable? Did she at least attempt to fix the problem before quitting?

        Feels to me like "we've tried nothing, and we're all out of ideas".

    • SoftTalker 3 hours ago ago

      He did those things. He said his students recognized the flaws and hazards of relying on AI. But they used it anyway.

    • righthand 3 hours ago ago

      Yeah it seems like they quit to make a point but the point is lost on me unless the point is that they can write an article about how they quit.

  • banku_brougham 3 hours ago ago

    It bodes ill.

    The hand of fate doth steer our course,

    As wisdom wanes to machines' force.

    A hollow voice, devoid of soul,

    Whispers false, yet takes its toll.

    Oracles warned of knowledge lost—

    Now we reap the bitter cost.

  • 3 hours ago ago
    [deleted]
  • baggy_trough 3 hours ago ago

    The students use AI to complete assignments; the teachers use AI to create and grade them. It's a giant self-licking ice cream cone.

  • Eumenes 3 hours ago ago

    A friend of mine teaches an intro to web development course. It's a 2-day thing, usually over a weekend at a co-working place that does events, and the stories he tells me are insane. These are mostly college graduates and white-collar professionals who want to learn some code. They are often perplexed by the idea of files and folders. Right-clicking something is novel for some. He gives all the instructions ahead of time so they can get started right away. That first day is almost exclusively helping people get the "dev environment" set up. It's just a folder on their desktop with some HTML, CSS, and JS files.

  • christkv 3 hours ago ago

    Corporate writing (websites, marketing, internal HR emails) before LLMs might as well have been written by an LLM, given its predictable platitudes, word usage, and patterns.

  • FrustratedMonky 3 hours ago ago

    Is this all bad?

    The conversational model of learning, the dialectic, can be better for learning than just reading walls of text.

    Instant access to a lot of information, in a more conversational model. I've found it to be more natural.

    Are we reaching the stage where every kid has a "Young Lady's Illustrated Primer" from the Diamond Age? That would be a good thing.

    • deegles 3 hours ago ago

      If they can solve the hallucination problem, sure. Otherwise we're going to have a lot of people who "know" things about the world that are simply made up.

      • FrustratedMonky 2 hours ago

        Was going to say it was getting better. But all the articles I could find said GPT-4o was worse than previous versions.

      • teaearlgraycold 3 hours ago

        That is and always has been the status quo. The question is if proportionally more or less is made up now. People may be able to learn more true facts thanks to LLMs.

  • whiplash451 3 hours ago

    I feel for the author, being a part-time teacher myself and seeing the impact of ChatGPT first-hand.

    However, I think there is a viable alternative:

    1. Spend the beginning of the year/semester showing the potentially disastrous effects of GenAI (e.g. through various exercises involving GenAI)

    2. Once students have been "vaccinated" against ChatGPT, assume they will still cheat and switch to a type of teaching that leaves little room for cheating with ChatGPT, e.g. long in-person sessions where students write in the classroom. Then grade what they produce there (i.e. don't leave room for them to revise after class).

    The world is changing fast and brutally, but teachers are the tip of the spear against mass enshittification.

  • Der_Einzige 3 hours ago

    Good.

    One of the rock-solid facts of pedagogy is Bloom's two-sigma problem (kids who are tutored one-on-one perform at roughly the 95th-98th percentile compared to standard education).

    https://en.wikipedia.org/wiki/Bloom%27s_2_sigma_problem

    If you're not privately tutoring your kids, you're failing them compared to their potential. AI gives everyone the potential for a private tutor.
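    As a sanity check on the numbers, "two sigma" and the percentile claim do line up: on a normal curve, scoring two standard deviations above the comparison group's mean puts you at about the 98th percentile, and the ~95th-98th range covers effect sizes a bit under 2.0. A quick check using only the Python standard library:

```python
from statistics import NormalDist

# Percentile rank of a student scoring k standard deviations above
# the mean of the conventionally taught comparison group.
for k in (1.6, 2.0):
    print(f"+{k} sigma -> {NormalDist().cdf(k) * 100:.1f}th percentile")
# +1.6 sigma -> 94.5th percentile
# +2.0 sigma -> 97.7th percentile
```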

    Current AI systems are by far the worst that they will ever be. 5-10 years from now, I will trust AI tutors/teachers and will make sure my kids are on the right side of the achievement curve.

    Edit: Seems that HN is full of teachers' pets who don't see them for the authoritarian tyrants that they are (mass reflexive downvotes). I fking hate cultural marxism, but you all who want to downvote me should give Pedagogy of the Oppressed by Freire a read. He's pretty much in 100% agreement with how shitty, authoritarian, and tyrannical mainstream education/educators are - and that book is considered "required reading" in many (most?) education departments.

    https://en.wikipedia.org/wiki/Pedagogy_of_the_Oppressed

    "As of 2000, the book had sold over 750,000 copies worldwide.[1]: 9 It is the third most cited book in social science.[2]"

    • lucianbr 3 hours ago

      > 5-10 years from now

      Doesn't anyone think the future is unpredictable anymore? Does your life experience not include things taking an unexpected turn, ever?

      I can't believe how many commenters here and elsewhere take the future trajectory of AI for granted. Where are the flying cars, the moon bases, the nuclear powered appliances? Things don't always go up on an exponential curve, and even when they do, results are not always what we thought initially. Isn't this obvious, and rather basic?

    • ffujdefvjg 3 hours ago

      > Current AI systems are by far the worst that they will ever be. 5-10 years from now, I will trust AI tutor/teachers and will make sure my kids is on the right side of the achievement curve.

      Pretty sure I read that line about self-driving cars 8 or 9 years ago.

      Realistically, kids will always take the easiest path: why learn when AI will do all the thinking for them? And if AI does all the thinking and people come to depend on it, what autonomy do these people have left? They've been completely neutered.

    • ahmeneeroe-v2 3 hours ago

      This was spot-on until the final sentence. And even that was more an individual position ("I will trust") than a general prescription ("You should trust")...so I am not sure why this is unpopular.

      AI does have the potential to be a private tutor. Think of that scene from Star Trek (2009)[1] or the Young Lady's Illustrated Primer.

      Companies are making 11-12 figure bets that AI will be huge, so this comment isn't out of line.

      [1] https://www.youtube.com/watch?v=KvMxLpce3Xw

    • Miraste 3 hours ago

      The downvotes are likely not because of the criticism of teachers, but the lack of recognition that AIs are not private tutors, and AI companies are working very hard to make their products more authoritarian than any human system could ever be.

    • realce 3 hours ago

      With this progress, what will be the point of being in any upper percentile at all? Everyone will be in it if they have access to the LLM wizard.

      We're outsourcing our sovereignty to a machine blob in a black box, the blob will do the thinking for all of those children.

      edit:

      > for the authoritarian tyrants that they are (mass reflexive downvotes)

      Your response is to rely on opaque knowledge agents produced by the largest and shadiest corporations that humanity has ever known. Your lack of paranoia is a severe vulnerability. Good luck.

      • jimkoen 3 hours ago

        > Everyone will be in it if they have access to the LLM wizard.

        You're arguing that everyone might be good at sports if they had a personal trainer. How is this not a good thing? If you take these models only for the base value they provide, you might be able to eliminate private tutoring from the equation and make student performance more comparable.

        I am far from happy that these models originate from shady for-profit corporations with dubious incentives, but a state actor could just as well train a model for support in education.

        > Your response is to rely on opaque knowledge agents produced by the largest and shadiest corporations that humanity has ever known.

        This is ironic. My teachers organized a big week about the importance of the personal carbon footprint when I was still in high school (~2015). The personal carbon footprint is criticized for shifting the blame for climate change onto personal consumer behavior rather than corporations, and was the subject of a big advertising campaign from BP [0].

        I think you overestimate the average teacher in terms of their ability to think critically and overall skill in their respective subjects.

        [0] https://en.wikipedia.org/wiki/Carbon_footprint#Shifting_resp...

      • ahmeneeroe-v2 3 hours ago

        >what will be the point of being in any upper percentile at all

        Not the OP, but throughout human history, being in the upper percentile for intelligence has been an advantage. The safe bet is that LLMs won't change that.

        >opaque knowledge agents

        By definition this is true for every knowledge agent in history. If Person A is ignorant and Person B offers to teach them, there is always (by definition) an asymmetry. Aristotle was an opaque knowledge agent to Alexander.

      • teaearlgraycold 3 hours ago

        It won’t be a competitive advantage against other humans. But it will improve our species’ ability to communicate and learn.

      • Der_Einzige 2 hours ago

        What's opaque about Mistral Large? https://huggingface.co/mistralai/Mistral-Large-Instruct-2407

        I have total control over the model, literally dozens of generation parameters to play with, and literally thousands of open-source projects like these:

        https://github.com/Mihaiii/llm_steer or https://colab.research.google.com/drive/1a-aQvKC9avdZpdyBn4j...

        which allow me to jailbreak the model to my heart's content - except I didn't need to do that, since Mistral isn't actually aligned anyway!
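        To make "generation parameters" concrete: knobs like temperature and top_p act on the sampling step, not on the model weights. A minimal self-contained sketch of temperature plus nucleus (top-p) sampling (the logits below are made up for illustration; real decoders add top-k, repetition penalties, and more):

```python
import math
import random

def sample_token(logits, temperature=1.0, top_p=1.0):
    """Pick a token index from raw logits via temperature + nucleus sampling."""
    # Temperature rescales the logits: <1 sharpens the distribution
    # toward the argmax, >1 flattens it toward uniform.
    weights = [math.exp(l / temperature) for l in logits]
    total = sum(weights)
    probs = [w / total for w in weights]

    # Nucleus (top-p) filtering: keep the smallest set of tokens whose
    # cumulative probability reaches top_p, then sample within that set.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in kept)

    r, acc = random.random() * mass, 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]

# With a small top_p the sampler collapses to greedy decoding:
# here the top token alone already exceeds top_p, so it is always chosen.
print(sample_token([5.0, 1.0, 0.0], temperature=1.0, top_p=0.5))  # -> 0
```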

        While you're the one relying on "opaque" knowledge agents, I will have DPO'd my open-access model on my custom, huge debate-evidence dataset to be a better competitive debater than any living human on the planet.

        Good luck.