I think the solution is to not aim to go online to "consume content". Instead, go online to learn new techniques and investigate well-reasoned opinions.
Generic "content" is that which fills out the space between the advertisements. That's never been good for you, whether written by humans or matrix multiplication.
You can't control other people, and this article is mostly about the effect of AI on other people, whom you can't control.
I like this take. Unfortunately most people don't have this level of self-control.
Respectfully, I think you're missing the point that this is a societal rather than an individual concern. What will the average person's response to AI be? Probably to not recognize it, let alone spurn it. The cumulative effects of your neighbors, particularly the young ones who will grow up amidst this, or the old and gullible, being led along by computers over years is the thing you need to be more concerned about.
Sure, and there are people who stuff themselves full of fast food, alcohol, and/or cigarettes. I get that those things are different in that it is possible to levy vice taxes on them, but the primary defense is and will be education.
What we can do as technologists is establish clear norms around information junk food for our children and close acquaintances, and influence others to do the same.
It's not going to happen overnight -- as with many such things, I expect it'll take decades of mistakes followed by decades of repairing them. What we've learned from other such mistakes is that saying "feel bad about the dumb thing" ("be worried") is less effective than "here's a smart thing you can do instead".
I’m not sure education or awareness is a solution. It doesn’t hurt, of course, but I think the real issue is that we’re frequently feeling “low energy” (for lack of a better term), so entry barriers become important and least-effort options start to win (“just picking up a phone/tablet” easily wins here most of the time), even if we’re well aware that they’re not as rewarding.
I blame all the background stress and I think it’s a more important factor.
When I look at the state of how humans have manipulated each other, how the media is noxious propaganda, how businesses have perfected emotional and psychological manipulation of us to sell us crap and control our opinions, I don't think AI's influence is worse. In fact I think it's better. When I have a spicy political opinion, I can either go get validated in an echo chamber like reddit or newsmedia, or let ChatGPT tell me I'm a f'n idiot and spell out a much more rational take.
Until the models are diluted to serve the true purpose of the thought control already in full effect in non-AI media, they're simply better for humanity.
ChatGPT has been shown to spend much more time validating people's poor ideas than it does refuting them, even in cases where specific guardrails have supposedly been implemented, such as to avoid encouraging self-harm. See recent articles about AI usage inducing god-complexes and psychoses, for instance[1]. Validation of the user giving the prompt is what it's designed to do, after all. AI seems to be objectively worse for humanity than what we've had before it.
[1]: https://www.psychologytoday.com/us/blog/urban-survival/20250...
Strongly disagree, and you've misread what you've linked. These linked cases are situations where people stay in one chat and post thousands and thousands of replies into a single context, diluting the system prompt and creating a fever dream of hallucination and psychosis. These are also rarely thinking or tool-calling models; the cases rely more on raw LLM generation instead of thinking and sourcing (cheap/free models versus high-powered, subscriber-only thinking models).
As we all know, the longer the context, the worse the reply. I strongly recommend you delete your context frequently and never stay in one chat.
What I'm talking about is using fresh chat for questions about the world, often political questions. Grab statistics on something and walk through major arguments for and against an idea.
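For what it's worth, here's a minimal sketch of the fresh-context habit I mean, using the OpenAI Python client; the model name and questions are just illustrative assumptions:

```python
# One-shot querying: every question gets a brand-new context, so earlier
# turns can never dilute the system prompt or steer the model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_fresh(question: str, model: str = "gpt-4o") -> str:
    # A fresh messages list per call means a fresh context per question.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Independent calls; nothing carries over between them.
print(ask_fresh("Walk through the major arguments for and against a carbon tax."))
print(ask_fresh("What do official statistics say about US crime trends since 2010?"))
```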
If you think ChatGPT is providing worse answers than X.com and reddit.com for political questions, quite frankly, you've never used it before.
Try it out. Go to reddit.com/r/politics and find a +5,000-upvote comment about something, or go to x.com and find the latest Elon conspiracy, and run it by ChatGPT 5-thinking-high.
I guarantee you ChatGPT will provide something far more intellectual, grounded, sourced and fair than what you're seeing elsewhere.
Why would an LLM give you a more "rational take"? It's got access to a treasure trove of kooky ideas from Reddit, YouTube comments, various manifestos, etc etc. If you'd like to believe a terrible idea, an LLM can probably provide all of the most persuasive arguments.
Apologies, it sounds like you have no experience with modern models. Yes, you can push and push and push to get it to agree with all manner of things, but off-rip, on the first reply in a new context, it will provide extremely grounded and rational takes on politics. It's a night-and-day difference compared to your average reddit comment or X post.
In my years of use and thousands and thousands of chats, I have literally never seen ChatGPT provide a radical answer to a political question without me forcing it, heavy-handedly, to do so.
Or seek out specific entertainment.
The author needn't regret not publishing this two years ago; it's a thought that had occurred to pretty much everyone long before then. It's just not clear that anything can be done to stop the snowball from gathering speed.
I think it’s more that there’s no will to do anything about it. As a piece earlier this week pointed out, nothing about tech is genuinely inevitable[0]. There are humans making decisions to keep the snowball gathering speed.
0: https://deviantabstraction.com/2025/09/29/against-the-tech-i...
> nothing about tech is genuinely inevitable
This reminds me of when everyone was saying that "everything on the internet is written in ink" - especially during the height of social media in the 2010s. So imagine my surprise in the first half of the 2020s when tons of content started getting effectively deleted from the internet - either through actual deletion or things like link rot. Heck, I literally just said "the height of social media" - even that has pulled back.
So yeah, remember that tech ultimately serves people. And it only happens so long as people are willing to enable it to happen.
I think you are mistaken.
I suspect almost all of that data still exists - it just isn’t readily available.
In the desperate end-game of this most recent round of “it’s shit, but what if we collected enough of it?” every last bit of human generated content will be resurrected.
Meanwhile, I do a lot of photography and haven't posted anything on Instagram in the past 2 years because the AI garbage and influencer garbage now gets more attention than real photos of places on Earth you can actually go to. It feels not worth my time to post anything, considering how much effort it takes to post, time posts, and assemble the hashtag soup; if you don't do all of that, the platform doesn't show your images to people anyway.
switch to another platform? flickr? 500px? Or did you just want the likes? I still post a curated set of my photos to flickr. All CC licensed FWIW. There's no AI/influencer stuff there.
Nobody I know looks at those platforms. 500px is filled with bots, Flickr is unknown to people under 30. If humans don't look at it, it's not worth my time either.
I want a platform that real humans, including some sizeable chunk of my social circle, look at, and is filled with real content.
Agreed, once Instagram started favoring reels/video, I stopped posting my photography there.
I've been looking at using Photo.glass, but the subscription cost puts me off a bit emotionally, after having been told to believe that 'social media is free' by the tech oligarchs. Logically, though, I know that it theoretically attracts a higher bar of photographers who are willing to pay for entry and support a new form of ad-free internet through that subscription - similar to the idea of paid search engines.
I only just realized that the site is actually https://glass.photo/, not photo.glass. I can't edit the comment now, but the suggested domain in the original comment leads to some spam site. I don't recommend visiting it.
While I agree, I do think this way of looking at things is kind of insightful. I hadn't thought of it this way, and it really rings true:
> I find my fear to be kind of an ironic twist on what the Matrix foresaw—the AI apocalypse we really should be worried about is one in which humans live in the real world, but with thoughts and feelings generated solely by machines. The images we see, the words we read, all generated with the intent to control us. With improved VR, the next step (out of the real world) doesn’t seem very far away, either.
We've had more than half a century of sci-fi literature, for example, describing that future, and over all those decades nobody was able to come up with even a half-good, plausible/feasible idea of how to deal with it. That suggests that stopping that snowball is outside the capabilities of human intelligence. Personally, I read a lot of sci-fi in my youth and I'm prepared to accept such a fate for myself and for us, even happily working where I can to speed it up (the faster the changes, the faster the human evolution, or at least adaptation, and what can be more exciting than that?).
Current humans can't even deal with the very simple and obvious issue of global warming. It thus seems very unreasonable to expect any effective handling of significantly more complex issues. And so, if not evolution, then at least very accelerated adaptation is in order.
In the long run, the internet will be so riddled with trash that no one will trust it. Instead people will turn to authorities they trust for the truth the same way they did with encyclopedias and local papers. Information provenance will be a massive market.
The return to that world will be very painful and chaotic however.
Nah, they will just find the content that confirms their bias and not seek truth. This is essentially already the state of affairs on the internet.
Propaganda and lies can defeat all sorts of human constructs—they can cause people to destroy their institutions and governments. However, they eventually come into contact with reality and lose miserably.
>Instead people will turn to authorities they trust for the truth the same way they did with encyclopedias and local papers
I think a large portion of the population actively distrust experts.
Which is why I said "authorities they trust."
I think this is happening already, no? People seem to have found their enclaves, each with its distinct thought leaders, and now follow that enclave and believe all others to be full of lies and deceit. The return to a central truth seems like a pipe dream. The concept of truth and fact has been fractured seemingly beyond repair. If it is repaired, I don't think it will happen over any medium controlled by profiteering corporations: they have a vested interest in the fracture. And yet all forms of modern communication fall under this umbrella. So I believe we are at an impasse.
> The concept of truth and fact has been fractured seemingly beyond repair.
It’s always been this way. That you thought otherwise is just evidence of how good a central power was at controlling “the truth”.
Trust doesn’t scale. There are methods that work better than others, but it’s a very hard problem.
Yes - but the authority most people have decided to trust is the Algorithm or their favorite LLM.
Not really. In my opinion the current behaviour mainly plagues big and very large platforms/communities (think instagram, facebook, reddit).
I think this will create a push for going back to smaller “gated” communities: think phpbb forums from the early 2000s, maybe with invitation-only sign up (similar to lobste.rs, where somebody already in must invite you, and admins can track who-invited-who).
It would probably be a better experience overall.
I have an issue with "inherently superior ... by dopamine output" part. It's the foundation of the whole article/worry but it's not supported by anything (The Matrix quotes don't count), making the whole article hang on a dubious premise of impending doom that is not shown to exist in reality.
LLMs are the latest step in decades of technological and social changes that leave people less connected and less capable in exchange for more comfort. I think it's likely that AI technology eclipses humans at least partially by atrophying our own skills and abilities, particularly 1. our ability to endure discomfort in service of a goal and 2. our capacity to make decisions.
I don't really know what to do about it. Even with ground rules of engagement, we all still need to participate in a larger culture where it seems like a runaway guarantee that LLMs will erode more critical skills, leaving us with less and the handful of companies who develop this tech with more.
I'm slowly changing my life around what LLMs tell me, but not necessarily in the ways you'd expect:
1. I have a very simple set of rules of engagement for LLMs. For work, I don't let LLMs write code, and I won't let myself touch an LLM before suffering on a project for an hour at least.
2. I am an experienced meditator with a lot of experience in the Buddhist tradition. I've dusted off my Christian roots and started exploring these ideas with new eyes, partially through a James Hillman-esque / Rob Burbea Soulmaking Dharma lens. I've found a lot of meaning in personal fabrication and myth, and my primary practice now is Centering Prayer.
3. I've been working for a little while on a personal edu-tech idea with the goal of using LLM tech as an auxiliary to help people re-develop lost metacognitive skills rather than use LLMs as a crutch. I don't know if this will ever see the light of day; it is currently more of a research project than anything, and it has a certain kind of iconoclastic frame, like Piotr Wozniak's, around what education is and what it should look like.
I think humans having a platform to tell the masses to "be worried" is as troublesome these days as AI content. Mass media that can be manipulated has been around for 100 years. I don't think AI is unique.
What's different is the available leverage.
It's fairly trivial to write code that can autogenerate hundreds or even thousands of AI-generated videos using Veo 3 with individual characters to push any narrative you'd like and push to Instagram or TikTok.
That's way scarier to me than a newspaper having a bias, or someone with an audience publishing a controversial blog post.
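To make the leverage concrete, here's a rough sketch of what such a farm could look like. generate_video and upload_to_platform are hypothetical stand-ins for the actual Veo 3 and platform-upload APIs, not real calls; only the loop structure is the point:

```python
# Hypothetical sketch of a narrative-pushing video farm. The two stubs
# stand in for real APIs (e.g., Veo 3 for generation, a platform's
# upload endpoint for distribution).
import itertools

CHARACTERS = ["retired teacher", "young nurse", "small-town mechanic"]
TALKING_POINTS = ["policy X failed", "candidate Y is dishonest"]

def generate_video(prompt: str) -> bytes:
    raise NotImplementedError("stand-in for a video-generation API call")

def upload_to_platform(video: bytes, caption: str) -> None:
    raise NotImplementedError("stand-in for a platform upload API call")

# Every character/talking-point pair becomes its own video, each fronted
# by a distinct synthetic "person" delivering the same narrative.
for character, point in itertools.product(CHARACTERS, TALKING_POINTS):
    prompt = f"A {character} speaking earnestly to camera, explaining that {point}."
    upload_to_platform(generate_video(prompt), caption=point)
# 3 characters x 2 points = 6 videos; growing the lists into the hundreds
# is just a bigger cross product.
```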
Indeed. Totalitarians of the past century didn't need any AI to control the masses and cause more than 100M deaths. And those ideologies are far from dead.
Television and the commercial Internet are optimized to consume as much life as possible so that part of the captured attention can be auctioned to advertisers and other propagandists for pennies a minute. Returning to doing the same thing but Certified With No AI™ doesn't substantially reduce the badness of the thing.
> Increasing numbers of people who consume content on the Internet will completely sacrifice their ability to think for themselves.
Bless the author's heart.
All the major social media apps have been doing machine-learning-driven getNext() for years now, well before LLMs were even a thing. The YouTube algorithm was doing this a decade ago. This isn't on the horizon; we've already drowned in it.
Watch a teenager for 3 hours just endlessly scrolling.
Most of the content is basically Idiocracy's "Ow my balls".
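For anyone who hasn't built one: the heart of that getNext() is just "rank candidates by predicted engagement". A deliberately tiny sketch; the stand-in scores below are where real systems plug in a large learned model:

```python
# Toy engagement-driven feed: serve whichever candidate the model
# predicts will hold the user's attention longest.
from typing import Callable

def get_next(candidates: list[str],
             predicted_engagement: Callable[[str], float]) -> str:
    # The entire "algorithm": maximize predicted engagement.
    return max(candidates, key=predicted_engagement)

# Stand-in predictor; a production system uses a learned model here.
scores = {"cat video": 0.90, "hour-long lecture": 0.20, "rage bait": 0.95}
print(get_next(list(scores), scores.get))  # -> "rage bait"
```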
Even as horrible as the current state of that already is, there is a difference between letting AI pick the next video in line and having the next video be MADE by AI.
Algorithm-chosen, human-made content is on some level preferable to algorithm-chosen, algorithm-created content, right?
That's what most people would say - but why do they say this?
As I understand it:
1. Because machine-generated content is not as good. Recent technical improvements are (IMHO) showing obvious and significant gains over last year's SOTA tech, indicating that the field is still very green. As long as machine-generated content is distinguishable, as long as there are quirks in there that we easily notice, of course it'll be less preferable.
2. Our innate "our vs foreign" biases. I suspect that until something happens to our brains, we'll always tend to prefer "human" to "non-human", just like we prefer "our" products (for arbitrary definition of "our" that drastically varies across societies, cultures and individuals) to other products because we love mental binary partitioning.
Not always! I have definitely had some AI-generated songs ("BBL Drizzy" being a notorious example) that were stuck in my head for weeks. I think the music industry is at the greatest risk in the near term.
BBL Drizzy seems to me like a case where the cultural zeitgeist was more important than the actual additions made by AI. The lyrics were human - King Willonius admitted as much - so wasn't it just Udio AI reading them out + sampled backing track? Then Metro Boomin remixed the far more popular version by sampling bits and pieces, and I think that his contributions were 100% transformative. There's no way BBL DRIZZY BPM 150.mp3 could have been made by an AI any time soon
A few years ago I was on an airplane back from Asia and I saw for the first time somebody using both hands to scroll tiktok.
A woman in front of me had her phone cradled in both hands, with index and thumb from both hands on the screen - one hand was scrolling and swiping and the other one was tapping the like and other interaction buttons. It was at such a speed that she would seemingly look at two consecutive posts in 1 second and then be able to like or comment within an additional second.
It left me really shaken as to what the actual interaction experience is like if you’re trying to consume short-form content but you’re only seeing the first second before you move on.
It explains a lot about how thumbnails and screenshots and the beginnings of videos have evolved over time in order to basically punch you right in the face with what they want you to know.
It’s really quite shocking the extent to which we’re at the lowest possible common denominator for attention and interaction.
And they do a terrible job. At least on the surface, in what's being fed to the humans.
I find it funny that the ending is an "in summary"... was it AI generated?
The thing to ask yourself: does what I'm reading provide any value to me? If it does, then what difference does it make where it comes from?
Ok, as the author I have to admit that's a hilarious observation. But no, I wrote this myself.
> was it AI generated?
You're absolutely right!
But seriously, if you don't know that it's incorrect information, it does make a difference. Knowing it was produced by AI at least gives you foreknowledge that it may include hallucinations.
Before LLMs were mainstream, rationalists and EA types would come on Hacker News to convince people that worrying about how "weak" AI would be used was a waste of time, because the real problem was the risk of "strong" AI.
Those arguments looked incredibly weak and stupid when they were making them, and they look even stupider now.
And this isn't even their biggest error, which, in my opinion, was classifying AI as a bigger existential risk than climate change.
An entire generation of putatively intelligent people lost in their own nightmares, who, through their work, have given birth to chaos.
Weak AI is a problem, but it isn't going to lead to 100% human extinction.
Human extinction won't happen until a couple of years later, with stronger AI (if it does happen, which I unfortunately think it will, if we remain on our current trajectory).
"This theoretical event that I just made up would lead to 100% human extinction"
Neat, go write science fiction.
Hundreds of billions of dollars are currently being lit on fire to deploy AI datacenters while there's an ecosystem-destabilizing heat wave in the ocean. Climate change is a real, measurable, present threat to human civilization. "Strong AI" is something made up by a fan fiction author. Grow up.
It can't be true because it sounds like science fiction to you?
Everything about every part of AI in 2025 sounds exactly like science fiction in every way. We are essentially living in the exact world described in science fiction books this very moment, even though I wish we didn't.
Have you ever used an AI chatbot? How is that not exactly like something you'd find in science fiction?
The idea of “Strong AI” as an “existential risk” is based entirely on thought experiments popularized by a small, insular, drug-soaked fan fiction community. I am begging you to touch grass.
> distracting us from a scarier notion
A more immediate notion, perhaps, but definitely not scarier than human extinction.
At the risk of being overly utilitarian here: it's not really an issue that people are manipulable, but that they are manipulated into doing the wrong things (consumerism, political divide-and-conquer strategies),
and rejecting manipulation from a deontological stance reduces agency and output for doing good in the real world.
manipulation = campaigns = advertisements = psyops (all the same, different connotations)
Because you and I will not be able to tell whether something is machine- or human-generated, and the machine-generated stuff will get more clicks than the human-generated stuff, it’s likely that the majority of popular online content (and even printed content post-2023) will have been created by AI (and perhaps solely by AI).
Sorry, but when you make claims like this, it just tells me that you are not very familiar with popular culture. Most people hate AI content and at best find it a meme-esque joke. And young people increasingly get their news from individuals on TikTok/YouTube/etc. - who are directly incentivized to be as idiosyncratic and unique (read: not like AI) as possible in order to get followers. Platforms like YouTube do not benefit from their library being entirely composed of AI slop, and so will be implementing ways to filter AI content from "real people" content.
Ultimately AI tools are mostly going to be useful in situations where the author doesn't matter: sports scores, stock headlines, etc. Everything else will likely be out-competed by actual humans being human.
> Most people hate AI content
I think you're overgeneralising here. People don't hate AI content. Just content so low quality that they recognise it as AI. This is not universal and the recognition will drop further: https://journals.sagepub.com/doi/10.1177/09567976231207095
> from individuals on TikTok/YouTube/etc. - who are directly incentivized to be as idiosyncratic and unique (read: not like AI
AI content can be just as unique. It's not all-or-nothing. People can inject a specific style and direction into otherwise generated content to keep it on brand.
Any attempt to create an AI “influencer” has been met with massive backlash.
At best you’re going to get some generically anonymous bot pretending to be human, that has limited reach because they don’t actually exist in the real world. Much of the media influence game involves podcasts, events, interviews, and a host of other things that can’t be faked.
I just don’t really see what scenario the doomsayers are imagining here. An entire media sphere of AIs that somehow shift public opinion without existing or interacting with the real media world? The practicalities don’t make sense.
> Much of the media influence game involves podcasts, events, interviews, and a host of other things that can’t be faked.
Have you not been following how fast video generation is improving? We're not far off from convincing fake video interviews.
The backlash only happens when people can tell it's AI
Again, I don’t really see what scenario here is actually some kind of doomsday.
So someone makes a fake video of famous person X saying an absurd thing on Joe Rogan’s podcast.
It’s not on the official Rogan account, but just on some low quality AI slop channel. Maybe it fools a handful of people…but who cares? People are already pretty trained to be skeptical of video.
I think we’ll mostly just see a focus on identity verification. If the content isn’t verified as being by the real person, it’ll just be treated as mindless entertainment.
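One plausible building block for that verification layer is ordinary public-key signing of published content (the C2PA provenance effort is a real-world cousin of this idea). A minimal sketch with Ed25519 via the Python cryptography library; the key handling here is purely illustrative:

```python
# A creator signs their content with a private key; anyone holding the
# published public key can check it really came from them, unaltered.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # kept secret by the creator
public_key = private_key.public_key()       # published on their profile

content = b"Full transcript of the real interview."
signature = private_key.sign(content)

try:
    public_key.verify(signature, content)   # raises if content was altered
    print("Verified: published by the keyholder.")
except InvalidSignature:
    print("Unverified: treat as mindless entertainment.")
```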
It's my opinion that there is going to be a market for wrangling AI content from a consumer perspective, to help maintain human-to-human knowledge transfer. I just have no idea what that looks like.
I'm not really sure that the title filter shortening "You Should Be Worried" to just "Be Worried" is making it any less clickbaity....
If the optimizing function is engagement, it wouldn’t be too different from what we’re doing now. It’s just what humans want, isn’t it?
> What am I personally going to do about this? Well, to start, I’m going to start taking content way less seriously unless it was created before 2022
There's an old fable about this, The Boy Who Cried Wolf, about people adapting to false claims: they just discount the source, which is what is going to happen with social media once it is dominated by AI slop. Nobody will find it worth anything anymore, and the empires will melt down. I'm not on any of the big social sites, but I'm already watching a lot less on YouTube, basically only watching channels that I know to be real people. Outside of those, my recommendations are mostly AI garbage now.
Honestly, the biggest way in which LLMs have changed society is in the desperate, almost pathetic way every business leader, career influencer, and advice guru insists that they must use AI, that you should "learn" AI, that AI is taking over.
Anyway, in terms of cultural change, I think the emerging image and video models will be a lot more disruptive. Text has been easy to fake for a while now, and barely gets people's attention anymore.
I think there is a big difference from other fads here; listing some from memory: SL, Cloud Computing, Web 3.0, NFC, Big Data, Blockchain, 3D printing, IoT, VR, the metaverse, NFTs.
If we plot all of these on a scale of how much they impacted the day-to-day experience of an average user, there is something highly unusual about AI. The slop is everywhere; every single person who interacts with digital media is affected. I don't really know what this means, but it is pretty unusual when compared with the other fads.
Most comments seem to agree with the article, and I don't quite understand why.
People have been manipulated since forever, and coerced before that. You used to be burned or hanged if your opinions differed even a little from orthodoxy (and orthodoxy could change in the span of a couple of years!)
AI slop is mostly noise. It doesn't manipulate, it makes thinking a little more difficult. But so did TV.
This is not really comparable with TV, not even close.
There was/is a relatively small number of channels you have access to, and effectively all your neighbours and friends got the same content.
Short-form video took this to the extreme by figuring out what specific content you like and feeding you just that; as a result, people spend significantly more time watching TikTok and YouTube than they (or the previous generation) did with TV. TV was also often on in the background, not really actively watched, which is not the case on the internet.
Now, once you add AI-generated content on top, combined with AI recommendation systems, the problem becomes even worse: more content, a faster feedback loop, an infinite number of "creators" tailored to whatever your sweet spot is.
> AI slop is mostly noise. It doesn't manipulate
Not until you start mass-producing fake photos, fake videos, fake audio, put all of it onto social media, and shake shake shake.
> Best-in-class AI detection is barely better than random chance and will only get worse
Really? Because I still see blatantly obvious AI-generated results in web searches all the time.
That's content where the author didn't care about masking. Don't mistake "there's lots of AI content that's easy to identify" for "AI content is always easy to identify".
Sure, but the average level of effort out there is so low that I'm not worried about losing my ability to notice this content instinctively any time soon. That's kinda why these tools are popular in the first place, after all.
(Also, a lot of AI operators come across like they wouldn't be capable of fixing those issues even if they cared.)
No, I shall not "be worried." Unless you are some spineless blob of agencyless ooze, you should be able to parse reality in such a way that your day need not be filled with worrying.
> Therefore, increasing proportions of people consuming text online will be unwittingly mind-controlled by LLMs and their handlers.
The "and their handlers" part is the part I find frightening. I would actually be less concerned if the AIs were autonomous.
Reminds me of a random podcast I heard once where someone was asked: "if you woke up in the middle of the night and saw either a random guy or a grey alien in your bedroom, which would scare you more?" The person being interviewed said the dude, and I 100% agree. AI as proxy for oligarchs is much scarier than autonomous alien AI.
Or stand together and demand that the madness stop, rather than pretending there's nothing to do about it; that could actually help improve the situation.
The people involved in making these decisions deserve to be locked up for life, and I'm sure they will be eventually.
The genie's out of the bottle. I think it's better that everyone have access to it and be fully aware of its capabilities, rather than it being unknown to everyone and under the control of specific entities.
>The genie's out of the bottle. I think it's better that everyone have access to it and be fully aware of its capabilities, rather than it being unknown to everyone and under the control of specific entities.
The majority of people only have access to proprietary models, whose weights and training are closed source. The prospect of a populace that all outsource their thinking to Google's LLM is horrifying.
I agree.. but what's the solution there? Somehow enforce global regulation on it?
Are there any grassroots(?) organizations doing activism in the AI space, like the FSF and ACLU, with local chapters? If not, it might be time for something like that, though with all the money flooding into LLMs (to say nothing of LLMs' manipulative power, if one put its mind to it), we probably don't stand a chance.
Technological inevitability is a plague. There was a good article shared on HN about this the other day.
What new human madness has ever been stopped?
At risk of Godwinisation, there's a very obvious example.
As we are recently seeing, it was only paused temporarily.
Wouldn't exactly call that a grassroots effort, though...
Human cloning, nuclear bombs (other than for sabre rattling)... to name a couple.