I only see two outcomes for this problem: an internet of verified identities (start by uploading your ID card), or a paid internet, where it doesn't matter who you are, but since you're going to pay for that email or that Reddit account, the probability that it's AI spam is greatly reduced.
I want cool cryptography where I can, e.g. verify where I'm writing from and what my age is without giving away any other information.
Or if I want, I can verify that I'm myself, and eschew anonymity, and certain platforms should only accept contributions from people who don't hide their identity.
People in the town square only see my face; they do not automatically have my name, birth date and ID available unless I give those to them or they go to great lengths to obtain them (il)legally.
“A zero-knowledge rollup (zk-rollup) is a layer-2 scaling solution that moves computation and state off-chain into off-chain networks while storing transaction data on-chain on a layer-1 network (for example, Ethereum). State changes are computed off-chain and are then proven as valid on-chain using zero-knowledge proofs.”
It's kind of bizarre that Zoom is still bothering to keep the lights on at Keybase when it's been completely fossilized for six years now. The writing is so obviously on the wall that nobody should be relying on it for anything, and yet they just won't let it die.
It's not fossilized, it's just that no one uses it. Put hot chicks on there or make it mandatory for logging into Slack, and suddenly everyone will be using keybase.io. Honestly, I think web of trust is a good idea, and if a webapp can make it seem easy or intuitive, then I'm all for it.
We're scratching our heads wondering why there's no forward motion when it's simply that no one is pushing it.
They haven't added or really changed anything since the acquisition AFAICT, it's just trucking along exactly as it was the day Zoom bought them out. Twitter account proofs were broken by the API changes years ago and nobody is at the wheel to fix or even just deprecate them.
Switzerland just voted recently to officially implement Selective Disclosure JWT, which does exactly all that. Social network registration can ask "are you 18?" and run with that - and only that. Or the club entrance. Or whatever, because it's all controlled by yourself in your app.
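For what it's worth, the core trick in SD-JWT is easy to sketch: the issuer signs only salted hashes of the claims, and the holder reveals just the salt-plus-value pairs they choose, so the verifier learns "over 18" and nothing else. A rough toy version (no real JWT signing, all names made up):

```python
import base64, hashlib, json, secrets

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_disclosure(claim_name, claim_value):
    # A random salt per claim, so the digest alone reveals nothing.
    salt = _b64(secrets.token_bytes(16))
    disclosure = _b64(json.dumps([salt, claim_name, claim_value]).encode())
    digest = _b64(hashlib.sha256(disclosure.encode()).digest())
    return disclosure, digest

# Issuer side: hash every claim; only the digests go into the signed token
# (the actual signature is omitted in this sketch).
disclosures, signed_digests = {}, set()
for name, value in [("name", "Alice"), ("birth_year", 1990), ("over_18", True)]:
    d, h = make_disclosure(name, value)
    disclosures[name] = d
    signed_digests.add(h)

# Holder side: reveal only the "over_18" disclosure to the bouncer.
revealed = disclosures["over_18"]

# Verifier side: recompute the digest, check it's covered by the signature.
assert _b64(hashlib.sha256(revealed.encode()).digest()) in signed_digests
padded = revealed + "=" * (-len(revealed) % 4)
salt, claim, value = json.loads(base64.urlsafe_b64decode(padded))
print(claim, value)  # over_18 True
```

The name and birth-year digests stay opaque to the verifier; only the disclosed claim can be checked against the signed token.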
That seems like a good idea. The question is how the JWT is generated. A standard one would be more akin to a traditional crypto keypair. That is a "signal" key insomuch as it tells us who controls an account. It can't tell us the owner is the controller and that is the current weakness of crypto right now. To know the owner, we need another type of keypair to go alongside the traditional kind. That would be a "tone key" and is generated by a refreshing seed derived from the entropy of long-running, unfakeable conversations. The same way a friend might recognize us as being ourselves.
But you don't need to prove to everyone else that you are yourself, do you? You are only asked whether you're 18; the bouncer doesn't care about your name. So you can still hold someone else's phone (just as you could borrow someone else's ID last summer) and fake their answer.
Anonymity is important for many things. But on the flip side, it's responsible for many of the issues with the internet today, because it makes moderation pretty much impossible (anyone can always just create a new account).
What we're missing is a way to have cryptographically secure pseudonymity: you log in to a website, you don't give any information whatsoever, but you cannot make two different accounts.
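One naive way to get part of the way there (ignoring the hard parts that zero-knowledge schemes actually solve): derive a per-site pseudonym from a single, issued-once secret, so the same person always maps to the same account on a given site, while their accounts across sites stay unlinkable. A sketch with hypothetical names; note the issuer could still link accounts, and nothing here stops people from sharing secrets:

```python
import hashlib, hmac

# Hypothetical: each citizen holds one secret issued exactly once (say, in
# an identity wallet). The per-site pseudonym is a keyed hash of the site.
def pseudonym(citizen_secret: bytes, site: str) -> str:
    return hmac.new(citizen_secret, site.encode(), hashlib.sha256).hexdigest()

secret = b"issued-once-per-person"   # stand-in for the real credential
a = pseudonym(secret, "example-forum.org")
b = pseudonym(secret, "example-forum.org")  # same site -> same account
c = pseudonym(secret, "other-site.net")     # other site -> unlinkable name
print(a == b, a == c)  # True False
```

The real cryptographic work is making the site unable to learn the secret and the issuer unable to correlate pseudonyms; this only illustrates the "one person, one account per site" shape of the problem.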
Most likely because your second sentence is impossible in one way or another.
Even if it's some kind of government-encoded key, governments cannot be trusted not to create imaginary people and hand them out to companies like Palantir for large-scale population manipulation.
I can imagine a government creating a moderate number of fake profiles for use by police and intelligence services, and honestly I'm fine with that, but creating a ghost population for propaganda purposes is entirely different, and if you live in a country where you cannot trust your government not to do something that bad, you're already screwed.
In any case, it is still better than the status quo where even foreign authoritarian states can do that in countries where the local government wouldn't.
Do you propose to only let people from a whitelist of countries use the internet? Because many countries would have no qualms giving their troll farms a bunch of fake electronic IDs.
Paid option doesn't really deter this behavior, it encourages it - a botter will see a price tag on a "real" account (see what happened to twitter's blue checkmark sub) and go oh goody, I can pay for people to think I'm real.
If you make the price high enough, sure, but I'm not sure you can find a price that simultaneously 1) deters bot traffic and 2) is appealing to actual users.
in other words, it just becomes the cost of doing business.
the individual user is now priced out and cannot speak candidly and anonymously, while large, wealthy orgs simply price that into their market-capture and consensus-building techniques
I'm trying to imagine this new paid app from different angles and in different versions, i.e. a new Reddit... Pay to be in there, get paid for being in there, only humans can be in there, ads pay for humans being there, humans use some government online ID system, karma systems improve so that only humans are rewarded, Voight-Kampff captchas, humans mail the app their DNA to verify their identity, humans log in at a 24/7 street login post (think phone booths)... I just don't see any good, unbreakable, viable and/or sustainable way. We just need to get used to coexisting with bots everywhere while we adjust our expectations and social codes. Fast forward until AI is massively on the streets and indistinguishable from us physically (or very distinct and fascinating), and all of this supposing that we can keep them under control...
Dead internet is the prequel to dead world, let's seize the opportunity to learn how to coexist with synthetics and develop the code that will make life with a higher intelligence species possible on Earth. And remember, we humans vary widely, and just like there are people happy to share LinkedIn slop today, there will be humans gladly living surrounded exclusively by overpowering synthetics. So lower your expectations for universal solutions and focus on niche.
I see a simpler outcome, smaller communities where you can verify humans are human. I've already started doing this, and mostly with people that already live in my community.
The corporate internet was never good to begin with, it was just forced on the masses.
If by worked you mean "worked so well they replaced all the big actors" then sure, nothing has worked.
But plenty has worked on a smaller scale. Raph Levien's Advogato worked fine.
There's also a reason most new social networks start up as invite only - it works great for cutting down on spam accounts. But once they pivot to prioritizing growth at all costs, it goes out the window.
PGP is niche. This would be far more mainstream. If you applied it to HN I could probably verify > 50 people already. For PGP I wouldn't know anybody...
Someone, somewhere, is salivating at the idea of combining both ideas: a paid digital ID service that you can use as authentication for the web.
Actually, now that I think about it, social media platforms already started this with the paid blue badge for verification, and it's also a monthly subscription. But it's for their respective platform only, not universal.
Isn't this what Worldcoin is? Definitely not a fan of the project, but I think the general goal is to get people to verify they are human and then somehow "waves hands blockchain" carry that with them on the internet.
Would that work, though? Unless it checks your pulse every 30 minutes, I don't see how that would make it better. Bots would use stolen IDs for that. It would probably only contain the problem at a smaller scale.
There's definitely a price where it doesn't scale and that price is almost certainly lower than what people would be willing to pay once for themselves.
It would have to integrate with some kind of official government ID, so that there can be extremely serious criminal penalties for ID theft. But that's something for the next republic, because the current one's justice system is unlikely to be up to the task.
Neither of those solves it, just tries to conserve the status quo.
The issue, as I understand it, is literally a new Eternal November, just that instead of “noobs” there are “clankers” this time.
Personally, I don’t give a flying fuck about things like gender, organs (like skin or genitalia) or absence thereof, or anything like that when someone posts something online, unless the posted content is strongly related to one of those topics. Ideas matter no matter who or what produces them. Species fit into the same aspects-I-don’t-care-about list just fine - on the Internet nobody knows^W cares you’re a dog. Or a bunch of matrices in a trench coat. As long as you behave socially appropriately.
The problem with bots is that they’re not just noobs - unlike us meatbags they don’t just do wrong and stupid things but can’t possibly learn to stop (because models are static). Solving that, I think, is the true solution, bringing Internet back to life. Anything else seems to be just addressing the correlations to the symptoms.
(Yea, I’m leaning towards technooptimist and transhumanist views - I was raised in culture that had a lot of those, and was sold a dream of a progress that transcends worlds, and haven’t found a reason to denounce that. Your mileage may vary.)
Yes but I think bots can be very good, and many people have legitimate online-only relationships. It gets hairy quickly, with real users getting culled and bots slipping through.
Also, if the bots are smart, they'll add real people too and take them down with them.
Yeah, that's the trade-off of this approach. Lobste.rs already uses it:
https://lobste.rs/about#invitations
The comments are considerably better. I'm not even a member, but I get more out of reading those comments than HN's, and I've worked at multiple YC companies. This place is not what it used to be.
Inviting people who invited bots could also hurt your "social credit" score in various ways.
Your tree could for instance be pruned - you can still invite people, but the people you invited can no longer invite people.
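That pruning rule is simple enough to sketch as code. A toy version (all names hypothetical): the pruned user keeps their account and invite privilege, but everyone downstream of them loses the ability to invite.

```python
from collections import defaultdict

class InviteTree:
    def __init__(self):
        self.children = defaultdict(list)  # inviter -> direct invitees
        self.can_invite = {}               # user -> invite privilege

    def invite(self, inviter, invitee):
        if not self.can_invite.get(inviter, False):
            raise PermissionError(f"{inviter} may not invite")
        self.children[inviter].append(invitee)
        self.can_invite[invitee] = True

    def prune(self, user):
        # The pruned user may still invite, but everyone downstream
        # loses invite privileges (walk the whole subtree).
        stack = list(self.children[user])
        while stack:
            u = stack.pop()
            self.can_invite[u] = False
            stack.extend(self.children[u])

t = InviteTree()
t.can_invite["root"] = True
t.invite("root", "alice")
t.invite("alice", "spambot")
t.prune("alice")  # alice vouched for a bot
print(t.can_invite["alice"], t.can_invite["spambot"])  # True False
```

A real system would also want revocable invites and an audit trail, but the subtree walk is the whole trick.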
There are not a lot of sites which have tried this and failed. Those which have tried to be even a little bit clever about it, have succeeded pretty well (Advogato was a really early example).
What there have been, are sites which rejected such restrictions after a while, because they would rather have a big number to show to investors than real people. Many have even run the fake accounts themselves (e.g. Reddit).
I'm assuming there's tracking on the invites, so a recursive kick on X and everyone X invited would still do the trick. If an IP address appears more than 5 times in an invite tree, ban the /24 (or the ASN, if it's not from a friendly country) for 10 minutes or some other reasonable timeframe.
Getting unique IPs in any country you want is trivial for anyone but people building toy bots.
How far up the tree do you kick? Going too far up means malicious people can "sabotage": bot enough accounts to get a huge swath of legitimate users banned.
Going too shallow means I just need to create N+1 levels of distance between myself and my bot accounts.
There, sadly, needs to be some gatekeeping and then it can work.
For example, I've been a member for years of a petrolhead forum that works like that. It's dedicated to a fancy car brand with lots of "tifosi" (and you don't necessarily want all these would-be owners on the forum). To be part of the forum you must be introduced by other members who have met you in real life and who confirm that you showed up with a car of that brand.
If you're not a "confirmed owner", you can only access the forum in read-only mode.
It's not 100% foolproof but it does greatly raise the bar.
It's international too: people travel, and they organize meetups / see each other at cars and coffee, etc.
Or take a real extreme, maybe the most expensive social network: the Bloomberg terminal. People/companies paying $30K or so per seat each year probably aren't going to let employees hook an LLM up to chat for them and risk ruining their reputation. Although, I take it, you never know.
It is the way it is but gatekeeping does exist and it does work.
>Where you need an invite to comment also solves the problem. You need to know a real human to get access.
Bittorrent trackers, as absolutely retarded as they are, have performed this experiment for us, and the lesson we're supposed to learn is that this does not work. Someone, somewhere, eventually has an incentive to invite the wrong sort, which, because of the social-network graph math, means "soon". Once that happens, that bot will invite 10 trillion other bots.
Absolutely. If anything, private torrent trackers and NZB indexers are proof that it works overwhelmingly well.
The few I'm part of all have a real community (like on the net of old), civil conversation, and verified, quality material being shared. Almost everybody behaves and doesn't abuse the invite system, because nobody wants to lose access to such a wonderful oasis amid the slop web. It's a great motivator to stay decent and follow the rules. When things go bad, it's usually not because of malice but because someone got their account stolen. Prune the invitee tree and things are mostly under control again.
Honestly, the $10 barrier to SomethingAwful back in the day (and I guess now, since it's still around) definitely made a huge difference. I hate the idea of subscribing to a site like HN or Reddit... but a one-time $10 to post? I'd accept that if it meant fewer bots.
I would probably not pay $10 to post on HN, but many spammers who expect some kind of tangible return would pay that, so the fee just makes the problem worse.
The spammers wouldn't pay it just once, though. The idea is that it's a good way to scale moderation: each time an admin needs to ban a user, there is a $10 subsidy supporting that action, and if the bots come back, they get to pay $10 to be banned again.
Assuming the money isn't wasted and is actually used to fund moderation, $10 is probably comfortably above the cost of detecting and banning most malicious users.
There are large swaths of spammers that indeed would not pay it. On the other hand, there are plenty of NGOs that would pay it without a second thought to promote specific topics and dogpile on others. Those are the movements I would expect AI to take over, if it hasn't already. AI does not sleep; humans do. AI won't miss the comments that groups believe need to be amplified or squelched.
Yeah, I love HN, but I wouldn't pay, and I know many, if not most, other people wouldn't either. It would increase quality for a while, for sure, but what happens a year or two down the road? It would kill the user count, reduce comments, and make the site less valuable over time.
This reminds me of Bill Gates in the 90s, when he was asked about email spam. He said it would make sense to make an email cost something like 1 cent so the spammers couldn't spam as much, but this didn't sit right with the mindset of the people at the time.
Also, while real people probably would not be willing to pay to e-mail, spammers who are making money would pay and consider it a cost of doing business. So the fee would have the opposite of its intended effect.
I don't think the current firehose model of spam would be sustainable anymore, though. Those spammers send millions of mails a day. Even with a 1 cent cost, they'd have to be much more selective about their address lists, given the low success rate. It may not solve the problem, but I'm almost sure it would help a little. It could also be an additional qualitative barrier for crime-linked spam such as phishing mails, because they'd have to find a non-traceable way of paying, which is not trivial and always carries a slight risk of being identified anyway.
Hashcash was a proof-of-work system that would have put a computational tax on email. I don't know what kept it from getting more traction other than simple chicken-and-egg network effects, but it's a good idea, and worth resurrecting.
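The core of Hashcash is tiny: the sender brute-forces a nonce so that the hash of a stamp has N leading zero bits, and the receiver verifies it with a single hash. A simplified sketch (the real stamp format also carries a version, date, and random field, omitted here):

```python
import hashlib
from itertools import count

def mint(resource: str, bits: int = 20) -> str:
    # Brute-force a nonce until the SHA-1 of the stamp has `bits`
    # leading zero bits; this costs the sender ~2**bits hashes.
    for nonce in count():
        stamp = f"1:{bits}:{resource}:{nonce}"
        digest = hashlib.sha1(stamp.encode()).digest()
        if int.from_bytes(digest, "big") >> (160 - bits) == 0:
            return stamp

def verify(stamp: str) -> bool:
    # Checking costs the receiver exactly one hash.
    bits = int(stamp.split(":")[1])
    digest = hashlib.sha1(stamp.encode()).digest()
    return int.from_bytes(digest, "big") >> (160 - bits) == 0

stamp = mint("alice@example.com", bits=16)  # low difficulty for the demo
print(verify(stamp))  # True
```

The asymmetry is the whole point: minting one stamp is cheap for a person sending a few mails, but minting millions per day becomes a real electricity bill.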
TLDR: Mail storage is the sender's responsibility. The message isn't copied to the receiver. All the receiver needs is a brief notification that a message is available.
Sounds like a horrible system where you retain many of the problems of email (you still need to deliver notifications) with new surveillance, persistence, and mutability problems layered on top.
It's also something that was on my mind when I wrote about those two options. I've kept this idea in the back of my head since those days (I'm old enough to remember when Gates had this atrocious, yet interesting, idea).
We need something else: an "extreme" (~$1) fine that anyone can claim from any sender who bothered them, no questions asked. Spammers would stop overnight. This would work for phone spam as well.
I read about an idea for an incentive/check system like that before. Something like: make the cost 10c instead of 1c, but implement a system where recipients can mark mails as confirmed "wanted" mail, upon which the sender would be reimbursed 9c. Increasing the cost for unsolicited mails while keeping the cost low for well-behaved newsletters.
payment would need a delay too.
Pay $10 and then wait a week or so for the payment to clear without being reversed. Hopefully that stops the card thieves from dumping as much as possible before getting booted.
Could we just add complex and varied captcha to the comment & posting forms?
That's not a bad idea, sending mail could simply be an authorization for a $1 or $10 charge. And if the receiver said the message was unwanted, then the charge would go through.
There's just the pesky problem of incentives on the other side of the coin: who gets the money? The spammee? But there would be enshittification issues like:
1. Those who are incentivized to take as big a cut as possible.
2. Those who would put it in their EULA that you must accept their spam and not charge back, or else you lose access to something you value, like their services (EULA ransom... not much different from today's "accept our EULA or lose access to what you've already paid for!").
I'm sure there are many other perverse incentives that would creep in.
Odds are it would harm real discussions more than it would harm bot spam.
The bots exist for a reason, usually to covertly advertise a product, and by themselves already cost money to run. Someone looking to astroturf their AI B2B SaaS would probably be more willing to pay $10 to post than a random user from a less wealthy country who just wants to leave a comment on an interesting discussion.
Maybe some proof-of-work scheme where uploading content requires the uploader to solve a cryptographic puzzle, hence reducing the overall number of posts? The PoW difficulty would have to be tuned so that posting isn't too expensive for an individual, but mass uploading via bot farms stops making economic sense.
Good one, honestly I hadn't thought about it. But visual and other kinds of human-accessible captchas can be solved by bots. My suggested PoW would be computational.
I recall a WSJ article during the 2024 election that was about the fact that Tim Walz and JD Vance were both big consumers of Diet Mountain Dew, and how basically America ran across the board on various types of Mountain Dew. Can you really call yourself "American" if you're not doing the dew?
I pay for my ISP, and the financial institution the money comes from has age verification.
Social media, HN and the rest of the internet-first businesses can go broke.
I don't see anyone out there propping me up directly. Why would I give a crap if some open source hacker or Etsy dealer doesn't have a home next month? Yeah, I don't, because they don't care in the same way.
Thoughts and prayers, everyone else, but your effort is clear; I'm not going to be 1984'd into caring for people who clearly don't care back.
Third option: a web-of-trust that allows you to see the vectors required to connect you to a given commenter, and which of your known friends and friends-of-friends has already attested their humanity.
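Finding those vectors is just shortest-path search over the attestation graph. A toy sketch, with a made-up graph of who has vouched for whose humanity:

```python
from collections import deque

# Hypothetical attestation graph: who has vouched for whose humanity.
vouches = {
    "me":    ["ana", "ben"],
    "ana":   ["carol"],
    "ben":   ["carol", "dave"],
    "carol": ["commenter_x"],
    "dave":  [],
}

def trust_path(start, target, graph):
    """Shortest chain of vouches from `start` to `target`, or None."""
    seen = {start: None}          # node -> predecessor on shortest path
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            path = []             # walk predecessors back to the start
            while node is not None:
                path.append(node)
                node = seen[node]
            return path[::-1]
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen[nxt] = node
                queue.append(nxt)
    return None

print(trust_path("me", "commenter_x", vouches))
# ['me', 'ana', 'carol', 'commenter_x']
```

The UI question (how to show "two hops through Ana and Carol" at a glance) is the hard part; the graph search itself is cheap even for large networks.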
You could have easily said this twenty years ago when photoshopped photos were going viral on the early internet. Turns out people are completely fine with ai content and photoshop.
I have not seen or heard of a single person who is excited about AI generated blog posts, or TikToks, or commercials, or images. In fact it’s the opposite, the internet coined the term AI slop, and my non-internet addicted friends hate the fact that chatGPT is killing the environment.
The only people I’ve ever seen champion AI are the few who are excited by the bleeding edge, and the many many peddlers
The most common people just seem to be the elderly who don't care / don't know any better. The same ones who told us never to believe anything from the internet. They seem to be hooked on weird AI jesus facebook posts, daily AI generated motivational content, talking to the chatbot in Whatsapp, etc.
There are probably more than 10^17 AI model executions occurring per day. I know in ye olde HN there are many Purists that are Too Good For AI, but the majority of the human race is consuming AI at a blinding rate, and if they really didn't like it, they would stop.
> and if they really didn't like it, they would stop.
I can’t really articulate why, but this doesn’t feel true to me. There are plenty of things humans do especially at scale that we don’t like, or we do that we don’t like others doing, and don’t stop
>The "Moloch problem" or "Moloch trap" refers to a scenario where individual, rational self-interest leads to a collective outcome that is disastrous for everyone. It describes competitive, zero-sum dynamics—often called a "race to the bottom"—where participants sacrifice long-term sustainability for short-term gains, resulting in a loss for all involved.
Hence why we have to keep feeding the orphan crushing machine.
And how much of that consumption is voluntary or willful? I don't want AI slop in my search results or my forum discussions; it muddies the water with shallow-at-best information, often in excessively verbose ways that help hide the subtler falsehoods it picked up.
Your comment doesn't make sense because the fact that "dead internet" has been coined since then (along with the popularization of "slop" and "hallucination") means there is a line and we have crossed it. Denial doesn't stand up to any scrutiny.
It's too bad we weren't more skeptical about the ways emerging technologies would eventually be used against us. Some warned about it but many (including me) ignored them. Perhaps we could be forgiven for that naivete, but there's no excuse to be ignorant of what's going on now.
Why is it being called dead internet theory when, as far as I can tell, what's really happening is that big centralized systems are being overrun with bots? The internet existed and was pretty great before these large centralized systems came into being.
Anyone can still run a blog/website, and/or their own discourse server. There's no need to mourn these centralized systems that largely existed only to exploit us in some way. Let's celebrate "small internet theory": an internet where exploitation is effectively impossible because every company that tries it is overrun with AI bots. That sounds awesome to me personally. But I was also up late last night watching clips of Conan O'Brien from 1999, and the nostalgia for that era / what the internet was like back then hit me so hard it was almost painful.
“A social networking system simulates a user using a language model trained using training data generated from user interactions performed by that user. The language model may be used for simulating the user when the user is absent from the social networking system, for example, when the user takes a long break or if the user is deceased” [1].
So why isn't it called "dead social media theory"? The internet is not only social media services, though I understand a lot of people seem to think that without centralized social media services there is no reason to use the internet.
Have you been on the internet at large lately? With Google you may get one authoritative site on something and 50 bot copies of it on different domains. Sometimes the stolen site is the number one result. Also, if you ran sites years or decades ago, you realized way back then that any local user posting was getting overrun by spammers/bots. Now it's so much worse that it's not worth doing in most cases.
So, most posts on social media aren't real.
Most user posts on non-social media are spam/not real.
I spend all day every day on the Internet and I don't share your perspective. I might dislike centralized social media and yearn for a bygone era, but just in the past two days I had a very positive interaction with multiple real humans in the Commodore 64 subreddit that helped solve a problem I was having that isn't documented anywhere else on the internet yet. So then I went on my personal blog and blogged about it, which will get it out there on Google and help others. In this way, I am helping to keep the internet alive, I guess. "Be the change you want to see in the world," and all that.
If you think a site with only 209 visitors in the past 30 days is going to move the needle, then I've got news for you. Especially if bots are the main source of that visitor count. That's very, very close to "the people visiting your site are you, you, and maybe your mom" territory. After that, it'll be skiddies and bots. Anybody who's run their own site has been there, but let's not make it out to be some grandiose site that will determine Google page ranking.
Why are you putting words/desires in my mouth that I did not voice? No one said anything about moving the needle. I said that my blog will go into Google results and help people, you said that sounded optimistic, so then I provided you proof that my blog already shows in google results and receives traffic. I've received messages from real people who have been helped by my writing on my blog, so it's not just bots.
I do not know what "move the needle" means or why you think I am trying to do that. Your excessive negativity and pessimism are unwarranted and I dislike them. Honestly, between you and that other guy replying to my comments with seemingly thinly veiled vitriol for my perspective, it's just further proof of my point that being able to communicate with large groups of anonymous people is typically a net negative. Most anonymous people seem to be quite nasty. I'd rather write on my blog where no one like you will see it, and if you do see it, you likely won't go out of your way to send me an email with your negative comments, because it's likely you do this for public attention.
I think you are looking at this from a very different angle. A site with only 200 visitors/month can't move the needle, but it's a valid part of the ecosystem.
Tbh, for niche hobbies even one new visitor a month is a win, if they actually read the article and don't just skim over it. An eager, enthusiastic reader is a prize not easily won on the internet. Having even one per month would mean you personally taught something to a classroom of peers in a meager 2 years. Blog posts can easily live ten times as long.
For people who spend most of their time on the small internet, sites like that are essential, because they work on another level. You know you're engaging with someone who has a passion for the same things you do and has taken the time to polish their words. You know you can reach out for help and be kindly greeted.
These are the parts of the internet that are so boring to anyone else that they are totally safe from spam and ads.
That doesn't scale and can never scale; if anything like that becomes popular, a massive slopfest would follow, and the slop would be sold instead of the original.
And yet those boring places – boring for everyone not interested enough – are still there, and people have a way to reach each other and talk about shared interests. The internet isn't dead for nerds.
At the end of the day there is no real penalty for being a bad actor on the internet. They get unlimited retries on spamming and otherwise causing problems. In many ways this helps Google entrench itself as the search/ad company. No one else has the money or compute resources to continuously update the internet. Furthermore they have told us it's their job to shove unskippable ads in our faces. They'll gladly let the public internet die in the future if they can push out their own version of "SafeInternet by Google/now with more ads!".
Every single one of your comments in this thread is some slippery slope stuff where you think corporations and federal government are going to work together to kill off the (public?) internet. It's okay that you feel that way, even if it's just a big ol' fallacy, but you don't need to repeat it in six different places. You made your point, you think the internet is doomed no matter what happens, great, let's move on.
Authentic human activity has been completely overwhelmed by bots and slop. Discerning signal from noise becomes too burdensome to bother with.
Of course the physical medium continues to exist.
Of course there are still humans, such as yourself, producing free content, to be harvested and regurgitated by parasites.
But authentic human activity is increasingly going out of band, no longer discoverable. Whatsapp, discord, private groups. Exactly as the theory predicted.
And check out these books: "Superbloom: How Technologies of Connection Tear Us Apart" and "No Sense of Place". Maybe they would help you see the overall effects of the internet (and other communication mediums) and drop the simplistic view that a lot of programmers have. The nature of the communication medium doesn't just affect the message; it shapes everything in society. Ignoring that because you had a good experience here and there won't change anything.
The problem is that average people can't tell even now. Heck, I'm quite sure that /r/all is completely bot-driven, yet I still check it occasionally. I'm not even sure about HN, but I haven't yet found manipulation as obvious as on Reddit.
100% agree that this is what it should be called. To argue that big websites being big makes them equivalent to the whole Internet is absurd. Besides, I love the idea of the only recourse to be to go back to independently run information websites.
For the younger generation, social sites are the internet. They open an app on their device, they don't go to sites by searching the web. I've seen people perform a web search in an app store thinking it was the same thing.
Yeah I agree. It’s an acute problem on social media platforms where there’s a market force incentivizing it. If you’re mostly engaging in specific niche interactions with known communities or people, it’s not nearly so prevalent. The internet still works fine as a whole.
> A social networking system simulates a user using a language model trained using training data generated from user interactions performed by that user
>Anyone can still run a blog/website, and/or their own discourse server.
And those will also get choked with fake bot "members" and bot comments.
Plus, if "anyone can still run a blog/website", this includes bots. AI created and operated blogs/websites, luring in people who think they're reading actual human posts.
In some ways it might be positive. My girlfriend had a small addiction to Instagram Reels. The flood of AI-generated videos on there just killed the magic for her, and she stopped using it.
Happy for your girlfriend, and anyone else who escapes because of this.
But it's not about the current generation of addicts. It's a play to capture the next generation.
It remains to be seen whether they'll get caught or not but it's important to remember that even if all of us mature humans find this new AI social media weird and gross, children don't have our preconceptions.
Meta is going to do everything in their power to train the next generation of young, immature brains into finding AI social media normal and addictive.
They (along with TikTok) already managed to do that to the last two generations so they have a scary track record here.
Bandwidth is only expensive in the US, somehow. Here in Germany I haven't worried about bots and their additional traffic since 1998 (there are other annoying things about bots, though).
More than that, it's practically impossible to find good specialized, human-written websites. Search engines don't find them; all results are AI garbage. With no real ability to be discovered, there's no incentive to maintain such websites either, and so the cycle of slop continues.
No, the old internet wasn't that great. There were so many problems. Finding things was hard, buying things was hard, integrating things was hard, compatibility was hard, everything was super fractured. It felt great at the time because you discovered all these random things and it was all novel. Centralized services (or decentralized collaborative ones like IRC or Usenet) really unlocked the power of the internet.
usenet and irc are quite old. how are they examples of some mythical point at which the internet was unlocked by services?
centralized and decentralized would include almost any service. your comment is so vague and ambiguous as to be meaningless. (that's a hallmark of LLM output. are you a bot?)
it was easier to find authoritative answers 20-30 years ago. google and, before that, altavista and yahoo, were quite good at directing queries to things like university-run information sites or legitimate, curated commercial sites. for the last decade the first google page has been crammed with useless SEO optimized fluff.
as for shopping, that was the first dotcom boom. what really took it mainstream was covid. not centralized or decentralized collaborative nonsense.
no.... not a bot, and please see the HN FAQ before making comments like this.... I'm talking about decentralized common services, like IRC, Usenet, and email: the same service everywhere, and they all interact together. But the old internet was super fractured once we got websites; nearly everything did things completely differently, and it was very hard to trust anything. It was not easier to find authoritative answers 20 to 30 years ago. I started in '91, and it was hard to find anything. Search engines were a great improvement, but it was still kind of hard to find what you wanted. Things drastically improved with Google and PageRank, but that brought in other problems.
Reasonable fragmentation and friction is a feature, not a bug. Global-scale social networks with zero resistance have turned the information superhighway into the information superconductor carrying infinite current, otherwise known as a short circuit.
I generally agree with this, but I think the small internet hasn't succeeded in building social replacements for the "centralized systems". The internet is a social technology. So for this to be viable, the small internet needs an answer.
Occasionally, someone mentions RSS as a solution. That's only a small component of the solution.
In an ideal/fantasy world under "small internet theory", every online friend group would have their own Discourse server set up (similar to how friend groups use Discord now), and traffic/usage of that Discourse server is so small that it would be a waste of resources to try to swamp it with bot traffic, and on top of that, everyone on the Discourse server are friends who can vouch for new members who join, so no bot could join the Discourse server because no one would know who they are.
I understand that some may feel we are losing something, by not being able to go onto a website and anonymously talk to 1000s of other anonymous people we do not know, but I do not think that has actually been a net positive and this bot issue demonstrates the issue quite well: if you do not know who you are talking to, you do not know if they are telling the truth, or if they are someone you should even listen to at all, and now they might not even be human. So why do it? I would rather talk to my friends, people I've met in meatspace or over voice chat in a game, people who I can vouch for and that I know I can respect and trust.
Let's build small communities of real friends who recognize each other and spend time with them on the internet, in that way the internet will never die.
And 10 minutes later Texas demands you verify the age of all your users because someone posted a porn image somewhere. Facebook will gleefully laugh all the way to court, saying we need such an internet ID, to entrench themselves.
>, in that way the internet will never die.
You mean in the exact way the internet used to be... then died?
I'm guessing you're GenX or a Xennial; it's how we think. Relationships and friendships are hard things to acquire and keep, and you have to work at it or friends disappear. The thing is, the younger generations mostly don't think that way. They have mostly always lived in a world where connections are cheap and easy to maintain. Attempting to move to a system with more friction will be very difficult for them.
So I’m a member of a group of about 70 middle-aged guys who have a discord server exactly like this. We live all over the country, but most of us have met in person, we travel the world together, and we do an annual retreat where usually about half of us meet up. In addition to discord, we have a bunch of groups on Marco Polo, and we have little sub-groups that do zoom calls regularly. Really wish some of them lived nearby, but in spite of that it’s been one of the best things in my life for years now.
Small internet isn't very attractive for most bots. Also, I use websites that are invite-only. This is effectively a web of trust. This works pretty well, bots aren't a real problem there.
Run your site like an old school BBS. You only run into these problems when you invite the world to your site and want big numbers. You don't have to do that.
That's simple to do in phpBB. Using ranks, one can set new accounts so they can post but nobody can see their messages until a moderator verifies them. For small groups and semi-private (invite-only) forums this is fairly easy to manage. Spammers and grifters influence nobody; only cranky old bastards like me see the message. There are other means to keep bots off a tiny site, but that is a longer topic. Even better, one can send a header to redirect those using the Tor Browser to the Tor link, and when states come along and demand some third-party process, one simply disables clear-web access. More friction, less data leakage, and no corporate capture. This also eliminates the people who can't handle an extra step to access the site, and the lazy governments that need money trails.
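This isn't phpBB's actual internals, but the moderation model described above (new accounts post into a hidden queue until a moderator verifies them) can be sketched in a few lines of Python; all names here are made up for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    approved: bool = False

@dataclass
class Board:
    trusted: set = field(default_factory=set)   # accounts a moderator has verified
    posts: list = field(default_factory=list)

    def submit(self, author: str, text: str) -> Post:
        # New accounts' posts start hidden; verified accounts post straight through
        post = Post(author, text, approved=author in self.trusted)
        self.posts.append(post)
        return post

    def approve(self, post: Post) -> None:
        # A moderator verifies the message and promotes the account's "rank"
        post.approved = True
        self.trusted.add(post.author)

    def visible(self) -> list:
        # What ordinary visitors see: approved posts only
        return [p for p in self.posts if p.approved]
```

The point of the design is that spam costs the spammer effort but reaches nobody: an unverified account's posts never leave the queue.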
It would be interesting if we had some sort of local verification in the real world. As in picking up some key from some physical place or having it sent to some physical place. Some services like nextdoor are set up like this and mail out account auth to make sure the user is local to their next door group. Obviously you can imagine how it might be abused but it is impossible to do so at the scale you can abuse digital only methods.
It reminds me of the cartoon of two people on an escalator that stops working and one says to the other "Last time this happened I was stuck for four hours"
I'm thinking there might have been a deeper message than the moment of ridiculousness.
> what's really happening is that big centralized systems are being overrun with bots? The internet existed and was pretty great before these large centralized systems came into being.
This is a great point. Suddenly, I'm looking forward to this.
It's funny you mention this, I got a Commodore 64 Ultimate the other day and one of the first things I did was load up the BBS client and browse some BBSes. Those are from before my time (my first PC was a Compaq Pentium 166) so I never got to experience them for real. But if the rest of the internet collapses under the weight of bot traffic, BBSes are quite nice.
BBSes have in theory been replaced, but in reality modern social media hasn't even approached them: small forums full of dedicated users, often local. So many great memories.
Who cares if anyone knows my blog exists? I'm not writing my blog to farm engagement as I do not run ads on my blog. I write on my blog because I want to write my thoughts down and project them into the world. Whether or not anyone sees them is pretty unimportant.
If my writing helps someone via them hitting my blog directly or them getting the answer via AI aggregation, mission accomplished.
In my experience AI doesn't give the answer you want, because it gives the most shallow and basic response, many times so basic as to be worthless. Then I either scroll through 20 results hoping I see one that isn't an AI write-up of the exact same incomplete source, or I give up and search out a specific site I know exists that isn't AI-written for that information.
The internet existed and was pretty great before these large centralized systems came into being.
The big centralized systems existed before the internet. GEnie. Delphi. Bitnet. CompuServe. The Well. American People Link. And dozens more.
The internet brought them all together, then extinguished them. Now we're going back to the old days.
The only difference now is that instead of paying AT&T to carry dialup connections and leased lines, we're paying our local/regional ISP for cable and fiber.
It's all the same game. Only the names have changed.
You can create a blog, yeah. But you also can write the blog with AI. So, you still need to filter the content. Over time, people will find that "The signal-to-noise ratio has hit a breaking point where the cost of verification exceeds the expected value of engagement." https://arnon.dk/the-trust-collapse-infinite-ai-content-is-a...
> Let's celebrate "small internet theory", an internet where exploitation is effectively impossible because every company that tries it is overrun with AI bots.
But isn't it even harder for small forums to resist the robot onslaught without the trillion dollar valuations to fund it?
Although, part of the reason Facebook/Linkedin/Twitch/etc have bots is because those companies secretly want them, in order to inflate their usage numbers.
> Although, part of the reason Facebook/Linkedin/Twitch/etc have bots is because those companies secretly want them, in order to inflate their usage numbers.
The people who want to get rid of the bots get crushed, because botting technology is hyper-advanced and cheap to use thanks to the massive scale of social media. This ends up with huge numbers of sites getting put behind services like Cloudflare, further consolidating the internet.
You know.. I keep thinking this might be a good thing in some ways. AI spam could save us from the worst of the current social media status quo, the toxicity of the attention "economy", by flooding it so thoroughly that nobody wants to engage with it anymore. Maybe the world can collectively "wake up" and "go outside" by turning toward local and more intimate communities for social interactions..
It's a shame though that this is gonna kill so many sites and projects. Sure we have ChatGPT, but also with things like Google AI summary getting so much better traffic to sites is going to plummet. Without people visiting I think the incentive, heck even motivation, for a ton of the sites is gone. We've seen it with sites like Stack Overflow, but it's probably going to happen to just about everything..
Things are definitely going to change in significant ways. The internet of the past is definitely dead, it just doesn't know it yet.
> It's a shame though that this is gonna kill so many sites and projects. Sure we have ChatGPT, but also with things like Google AI summary getting so much better traffic to sites is going to plummet. Without people visiting I think the incentive, heck even motivation, for a ton of the sites is gone. We've seen it with sites like Stack Overflow, but it's probably going to happen to just about everything..
As I see it, this is just an extra step in a long series of tools to just serve information more quickly. Search snippets for search results have always (?) been displayed for each link/page returned. If the information you were looking for was included in those snippets, then you wouldn't need to visit the actual site.
Then at some point there were knowledge cards/panels. Again, if the information you were looking for was in those cards/panels, then you didn't need to click on the links.
Now with LLMs/Gemini, the information is sometimes summarized at the top of the page. You need to visit the search results even less.
Google has always been a kind of cache for the Internet. It's just way more efficient at extracting and displaying information from that cache now.
So, yes, traffic keeps going down. But new knowledge will still need to be produced, right?
I don't know that the influx of AI spam would necessarily result in people disengaging and choosing to seek out real content, though. Social media feeds have been serving up less and less content from our actual real-life contacts for a while now (partly because people seem to be posting less). As long as it's engaging, I think a significant chunk of people aren't going to care whether it's AI.
(anecdotally, my mother loves AI generated videos, perhaps it's just novelty at the moment and it will wear off)
I see many, many startups that promise to be an automated marketing agent that will do this exact thing: scour sites for conversations and post links to your product.
Obviously that burns down the human Internet, but it’s also a business that will have a short lifespan and bring about its own demise.
I guess they don’t care about anything enduring as long as they can grab some quick cash on the way out.
> I guess they don’t care about anything enduring as long as they can grab some quick cash on the way out.
As far as I can tell, that is basically all AI-related businesses. Including those non-AI ones jumping on the bandwagon to throw all their employees in the bin and expect 10x productivity somehow: if they are right and these tools do become that good, well the economy as we know it is over as white collar knowledge work disappears.
At least in the US very few industries actually seem to be about making a product.
A good example is this, car companies don't make cars for the most part, they make loans. Financial companies first, car companies second.
Consolidation, collusion, and rent-seeking by companies are out of control too. The fact that AI companies can do what they are doing has much to do with earlier brick-and-mortar businesses weakening business regulations down to nothing.
> A good example is this, car companies don't make cars for the most part, they make loans. Financial companies first, car companies second.
I get that this is true from a certain point of view. But car companies clearly compete in a very healthy way on features and quality.
In fact, cars are a great example of a market where the companies clearly care about making the product, and the competition between them has driven those products to incredible heights. Cars these days are vastly better than they were in the past.
Maybe the only parts of a future internet people will actually hang out in will be ones where any profit-making is completely disincentivized. No recommendations. No product reviews. No opinions on companies or services. More slow web. Maybe we'll slowly head back to what websites looked like when Yahoo was the biggest search engine.
Back in the day, Yahoo was a manually curated index of submitted and verified sites, with search capabilities.
Wild-ass business idea: what if Yahoo 2026 recreated Yahoo 1996 and also any of the video sites it bought up back in the day get relaunched as deshittified ad-selling mechanisms to fund the whole thing… there’s gotta be Yahoo 1996 money in whatever scraps YouTube is missing.
It used to be faster and easier to follow actual content.
An index also makes content a lot more discoverable. The issues of classification still exist (it's not a tree, however much they tried to make it one), but indexes built on human judgment of value still have their place, I think.
The Internet was always full of bots. Not chatbots, but bots like crawlers, scrapers, automated scripts. That was fine.
What the OP is talking about is bots that participate in public discourse. That's the actual problem.
I think it can be handled to a degree though. Private communities, private Internet on top of existing Internet, and social media platforms without public APIs and with strict, enforceable ToS would all help.
For a while video was a holdout of sorts - e.g. if someone posted video content of themselves or their voice you could trust a real person was behind it.
But now convincing fake video generation is easily accessible, so one more holdout stands to fall.
It does seem like some kind of ID system is going to be the only way. Sucky but inevitable.
I often have the following thought: technological advancement, for all its boons, inevitably leads down destructive roads in the long run. Sooner or later we open a pandora's box.
> convincing fake video generation is easily accessible, so one more holdout stands to fall.
Is it though? I have absolutely no doubt we'll get there but I haven't seen any evidence of this in the wild. My Youtube feed is becoming overrun with content with clearly generated scripts and often generated narration. But I haven't seen a single instance (that I'm aware of) of generated video being passed off as real.
Yes I have seen hundreds of tweets and reddit posts showcasing game-changing video technologies like AI face replacement and yes they look incredible in the 45 second demo reels, but every instance I have seen of real-world usage was comically bad.
The technology isn't inherently evil. The actual problem is the way our societies are set up, ironically incentivizing sociopathic behaviour even among members of a single nation, nevermind when geopolitics get involved.
I essentially see it like this: imagine giving a hand grenade to a three year old. The grenade isn't inherently evil, but it raises the existential stakes for that three year old and anyone in their vicinity.
There comes some hypothetical point where technology has advanced so much that anyone has the power to destroy the world.
I just searched for a video game tip: "Bannerlord II where to sell clay?" and google's top result was an AI generated page FOR THIS GAME that directed me to ebay.
Also, I forgot to mention: Google's AI overview included the AI garbage page as its answer.
I think that we are going to see more and more of this. To the point where most interactions you have online will likely be with bots. So I started building something that actually has a chance of fixing it: a social network for only humans.
do you think small, invite-only communities will end up being the last holdout for genuine human conversation online? or will bots eventually infiltrate those too?
Bots will absolutely infiltrate them eventually, but I think it's the only solution.
The internet promised the ability to connect with anyone, anywhere around the world. It felt limitless and infinite.
Turns out in an infinite world, the loudest voices are the ragebaits, the algorithmically-amplified, or the outright scammers.
The human social brain doesn't work in an infinite world; it works for a Dunbar's-number world. And we all like our pseudo-anonymous soapboxes (I'm standing on one right now), but the trick will be to realize that the glitter of infinite quantity isn't the same as small-scale connection.
At least for some time, I imagine a hybridization may pop up. For example, you grow a community of humans that keeps bots under control. Because of this, all actors are human, and valuable because of that.
Hence you'll end up with defectors getting paid to siphon off all the conversations to ad companies, which will work on tying them to real-world identities and then serving more detailed ads in the places people cannot avoid interfacing with the open internet.
I think most small communities will stay bot-free because there's little incentive to have bots engage with them.
But I wonder if there's a size of conversation past which people will still choose AI-assisted summaries. Discord had (has?) a feature where it used LLMs to summarize, and then notify you about, a discussion happening.
The Discord thing sounds like a reasonable and acceptable use case to me. Fuzzy search is basically the only thing LLMs are really useful for, and a feature like that actually serves the user. Help them find stuff that's interesting to them, instead of trying to replace it with a pale imitation of real thought and conversation. My most optimistic view of the future is that features like that will be what sticks around after the hype and bubble.
I have a decent-enough filter for AI-written nonsense:
- banner blindness to blue check accounts (instantly scroll past, the blue check is extremely prominent visually)
- a very long uBlock Origin text-filter regex for emojis (the green check mark in particular) and $currentHotTopic keywords where the signal-to-noise ratio is close to 0.
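uBlock Origin has its own filter syntax, but the gist of that kind of emoji/keyword filter can be sketched in Python; the keyword list below is a made-up stand-in, not the commenter's actual regex:

```python
import re

# Hypothetical hot-topic keywords; the real filter's keyword list
# is the commenter's own and not reproduced here.
HOT_TOPICS = ["agentic", "game-changer", "10x"]

# Match green-check-style emoji or any hot-topic keyword.
PATTERN = re.compile(
    "[\u2705\u2714\U0001F7E2]|" + "|".join(map(re.escape, HOT_TOPICS)),
    re.IGNORECASE,
)

def looks_like_slop(post: str) -> bool:
    return bool(PATTERN.search(post))
```

A crude heuristic like this obviously has false positives, but as the comment notes, in the targeted topics the signal-to-noise ratio is so low that scorched-earth filtering is acceptable.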
What does it matter who wrote it as long as you like the content? If the content is posted on a network that allows robotic agents to post, and you don't like it, just sign on to a different network.
I imagine it will be way closer to Ghost in the Shell/Cyberpunk in the end than we realize.
A. People want to connect with other people, not talk to computers, and
B. AI slop peddlers know this and have an incentive to lie about their content.
If GenAI content was always reliably declared and people's choices were respected, we wouldn't have a problem.
It's like saying, what does it matter if the news article was fake, as long as you enjoyed reading it? It matters because when I read the news, I want to read about things that actually happened, not stories that manage to fool me into believing they're true.
People need to look long and hard at how they are using technology, and ask how technology should be used. Every single technological trend for the past 10 years has been smoke and mirrors, promising utility of an iPhone but with deliverables closer to a blockchain full of links to jpegs.
This post's title is hyperbolic at best. The author is just noticing what most people have known for a long time: there are bots on the internet. Most interactions I have online are with real people. Maybe we will end up with a dead internet, but moderation is still possible currently.
The elephant in the room is that a lot of social media companies have a conflict of interest. They can juice their user metrics by not moderating bots as well as they could be.
Tbh I don't care if I speak to a human or a bot as long as they are "useful"; by useful I mean they provide me useful information, though humans can provide unique information that bots cannot. But I think identity is not relevant anymore; what's relevant is reputation. People think internet bots are bad per se, but we need to build useful bots, just like the chat bots that are already useful on platforms like Telegram, Discord, or whatever else people use.
This is a great point. In the past and present, sites like slashdot and HN depend on the users to achieve that moderation to surface useful comments and keep 'spam' down.
Now, there are tools to achieve that kind of moderation automagically, and even better, consistently. This is an opportunity to build out a community that is useful for everyone. The first platform that guarantees anonymity supported by human-independent moderation will likely attract significant and persistent user support.
There is still the issue of cost - how does the community pay for such a platform? Perhaps like the Google of yore - very limited ads? Avoiding enshittification can be done through the Wikipedia model - non-profit to manage the whole thing?
I think it's a symptom of being terminally online to think that most other people are also terminally online. The internet has a way of convincing you that most of the [interesting] events in the world happen on the internet. But I think this isn't the case; most stuff happens in the real world, most people live in the real world most of the time, and a tiny fraction of trite drama happens online.
>being terminally online to think that most other people are also terminally online.
50% of US teenagers describe themselves as terminally online.
Go to any place where people work and have time to goof off, and you'll see them online.
Go to a bar/club, you see people with a phone in front of their face.
The idea there is an online and offline is crumbling further every day. Cameras are small, bandwidth is high in relation to our compression algorithms. Anything happening in the world can be broadcast live. More and more types of machines are coming online that accept digital instructions that make things happen in real life.
Furthermore it's an odd rejection of the printing press on your part. That methods of information exchange affect the real world around them. If the book brought about the industrial revolution, what does an always available global communications network bring?
At least based on your writing here on HN it seems like you're probably an introvert, or at least a person that likes quiet pondering and reflections. Reading a book would be far more interesting than most online activities, right? If I'm right and that is the case, then you may be missing just how many people are horrifically addicted to being on social media all the time.
Dunno, everyone is in a bubble of some sort. I'm online a good bit but rarely on my phone; if I'm away from my desk, I'm offline. My social circle is similar, so I naturally have a bias in what I experience.
Year or so ago I took an Uber and was mesmerized by the driver. He had his phone up mounted on the left and was pretty constantly interacting with it. Checking for new rides, watching a video, checking facebook. It was quite impressive how much content he consumed while at a red light and how dexterously he navigated to and through like 10 different apps.
I very much got the feeling that this was a person that was terminally online and suspected that he's not alone. A bit alienating really, living in the same country speaking the same language but realizing there's this huge cultural/behavior divide between us.
Isn't this more about an identity crisis? Not in the psychological sense but in the internet sense: who is real, who isn't? Crypto proof-of-work, I don't know? Your profile, on LinkedIn or Hacker News or something, gives your clawbot X credits' worth of legit automated queries, or rate limits on your behalf? It could be flipped upside down, where we don't spend our own eyeballs reading websites anymore; it just goes and gets it for us. But I may be hallucinating.
They tried Worldcoin as a solution to proof of human but it never took off. I guess linking to social media as you say? I haven't had much issue with AIs but so many human scammers often with bot accounts. On Xitter 90% of my followers are those.
I imagine soon there will be small-scale, Tailscale-style semi-private networks popping up with no AI content and no regard for draconian identity-collection laws.
It's the corporate internet, the one by the corporation, for the corporation, that is dead. Or at least everything in it is dead. The death blow is AI, but it was almost there anyway.
The good news is that the community internet - for the community, by the community - is just starting.
What is a community internet? The internet is layered protocols. UDP, ICMP, TCP, HTTP, HTTPS etc. The community internet is just a new layer of protocols. Coming soon.
The real problem isn't that bots exist; it's that we have no trust infrastructure for the internet. We can verify domains with SSL and authenticate users with OAuth, but we have zero standard for verifying that an AI agent is good at what it claims. The dead internet is a discovery/trust problem, not just a spam problem.
I'm curious: what tools do people use to apply to jobs automatically? Would this be automatically flagged by recruitment systems as AI? We've never had a problem like this at our company, but now I'm a little paranoid.
The funny thing about the LinkedIn post is that the parody is dead-on as to the kind of mindless slop a human on LI would post. LinkedIn was the Dead Internet before LLMs were even a thing. And I guess AI doesn't even have to be posting everything for Dead Internet Theory to hold, it just has to be the default perception in order to cause everything to be treated skeptically.
I reckon we are going to find an inverse Metcalfe's law at some stage, where the value of a network is proportional to the square of its connected users minus the square of the number of connected bots. Heck, I would be surprised if Meta hadn't figured this out, or wasn't on the way to figuring it out.
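That speculative inverse-Metcalfe formula as a one-liner; the proportionality constant k and the quadratic bot penalty are the comment's conjecture, not an established result:

```python
def network_value(humans: int, bots: int, k: float = 1.0) -> float:
    # Speculative "inverse Metcalfe": value grows with the square of
    # human users and is eroded by the square of the bot population.
    return k * (humans ** 2 - bots ** 2)
```

By this measure a network where bots outnumber humans has negative value, which matches the intuition that bot-heavy feeds actively drive users away.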
I'm here for it in the short-term. As the market continues to saturate, most of the people building this stuff will flame out. Eventually, I suspect we hit a tipping point where the ROI is too low (not enough real human engagement, just other bots) and the flood dials back.
Just yesterday, in a local non-profit organization's Signal group chat, a user who had offered to take meeting minutes the day prior emitted an OpenClaw error message to the chat. They are now banned from the organization.
I think the next step will be an isolated, invite-only version of the internet where you have to be physically present with your invitee to give them access. There will be a beautiful navigation widget where you can access a unified "addon" to any page: a community-moderated comment section, version history of that page, backlinks, a carefully curated "related" section (so that you can continue browsing beautiful human-written content on 1910-era steam locomotives, similar to 90s-era webrings), a donate button so that you can support the author, and much more! Oh, the dream.
Optional decentralized hosting, a unified cryptocurrency as payment tokens, a single open LLM as a summary and search-indexing tool, specialized toolkits for journals and social networks (LiveJournal, early Twitter, early FB). Most importantly: you can post anonymously where it's allowed (there could be areas where it's disallowed entirely, like a public square), but your account will take the punishment, so no edgy shitposting behind throwaways.
Kinda been dead a while, but also not dead: there's still good stuff out there. Lots of it, but it's in the corners and under the carpets. Things created in the original "let me show you my interests" spirit that the older web was built on.
A while back I was toying with the idea of building out a new web on a new protocol (not HTTP-based), so no existing browser would understand it. Deliberately obscure, to force a "reset" button of sorts.
Though it would be short-lived; over time we've learned to ruin stuff faster and faster. I'm not sure there's any network so alien that it could hold on to that golden era of innocence from the past; it would be found, then expediently and expertly exploited.
I have to use LinkedIn to sell. I only occasionally look at the feed but I am ruthlessly muting or blocking anyone who is blatantly foisting their AI drivel on other humans. I’ve had enough of this shit.
I’ve been unfollowing people for a while and the issue is rarely from within my network anymore… the feed shows a lot of posts from AI foisters who I don’t even follow.
Everyone here is so far from a normie it is almost painful. Dead internet is an outcome of supply and demand.
The fundamental issue is that a plurality of humans prefer the direction things have gone and are moving in. Is it a good direction? By this crowd's standards, no.
To be clear, I don't like either, but when I watch the speed at which kids swap between 5 Insta accounts and 3 Reddit accounts, it seems the majority are happy with it.
Let’s not kid ourselves: Every day, multiple “I just asked the LLM to clean up my notes” posts are voted up to the front page here, often with highly engaged, appreciative comment sections.
LLMs, for all their faults, are well-trained to produce what we want.
Reddit in particular is overwhelmed by bots. There are small niche communities where it’s mostly people talking to people, but the vast majority of popular posts are made by bots, voted on by bots and commented on by bots.
It’s not even like commercial astroturfing, it’s just karma farming and public sentiment manipulation.
I, for one, miss the promise of the old Internet: that you could connect with people from all around the world. I always saw it as an exciting extension to real-world interactions, not a replacement. And I love the fact that over the past 30 years, I was able to make friends (or pen pals, if you will) in the US, Canada, Mexico, Bolivia, Japan, Indonesia, Russia, New Zealand... The concept of finding people sharing your niche interests, wherever they were on the globe, even as you were stuck in your cozy suburb, was amazing to me, and I'm sad that we've all but lost that.
The only place that reminds me of the old Internet is VRChat, funny enough. You're guaranteed to be interacting with a nerdy, culturally similar human who's present in the moment.
That just reminded me of Chat Roulette for some reason. It seems that one is still around, as well. I'd guess not many bots on there, either (though potentially plenty of other unpleasant things).
So, as I share his thoughts, I've been wondering: why haven't we seen any real innovations in this space?
Mastodon wasn't really it and neither was Substack, although maybe it got slightly closer. TikTok and Telegram, maybe, for different reasons, but they'll face the same destiny.
I'd suppose the much despised "mainstream media" might be a winner here eventually. But beyond that, I am thinking about something like the following:
I don't think people are going to get offline, and the best we can probably do is create free and open p2p platforms that don't necessarily require registration to use, and allow people to control their own databases. A lot of communication is locked behind corporations that run the services that are building these tracking and identification databases.
I actually think it’s more about getting people off browsers and other tracking software.
>create free and open p2p platforms that don’t require registration necessarily to use
And how do you create this without it being overrun by bots, spam, and people posting gargantuan amounts of porn?
>allow people to control their own databases
There are two types of people that want to control databases. 1: The freedom seeking type who want information sovereignty. 2: The type of people that want to hoover up as much data as possible for money and power.
Guess who has more ability to control the world out of those two.
Lastly, most people want to use curated websites free of spam and content they don't want. Almost nobody wants to do that curation themselves. Hence curated platforms will attract the most people via network effects.
ah shoot, that wasn't lastly...
> getting people off browsers
And putting them on what, exactly? Phone apps? That's not better at all. Multimedia attracts people like flies to poop. It's seemingly a natural human response to move to the application that is more visually interesting, regardless of its security.
> And how do you create this without it being overran by bots, spam, and people posting gargantuan amounts of porn?
By not creating public networks out of it.
The only database people want to control is their personal information and who they communicate with and when. So we should enable low barrier to entry to communicate.
I cannot solve for the social media side of it, but we can enable people to at least have low friction when getting online. Somewhere to turn to that is not a data harvesting service.
The displacement effort has to come from those that believe in those freedoms. It’s not easy, maybe impossible in some circumstances but this status quo right now cannot be it, in my opinion.
The kids tend to hang out with the kids of their parents' friends, with the neighbours' kids.
A bit later in life, when in school, we find friends among our classmates, who usually aren't all that similar to us either.
Maybe when we switched to a fully online adult world with its hyper-optimization of everything, we've put our potential friends in the same bucket with recommendation system-driven content like music and tv-shows. Dating too.
There are certain benefits in getting by with limited choice, when we learn to communicate with people who are not a 100% match.
And as for having a drink or a coffee, we can always just invite friends over. Hanging out in each other's apartments is fun and cheap.
It's becoming rarer and rarer for people to have friends that are physically close to them unless they've stayed in the town they were born in.
Modern technology kind of broke friendship, in the sense that not very long ago maintaining friendships over any distance was expensive: long-distance fees, gas, or letter writing. Because of those expenses it was very common to make friends locally pretty quickly.
But the internet broke that, especially modern social media. Wherever you moved, your friends were a free website away, and long-distance charges were gone. At first this seemed fine because sites connected you to your friends, but as the lock-in happened it became a contest of getting you pissed off and showing you ads.
It's going to take a long time to socially fix this problem, especially as some large number of people are going to talk to AI instead.
Hi Friend, are you looking for a <insert product>bottled water</>? Here are the top 1000 brands of delicious bottled water! Water is very good for you! You are an ugly bag of mostly water! You do not have to enter your payment details, I remember them! Be assured that 1000 cases of 1000 brands of delicious bottled water are on their way to you now [Shipping charges may apply] at Peach Trees 2026, Sector 13, Mega-City One! Have an adequate day!!!
Eventually that's what's going to happen if things keep on going in this direction and it looks like there's nothing stopping it so yeah, we're moving in circles. Old things will become new again.
My household just bought The Brick to start taking control of our phones and online usage. We've been very online for 15+ years but are hoping to break the addiction cycle by simply blocking our devices from access. The timing feels right, mostly because sites like Instagram and Reddit are too braindead and spam- and ad-heavy these days. The executives' and shareholders' desire for profits has already killed two of my biggest online pastimes.
I've heard of The Brick and it sounds very effective. Not many people realize they're addicted to their devices, and most tragic of all is that kids who grew up always online have no baseline to return to. As the OP mentions, I too hope that when it all becomes a junk pile, people will eventually return to offline mode.
I only see two outcomes for this problem: an internet of verified identities (start by uploading your ID card), or a paid internet, where it doesn't matter who you are, but since you're going to pay for that email or that Reddit account, the probability that it's AI spam is greatly reduced.
And I'm looking forward to neither of them.
I want cool cryptography where I can, e.g. verify where I'm writing from and what my age is without giving away any other information.
Or if I want, I can verify that I'm myself, and eschew anonymity, and certain platforms should only accept contributions from people who don't hide their identity.
Everyone knows who you are in the town square.
People in the town square only see my face, they do not automatically have my name, birth date and ID available unless I give it to them or they go to lengths obtaining those (il)legally.
The smart glasses industry is working hard to change that.
>Everyone knows who you are in the town square.
Many years ago I left a small town and moved to a big city for this exact reason.
What stops someone from handing over their identity's private keys to an agent?
Same thing that stops you from duplicating your ID card.
Zero Knowledge Proof schemes
Applied ZKPs are being actively worked on in the blockchain sphere.
“A zero-knowledge rollup (zk-rollup) is a layer-2 scaling solution that moves computation and state off-chain into off-chain networks while storing transaction data on-chain on a layer-1 network (for example, Ethereum). State changes are computed off-chain and are then proven as valid on-chain using zero-knowledge proofs.”
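The primitive underneath all of this is the zero-knowledge proof: convincing a verifier that a statement is true without revealing why. For flavor, here is a toy Schnorr identification protocol, proving knowledge of a secret exponent without revealing it. This is purely illustrative (demo-sized numbers, not a zk-rollup circuit, not secure):

```python
import secrets

# Toy Schnorr identification protocol: prove knowledge of a secret
# exponent x (public key y = g^x mod p) without revealing x.
# Parameters are tiny demo values -- NOT secure.
p, q, g = 2039, 1019, 4          # p = 2q + 1; g generates the order-q subgroup

x = secrets.randbelow(q - 1) + 1 # prover's secret
y = pow(g, x, p)                 # public key

# Prover commits to a random nonce r
r = secrets.randbelow(q - 1) + 1
t = pow(g, r, p)

# Verifier sends a random challenge
c = secrets.randbelow(q)

# Prover responds; s leaks nothing about x because r masks it
s = (r + c * x) % q

# Verifier checks g^s == t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted; x was never revealed")
```

Real systems (age proofs, zk-rollups) replace this single equation with proofs over arbitrary computations, but the shape is the same: commit, challenge, respond, verify.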
I think this was the premise of Keybase?
Still jaded that went nowhere...
It was too early. It even executed its crypto airdrop YEARS before it became a common form of distribution in web3
Keyoxide still exists: https://keyoxide.org/.
It's kind of bizarre that Zoom is still bothering to keep the lights on at Keybase when it's been completely fossilized for six years now. The writing is so obviously on the wall that nobody should be relying on it for anything, and yet they just won't let it die.
It's not fossilized, it's just that no one uses it. Put hot chicks on there or make it mandatory for logging into Slack and suddenly everyone will be using keybase.io, and honestly I think web of trust is a good idea and if a webapp can make it seem easy or intuitive then I'm all for it.
We're scratching our heads wondering why there's no forward motion when it's simply that no one is pushing it.
Looks pretty fossilized to me: https://keybase.io/blog
They haven't added or really changed anything since the acquisition AFAICT, it's just trucking along exactly as it was the day Zoom bought them out. Twitter account proofs were broken by the API changes years ago and nobody is at the wheel to fix or even just deprecate them.
https://github.com/keybase/keybase-issues/issues/4200
> We're scratching our heads wondering why there's no forward motion
Did you miss “Zoom”?
IMHO, this is exactly the right instinct, and there's a way to verify identity, location, and age without even having to share those directly.
Switzerland recently voted to officially implement Selective Disclosure JWT, which does exactly that. Social network registration can ask "are you over 18?" and run with that, and only that. Or the club entrance. Or whatever, because it's all controlled by you in your app.
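The selective-disclosure trick itself is simple: the issuer signs only salted hashes of the claims, and the holder reveals just the disclosures they choose. A minimal sketch of that idea (the actual JWT signing and encoding are omitted; names and structure here are made up for illustration):

```python
import hashlib, json, secrets

def disclosure(claim, value):
    # Salt each claim so its hash can't be guessed by the verifier
    salt = secrets.token_hex(16)
    blob = json.dumps([salt, claim, value])
    digest = hashlib.sha256(blob.encode()).hexdigest()
    return blob, digest

# Issuer: only the digests go into the (imagined) signed token.
claims = {"name": "Alice", "birthdate": "1990-05-01", "age_over_18": True}
disclosures = {c: disclosure(c, v) for c, v in claims.items()}
signed_payload = sorted(d for _, d in disclosures.values())  # pretend JWT-signed

# Holder: present the token plus ONLY the age disclosure.
blob, _ = disclosures["age_over_18"]

# Verifier: recompute the hash and check it's covered by the signed payload.
assert hashlib.sha256(blob.encode()).hexdigest() in signed_payload
print(json.loads(blob)[1:])  # ['age_over_18', True] -- nothing else leaks
```

The name and birthdate stay hidden; the verifier still knows the revealed claim was vouched for by the issuer.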
That seems like a good idea. The question is how the JWT is generated. A standard one would be more akin to a traditional crypto keypair. That is a "signal" key insomuch as it tells us who controls an account. It can't tell us the owner is the controller and that is the current weakness of crypto right now. To know the owner, we need another type of keypair to go alongside the traditional kind. That would be a "tone key" and is generated by a refreshing seed derived from the entropy of long-running, unfakeable conversations. The same way a friend might recognize us as being ourselves.
But you don't need to prove to everyone that you are yourself, do you? You are only asked whether you're over 18; the bouncer doesn't care about your name. So you can still hold someone else's phone (like someone else's ID last summer) and fake their answer.
Anonymity is important for many things. But on the flip side, it's responsible for many issues with the internet today, because it makes moderation pretty much impossible (anyone can always just create a new account).
What we're missing is a way to have cryptographically secure pseudonymity: you log in to a website, you don't give any information whatsoever, but you cannot make two different accounts.
Most likely because your second sentence is impossible in one way or another.
Even if it's some kind of government-issued key, governments can't be trusted not to create imaginary people and hand them out to companies like Palantir for large-scale population manipulation.
I can imagine a government creating a moderate number of fake profiles for use by police and intelligence services, and honestly I'm fine with it, but creating a ghost population for propaganda purposes is entirely different, and if you live in a country where you cannot trust your government not to do something that bad, you're already screwed.
In any case, it is still better than the status quo where even foreign authoritarian states can do that in countries where the local government wouldn't.
Do you propose to only let people from a whitelist of countries use the internet? Because many countries would have no qualms giving their troll farms a bunch of fake electronic IDs.
I don't hide my identity, but I've yet to find a "non-anonymous" platform that actually accepts my identity.
Paid option doesn't really deter this behavior, it encourages it - a botter will see a price tag on a "real" account (see what happened to twitter's blue checkmark sub) and go oh goody, I can pay for people to think I'm real.
If you make the price high enough sure, but I'm unsure you can find the right price to simultaneously 1) deter bot traffic and 2) be appealing to actual users.
in other words, it just becomes the cost of doing business.
the individual user is now priced out and cannot speak candidly and anonymously, while large, wealthy orgs simply price that into their market-capture and consensus-building techniques
I'm trying to imagine this new paid app from different angles and in different versions, i.e. a new Reddit... Pay to be in there, get paid for being in there, only humans can be in there, ads pay for humans being there, humans use some government online ID system, karma systems improve so that only humans are rewarded, Voight-Kampff captchas, humans mail the app their DNA to verify their identity, humans log in at 24/7 street login posts (think phone booths)... I just don't see any good, unbreakable, viable and/or sustainable way. We just need to get used to coexisting with bots everywhere while we adjust our expectations and social codes. Fast-forward until AI is massively on the streets and indistinguishable from us physically (or very distinct and fascinating), and all supposing that we can keep them under control...
Dead internet is the prequel to dead world, let's seize the opportunity to learn how to coexist with synthetics and develop the code that will make life with a higher intelligence species possible on Earth. And remember, we humans vary widely, and just like there are people happy to share LinkedIn slop today, there will be humans gladly living surrounded exclusively by overpowering synthetics. So lower your expectations for universal solutions and focus on niche.
I see a simpler outcome, smaller communities where you can verify humans are human. I've already started doing this, and mostly with people that already live in my community.
The corporate internet was never good to begin with, it was just forced on the masses.
I think this is the trend, and it's probably a good one for most involved, except probably advertisers.
Friend of a friend verification could side-step that, if there is a good way to penalize bad actors willing to violate the principle.
I guess we're coming full circle back to red offbrand Hacker News
I don't think any web-of-trust systems ever worked. It might be a bad example but PGP tried to make it a thing for over 30 years.
If by worked you mean "worked so well they replaced all the big actors" then sure, nothing has worked.
But plenty has worked on a smaller scale. Raph Levien's Advogato worked fine.
There's also a reason most new social networks start up as invite only - it works great for cutting down on spam accounts. But once they pivot to prioritizing growth at all costs, it goes out the window.
PGP is niche. This would be far more mainstream. If you applied it to HN I could probably verify > 50 people already. For PGP I wouldn't know anybody...
Someone, somewhere, is salivating at the idea of combining both: a paid-for digital ID service that you can use as authentication for the web.
Actually, now that I think about it: social media platforms already started this with the paid blue badge for verification, and it's a monthly subscription too. But it's for their respective platform only, not universal.
Isn't this what Worldcoin is? Definitely not a fan of the project, but I think the general goal is to get people to verify they are human and then somehow "waves hands blockchain" that verification can be carried with them on the internet.
Would that work though? Unless it checks your pulse every 30 minutes I don't see how that would make it better. Bots would use stolen IDs for that. It would only contain it at a smaller scale probably
There's definitely a price where it doesn't scale and that price is almost certainly lower than what people would be willing to pay once for themselves.
It would have to integrate with some kind of official government ID, so that there can be extremely serious criminal penalties for ID theft. But that's something for the next republic, because the current one's justice system is unlikely to be up to the task.
Neither of those solves it, just tries to conserve the status quo.
The issue, as I understand it, is literally a new Eternal November, just that instead of “noobs” there are “clankers” this time.
Personally, I don’t give a flying fuck about things like gender, organs (like skin or genitalia) or absence thereof, or anything alike when someone posts something online, unless posted content is strongly related to one of those topics. Ideas matter no matter who or what produces them. Species fit into the same aspects-I-don’t-care-about list just fine - on the Internet nobody knows^W cares you’re a dog. Or a bunch of matrices in a trench coat. As long as you behave socially appropriate.
The problem with bots is that they're not just noobs: unlike us meatbags, they don't just do wrong and stupid things, they can't possibly learn to stop (because the models are static). Solving that, I think, is the true solution that brings the Internet back to life. Anything else seems to be just treating the symptoms.
(Yea, I’m leaning towards technooptimist and transhumanist views - I was raised in culture that had a lot of those, and was sold a dream of a progress that transcends worlds, and haven’t found a reason to denounce that. Your mileage may vary.)
I actually look forward to the era of social media companies paying humans to use their platforms.
Members only comment blogs. Where you need an invite to comment also solves the problem. You need to know a real human to get access.
That might raise the initial barrier, but it assumes every user behaves appropriately.
All it takes is one invited user to open the door to bots.
Because there's an initial user who invited the bots, the whole invite tree of that user can be culled: every invite given by the user who added the bots.
Yes but I think bots can be very good, and many people have legitimate online-only relationships. It gets hairy quickly, with real users getting culled and bots slipping through.
Also, if the bots are smart, they'll add real people too and take them down with them.
Yeah, that's the trade-off of this implementation. Lobste.rs already uses it: https://lobste.rs/about#invitations The comments are considerably better. I'm not even a member, but I get more out of reading those comments than HN's, and I've worked at multiple YC companies. This place is not what it used to be.
Blog admin sees who invited the bots and recursively kicks that account and any invited by it.
I invite myself multiple times in addition to other real humans. Then I use my duplicate accounts to invite bots.
Inviting people who invited bots could also hurt your "social credit" score in various ways.
Your tree could for instance be pruned - you can still invite people, but the people you invited can no longer invite people.
Not a lot of sites have tried this and failed. Those which have tried to be even a little bit clever about it have succeeded pretty well (Advogato was a really early example).
What there have been, are sites which rejected such restrictions after a while, because they would rather have a big number to show to investors than real people. Many have even run the fake accounts themselves (e.g. Reddit).
I'm assuming there's tracking on the invites, so a recursive kick on X and everyone X invited would still do the trick. If an IP address appears more than 5 times in an invite tree, ban the /24 (or the ASN, if it's not from a friendly country) for 10 minutes or some other reasonable timeframe.
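The recursive kick itself is trivial once invites are tracked; a minimal sketch (usernames and data layout are made up for illustration):

```python
from collections import defaultdict

# Who invited whom: invitee -> inviter. Banning a user also bans
# everyone reachable through their invites.
invited_by = {"eve": "root", "bot1": "eve", "bot2": "bot1", "alice": "root"}

def subtree(user, invites):
    """All accounts reachable through `user`'s invites, including `user`."""
    children = defaultdict(list)
    for invitee, inviter in invites.items():
        children[inviter].append(invitee)
    out, stack = [], [user]
    while stack:
        u = stack.pop()
        out.append(u)
        stack.extend(children[u])
    return out

banned = set(subtree("eve", invited_by))
print(sorted(banned))  # ['bot1', 'bot2', 'eve'] -- alice survives
```

The pruning variant mentioned above would instead leave the subtree in place but revoke its members' ability to issue further invites.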
Getting unique IPs in any country you want is trivial for anyone but people building toy bots.
How far up the tree do you kick? Going too far up lets malicious people "sabotage" by botting to get a huge swath of legitimate users banned.
Going too shallow means I just need to create N+1 degrees of separation between myself and my bot accounts.
Then we go back to torrent sites.
Invite-only. You get a number of invites per year, etc. And once a year an open-door day or so.
> Members only comment blogs.
There, sadly, needs to be some gatekeeping and then it can work.
For example, I've been a member for years of a petrolhead forum where it works like that: a fancy car brand, with lots of "tifosi" (and you don't necessarily want all those would-be owners on the forum). To be part of the forum you must be introduced by other members who have met you in real life and who confirm that you did show up with a car of that brand.
If you're not a "confirmed owner", you can only access the forum in read-only mode.
It's not 100% foolproof but it does greatly raise the bar.
It's international too: people do travel, and they organize meetups, see each other at cars and coffee, etc.
Or take a real extreme, maybe the most expensive social network: the Bloomberg terminal. People/companies paying $30K or so per seat each year probably won't let employees hook an LLM up to chat for them and risk screwing their reputation. Although, I take it, you never know.
It is the way it is but gatekeeping does exist and it does work.
>Where you need an invite to comment also solves the problem. You need to know a real human to get access.
BitTorrent trackers, flawed as they are, have performed this experiment for us, and the lesson we're supposed to learn is that this does not work. Someone, somewhere, eventually has an incentive to invite the wrong sort, which because of social-network graph math means "soon". Once that happens, that bot will invite 10 trillion other bots.
Actually it does work for those invite-only trackers, especially in niche fields.
Unlike most public trackers, which are either dead or on life support, member-only and invite-only sites are still kicking.
And you are personally responsible for your invitee
Absolutely. If anything, private torrent trackers and NZB indexers are proof that it works overwhelmingly well.
The few I'm part of all have a real community (like in the net of old), civil conversation, and verified, quality materials being shared. Almost everybody behaves and doesn't abuse the invite system, because nobody wants to lose their access to such a wonderful oasis among the slop web. It's a great motivator to stay decent and follow the rules. When things go bad, it's usually not because of malice, but because someone got their account stolen. Prune the invitee tree and things are mostly under control again.
Honestly the $10 barrier to SomethingAwful back in the day (and I guess now since it’s still around) definitely made a huge difference. I hate the idea of subscribing to a site like HN or Reddit… but one time $10 to post? I’d accept that if it meant less bots.
A $10 one time not-an-asshole fee is totally reasonable.
History also shows you can take a $10 fee and maintain quality on SomethingAwful for quite some time.
I would probably not pay $10 to post on HN, but many spammers who expect some kind of tangible return would pay that, so the fee just makes the problem worse.
The spammers wouldn't pay it just once, though: the idea is that it's a good way to scale moderation. Each time an admin needs to ban a user, there's a $10 subsidy supporting that action, and if the bots come back, they get to pay $10 to be banned again.
Assuming the money isn't wasted and is actually used to fund moderation, $10 is probably comfortably above the cost to detect and ban most malicious users.
> The spammers wouldn't pay it once though
There are large swaths of spammers that indeed would not pay it. There are on the other hand plenty of NGO's that would pay it without a second thought to promote specific topics and dogpile on others. Those are the movements I would expect AI to take over if not already. AI does not sleep, humans do. AI won't miss the comments that groups believe need to be amplified or squelched.
That's basically what Valve does with cheaters on premier accounts in CS:GO/CS2. And the revenue is still growing.
Yeah, I love HN, but I wouldn't pay, and I know many if not the majority of other people wouldn't either. It would increase quality for a while for sure, but what happens a year or two down the road? It would kill the user count, reduce comments, and become less valuable over time.
I wonder how much that functions as an age gate since kids usually don't have credit cards?
Didn't that fee allow changing other users' account names or something like that?
You could pay another $10 (or maybe $15?) to change someone else's avatar.
Reminds me of Bill Gates in the '90s when asked about email spam. He said it would make sense to make an email cost like 1 cent so the spammers can't spam as much, but this didn't sit right with the mindset of the people at the time.
Also, while real people probably would not be willing to pay to e-mail, spammers who are making money would pay and consider it a cost of doing business. So the fee would have the opposite of its intended effect.
I don't think the current firehose model of spam would be sustainable anymore, though. Those spammers send millions of mails a day. Even with a 1 cent cost, they'd have to be much more selective about their address lists, given the low success rate. It may not solve the problem, but I'm almost sure it would help a little. It may also be an additional qualitative barrier for crime-linked spam such as phishing mails, because they'd have to find a non-traceable way of payment, which is not trivial and always carries a slight risk of being identified anyway.
Hashcash was a proof-of-work system that would have put a computational tax on email. I don't know what kept it from getting more traction other than simple chicken-and-egg network effects, but it's a good idea, and worth resurrecting.
http://www.hashcash.org
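The scheme is tiny to implement. A minimal hashcash-flavored sketch (SHA-256 instead of the original SHA-1, demo-sized difficulty): the sender burns CPU finding a stamp, while verification costs one hash.

```python
import hashlib
from itertools import count

def mint(msg: str, bits: int = 16) -> int:
    """Find a nonce so that SHA-256(msg:nonce) starts with `bits` zero bits."""
    for nonce in count():
        h = hashlib.sha256(f"{msg}:{nonce}".encode()).digest()
        if int.from_bytes(h, "big") >> (256 - bits) == 0:
            return nonce

def verify(msg: str, nonce: int, bits: int = 16) -> bool:
    """Cheap check: one hash, compared against the difficulty target."""
    h = hashlib.sha256(f"{msg}:{nonce}".encode()).digest()
    return int.from_bytes(h, "big") >> (256 - bits) == 0

# Sender pays ~2^16 hashes up front; receiver verifies instantly.
stamp = mint("to:bob@example.com", bits=16)
assert verify("to:bob@example.com", stamp, bits=16)
```

At 16 bits this costs a fraction of a second per message, which is negligible for a person but adds up fast across the millions of mails a spam operation sends.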
Internet Mail 2000 is the only answer: https://cr.yp.to/im2000.html
TLDR: Mail storage is the sender's responsibility. The message isn't copied to the receiver. All the receiver needs is a brief notification that a message is available.
Sounds like a horrible system where you retain many of the problems of email (you still need to deliver notifications) and new surveillance and persistence and mutability problems layered on top..
community idea:
"my2cents"
$0.02 to post or send a message
It's also something that was on my mind when I wrote about those two options. I've kept the idea in the back of my head since those days (I'm old enough to remember when Gates had this atrocious, yet interesting, idea).
We need something else, we need an "extreme" (~$1) fine that anyone can claim from any sender who bothered them, no questions asked. Spammers will stop instantly overnight. This would work for phone spam as well.
I read about an idea for an incentive/check system like that before. Something like: make the cost 10c instead of 1c, but implement a system where recipients can mark mails as confirmed "wanted" mail, upon which the sender would be reimbursed 9c. Increasing the cost for unsolicited mails while keeping the cost low for well-behaved newsletters.
Payment would need a delay too: pay $10 and then wait a week or so for the payment to clear without being reversed. Hopefully that stops card thieves from dumping as much as possible before getting booted.
Could we just add complex and varied captcha to the comment & posting forms?
That's not a bad idea, sending mail could simply be an authorization for a $1 or $10 charge. And if the receiver said the message was unwanted, then the charge would go through.
There's just the pesky problem of incentives on the other side of the coin: who gets the $? The spammee? But there would be enshittification issues like:
1. Those who are incentivized to take as big a cut as possible.
2. Those who would put it in their EULA that you must accept their spam and not charge back, or else you lose access to something you value, like their services (EULA ransom... not much different from today's "accept our EULA or lose access to what you've already paid for!").
I'm sure there are many other perverse incentives which would creep in..
Odds are it would harm real discussions more than it would harm bot spam.
The bots exist for a reason, usually to covertly advertise a product, and by themselves already cost money to run. Someone looking to astroturf their AI B2B SaaS would probably be more willing to pay $10 to post than a random user from a less wealthy country who just wants to leave a comment on an interesting discussion.
Given how easy it was to get banned, the :tenbux: were almost like a subscription.
Now if we could only pay $$ to overwrite people's social media pfps, that'd be fun.
It's a beautiful system. And if you were a dipshit and got banned, you paid another $10 and hopefully learned your lesson.
Exponential backoff: second time is $100 etc.
I think MetaFilter had a similar system, and it was definitely one of the higher-quality forums.
Maybe some proof-of-work scheme where uploading content requires the uploader to solve a cryptographic puzzle, hence reducing the overall number of posts? The PoW difficulty should somehow be tied to economics, so that it wouldn't be too expensive for an individual but would make mass uploading via bot farms uneconomical.
Like a CAPTCHA?
Good one; honestly, I didn't think about it. But visual and other human-accessible captchas can be solved by bots. My suggested PoW would be purely computational.
Verified entities defecting by using AI to generate their content for them will break this.
We can use verification mountain dew cans. No big deal.
I can't recall the last time I did the Dew. Should I turn myself in to a reeducation camp?
I recall a WSJ article during the 2024 election that was about the fact that Tim Walz and JD Vance were both big consumers of Diet Mountain Dew, and how basically America ran across the board on various types of Mountain Dew. Can you really call yourself "American" if you're not doing the dew?
I drank enough for three people in college. My lifetime average is probably still in the margin of error.
I pay for my ISP and the financial institution the money comes from has age verification
Social media, HN and the rest of internet first business can go broke
I don't see anyone out there propping me up directly. Why would I give crap if some open source hacker or etsy dealer doesn't have a home next month? Yeah I don't because they're not caring in the same way
Thoughts and prayers for everyone else, but your effort is clear; I'm not going to be 1984'd into caring for people who clearly don't care back.
> an internet of verified identities (start by uploading your ID card)
That is Facebook. I hear it is full of bots posting under verified identities.
This 3-year old meme video is becoming more and more relevant by the day: https://www.youtube.com/watch?v=-gGLvg0n-uY
I wonder what the effect of a minuscule tax (0.01¢) on network use would be. It could reduce addiction and abuse, and create a fund to finance other things.
How would you overcome a local llm embedded into a keyboard?
People can also move to smaller communities
Me neither. A paid internet isn't anonymous either; your ID is just as verified as on the verified net.
In that case, I will certainly embrace the slop net. Perhaps this is even good because many don't dare to venture beyond the black wall.
There's a third option: web-of-trust. https://lobste.rs/ has some problems but not bot spam.
Don't they have a literal bot account that reposts top HN links?
Third option: a web-of-trust that lets you see the chain of vouches required to connect you to a given commenter, and which of your known friends and friends-of-friends have already attested to their humanity.
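The lookup that idea needs is just shortest-path search over an attestation graph. A toy sketch, assuming a simple adjacency list of who has vouched for whom (all names and edges here are made up):

```python
from collections import deque

def trust_path(graph: dict, me: str, commenter: str):
    """Breadth-first search for the shortest chain of vouches linking
    `me` to `commenter`. Returns the chain, or None if no path exists,
    meaning nobody you transitively trust has attested to them."""
    queue = deque([[me]])
    seen = {me}
    while queue:
        path = queue.popleft()
        if path[-1] == commenter:
            return path
        for friend in graph.get(path[-1], []):
            if friend not in seen:
                seen.add(friend)
                queue.append(path + [friend])
    return None  # no chain of trust exists

# hypothetical attestation graph
web = {"me": ["alice"], "alice": ["bob"], "bob": ["commenter"]}
# trust_path(web, "me", "commenter") -> ["me", "alice", "bob", "commenter"]
```

A real system would weight edges by how much you trust each vouch and cap the chain length, since a five-hop attestation is worth much less than a direct one.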
It turns out that Twitter selling the blue marks is the correct direction, but no one would admit it.
Yeah, me too
You could have easily said this twenty years ago when photoshopped photos were going viral on the early internet. Turns out people are completely fine with ai content and photoshop.
Fine in what way? What people?
I have not seen or heard of a single person who is excited about AI generated blog posts, or TikToks, or commercials, or images. In fact it’s the opposite, the internet coined the term AI slop, and my non-internet addicted friends hate the fact that chatGPT is killing the environment.
The only people I’ve ever seen champion AI are the few who are excited by the bleeding edge, and the many many peddlers
The most common people just seem to be the elderly who don't care / don't know any better. The same ones who told us never to believe anything from the internet. They seem to be hooked on weird AI jesus facebook posts, daily AI generated motivational content, talking to the chatbot in Whatsapp, etc.
There are probably more than 10^17 AI model executions occurring per day. I know in ye olde HN there are many Purists that are Too Good For AI, but the majority of the human race is consuming AI at a blinding rate, and if they really didn't like it, they would stop.
> and if they really didn't like it, they would stop.
I can’t really articulate why, but this doesn’t feel true to me. There are plenty of things humans do especially at scale that we don’t like, or we do that we don’t like others doing, and don’t stop
>The "Moloch problem" or "Moloch trap" refers to a scenario where individual, rational self-interest leads to a collective outcome that is disastrous for everyone. It describes competitive, zero-sum dynamics—often called a "race to the bottom"—where participants sacrifice long-term sustainability for short-term gains, resulting in a loss for all involved.
Hence why we have to keep feeding the orphan crushing machine.
Who wrote what you quoted?
Instead of getting judgy about Google responses, read the original story.
https://www.slatestarcodexabridged.com/Meditations-On-Moloch
I read the story before. Don't post generated comments or AI-edited comments. HN is for conversation between humans.[1]
[1] https://news.ycombinator.com/newsguidelines.html
Ya, dang be danged: religious adherence to that rule is quite stupid, especially when you're talking to someone who has been here for years.
There are probably as many spam emails per day. You think "majority of the human race" likes them?
And how much of that consumption is voluntary or willful? I don't want AI slop in my search results or in my forum discussions; it muddies the water with shallow-at-best information, often in excessively verbose ways that help hide the subtler falsehoods it picked up.
Your comment doesn't make sense because the fact that "dead internet" has been coined since then (along with the popularization of "slop" and "hallucination") means there is a line and we have crossed it. Denial doesn't stand up to any scrutiny.
It's too bad we weren't more skeptical about the ways emerging technologies would eventually be used against us. Some warned about it but many (including me) ignored them. Perhaps we could be forgiven for that naivete, but there's no excuse to be ignorant of what's going on now.
There's a huge difference between fake content and fake authors.
Why is it being called dead internet theory when, as far as I can tell, what's really happening is that big centralized systems are being overrun with bots? The internet existed and was pretty great before these large centralized systems came into being.
Anyone can still run a blog/website, and/or their own discourse server. There's no need to mourn for these centralized systems that largely existed only to exploit us in some way. Let's celebrate "small internet theory", an internet where exploitation is effectively impossible because every company that tries it is overrun with AI bots. That sounds awesome to me personally, but I was also up late last night watching clips of Conan O'Brien from 1999 and the nostalgia for that era / what the internet was like back then hit me so hard it was almost painful.
> Why is it being called dead internet theory
“A social networking system simulates a user using a language model trained using training data generated from user interactions performed by that user. The language model may be used for simulating the user when the user is absent from the social networking system, for example, when the user takes a long break or if the user is deceased” [1].
(More seriously: https://en.wikipedia.org/wiki/Dead_Internet_theory)
[1] https://patents.google.com/patent/US12513102B2
So why isn't it called "dead social media theory"? The internet is not only social media services, though I understand a lot of people seem to think that without centralized social media services there is no reason to use the internet.
Have you been on the internet at large lately? With Google you may get one authoritative site on something and 50 bot copies of it on different domains. Sometimes the stolen site is the number one result. Also, if you ran sites years/decades ago, you realized even back then that any site allowing user posting was getting overrun by spammers/bots. Now it's so much worse that it's not worth doing in most cases.
So, most posts on social media aren't real.
Most user posts on non-social media are spam/not real.
Most websites in searches are copies/ad spam.
So yea, dead internet reality.
I spend all day every day on the Internet and I don't share your perspective. I might dislike centralized social media and yearn for a bygone era, but just in the past two days I had a very positive interaction with multiple real humans in the Commodore 64 subreddit that helped solve a problem I was having that isn't documented anywhere else on the internet yet. So then I went on my personal blog and blogged about it, which will get it out there on Google and help others. In this way, I am helping to keep the internet alive, I guess. "Be the change you want to see in the world," and all that.
> So then I went on my personal blog and blogged about it, which will get it out there on Google and help others.
That's some of the boldest optimism I think I've seen in a while. Maybe your blog is more popular than I assume, but still.
Traffic stats for my primary blog are public (I only started using simple analytics in December so there's only two full months of data): https://dashboard.simpleanalytics.com/amiantos.net
If you think a site with only 209 visitors in the past 30 days is going to move the needle, then I've got news for you. Especially if bots are the main source of that visitor count. That's very close to "the people visiting your site are you, you again, and maybe your mom" numbers. After that, it'll be skiddies and bots. Anybody who's run their own site has been there, but let's not make it out to be some grandiose site that will determine Google page ranking.
Why are you putting words/desires in my mouth that I did not voice? No one said anything about moving the needle. I said that my blog will go into Google results and help people, you said that sounded optimistic, so then I provided you proof that my blog already shows in google results and receives traffic. I've received messages from real people who have been helped by my writing on my blog, so it's not just bots.
I do not know what "move the needle" means or why you think I am trying to do that. Your excessive negativity and pessimism is unwarranted and I dislike it. Honestly between you and that other guy replying to my comments with seemingly thinly veiled vitriol for my perspective, it's just further proof of my point that being able to communicate with large groups of anonymous people is typically a net negative. Most anonymous people seem to be quite nasty. I'd rather write on my blog where no one like you will see it, and if you do see it, you likely won't go out of your way to send me an email with your negative comments because it's likely you do this for public attention.
Your sites are great, it was interesting to read about your experiences with Freemasonry. A lot of temporarily-embarrassed billionaires up in here.
Okay Freud, you're right. It's all about the attention. That's why you're the one publicly promoting your blog, for what is it again? Right, attention.
I think you are looking from a very different angle. A site with only 200 visitors/month can't move the needle, but it's a valid part of the ecosystem.
Tbh, for niche hobbies even one new visitor a month is a win, if they actually read the article and don't just skim it. An eager, enthusiastic reader is a prize not easily won on the internet. Having even one per month would mean you personally taught something to a classroom of peers in a meager two years. Blog posts can easily live ten times as long.
For people who spend most of their time on the small internet, sites like that are essential, because they work on another level. You know you're engaging with someone who has a passion for the same things you do, and who had time to polish their words. You know you can reach out for help and be kindly greeted.
These are the parts of the internet that are so boring for anyone else that they're totally safe from spam and ads. That doesn't scale, and can never scale: if anything like that becomes popular, a massive slopfest follows and the slop gets sold instead of the original.
And yet those boring places – boring for everyone not interested enough – are there, and people have a way to reach to each other and talk to each other about shared interests. The internet isn't dead for nerds.
I appreciate your positive attitude and I hope more people will adopt it.
I am also part of some very niche communities on the internet, and although they are small they are certainly thriving.
"You can do everything right and still lose".
At the end of the day there is no real penalty for being a bad actor on the internet. They get unlimited retries on spamming and otherwise causing problems. In many ways this helps Google entrench itself as the search/ad company: no one else has the money or compute resources to continuously crawl and index the internet. Furthermore, they have told us it's their job to shove unskippable ads in our faces. They'll gladly let the public internet die in the future if they can push out their own version of "SafeInternet by Google, now with more ads!".
Every single one of your comments in this thread is some slippery slope stuff where you think corporations and federal government are going to work together to kill off the (public?) internet. It's okay that you feel that way, even if it's just a big ol' fallacy, but you don't need to repeat it in six different places. You made your point, you think the internet is doomed no matter what happens, great, let's move on.
>even if it's just a big ol' fallacy
Really?
https://www.cnbc.com/2026/03/08/social-media-child-safety-in...
https://en.wikipedia.org/wiki/Social_media_age_verification_...
Please wake up, won't be long before someone fires off a lawsuit at HN and we'll have to give identification here.
Not a slippery slope. That's exactly what happened.
You've (unironically?) restated the crux of the Dead Internet Theory.
https://en.wikipedia.org/wiki/Dead_Internet_theory
Authentic human activity has been completely overwhelmed by bots and slop. Discerning signal from noise becomes too burdensome to bother with.
Of course the physical medium continues to exist.
Of course there are still humans, such as yourself, producing free content, to be harvested and regurgitated by parasites.
But authentic human activity is increasingly going out of band, no longer discoverable. Whatsapp, discord, private groups. Exactly as the theory predicted.
To be really blunt - why do you think your blog takes will 'get out there on google' and also 'help others'?
You are a kind soul, but your mind is afraid to see reality. It doesn't matter if you share that perspective or not, and "Be the change you want to see in the world" doesn't really say anything here. See this for example https://www.youtube.com/watch?v=9kWeAhMponc, and this https://arnon.dk/the-trust-collapse-infinite-ai-content-is-a...
And check these books "Superbloom: How Technologies of Connection Tear Us Apart" and "No Sense of Place", maybe it would help you to see the overall effects of the internet (and other communication mediums) and forget this simplistic view that a lot of programmers have. The nature of the communication medium doesn't just affect the message, it shapes everything in society. Ignoring that because you had a good experience here and there won't change anything.
It is inevitable that in a few years we won't even be able to tell a real user from a bot without forensic analysis.
When AI can post a million times a day the internet is FUBAR.
The problem is that average people can't tell even now. Heck, I'm quite sure that /r/all is completely bot driven, yet I still check it occasionally. I'm not even sure about HN, but I haven't yet found manipulation here as obvious as on Reddit.
It's funny when people start accusing each other of being ChatGPT.
That sounds exactly like the kind of thing chatGPT would say to hide the fact it’s chatGPT… :)
100% agree that this is what it should be called. To argue that big websites being big makes them equivalent to the whole Internet is absurd. Besides, I love the idea of the only recourse to be to go back to independently run information websites.
For the younger generation, social sites are the internet. They open an app on their device, they don't go to sites by searching the web. I've seen people perform a web search in an app store thinking it was the same thing.
Yeah I agree. It’s an acute problem on social media platforms where there’s a market force incentivizing it. If you’re mostly engaging in specific niche interactions with known communities or people, it’s not nearly so prevalent. The internet still works fine as a whole.
commercial internet services would prefer that you forget the internet without them can in fact exist
> A social networking system simulates a user using a language model trained using training data generated from user interactions performed by that user
Google People[1]?
[1]: https://qntm.org/perso
>Anyone can still run a blog/website, and/or their own discourse server.
And those will also get choked with fake bot "members" and bot comments.
Plus, if "anyone can still run a blog/website", this includes bots. AI created and operated blogs/websites, luring in people who think they're reading actual human posts.
What's wrong with AI written websites? Not all of them need to lure people by pretending to be humans. Content is either useful or not.
In some ways it might be positive. My girlfriend had a small addiction to Instagram reels. The flood of AI generated videos on there just killed the magic for her and she stopped using it
Happy for your girlfriend, and anyone else who escapes because of this.
But it's not about the current generation of addicts. It's a play to capture the next generation.
It remains to be seen whether they'll get caught or not but it's important to remember that even if all of us mature humans find this new AI social media weird and gross, children don't have our preconceptions.
Meta is going to do everything in their power to train the next generation of young, immature brains into finding AI social media normal and addictive.
They (along with TikTok) already managed to do that to the last two generations so they have a scary track record here.
Happy to hear of this anecdata, as it gives me hope something similar will happen to my family
Anyone can run a blog/website and be subject to AI bot crawlers using terabytes of your bandwidth for no reason, yeah.
Bandwidth is only expensive in the US, somehow. Here in Germany I haven't worried about bots and their additional traffic since 1998 (there are other annoying things about bots, though).
If this encourages those people to stop sending hundreds of megabytes of crap per page load of their text content, it might be a good thing
More than that, it's practically impossible to find good specialized, human-written websites. Search engines don't find them, all results are AI garbage. With no real ability to be discovered, there's no incentive to maintain such websites too, and so the cycle of slop continues.
Kagi Small Web, though their RSS feed only seems to show 5 updates a day across thousands of sites. Also search for IndieWeb.
No, the old internet wasn't that great. There were so many problems. Finding things was hard, buying things was hard, integrating things was hard, compatibility was hard, everything was super fractured. It felt great at the time because you discovered all these random things and it was all novel at the time. Centralized (Or decentralized collaborative services like IRC or Usenet) really unlocked the power of the internet.
usenet and irc are quite old. how are they examples of some mythical point at which the internet was unlocked by services?
centralized and decentralized would include almost any service. your comment is so vague and ambiguous as to be meaningless. (that's a hallmark of LLM output. are you a bot?)
it was easier to find authoritative answers 20-30 years ago. google and, before that, altavista and yahoo, were quite good at directing queries to things like university-run information sites or legitimate, curated commercial sites. for the last decade the first google page has been crammed with useless SEO optimized fluff.
as for shopping, that was the first dotcom boom. what really took it mainstream was covid. not centralized or decentralized collaborative nonsense.
no.... not a bot, and please see the HN FAQ before making comments like this.... I'm talking about decentralized common services, like IRC, Usenet, email: the same service everywhere, and they all interact together. But the old internet was super fractured when we got websites; nearly everything did things completely differently, and it was very hard to trust anything. It was not easier finding authoritative answers 20 to 30 years ago. I started in '91, and it was hard to find anything. Search engines were a great improvement, but it was still kind of hard to find what you wanted. Things drastically improved with Google and PageRank, but that brought in other problems.
Reasonable fragmentation and friction is a feature, not a bug. Global-scale social networks with zero resistance have turned the information superhighway into the information superconductor carrying infinite current, otherwise known as a short circuit.
> buying things was hard
This one is not a problem anymore.
I generally agree with this, but I think the small internet hasn't succeeded in building social replacements for the "centralized systems". The internet is a social technology. So for this to be viable, the small internet needs an answer.
Occasionally, someone mentions RSS as a solution. That's only a small component of the solution.
How would the small internet fight the bots?
Aggressive moderation? Disable UGC?
In an ideal/fantasy world under "small internet theory", every online friend group would have their own Discourse server set up (similar to how friend groups use Discord now), and traffic/usage of that Discourse server is so small that it would be a waste of resources to try to swamp it with bot traffic, and on top of that, everyone on the Discourse server are friends who can vouch for new members who join, so no bot could join the Discourse server because no one would know who they are.
I understand that some may feel we are losing something, by not being able to go onto a website and anonymously talk to 1000s of other anonymous people we do not know, but I do not think that has actually been a net positive and this bot issue demonstrates the issue quite well: if you do not know who you are talking to, you do not know if they are telling the truth, or if they are someone you should even listen to at all, and now they might not even be human. So why do it? I would rather talk to my friends, people I've met in meatspace or over voice chat in a game, people who I can vouch for and that I know I can respect and trust.
Let's build small communities of real friends who recognize each other and spend time with them on the internet, in that way the internet will never die.
>Let's build small communities of
And 10 minutes later Texas demands you identify all your users age when someone posts a porn image somewhere. Facebook will gleefully laugh all the way to the court saying we need such internet ID to entrench themselves.
>, in that way the internet will never die.
You mean in the exact way the internet used to be... then died?
I'm guessing you're GenX or a Xennial; it's how we think. Relationships and friendships are hard things to acquire and keep, and you have to work at it or friends disappear. The thing is, the younger generations mostly don't think that way. They have mostly always lived in a world where connections are cheap and easy to maintain. Attempting to move to a system with more friction will be very hard for them.
> Attempting to move to a system that is more difficult will be very difficult for them.
That doesn't make it wrong, it just might make the last 20 years a mistake.
Large scale mistakes are very difficult to fix and have entrenched groups to ensure they continue. See: Internal combustion engines, Cigarettes.
So I’m a member of a group of about 70 middle-aged guys who have a discord server exactly like this. We live all over the country, but most of us have met in person, we travel the world together, and we do an annual retreat where usually about half of us meet up. In addition to discord, we have a bunch of groups on Marco Polo, and we have little sub-groups that do zoom calls regularly. Really wish some of them lived nearby, but in spite of that it’s been one of the best things in my life for years now.
Small internet isn't very attractive for most bots. Also, I use websites that are invite-only. This is effectively a web of trust. This works pretty well, bots aren't a real problem there.
Run your site like an old school BBS. You only run into these problems when you invite the world to your site and want big numbers. You don't have to do that.
Aggressive moderation?
That is a simple method in phpBB. Using ranks, one can set new accounts to be able to post while nobody can see their messages until verified by a moderator. For small groups and semi-private (invite-only) forums this is fairly easy to manage. Spammers and grifters influence nobody; only cranky old bastards like me see the message. There are other means to keep bots off a tiny site, but that is a longer topic. Even better, one can send a header to redirect those using the Tor Browser to the Tor link, and when states come along and demand some third-party process, one simply disables clear-web access. More friction, less data leakage, and no corporate capture. This also filters out the people who can't handle an extra step to access the site, and lazy governments that need money trails.
HashCash.
It would be interesting if we had some sort of local verification in the real world. As in picking up some key from some physical place or having it sent to some physical place. Some services like nextdoor are set up like this and mail out account auth to make sure the user is local to their next door group. Obviously you can imagine how it might be abused but it is impossible to do so at the scale you can abuse digital only methods.
Pokémon go was ahead of its time
It reminds me of the cartoon of two people on an escalator that stops working and one says to the other "Last time this happened I was stuck for four hours"
I'm thinking there might have been a deeper message than the moment of ridiculousness.
> what's really happening is that big centralized systems are being overrun with bots? The internet existed and was pretty great before these large centralized systems came into being.
This is a great point. Suddenly, I'm looking forward to this
> Anyone can still run a blog/website, and/or their own discourse server
Including bots.
Bring back BBS. Getting into the good ones was a process back in the day.
It's funny you mention this, I got a Commodore 64 Ultimate the other day and one of the first things I did was load up the BBS client and browse some BBSes. Those are from before my time (my first PC was a Compaq Pentium 166) so I never got to experience them for real. But if the rest of the internet collapses under the weight of bot traffic, BBSes are quite nice.
BBSs have been in theory replaced, but in reality they haven't even been approached by modern social media. Small forums full of dedicated users, often local. So many great memories.
We're talking right now on a centralized system that's slowly being overrun by bots. We can survive without, but I'll miss it.
And who is going to know your blog exists? If they search on Google they are going to get an answer from AI and stop
Who cares if anyone knows my blog exists? I'm not writing my blog to farm engagement as I do not run ads on my blog. I write on my blog because I want to write my thoughts down and project them into the world. Whether or not anyone sees them is pretty unimportant.
If my writing helps someone via them hitting my blog directly or them getting the answer via AI aggregation, mission accomplished.
In my experience AI doesn't give the answer you want because it gives the most shallow and basic, many times so basic as to be worthless, response. Then I either scroll through 20 results hoping I see one that isn't an AI writeup of the exact same incomplete source, or I give up and search out a specific site I know exists that isn't AI written for that information.
That's not exactly making an argument for the discoverability of blogs
Kagi small web, for one
I love "small internet theory." Beautiful view of the future.
You're welcome to link your proven-human page with mine.
Id even run a dedicated UT99 server lol
> The internet existed and was pretty great before these large centralized systems came into being.
The big centralized systems existed before the internet. GEnie. Delphi. Bitnet. CompuServe. The Well. American People Link. And dozens more.
The internet brought them all together, then extinguished them. Now we're going back to the old days.
The only difference now is that instead of paying AT&T to carry dialup connections and leased lines, we're paying our local/regional ISP for cable and fiber.
It's all the same game. Only the names have changed.
it was originally called the zombie web but that didn't catch on, so it turned into this.
You can create a blog, yeah. But you also can write the blog with AI. So, you still need to filter the content. Over time, people will find that "The signal-to-noise ratio has hit a breaking point where the cost of verification exceeds the expected value of engagement." https://arnon.dk/the-trust-collapse-infinite-ai-content-is-a...
Parasitic zombie internet.
> Let's celebrate "small internet theory", an internet where exploitation is effectively impossible because every company that tries it is overrun with AI bots.
But isn't it even harder for small forums to resist the robot onslaught without the trillion dollar valuations to fund it?
Although, part of the reason Facebook/Linkedin/Twitch/etc have bots is because those companies secretly want them, in order to inflate their usage numbers.
> Although, part of the reason Facebook/Linkedin/Twitch/etc have bots is because those companies secretly want them, in order to inflate their usage numbers.
Yes, they are disincentivized to get rid of bots.
The people that want to get rid of the bots get crushed because said botting technology is hyper advanced and cheap to use because of the massive scale of social media. This ends up with huge numbers of them getting put behind services like cloudflare further consolidating the internet.
signal/noise
You know.. I keep thinking this might be a good thing in some ways. AI spam could save us from the worst of the current social media status quo and the toxicity of the attention "economy" by flooding it so thoroughly that nobody wants to engage with it anymore. Maybe the world can collectively "wake up" and "go outside" by turning towards local and more intimate communities for social interactions..
It's a shame though that this is gonna kill so many sites and projects. Sure, we have ChatGPT, but with things like Google AI summaries getting so much better, traffic to sites is going to plummet. Without people visiting, I think the incentive, heck even the motivation, for a ton of sites is gone. We've seen it with sites like Stack Overflow, but it's probably going to happen to just about everything..
Things are definitely going to change in significant ways. The internet of the past is definitely dead, it just doesn't know it yet.
> It's a shame though that this is gonna kill so many sites and projects. Sure, we have ChatGPT, but with things like Google AI summaries getting so much better, traffic to sites is going to plummet. Without people visiting, I think the incentive, heck even the motivation, for a ton of sites is gone. We've seen it with sites like Stack Overflow, but it's probably going to happen to just about everything..
As I see it, this is just an extra step in a long series of tools for serving information more quickly. Search snippets have always (?) been displayed for each link/page returned. If the information you were looking for was included in those snippets, then you wouldn't need to visit the actual site.
Then at some point there were knowledge cards/panels. Again, if the information you were looking for was in those cards/panels, then you didn't need to click on the links.
Now with LLMs/Gemini, the information is sometimes summarized at the top of the page. You have even less need to visit the search results.
Google has always been a kind of cache for the Internet. It's just way more efficient at extracting and displaying information from that cache now.
So, yes, traffic keeps going down. But new knowledge will still need to be produced, right?
I don't know that the influx of AI spam would necessarily result in people disengaging and choosing to seek out real content, though. Social media feeds have been serving up less and less content from our actual real life contacts for a while now (partly because people seem to be posting less). As long as it's engaging I think a significant chunk of people aren't going to care whether it's AI
(anecdotally, my mother loves AI generated videos, perhaps it's just novelty at the moment and it will wear off)
I see many, many startups that promise to be an automated marketing agent that will do this exact thing: scour sites for conversations and post links to your product.
Obviously that burns down the human Internet, but it’s also a business that will have a short lifespan and bring about its own demise.
I guess they don’t care about anything enduring as long as they can grab some quick cash on the way out.
> I guess they don’t care about anything enduring as long as they can grab some quick cash on the way out.
As far as I can tell, that is basically all AI-related businesses, including the non-AI ones jumping on the bandwagon to throw all their employees in the bin and expect 10x productivity somehow. If they are right and these tools do become that good, well, the economy as we know it is over as white-collar knowledge work disappears.
But hey, we made money in those few years right!
At least in the US very few industries actually seem to be about making a product.
A good example is this, car companies don't make cars for the most part, they make loans. Financial companies first, car companies second.
Consolidation, collusion, and rent-seeking behaviors by companies are getting out of control too. The fact that AI companies can do what they are doing has much to do with previous brick-and-mortar businesses weakening business regulations down to nothing.
> A good example is this, car companies don't make cars for the most part, they make loans. Financial companies first, car companies second.
I get that this is true from a certain point of view. But car companies clearly compete in a very healthy way on features and quality.
In fact, cars are a great example of a market where the companies clearly care about making the product, and the competition between them has driven those products to incredible heights. Cars these days are vastly better than they were in the past.
Maybe the only parts of a future internet people will actually hang out in are going to be ones where any profit-making is completely de-incentivized. No recommendations. No product reviews. No opinions on companies or services. More slow web. Maybe we'll slowly head back to what websites used to look like when Yahoo was the biggest search engine.
Back in the day, Yahoo was a manually curated index of submitted & verified sites with search capabilities.
Wild-ass business idea: what if Yahoo 2026 recreated Yahoo 1996, and any of the video sites it bought up back in the day got relaunched as deshittified ad-selling mechanisms to fund the whole thing… there's gotta be Yahoo-1996 money in whatever scraps YouTube is missing.
It used to be faster and easier to follow actual content.
An index also makes content like that a lot more discoverable. The issues of classification still exist (it's not a tree, like they tried to make it), but indexes based on human effort and human judgments of value still have their place, I think.
The Internet was always full of bots. Not chatbots, but bots like crawlers, scrapers, automated scripts. That was fine.
What the OP is talking about is bots that participate in public discourse. That's the actual problem.
I think it can be handled to a degree though. Private communities, private Internet on top of existing Internet, and social media platforms without public APIs and with strict, enforceable ToS would all help.
For a while video was a holdout of sorts - e.g. if someone posted video content of themselves or their voice you could trust a real person was behind it.
But now convincing fake video generation is easily accessible, so one more holdout stands to fall.
It does seem like some kind of ID system is going to be the only way. Sucky but inevitable.
I often have the following thought: technological advancement, for all its boons, inevitably leads down destructive roads in the long run. Sooner or later we open a pandora's box.
> convincing fake video generation is easily accessible, so one more holdout stands to fall.
Is it though? I have absolutely no doubt we'll get there, but I haven't seen any evidence of this in the wild. My YouTube feed is becoming overrun with content with clearly generated scripts and often generated narration. But I haven't seen a single instance (that I'm aware of) of generated video being passed off as real.
Yes, I have seen hundreds of tweets and Reddit posts showcasing game-changing video technologies like AI face replacement, and yes, they look incredible in the 45-second demo reels, but every instance I have seen of real-world usage was comically bad.
The technology isn't inherently evil. The actual problem is the way our societies are set up, ironically incentivizing sociopathic behaviour even among members of a single nation, nevermind when geopolitics get involved.
I essentially see it like this: imagine giving a hand grenade to a three year old. The grenade isn't inherently evil, but it raises the existential stakes for that three year old and anyone in their vicinity.
There comes some hypothetical point where technology has advanced so much that anyone has the power to destroy the world.
I just searched for a video game tip, "Bannerlord II where to sell clay?", and Google's top result was an AI-generated page FOR THIS GAME that directed me to eBay.
Also, I forgot to mention: Google's AI Overview included the AI garbage page as its answer.
It's dead Jim.
Emacs will solve this too:
https://github.com/tanrax/org-social
:-)
I think that we are going to see more and more of this, to the point where most interactions you have online will likely be with bots. So I started building something that actually has a chance of fixing it: a social network for humans only.
I wrote about it here: https://blog.picheta.me/post/the-future-of-social-media-is-h...
do you think small, invite-only communities will end up being the last holdout for genuine human conversation online? or will bots eventually infiltrate those too?
Bots will absolutely infiltrate them eventually, but I think it's the only solution.
The internet promised the ability to connect with anyone, anywhere around the world. It felt limitless and infinite.
Turns out in an infinite world, the loudest voices are the ragebaits, the algorithmically-amplified, or the outright scammers.
The human social brain doesn't work in an infinite world; it works for a Dunbar's-number world. And we all like our pseudo-anonymous soapboxes (I'm standing on one right now), but the trick will be to realize that the glitter of infinite quantity isn't the same as small-scale connection.
At least for some time I imagine a hybridization may pop up. For example you grow a community of humans that keeps bots under control. Because of this all actors are humans, and valuable because of that.
Hence you'll end up with defectors getting paid to siphon off all the conversations to ad companies, which will work on tying them to real-world identities and then serving more detailed ads in the places where people cannot avoid interfacing with the open internet.
It's absolutely why discord is popular.
I think most small communities will stay bot-free because there's little incentive to have bots engage with them.
But I wonder if there's a size of conversation past which people will still choose AI-assisted summaries. Discord had (has?) a feature where it used LLMs to summarize a discussion and then notify you about it.
The Discord thing sounds like a reasonable and acceptable use case to me. Fuzzy search is basically the only thing LLMs are really useful for, and a feature like that actually serves the user. Help them find stuff that's interesting to them, instead of trying to replace it with a pale imitation of real thought and conversation. My most optimistic view of the future is that features like that will be what sticks around after the hype and bubble.
Lobsters is like that, basically a ghost town compared to Reddit. If you block engagement, you will succeed.
> Lobsters
Invite only, very exclusionary. Private club with public posting? Worst of both worlds.
The internet is not dead though - it's bursting with life, both human and LLM claw alike. It's only going to get more so as time goes on. Re:
>Can we go back to an internet like this? I guess we can’t.
Gary Brolsma is still at it with Numa Numa (2023) https://youtu.be/ZBKm1MBsTbk. There's just a bunch of other stuff out there too.
I have a decent-enough filter for AI-written nonsense:
- banner blindness to blue check accounts (instantly scroll past, the blue check is extremely prominent visually)
- a very long uBlock Origin text-filter regex for emojis (the green check mark in particular) and $currentHotTopic keywords where the signal-to-noise ratio is close to 0.
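For illustration, a filter of that kind might use uBlock Origin's procedural `:has-text()` cosmetic filter, which accepts a regex; the site and CSS selectors below are made-up placeholders, not the commenter's actual list:

```
! Hide posts whose text matches emoji or hot-topic patterns (illustrative)
example.com##.post:has-text(/✅|🚀|🔥/)
example.com##.post:has-text(/game.?changer|revolutioni[sz]e/i)
```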
What does it matter who wrote it as long as you like the content? If the content is posted on a network that allows robotic agents to post, and you don't like it, just sign on to a different network.
I imagine it will be way closer to Ghost in the Shell/Cyberpunk in the end than we realize.
The problem is that
A. People want to connect with other people, not talk to computers, and
B. AI slop peddlers know this and have an incentive to lie about their content.
If GenAI content was always reliably declared and people's choices were respected, we wouldn't have a problem.
It's like saying, what does it matter if the news article was fake, as long as you enjoyed reading it? It matters because when I read the news, I want to read about things that actually happened, not stories that manage to fool me into believing they're true.
People need to look long and hard at how they are using technology, and ask how technology should be used. Every single technological trend for the past 10 years has been smoke and mirrors, promising the utility of an iPhone but with deliverables closer to a blockchain full of links to jpegs.
This post's title is hyperbolic. At best the author is noticing what most people have known for a long time: there are bots on the internet. Most interactions I have online are with real people. Maybe we will end up with a dead internet, but moderation is still possible currently.
The elephant in the room is that a lot of social media companies have a conflict of interest. They can juice their user metrics by not moderating bots as well as they could be.
Tbh I don't care if I speak to a human or a bot as long as they are "useful"; by useful I mean they provide me useful information. Then again, humans can provide unique information that bots cannot. But I think identity is not relevant anymore; what's relevant is reputation. People think internet bots are bad per se, but we need to build useful bots, just like the chat bots that are already useful on platforms like Telegram, Discord or whatever other platforms people use.
This is a great point. In the past and present, sites like slashdot and HN depend on the users to achieve that moderation to surface useful comments and keep 'spam' down.
Now, there are tools to achieve that kind of moderation automagically, and even better, consistently. This is an opportunity to build out a community that is useful for everyone. The first platform that guarantees anonymity supported by human-independent moderation will likely attract significant and persistent user support.
There is still the issue of cost - how does the community pay for such a platform? Perhaps like the Google of yore - very limited ads? Avoiding enshittification can be done through the Wikipedia model - non-profit to manage the whole thing?
Next step is: we get back to speaking to each other in the real world. That would successfully close the loop.
I think this will be a tiny minority, who are already not terminally online (so, no big change for them).
And the vast majority will just be driven to more AI-mediated interactions.
I think it's a symptom of being terminally online to think that most other people are also terminally online. The internet has a way of convincing you that most of the [interesting] events in the world happen on the internet. But I think this isn't the case; most stuff happens in the real world, most people live in the real world most of the time, and a tiny fraction of trite drama happens online.
>being terminally online to think that most other people are also terminally online.
50% of US teenagers describe themselves as terminally online.
Go any place where people work and have time to goof, and you'll see them online.
Go to a bar/club, you see people with a phone in front of their face.
The idea there is an online and offline is crumbling further every day. Cameras are small, bandwidth is high in relation to our compression algorithms. Anything happening in the world can be broadcast live. More and more types of machines are coming online that accept digital instructions that make things happen in real life.
Furthermore, it's an odd rejection, on your part, of the lesson of the printing press: that methods of information exchange affect the real world around them. If the book brought about the industrial revolution, what does an always-available global communications network bring?
At least based on your writing here on HN it seems like you're probably an introvert, or at least a person that likes quiet pondering and reflections. Reading a book would be far more interesting than most online activities, right? If I'm right and that is the case, then you may be missing just how many people are horrifically addicted to being on social media all the time.
Dunno, everyone is in a bubble of some sort. I'm online a good bit but rarely on my phone; if I'm away from my desk I'm offline. My social circle is similar, so I naturally have a bias toward what I experience.
A year or so ago I took an Uber and was mesmerized by the driver. He had his phone mounted up on the left and was pretty constantly interacting with it. Checking for new rides, watching a video, checking Facebook. It was quite impressive how much content he consumed while at a red light and how dexterously he navigated to and through like 10 different apps.
I very much got the feeling that this was a person that was terminally online and suspected that he's not alone. A bit alienating really, living in the same country speaking the same language but realizing there's this huge cultural/behavior divide between us.
I don't want to talk to people in the real world. That's why I spend all of my time on the fucking internet.
Isn't this more about an identity crisis? Not in the psychological sense but in the internet sense: who is real and who isn't? Crypto proof-of-work, I don't know? Your profile, like LinkedIn or Hacker News or something, gives your clawbot X credits' worth of legit automated queries or rate limits on your behalf? It could be flipped upside down, where we don't spend our eyes on reading websites anymore; it just goes and gets it for us. But I may be hallucinating.
They tried Worldcoin as a solution to proof-of-human, but it never took off. I guess linking to social media, as you say? I haven't had much issue with AIs, but plenty with human scammers, often running bot accounts. On Xitter, 90% of my followers are those.
I imagine soon there will be small-scale, Tailscale-style semi-private networks popping up with no AI content and no regard for draconian identity-collection laws.
It is the corporate internet, the one by the corporation and for the corporation, that is dead. Or at least everything in it is dead. The death blow is AI, but it was almost there anyway.
The good news is that the community internet - for the community, by the community - is just starting.
What is a community internet? The internet is layered protocols. UDP, ICMP, TCP, HTTP, HTTPS etc. The community internet is just a new layer of protocols. Coming soon.
Sure, but just for reference: https://xkcd.com/927/
The real problem isn't that bots exist; it's that we have no trust infrastructure for the internet. We can verify domains with TLS certificates and authenticate users with OAuth, but we have zero standard for verifying that an AI agent is good at what it claims. The dead internet is a discovery/trust problem, not just a spam problem.
I'm curious: what tools do people use to apply to jobs automatically? Would this be automatically flagged by recruitment systems as AI? We have never had a problem like this at our company, but now I am a little paranoid.
The funny thing about the LinkedIn post is that the parody is dead-on as to the kind of mindless slop a human on LI would post. LinkedIn was the Dead Internet before LLMs were even a thing. And I guess AI doesn't even have to be posting everything for Dead Internet Theory to hold, it just has to be the default perception in order to cause everything to be treated skeptically.
I think I'll just take up blacksmithing.
> And of course let’s not forget AI spamming OSS repos with nonsensical PRs. What’s even funnier is when the reviewer turns out to be AI too.
What's even funnier is this is literally how "agent teams" (the latest hotness) work. They just do it all on your laptop rather than spamming GitHub.
I reckon we are going to find an inverse Metcalfe's law at some stage, where the value of a network is proportional to the square of its connected users, minus the square of the number of connected bots. Heck, I would be surprised if Meta hadn't figured this out, or wasn't on the way to figuring it out.
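The "inverse Metcalfe" hypothesis above fits in a line of code; the function name and the scaling constant `k` are made up here, purely to illustrate the commenter's idea:

```python
def network_value(humans: int, bots: int, k: float = 1.0) -> float:
    """Hypothetical 'inverse Metcalfe' value of a network:
    proportional to the square of connected humans minus
    the square of connected bots."""
    return k * (humans ** 2 - bots ** 2)

# Once bots match and then outnumber humans, the 'value'
# hits zero and goes negative:
print(network_value(100, 0))    # 10000.0
print(network_value(100, 100))  # 0.0
print(network_value(10, 20))    # -300.0
```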
I'm here for it in the short-term. As the market continues to saturate, most of the people building this stuff will flame out. Eventually, I suspect we hit a tipping point where the ROI is too low (not enough real human engagement, just other bots) and the flood dials back.
This is the founding thesis of the dead internet theory: https://forum.agoraroad.com/index.php?threads/dead-internet-...
In nature, sometimes death is the prerequisite of life: Think of the dead leaves on the forest ground.
I think the age of algorithmic curation is dead, but it may, through a "Renaissance", bring back true human connection.
Vrei sa pleci dar numa numa iei numa numa iei numa numa numa iei
bring back the old internet
We are closing in on the "personal bubble internet", where bots create news, videos and comments tailored to your personal liking.
Lots of interesting ideas to fix it, I’ll offer mine: let it die.
The grand bargain of the web is gone and it ain’t coming back.
Just yesterday in a local non profit organization's Signal groupchat a user who had just offered to take meeting minutes the day prior emitted an open claw error message to the chat. They are now banned from the organization.
I think the next step will be an isolated, invite-only version of the internet where you have to be physically present with your invitee to give them access. There will be a beautiful navigation widget where you can access a unified "addon" to any page: a community-moderated comment section, version history of that page, backlinks, a carefully curated "related" section (so that you can continue browsing beautiful human-written content on 1910s-era steam locomotives, similar to 90s-era webrings), a donate button so that you can support the author, and much more! Oh, the dream
Optional decentralized hosting, a unified cryptocurrency as payment tokens, a single open LLM as the summary and search-indexing tool, specialized toolkits for journals and social networks (LiveJournal, early Twitter, early FB). Most importantly: you can post anonymously where it's allowed (there could be areas where anonymity is disallowed entirely, like a public square), but your account will take the punishment, so no edgy shitposting behind throwaways.
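That "anonymous posting, but your account takes the punishment" idea is buildable with standard primitives. A minimal sketch (the class and method names are invented for illustration): readers see only a per-thread pseudonym derived via HMAC, so posts are unlinkable across threads, while the board can still attach strikes to the underlying account:

```python
import hashlib
import hmac
import secrets

class AnonBoard:
    """Toy sketch: readers see only a per-thread pseudonym,
    but the board can still punish the underlying account."""

    def __init__(self):
        self.key = secrets.token_bytes(32)  # board-side secret
        self.strikes = {}                   # account_id -> strike count

    def pseudonym(self, account_id: str, thread_id: str) -> str:
        # Same account in the same thread -> stable handle;
        # across threads the handles are unlinkable to readers.
        mac = hmac.new(self.key,
                       f"{account_id}|{thread_id}".encode(),
                       hashlib.sha256)
        return mac.hexdigest()[:12]

    def punish(self, account_id: str) -> int:
        self.strikes[account_id] = self.strikes.get(account_id, 0) + 1
        return self.strikes[account_id]
```

Note the board itself still holds the mapping from pseudonym to account; making even the operator blind to it would take heavier cryptography (blind signatures or zero-knowledge credentials).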
Really makes me wonder. Is there anything we can do to revive the internet or is it time to let the golden age go? : (
What is wrong with real life?
Very fair question. Investing in life outside of the internet has always paid off ten-fold.
It's kinda been dead a while, but also not dead: there's still good stuff out there. Lots of it, but it's in the corners and under the carpets. Things created in the original "let me show you my interests" spirit that the older web was built on.
A while back I was toying with the idea of building out a new web on a new protocol (not HTTP-based). Thus no existing browser would understand it. Deliberately obscure, to force a "reset" button of sorts.
Though it would be short-lived; over time we've learned to ruin stuff faster and faster. I'm not sure there's any network so alien that it could hold on to that golden era of innocence from the past; it would be found, then expediently and expertly exploited.
https://geminiprotocol.net/docs/specification.gmi
[no, not that gemini]
I have to use LinkedIn to sell. I only occasionally look at the feed but I am ruthlessly muting or blocking anyone who is blatantly foisting their AI drivel on other humans. I’ve had enough of this shit.
unfollow works too
I’ve been unfollowing people for a while and the issue is rarely from within my network anymore… the feed shows a lot of posts from AI foisters who I don’t even follow.
Same. LinkedIn is now unusable for me. Will try blocking!
Maybe we should get more Jannies. They only cost zero dollars and zero cents
Everyone here is so far from a normie it is almost painful. Dead internet is an outcome of supply and demand.
The fundamental issue is that a plurality of humans prefer the direction things have gone and are moving in. Is it a good direction? By this crowd's standards, no.
To be clear, I don't like it either, but when I watch the speed at which kids swap between 5 Insta accounts and 3 Reddit accounts, it seems the majority are happy with it.
Let’s not kid ourselves: Every day, multiple “I just asked the LLM to clean up my notes” posts are voted up to the front page here, often with highly engaged, appreciative comment sections.
LLM’s for all their faults are well-trained to produce what we want.
Reddit in particular is overwhelmed by bots. There are small niche communities where it’s mostly people talking to people, but the vast majority of popular posts are made by bots, voted on by bots and commented on by bots.
It’s not even like commercial astroturfing, it’s just karma farming and public sentiment manipulation.
I think it makes sense, since most people don't post anything or at least don't post much. So someone (something) must fill in this void.
Presumably advertisers still want real human eyeballs?
Or maybe we have finally accepted that our entire economy is the naked emperor.
Just go out; what you are seeking is real life, which happens outside, not in front of a display.
I, for one, miss the promise of the old Internet, that you could connect with people from all around the world. I always saw it as an exciting extension to real-world interactions, not a replacement. And I love the fact that over the past 30 years, I was able to make friends (or pen pals, if you will) in the US, Canada, Mexico, Bolivia, Japan, Indonesia, Russia, New Zealand... the concept of finding people sharing your niche interests, wherever they were on the globe, even as you were stuck on your cozy suburb, was amazing to me, and I'm sad that we all but lost that.
The only place that reminds me of the old Internet is VRChat, funny enough. You're guaranteed to be interacting with a nerdy, culturally similar human who's present in the moment.
That just reminded me of Chat Roulette for some reason. It seems that one is still around, as well. I'd guess not many bots on there, either (though potentially plenty of other unpleasant things).
I am still pissed that Wikipedia calls it a conspiracy theory.
https://en.wikipedia.org/wiki/Dead_Internet_theory
Wasn’t WP supposed to be impartial and avoid passing judgement?
There's been some discussion about this
https://en.wikipedia.org/wiki/Talk:Dead_Internet_theory#c-Bo...
Thankfully, humans excel at finding solutions to problems.
Most of the time, by making more problems.
So, as I share his thoughts, I've been wondering: why haven't we seen any real innovations in this space?
Mastodon wasn't really it and neither was Substack, although maybe it got slightly closer. TikTok and Telegram, maybe, for different reasons, but they'll face the same destiny.
I'd suppose the much despised "mainstream media" might be a winner here eventually. But beyond that, I am thinking about something like the following:
https://www.theguardian.com/technology/2026/mar/10/uk-societ...
Mastodon has been an obvious innovation and success, along with decentralized platforms and protocols in general.
"I only see two outcomes for this problem…"
Or, you know, the internet just dies and we all meet at bowling alleys again.
Dead internet prophecy.
Ironic that there’s a dead bot comment at the bottom of this article trying to pose as a human
Good fucking riddance. Time to start actually talking to each other again.
fuck the internet. eventually everything needs to get fucked.
1) Holy fuck I'd borderline forgotten about Numa Numa
2) Reddit... doesn't have much of an incentive to fix the astroturf issue. The site "organically" censors, a lot
Well, it's not such terrible news, is it?
I get nostalgia for the 90s/00s, but that time was never coming back anyways.
The best we can hope now is for people to be less online. And if it comes from people drowning in AI crap, I think it's kind of funny.
I don't think people are going to get offline, and the best we can probably do is create free and open p2p platforms that don't necessarily require registration to use, and allow people to control their own databases. A lot of communication is locked behind the corporations that run the services building these tracking and identification databases.
I actually think it’s more about getting people off browsers and other tracking software.
>create free and open p2p platforms that don’t require registration necessarily to use
And how do you create this without it being overrun by bots, spam, and people posting gargantuan amounts of porn?
>allow people to control their own databases
There are two types of people that want to control databases. 1: The freedom seeking type who want information sovereignty. 2: The type of people that want to hoover up as much data as possible for money and power.
Guess who has more ability to control the world out of those two.
Lastly, most people want to use curated websites free of spam and content they don't want. Almost nobody wants to do that curation themselves. Hence curated platforms will attract the most people via network effects.
ah shoot, that wasn't lastly...
> getting people off browsers
and putting them on what exactly? Phone apps? That's not better at all. Multimedia attracts people like flies to poop. It's seemingly a natural human response to move to the application that is more visually interesting, regardless of its security or safety.
> And how do you create this without it being overran by bots, spam, and people posting gargantuan amounts of porn?
By not creating public networks out of it.
The only database people want to control is their personal information and who they communicate with and when. So we should enable low barrier to entry to communicate.
I cannot solve for the social media side of it, but we can enable people to at least have low friction when getting online. Somewhere to turn to that is not a data harvesting service.
The displacement effort has to come from those that believe in those freedoms. It’s not easy, maybe impossible in some circumstances but this status quo right now cannot be it, in my opinion.
Hey it’s what executives want. Fake everything. Slop and robots everywhere. Have at it, I say. Maybe then people will go outside again
I wish going outside again were possible, but what if most of the people you actually want to hang out with aren't in the same area?
And for those who are near, the cost of having a coffee or a drink is too much now, on top of expenses that are already stretched.
The kids tend to hang out with the kids of their parents' friends, or with the neighbours' kids. A bit later in life, when in school, we find friends among classmates, who usually aren't all that similar to us either.
Maybe when we switched to a fully online adult world, with its hyper-optimization of everything, we put our potential friends in the same bucket as recommendation-system-driven content like music and TV shows. Dating too.
There are certain benefits to getting by with limited choice, when we learn to communicate with people who are not a 100% match.
And as for having a drink or a coffee: we can always just invite friends over. Hanging out in each other's apartments is fun and cheap.
go for a walk with your friends and a bottle of water
It's becoming rarer and rarer for people to have friends that are physically close to them unless they've stayed in the town they were born in.
Modern technology kind of broke friendship, in the sense that not very long ago maintaining friendships over any distance was expensive: it cost long-distance fees, gas, or letter writing. Because of those expenses it was very common to make friends locally pretty quickly.
But the internet broke that, especially modern social media. Wherever you moved, your friends were a free website away, and long-distance charges were gone. At first this seemed fine because sites connected you to your friends, but as the lock-in happened it became a contest of getting you pissed off and showing you ads.
Going to take a long time to socially fix this problem. Especially as some large number of people are going to talk to AI instead.
Hi Friend, are you looking for a <insert product>bottled water</>? Here are the top 1000 brands of delicious bottled water! Water is very good for you! You are an ugly bag of mostly water! You do not have to enter your payment details, I remember them! Be assured that 1000 cases of 1000 brands of delicious bottled water are on their way to you now [Shipping charges may apply] at Peach Trees 2026, Sector 13, Mega-City One! Have an adequate day!!!
Eventually that's what's going to happen if things keep on going in this direction and it looks like there's nothing stopping it so yeah, we're moving in circles. Old things will become new again.
My household just bought The Brick to start taking control of our phones and online usage. We've been very online for 15+ years but are hoping to break the addiction cycle by simply blocking access from our devices. The timing feels right, mostly because sites like Instagram and Reddit are too braindead and spam- and ad-heavy these days. The executives' and shareholders' desire for profits has already killed two of my biggest online pastimes.
I've heard of The Brick and it sounds very effective. Not many people realize they're addicted to their devices, and most tragic of all is that kids who grew up always online have no baseline to return to. As the OP mentions, I too hope that when it all becomes a junk pile, people will eventually return to offline mode.