Unlike a lot of communities, yours at least started on the correct side. Better to ban outright than to slowly realize that you should have banned it.
When it comes to slow forum content, I think it's a fool's errand to try to determine whether someone is using AI for their responses. Any of the tell-tale signs of AI are easily skirted by telling the model in the prompt not to produce them. It goes back to how you can't sanitize human language, which has been an issue with LLMs from the beginning.
Encouraging a culture of not using AI works to an extent, but I also tire of threads claiming the parent post is AI. There isn't a sure-fire way to know one way or another.
Indeed. Take a soft approach, or "wait and see", and you'll just allow your community to get infested with slop enthusiast crybullies that loudly protest any pushback against "genai content". The communities that draw a firm line and hold it will be the only ones that endure.
It was a surprise to us how vehemently some folk defended AI content and assumed it was their right to post it within our community.
We had no problems with people using it and posting elsewhere, it was the demands that we must allow it that were problematic and made us question whether we were doing the right thing.
No regrets now, though, as we see competitors being flooded with AI slop and they are too invested in it to change now.
> I don't know why people would want to be in a community where they aren't wanted.
This is standard predatory behavior. Child abusers hanging out with kids, weirdos hanging out near the women's clothing department, etc.
It's usually a clear indication of the sort of people you don't want to associate with in your online community. They bring a net negative to the table.
Adding that much friction is also going to lose you many genuine users. It might be worth it depending on the community, but if it makes newcomers fewer than your usual churn rate, it's a death sentence.
This is fucked and I hate it. The internet is (was?) about convenience and direct access. I understand there are challenges that need solutions, but this ain't it.
Not that I don't take your point that such a service could exist, but the site you linked explicitly says they don't offer letter writing as a service.
Also, I imagine it's not impossible to reliably distinguish between an autopen and genuine handwriting. The company whose site you linked says their machine can't perform complex pen movements, so calligraphy is impossible.
The real advantage of posting a letter is that you have to pay for postage, and the stamps on the envelope will indicate which country the letter is really coming from.
Where I live a 2nd class stamp costs the equivalent of $1.24. That's $1240 for a thousand.
Not including the cost of the letter itself, or the envelope, or the cost to write it if it's being farmed out to overseas labour, which then has to send it by international post. And then you have evidence of where the letter originated, which can be compared with how the user presents themselves online.
A little more than 2 hours' minimum wage, I think.
If you head to Twitter right now, the vast majority of bots are blue checks. It seems to actually encourage the opposite: trusting that someone paid $8 for an account makes you even more likely to fall for slop.
I think Twitter is an odd one out here; Twitter as a whole has been heading downhill ever since the acquisition, and I wouldn't be surprised if many of those blue checks are officially sanctioned bots, especially given the way so many of them push the same narratives that Musk does, at the same time he does.
Agreed, but I left Twitter even before the right-hand-raising oligarch took over. The reason was that censorship started to kick in, i.e. Twitter staff emailing me that my "conduct" was not appropriate. Basically they try to reduce the "aggressiveness" in written content. Well, that's already an assumption on their part; and in any discourse with orthogonal opinions, you can not really reconcile such positions anyway, so I don't need some 20-year-old from India hired by Twitter to tell me what I should or should not do (though, realistically, it was probably a bot that just scanned for content). I notice that censorship is increasing on "social" websites. Reddit, as an example, is a mega-censorship site; the amount of deletion by crazy mods is insane.
Bots are indeed killing Twitter now. I noticed more and more people leaving permanently. Musk evidently accelerated the decay here. There is something wrong with his mindset; it's almost as if it is pathological. His perception of things is genuinely distorted, and I am not even 100% certain he is completely aware of it; he must be partially aware, but it seems there is also something wrong with the brain. No wonder he gets along with Trump, who now clearly has final-stage dementia and narcissism.
This does not work, for reasons similar to why captchas piss off real humans.
You add a barrier here. You think that your solution means AI is reduced, but you also reduce real humans. I noticed this elsewhere too, such as "you need to verify your identity before you can post to the ruby issue tracker". I can do so, but I need my tablet and it takes me more time than before, so I stopped using the ruby issue tracker altogether. (It's not the only reason, but adding barriers really makes me invest my time elsewhere, or at least makes that more likely.)
You always need to consider all the trade-offs. Charging money means you will also put off real humans at the same time. And it's not solely about the cost; it is simply a hassle. For similar reasons I also rarely register at a phpBB forum: I need to store the password so I don't forget it, and so on. Using a password manager is also more of a hassle.
Yeah, I tried to sign up for instagram, but at the fourth captcha I gave up and left. How does instagram have any users with such a hostile sign-up barrier?
Fun fact: there is this Threads Twitter clone from Meta. How do I log in?
I "log in with Instagram", which in turn "logs in with Facebook". Guess how well account recovery works when there is literally no password set. I'm surprised these systems work at all.
That's an assumption. Depending on the incentives in play, the relative scale at which AI users and real humans are affected may well be the opposite of what you expect.
Is SA still a thing? I had an account since... 2007? God I'm old. I miss the days when you could have a community that you could easily search for content. Nowadays everything is a discord black hole.
A lot of the "add a cost to stop bad actors" schemes end up being a selection effect in favor of bad actors.
Sure, it might stop 10% of the bad actors and lower the numbers, but it'll also stop 80% of the good users, who aren't experts at getting around the cost and don't have an income from using the service that would let them just pay it as a cost of doing business.
>shrug off around 600 AI content creator accounts monthly.
>I fear losing the battle.
I was in a small niche creative writing community for a while, circa 2021/22. AI wasn't why I was there, but I demo'd a few LLMs to a lot of the users in the Off Topic section because people were curious. Even with an explanation of how they operated, almost everyone was at least interested. One author told me how he operated similarly, rote-learning how to write like his favorite authors by copying out their texts, handwritten, word for word. Their concern was largely that the models were too hard to use from a technical perspective.
These people knew I was there to learn, and that I was unlikely to ever try and publish LLM derived content. I said as much often.
Sometime in late 2022, a switch was flipped, and almost all of them started talking about how AI and those who used it were unambiguously evil. They didn't say my name, but they stopped engaging with me. Gradually, they started reposting Twitter content from extremely anti-AI people. They complained about AI submissions to various publications. Eventually, someone reposted a tweet calling for the death of anyone who used an LLM, with not even a single disagreement (and lots of encouragement).
I just bailed. I had only ever engaged positively, answered questions for the curious, and tried to help people out. I posted one AI-assisted story, and that was to demonstrate how my contributions were tracked vs AI contributions automatically in the editor, to satisfy someone's curiosity, clearly highlighting the bits I had written. Just a technical demo. No one was asked to enjoy or positively engage with it as if it were human-written.
A while later, most of their submission rules were updated with a new clause: if AI-written content was judged to have been discovered, they would blacklist that person from all submissions across their entire community. Considering I had demo'd LLMs, and the uselessness of AI detectors, it was clear to me that these people would be able to justify blacklisting me if I poked my head up at all. I had been developing my own story for submission (myself, no LLM content), but I just dropped it. I didn't feel like sticking my neck out for the witch hunt.
I also used to be quite engaged with blockchain, and it went through a similar process: most people ignored it until that paper about the power usage (claiming it would spike to some level it never reached), and then suddenly being associated with it was an outrageous moral crime. But after a while, when it turned out the power-use claims were largely a nothingburger, people gave up on the hate parade.
I don't think you will "lose the battle" (at least in terms of keeping AI users out), and it's always OK for small communities to be selective about their membership. I just don't think it's possible to maintain such artificial rage for more than a few years. The AI datacenter water/power claims are a clear London horse-manure problem that looks set to resolve itself, and the copyright issues will get sorted to some degree. Eventually I think you just won't care enough to ban anyone except low-effort spammers (of which there are a huge number, granted).
Have you considered the possibility that most non-programmer people mostly experienced the negative effects?
Blockchain turned out to be an absolutely awful payment method, so most people only know it as 1) a way to do crimes like ransomware, 2) a get-rich-quick scam, 3) some buzzword companies threw in everything, 4) the thing that made GPUs unaffordable.
AI is now the thing that 1) is drowning the internet in slop, 2) companies throw into everything - to the point of making apps unusable, 3) makes most computer parts unaffordable. And what they get in return is... a kinda okay-ish Google? A homework plagiarism machine?
Their opinion about AI or blockchain most likely has absolutely nothing to do with you. They are just seeing the world noticeably get worse, and are desperately trying to protect their communities from it in any way they can.
>Their opinion about AI or blockchain most likely has absolutely nothing to do with you.
Which is why I left before I was banned. I no longer felt comfortable, and they probably felt likewise. They wanted a safe space to hate on people involved in AI art, and my leaving contributed to that. That said, I doubt I could have posted content calling for the death of authors, or honestly any other group in that space, without being ostracised.
It's a bit like saying "a witch might have burned down their house, so their reaction against witches is understandable". Maybe, in the abstract. But that doesn't mean the subsequent actions are acceptable.
> Have you considered the possibility that most non-programmer people mostly experienced the negative effects?
Yeah, absolutely. These people in particular, at the time, really only experienced it through two factors:
1. They (like many people) posted a lot of their midjourney creations for a few months. (21/22 was like that)
2. They saw an increase in low quality submissions.
So gripes about AI art and low quality submissions seem perfectly valid.
>Blockchain turned out to be an absolutely awful payment method
>AI is now the thing that 1) is drowning the internet in slop, 2) companies throw into everything - to the point of making apps unusable, 3) makes most computer parts unaffordable. And what they get in return is... a kinda okay-ish Google? A homework plagiarism machine?
Yeah, so I am not complaining about people having negative opinions. I was talking about the broader meme, the zeitgeist switch where suddenly the entire conversation goes from pros/cons to what appears to be a standard, negative message that everyone absorbed in a short time, basically used like a thought-terminating cliché. I have problems with crypto, and I like things about crypto. I can have a great conversation with most people, but for 12 months or so, you couldn't have a conversation without people loudly shouting about how the power use was going to destroy the environment and how it was going to use X% of the power by Y date. They didn't want to talk about it; they had been given evidence that the discussion was over and everything was settled in favor of their beliefs. The AI debate has now arrived in roughly the same place: there's no longer really a discussion, the zeitgeist has this one single mode that's constantly repeated. To the point where you could be running a local LLM trained only on data from the 1800s and still be considered responsible for some data centre single-handedly draining a lake.
My point is, like crypto, this fixed idea will eventually erode and the hate train will move on. People with well-thought-out negative opinions will still exist past that time; they just won't have people screaming at fever pitch about it constantly.
You didn't like the broader consensus views towards LLM usage, but that doesn't mean it wasn't ultimately a positive for their community that you left. It sounds as though there was a mismatch between what you and the broader group wanted, so perhaps a non-confrontational split is the best that could be hoped for in this situation?
> They wanted a safe space to hate on people involved in AI art and my leaving contributed to that.
Once again, I have to ask, why do you think that that is what they want? Maybe they want human generated content?
> the zeitgeist switch where suddenly the entire conversation goes from pros/cons to what appears to be a standard, negative message that everyone absorbed in a short time.
Understandable, though. Why discuss the pros and cons of $FOO when you're drowning in it? All you want is to stop the drowning.
I'm not angry, you just seem to be taking a very self-centered view on the general vibe in this specific forum you mentioned, and are interpreting general anti-AI/blockchain sentiment as personal attacks.
It's more like: here are the decisions I made while being on the outside of the sentiment, and the timeline of that changing sentiment.
The only thing I really took personally was the call for death, and that was me making a decision to leave in favor of my mental health.
This is entirely vibes, based on reading research on similar campaigns, so I can't pull a paper with hard evidence about this specifically. But I believe Chinese/North Korean infowar campaigns are behind these seeded talking points. They seed them in these far-left activist communities, and once one sticks, the real people in those communities start carrying the message out to other communities, and then the CN/NK botnets amplify the messages and suppress the responses. They don't just do this on the left; I'm just highlighting the left for this specific point.
The battle is lost. You never had a chance. There's nothing you can do against the constant torrent of AI content that's only getting started. The online communities that we know and love are going to change and there's nothing we can do about it. You can't keep AI out of any platform no matter what the community guidelines say or even if it seems locked down with no bot access.
The only solution is in person meetups, bringing back the 3rd places, joining a club. Maybe it's not such a bad outcome.
I think the only reason stackoverflow still has any activity is because the community choose to ban AI content [1] and so did most of its other networks [2].
Perhaps it will even see a (small) resurgence when AI providers start charging for the actual costs.
Considering StackOverflow is now providing ground truth for AI training, I believe the ban is more about not poisoning the well than about keeping StackOverflow or StackExchange human-friendly.
That ship sailed a long time ago, with zealot admins and verbal harassment.
> That ship sailed a long time ago, with zealot admins
While there are certainly strong examples of this, a lot of people mistake enforcing the rules for zealotry. Part of the point of SO was that if things don't change then there is a completed state for SO too: no need to ask duplicate questions, unlike on platforms where a post is less long-lived. Unfortunately people take things like "this is a dup", "provide more information, as we can't help otherwise", "this isn't a complete answer", and so forth, as deeply personal attacks…
One of the good things about LLMs is that they've drawn off all the simple already-answered questions! Unfortunately the more complex ones, or the ones about new solutions, are also going there, so SO and its family of sites are ceasing to grow even in the ways they want to.
> and verbal harassment.
Again, that did/does happen, but a lot less often than some people make out. The most abusive people I've seen on there are those who have been given one of the responses I listed above.
People also always bring up the "fake XY problem" thing on SO as a sign of toxicity or whatever, but I've had many, many results where it really was an XY problem, and the actual problem Y was solved, yet I landed there searching for a solution to X :/
The AI companies aren't so deep in the red when you only look at inference though - they are investing loads in new models in an AI arms race.
So I don't imagine AI is going to go away, especially given that now there are more open source models like Qwen that you can run locally. So even if those American behemoths go bankrupt it will persist.
> The AI companies aren't so deep in the red when you only look at inference though - they are investing loads in new models in an AI arms race.
Depends on how you're looking at it (using speculated numbers for easy math):
1. Having operating costs of $100m on revenue of $10m is very deep in the red, regardless of training costs.
2. Having $90m in training costs on $100m of revenue (with $10m of operating costs) means they're just breaking even.
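To make those two hypothetical scenarios concrete, here's a trivial sketch (every figure is invented, as above):

```python
# Toy arithmetic only; all numbers are made up for illustration.
def net(revenue, opex, training=0):
    """Net income after operating and training costs."""
    return revenue - opex - training

# Scenario 1: inference alone loses money, so the company is deep
# in the red no matter what training adds on top.
print(net(revenue=10e6, opex=100e6))                 # -90,000,000.0

# Scenario 2: inference is profitable, but training eats the margin,
# so the company only breaks even overall.
print(net(revenue=100e6, opex=10e6, training=90e6))  # 0.0
```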
Problem is, we don't know their financials or how they break down (they could, of course, clear up the confusion and release some numbers, but they aren't doing that now); all we know is when they need a new raise to continue operating.
From the raises we can estimate what their operating costs are (for example, raising $30m in 2024 and then $300m in 2025 suggests a 10x increase in operating costs, because they aren't spending on capex; the training is done on opex).
From their subscriptions (which are all only estimates), we can sort of tell what the revenue is, but that's for subscriptions only, which are almost guaranteed to be running at a loss (until recently, anyway). We don't even have estimates of the revenue from PAYG API users. Common sentiment is that you'd be a fool to use the PAYG options for anything but trialling the service, but the world is filled with fools, so you never know!
What is interesting is comparing the PAYG prices of the providers supplying open models with the PAYG prices of the closed models: the suppliers of open models aren't spending on training, so their token prices are pretty close to the actual cost of running the models. This is partially confounded by the fact that many of them have VC money behind them (they are not bootstrapped), and so will also try to perform land grabs via subsidised tokens, because their goal is an exit with a buyout, and without an eventual acquisition they will simply fail.
I can't think of many open source model suppliers providing subscriptions, not ones that subsidise the subscription, at any rate.
The first IPO of these SOTA providers is going to be the eye-opener; we'll finally see their financials and we'll see just how much the PAYG was subsidised, and how much the subscriptions were subsidised.
Until then, with a collective industry investment of $800b (last I checked) and a collective revenue of $20b (last I checked), they are most definitely operating in the red for the most common definitions of operating in the red.
I kind of feel this might be good.
Bot-written comments and AI media that can no longer be distinguished from the real thing will make us humans leave the social networks, which have helped to separate us humans from one another.
Going back to the real world, where you can truly believe what you see, and enjoy the tone, look, and scent of our fellow human beings.
This seems naive. As long as people are "enjoying" the AI-infested social networks, or at least not annoyed enough to leave, they will stay on them, and become further disconnected from reality. We have half of EU teenagers talking to chatbots regularly. Alienated people flock to them.
Reminds me of when reality TV came along. Many folks were convinced that it would be a passing fad and that within 12-18 months TV would return to the way it had been beforehand; that because the quality was so low, people would eventually get bored of it. Still waiting for that moment...
Yes. I did not leave, but I visit the site less often and kind of worry about its future. The engagement with what I post is just much lower, while the number of reported visitors seems to increase. I don't mind the occasional good-quality AI comment, and sometimes one sees it, but overall it is slowly becoming a ghost town.
Social media that caters to what the user wants/interacts with can become infinitely more so. This is already applied to entertainment content across tv and the internet.
At some point an Instagram/TikTok/etc. user could see nothing made by real people and not even know what is promoted vs ad vs ordinary post.
A lot of them aren't actively seeking them out. They are pushed at them and they just try it.
Go on, just this once, you can stop if you don't like it…
> than wanting to talk to humans.
The disaffected just want to talk. I'm sure they'd prefer humans, but once the chatbots seem good enough in humans' absence, they get a bit trapped there, because the bots are too sycophantic and they get conditioned to want that from humans too, which will not happen.
A few tech companies managed to get massive numbers of people addicted to toxic social media content that was terrible for mental health but made a small group very wealthy. I don't think those same businesses and execs are just going to pack up and go home with an even more powerful content tool available now. LLMs are going to be used to create Skinner boxes that make Facebook and Twitter seem like wholesome communities.
The problem is that many of us have niche interests and no one local to discuss them with, or we'd just get made fun of for being nerds.
I loved maps and geography as a child and still do. I've never met anyone in real life who likes them as much as me. But on the internet there are places where I can discuss them, and other people share fascinating articles, pictures, etc.
This is why cities are popular for this exact type of person, and have been for centuries. People with niche interests move to a city, which, by sheer density, has others with said interest.
Plenty of people have a reason why they can’t do it, but plenty do it and are happier for finding their community IRL.
Yeah, except cities suck for many reasons. And for really obscure niches, where there may be only a couple hundred enthusiasts worldwide, cities are not going to offer you the same forums the internet did.
> "I kind feel this might be good. [...] Going back to the real world were you can trully believe on what you see, and enjoy the tone, look and scent of of our fellows humans beings."
No, it isn't anywhere near good. One doesn't throw out the baby to get rid of fouled-up bathwater. Online communities are just as valid as offline ones; it's just that many people a) don't want to be deceived, and b) don't want fakery (slop) and all that it entails. Easy.
> "Online communities are just as valid as offline ones Hilariously false."
No, it evidently isn't. Online communities connect people, and other communities, in ways that are impossible or undesirable to realize in meatspace. Bizarre to treat this as a zero-sum game.
> "Nothing, nothing substitutes for real human contact in the real world."
Problem at scale. It doesn't matter if someone is consciously able to identify individual bot accounts or comments. There can still be a strong general feeling that something is very wrong, leading to more and more frustration and unhappiness.
"Popular" reddit posts and subreddits are a good example of this.
It's a market for lemons [1]. The issue is that if AI slop can't be readily distinguished from real human content, the real human stuff will get less and less attention over time. With less attention, people lose interest in writing, and eventually abandon the community altogether. As genuine human writers leave the community, the concentration of AI slop increases, and readers begin to realize that there isn't anything of value left to read, so they depart as well.
One of the paradoxical things that makes me hopeful is that there's going to be such an incredible amount of low-effort AI slop content that it's going to drown out the low-effort human-made content and generate a large amount of distaste for it. So much will be so bad that good taste and high quality will be rewarded with more status, while the people who will say and believe anything are led astray and left behind.
Maybe it's hard to get across what I mean, so here's a more concrete example: there will be SO MUCH clickbait out there that serious outfits, instead of being forced to do it, will be able to successfully differentiate themselves by NOT doing it (and many similar things in different arenas).
I'm trying to say that LLMs raising the noise floor will drown out a lot of the toxic noise that's been plaguing us.
> So much will be so bad that good taste and high quality will be rewarded with more status, while the people who will say and believe anything are led astray and left behind.
I really want to believe this will be true. However, I also suspect there's some external driving force, that I cannot readily name, which is making people incapable of consuming anything except this low-effort content. I mean, obviously it's working to some extent. Perhaps AI will be the thing that accelerates its death, but part of me thinks something else needs to happen beyond just an increase in useless content.
In my opinion there isn't an external _nefarious_ force causing all of this. Certainly those forces exist but without them much the same would be happening.
It's the economics of everything being free but supported by advertising. That mechanic is what leads to the race-to-the-bottom, lowest-common-denominator, motivation-hacking attention toxicity (yes, that's a bit of a ramble).
If people weren't getting paid for the smallest increment of attention they could grab, it wouldn't be promoted the way it is. I don't have a high opinion of the things which grab my attention, but they still manage to do it sometimes. I think many people are in that boat. If there were other mechanisms with which we rewarded people for doing things, something different would be optimized.
And people just wouldn't reward the 10-second-gratification in anywhere near the same way if it weren't for the advertising.
The balance is so far out of whack with LLMs now in online communities. People crave human interaction with like-minded individuals, and whoever figures out how to give authentic online experiences is going to be successful. Maybe small communities need to come back, where you build credibility slowly. Why does every site have to be a monstrosity that wants to build a hundred million users to IPO? It just attracts the worst. I was active on Reddit for years under the same username I have here. I have pretty much abandoned it.
I use Blind sometimes to check the TC of a company. Most of the posts/comments there are either stupid, sexist, racist, or all of them. But it does feel like most of them are real. Blind requires verification by company email for posting, which I guess eliminates most of the bots.
> People crave human interaction with like-minded individuals
I don’t think they crave it enough to make a difference. Even before AI slop, Reddit had made successive changes that led to much less of a feeling of interaction with real, authentic humans who could become your buddies. The UI de-emphasized usernames and hid the sidebars where subreddits could have their own distinct community atmosphere. I hear that now on comment threads, Reddit will even hide a decent number of posts from other users, so that a poster may well be talking into the void.
It is on old-school fora that one can get a sense of actual interaction: with avatars and other personalized touches it’s easy to gradually learn who is who, and there is a culture of longform text where you can actually get a sense of other people’s personalities. But how many people under the age of 35 or 40 are joining those fora that survive? Give people a choice, and it turns out they prefer the dopamine hits of engagement-maximizing commercial platforms, and the smartphone as the default (or sole) interface to the internet with all the death of nuance that spells.
Some definitely enjoy the dopamine hits and get addicted to the doom scrolling. Maybe I am just too old to understand it and the internet is passing me by. Some of us still like conversations like this. Real conversation in a respectful manner even when we question each others viewpoints. The old internet is still there in some places and I'll continue hanging out there as long as it does. While I have great friends in real life, not that many of them are old tech nerds, so the internet is really the only place to talk to like minded people.
> whoever figures out how to give authentic online experiences is going to be successful
The problem is, there is fundamentally no way to scale this.
The only way to give authentic human interaction with like-minded individuals is to connect real humans to other real humans who share interests. And as we've already seen over the first few ages of the Internet, once such a community scales past a certain size, it a) ceases to be a place where people can come to chat, discuss, and hang out with their interest-sharing friends, because there are just too many people for one person to know, and b) becomes a target for profit-minded interests who will cheerfully eviscerate any authenticity and connection the community brought if it will make them a small profit before the community crumbles and collapses.
So anyone trying to "give authentic online experiences" as a business model is going to have to accept that they are going to be, at best, a small, modestly profitable company. And given the state of things today, I very much doubt that this is in the cards.
I have largely written Reddit off and no longer visit it
after an experiment I did where I had an agent karma farm for me and do some covert advertising. As I went through the posts it wrote I realized that as a reader I would have NO idea that these were just written by a computer. Many many people (or other bots) had full on conversations with it and it scared me a bit.
I am not quite there with Hacker News but I do know for a fact that many "users" here are LLMs.
Online communities are definitely dying. I guess I hope that maybe IRL communities have a resurgence in this wake.
For a while there were a lot of posts from people experimenting with ChatGPT to write anger bait posts on Reddit where they would later edit the post to say it was fake, written by ChatGPT.
I assume they thought they'd be teaching people a lesson by making them feel foolish for responding to AI stories, most of which were too fake to be believable.
However, it did not matter. The posts remained popular and continued to bring in comments even after the admission that they were fake. In advice subreddits, commenters continued to give advice on the situation. Some commenters said they saw the notice that it was fake but continued arguing about it anyway.
This makes a feature of Reddit very clear: The truthiness of a post doesn't matter. The active commenter base on popular subreddits just wants something to discuss and, usually, be angry about.
In retrospect it's obvious given that misinfo posts were the easiest way to karma farm for years even before AI.
We do precisely the same thing here. Here's a relatively recent post that, to me, seems obviously LLM-written. It just rattles off some management platitudes:
Yeah, the trick is to do your own curation and go from there.
If you like some authors or journalists or bloggers, go see who they read (trust me, they all say who they follow in their own niches) and build from there. You can develop quite a good RSS feed following this method in like an hour, tops.
Even without AI slop I've noticed this happen on Reddit.
I once made a rather boisterously-argued comment on a political issue I'm passionate about, and I realised that I'd made a serious error of reading comprehension when it came to my opponent's argument. I apologised to them for being an abrasive arse over my own mistake, then edited my comment to say that I was mistaken.
My incorrect comment, which literally said at the bottom that it was incorrect, continued to be upvoted, while my opponent, who had made the stronger argument, continued to be downvoted.
The decline of Facebook is sad. I liked it early on. I used it primarily to follow family and casual friends from high school. When they posted, it would show up on my feed, I read all the posts, and that was that.
After a while I had to wade through all sorts of nonsense to get to the posts I actually wanted to see, and even later Facebook stopped putting posts from people I follow in my feed at all. It was 100% garbage. I can't imagine why anyone uses Facebook for anything other than the marketplace.
Facebook is fine if you join groups based on your interests (hobbies etc) and then aggressively unfollow/block anything you don't want to see. It's not really conducive to discussions like Reddit, though. Mostly drive-by comments.
> then aggressively unfollow/block anything you don't want to see
That is hard work. I have a few friends in the trans world and occasionally interact with relevant groups on FB. The attention algorithm thinks that this means I might want to see random posts from pricks who literally want to see people like my friends herded up into concentration camps. Most of it is far less extreme than that, but the system is definitely optimised in favour of rage-bait because that ticks up the engagement metrics.
I often hear that about Facebook, but at least it has a "feeds" button that you can press to get the sources you actually subscribe to. The default "home" feed is useless.
I think it's going to effectively kill public chat communities without either proof of identity or attestation through a web of trust. Or rather turn them into little better than comment sections on news sites; thriving but worthless.
I'm active in a number of online communities that are doing just fine, but the difference is that those all involve ongoing relationships, built over time and with engagement across multiple platforms. I've no doubt this clock is ticking too, but it's still harder to fake a user across a mix of text chat, voice and video calls, playing an online game, etc., especially when much of the web of relationships extends back into real-life activity.
But I agree the golden age of easy anonymous connections online has ended.
Note that "attestation through a web of trust" means something like needing an invite from an existing user. It doesn't have to mean mass surveillance.
Private torrent trackers have been doing this for a while. If some number of your downstreams act like shitheads, you get nipped, and so do your other downstreams.
This seems like the best way to handle it. Also, smaller communities. It's cool to do the global thing, but once you have 10k active users you can't moderate it with a team of 5 volunteers.
I think the attestation approach works best if the punishment depends on the offence. E.g. inviting a turd shouldn't get the inviter banned; inviting someone who goes full AI spam should.
This takes it a step further than what you describe. They keep track of who you've invited, who they've invited, and so on, and if there are enough bad leaves on the tree they just cull the entire tree. It's a somewhat common practice on private trackers.
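As a rough illustration of that tree-culling idea (a minimal sketch; the names and structure are hypothetical):

```python
# Minimal sketch of invite-tree culling as some private trackers do it.
class Member:
    def __init__(self, name, inviter=None):
        self.name = name
        self.invitees = []
        self.banned = False
        if inviter is not None:
            inviter.invitees.append(self)

def cull(member):
    """Ban a member and, recursively, everyone they invited."""
    member.banned = True
    for invitee in member.invitees:
        cull(invitee)

alice = Member("alice")
bob = Member("bob", inviter=alice)
mallory = Member("mallory", inviter=bob)

cull(bob)  # bans bob and mallory; alice survives, but is now suspect
print(alice.banned, bob.banned, mallory.banned)  # False True True
```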
what.cd was better. You either got an invite, where if you tanked your reputation you'd get banned and risk the inviter getting banned too; or you had to take an interview where you got quizzed on how to properly rip music in a variety of ways and how to distinguish between different qualities of rips (like mp3 bitrates vs flac cue files).
If you weren't a bellend on what.cd you got access to certain forums where there were even more, and better, private trackers. Once you built that trust there were social privileges, but if you abused that trust you got rightfully banned.
PGP’s web of trust was kinda bad privacy-wise in some regards, as it basically revealed your IRL social network.
If my PGP public key has 6 signatures and they’re all members of the East Manitoba Arch Linux User Group, you can probably work out pretty easily which Michael T I am.
Are there successful newer designs, which avoid this problem?
The IRL social network is actually the important part of the trust structure.
The only one of these I've seen that really worked was the Debian developer version: you had to meet another Debian developer IRL, prove your identity, and only then could you get the key signed and join the club.
> The IRL social network is actually the important part of the trust structure.
For Debian-style applications that are 100% about openness and 0% about secrecy, sure.
But if you want to secure communications between pro-democracy activists in China, or you're a Snowden-like whistleblower wanting to securely communicate with journalists - y'all probably don't want to be vouching for one another's keys.
> Note that "attestation through a web of trust" means something like needing an invite from an existing user.
It's probably better to call this something like vouching and leave "attestation" to the contemptible power grab by megacorps (delenda est). The advantage of using the same word for a useful thing and a completely unrelated vile thing goes only to the villain.
>Then how can you have a community that is welcoming to people who are not part of the ingroup?
Being welcoming to every random person is by definition not a community, it's a free-for-all mess.
A community means communal interests and values; it's in the name. And to guard those you can't just accept everyone without vetting them. That's how it turns into a mess of spammers and trolls and people who want to hijack it and don't share the original cause/spirit. It has happened to forum after forum...
We were talking about online communities, but the same principle still applies. If you just let anyone in, there would eventually be less there to feel "at home" about, and more of a disjointed, low-trust collection of individuals loosely held together by virtue of being in the same place.
I agree with you. It’s the problem I can’t crack and it’s why I am letting the idea simmer for so long.
In the end, you need to filter people at the door. You need to keep unpleasant people out and shut down bad behaviour.
I figured that a paid, motivated moderator could be better than a web of trust for this demographic. Maybe enforce a stricter moderation standard on unvetted members. At my scale it might work.
You'd have to be brutal about culling, uninviting and removing anyone who doesn't look like a good fit.
Or have a two-stage process: run very public, very open events that anyone can sign up to and attend, and then invite the specific people you meet at those events who look like a good fit for your community to your private, community-only events.
This works if the goal is to create a funnel for making friends. I aim for something closer to Stack Overflow, where people gather to solve shared problems and help each other.
The closest analog I can think of is community-run bike repair workshops. Some people are deeply involved, and others just have a flat tire.
The closest digital equivalent is the forums of old.
Some will be fine providing their ID; others can be vouched for by members who are fine providing their ID.
This preserves anonymity for the latter, because they're only known to be "related" to the former, which is only a vague hint at their real identity (e.g. they could've met in another online community). And the former don't care; if they want, they can vouch for an anonymous alt.
Which is, funnily (?) enough, how a lot of IRL organizations used to be: basically, don't be of the wrong ethnicity or religion.
It still happens more informally today, of course, but it used to be a pretty significant (if unspoken) part of how a lot of WASPy organizations operated, to a greater or lesser degree.
I'm sure there are still cohesive groupings of WASPs, if not large ones or ones effective at gatekeeping major institutions. Still a meaningful trope, of course. But to bring it up to date you'd have to diversify and include, for example, Indian social and professional-recruitment patterns.
Also, I do feel that GP's take is hyperbolic even for the twentieth century. My own background is mostly German immigrants, of various religions and non-religion, and the way I've been told the story, none of them faced significant resistance as they moved upward in the various academic and corporate institutions of their choice. These included NASA executives, department heads, etc.
Note that in balancing GP's accusation against WASPs I'm not attempting to address the related, but not precisely complementary, phenomenon of perpetually marginalized groupings.
> I think it's going to effectively kill public chat communities without either proof of identity or attestation through a web of trust.
This seems self evident to me too.
It's another factor in why I think the tech community needs to get ahead of governments on the whole "prove your ID on the Internet" thing by having some sort of standard way to do it that doesn't necessarily involve madness in the loop.
> It's another factor in why I think the tech community needs to get ahead of governments on the whole "prove your ID on the Internet" thing by having some sort of standard way to do it that doesn't necessarily involve madness in the loop.
The problem here is that the premise is the error. "Prove your ID" is the thing to be prevented; it's the privacy invasion. What people actually want is a disjoint set of only marginally related things:
1) They want a way to rate limit something. IDs do this poorly anyway; everyone has one, so criminal organizations with a botnet just compromise the IDs of innocent people, and then the innocent are the ones who get banned. The best way to do this one would be to have an anonymous way for ordinary people to pay a nominal fee. A $5 one-time fee to create an account is nothing to most ordinary people but a major expense to spammers who have 10,000 of their accounts banned every day. The ugly hack for not having this is proof of work, which kinda sorta works, but not as well, and then you're back to botnets being useful: with real fees, $50,000/day in banned accounts is cash money out of the attacker's pocket that in turn funds the service's anti-spam team, but with proof of work, burning up some compromised victim's electricity costs the attacker at best the opportunity cost of not mining cryptocurrency or similar, which isn't nearly as much. It would be great to solve this one (properly anonymous, easy-to-use small payments), but the state of the law is a significant impediment, so you either need to get some reform through there or come up with a creative way to do it under the existing rules.
2) You want to know if someone is e.g. over 18. This is the one where people keep pointing back to government IDs, but you only need one bit of information for this. You don't need their name or their picture; you don't even need their exact birthdate. Since people get older over time rather than younger, all you need to know is whether they've ever been over 18, since in that case they always will be. Which means you can just issue an "over 18" digital signature (the same signature for everyone, so it's provably impossible to tie it to a specific person) and give a copy to anyone who is over 18. Maybe you change the signature e.g. once a day and unconditionally (whether they require it that day or not) email all the adults a new copy, but again they all get the same indistinguishable current signature. Then there are no timing attacks, because the new signature comes to everyone as an unconditional push and is waiting in their inbox, rather than the request coinciding with the moment they want to use it for something; but kids only have it if an adult is giving it to them every day. The latter is true for basically any age verification system: if an adult with an ID wants to lend it to you, then you can get in. (A minimal sketch of this scheme follows the list.)
3) You want to know if the person accessing some account is the same person who created it or is otherwise authorized to use it. This is the traditional use of IDs, e.g. you go to the bank and want to withdraw some cash, so you need a bank card or government ID to prove you're the account holder. But this is the problem which is already long-solved on the internet. The user has a username and password, TOTP, etc., and then the service can tell if they're authorized to use the account. It's why you don't need government ID on the internet: user accounts do the thing it used to do, only they don't force you to tie all your accounts together under a single name, which is a feature. The only people who want to prevent this are the surveillance apparatchiks who are trying to take that feature away.
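To illustrate point 2, here is a minimal sketch of the "identical daily signature" idea, assuming the third-party Python `cryptography` package (the token format and key handling are my invention, not any real standard):

```python
# Sketch of the "same over-18 signature for every adult" idea.
# Hypothetical token format; not a real deployed protocol.
from datetime import date
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

issuer_key = ed25519.Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

def todays_token():
    # Every verified adult is pushed these exact same bytes each day,
    # so possessing them proves only "an adult passed this along".
    message = f"over-18:{date.today().isoformat()}".encode()
    return message, issuer_key.sign(message)

def is_adult(message, signature):
    expected = f"over-18:{date.today().isoformat()}".encode()
    try:
        issuer_pub.verify(signature, message)
    except InvalidSignature:
        return False
    return message == expected

msg, sig = todays_token()
print(is_adult(msg, sig))  # True; the verifier learns nothing else
```

Because every adult holds an identical token, a site can check "over 18" without being able to tell one holder from another.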
I'd be interested in working on a problem like that.
I have a strong preference for remaining anonymous, or at least keeping the bar for tying my online identity to my personal identity reasonably high.
I would love to be involved in helping to design a sort of "human verified" badge that doesn't make it possible, or at least not easy, for everyone to find your real identity.
I've been thinking about it a bunch and it seems like a really interesting problem. Difficult though.
I suspect there is too much political and corporate will that wants to force everyone online to use their real identity in the open, though
I'm not sure that it would be too hard technically... basically auth + social network: Facebook auth without the rest of Facebook, adding attestation.
I.e. you use this network as your auth provider, and you get the user's real name, handle, and network ID, as well as the IDs (only the IDs, no extra info) of first- through third-level connections.
The user is incentivized to connect (only) to people that they know in person, and this forms a layer of trust. Downstream reports can break a branch or have a network effect upstream. By connecting an account to another account, you attest that "this is a real person whom I have met in real life." Using a bot for anything associated with the account is forbidden, with the exception of explicit API access to downstream services defined by those services.
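As a rough sketch of what that attestation graph might look like (the names, discount weights, and hop depth are all made-up illustrations, not a real API):

```python
# Hypothetical sketch of an in-person attestation graph with
# report-driven trust erosion; all parameters are arbitrary.
from collections import defaultdict

connections = defaultdict(set)  # user id -> ids attested as met in person
reports = defaultdict(int)      # user id -> abuse/bot reports received

def attest(a, b):
    """a and b mutually attest: 'this is a real person I met IRL'."""
    connections[a].add(b)
    connections[b].add(a)

def trust_score(user, depth=3):
    """Count distinct people within `depth` hops, nearer hops weighted
    more, then subtract reports against the user."""
    seen, frontier, score = {user}, {user}, 0.0
    for hop in range(1, depth + 1):
        frontier = {n for u in frontier for n in connections[u]} - seen
        score += len(frontier) / 2 ** hop
        seen |= frontier
    return score - reports[user]

attest("alice", "bob")
attest("bob", "carol")
print(trust_score("alice"))  # carol counts, but less than bob does
```

A downstream service would only ever see opaque IDs and something like this score, never the connection list itself.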
I think it could work, but you'd have to charge a modest, though not overbearing, fee to use the auth provider... say $100/site/year for an app to use this for user authentication.
I don't think the main challenge is building this system, the main challenge is getting enough people using it to make it worthwhile.
Personally I think it should be a government provided service, not something with a sign up fee. There's actually no point at all in building this if people have to pay to use it, because they won't
> Will they interoperate with foreign governments?
Ideally, yes
But you're right, this isn't likely to happen in real life and I'm just being wishful. Instead we're going to get the much shittier capitalist version of this where every company and government spies on us and we have no expectation of privacy online at all
I agree it's a very, very interesting problem. Maybe one of the biggest problems of the coming decade.
I suspect it will be a long process: first there will be governments that force people to use ID, but that will be abused and hacked and will considerably restrict freedom of speech, so after that phase people will start to create better IDs.
The problem is really pretty simple: you need an authoritative source to say "this person is real", and a way for that source to actually verify you're a person, but that source can be corrupted and hacked. Some people will say "crypto!", but money != people, so I don't see how that works. Perhaps the creation of some neutral, non-government, non-profit entity is the way, but I can see lots of problems there too, and it will probably cost money to verify someone is real. Where does that come from?
*You need an authoritative source to say "This person is real"*
Does that even accomplish much? It may cut down on mass fake account creation. But real people can then create an authenticated account and use an LLM to post as an authenticated real person.
Yeah, that's a problem, you're right. There are some ways to mitigate it, but they introduce their own issues. Say you give someone only one ID for their lifetime; they start to spam AI crap, so you ban their ID. Sounds OK, except who is available to police all 8 billion IDs and determine whether they're spamming? Who polices the police? What if these IDs become critical for conducting commerce and banning someone is massively detrimental to their finances? Etc. These problems aren't necessarily unsolvable, but they are super difficult.
If there's only 1 or just a handful of verifiers, then a human can at most go through a few of those credentials before they run out. The risk is of course getting someone else's credential but that isn't as big an issue, especially for smaller online communities.
I just don't see a world where a small community ends up having to deal with a dedicated set of potentially spoofed identities. There are already tools like slow-downs and post limits for new members that can protect against this. HN is the biggest community I'm in by an order of magnitude and it's the only community I know that can't just use a slow mode type mechanic to halt this kind of attack.
Have you considered sock puppets? It's not out of the question to handle with human mods but detecting them automatically is pretty bad if someone is supplying credentials to each one, and sometimes it does take months or years to notice that new user Y is banned user X.
I think sockpuppets are only useful in a community with non-text signals like upvotes and downvotes or likes. These kinds of signals are not necessary and often plain corrosive to small communities. In a larger community they're a great feedback mechanism, but large communities are fundamentally different spaces than small ones and need a fundamentally different moderation approach IMO.
I've seen them used to dogpile in arguments (harder to do since you need to keep writing styles distinct), game votes in forum games or quests, etc. And of course you don't need to use multiple at once if you just switch to a sock puppet every time you're suspended or banned.
> But real people can then create an authenticated account and use an LLM to post as an authenticated real person.
They can, but ideally they wouldn't be able to make infinite accounts with that authenticated status. So it would still reduce the number of bot posters on the web
There is actually a different problem with this: Suppose there is a major vulnerability in some popular device. 50 million people get compromised; the attacker can now impersonate any of them at will. They go around and create 50 million accounts on various services, or take over the user's existing account on that service.
What are you going to do with their identities at that point? These are real people. If you ban them, you're banning the innocent victim rather than the attacker who still has 49,999,999 more accounts. But if you let them recover their accounts or create new ones, well, the attacker is going to do that too, with all 50 million accounts, as many times as they can. You don't know if this is the attacker coming back for the tenth time to create another spam account or if it's the real victim trying to reclaim their stolen identity.
So are you going to retaliate against the innocent victims by banning them permanently, or are you going to let the attackers keep recycling the same identities because a lot of people can go years without realizing their device is compromised and being used to create accounts on services they don't use?
Yeah that's a big problem. Pretty sure you can see it in real life where lots of old dead accounts with weak passwords on facebook or twitter eventually get hacked. It must be pretty weird to see your dead grampa suddenly start trying to get people to buy some weird scammy crypto.
I guess you could have an eyeball scanner at your computer that only sends out a binary "yes, this person is human" to the system every time they log in. That sounds expensive and hackable and just janky, though.
Maybe it would result in people taking internet security seriously and holding companies accountable for data breaches, if there were these sorts of consequences for them.
Crypto could be a part of it. Like, you need to sign with an address that has held some non-trivial amount for some minimum amount of time. As a component of such a system it could cut down on mass or low-effort impersonation.
Money is great at thwarting spam/Sybil attacks. You don't have to raise the price very much to make them fail.
Honestly I think "this person is real" is the wrong goal. You'll never accomplish it without a centralized state or some biometric monstrosity like that thing Sam Altman created.
Yeah, I think "pay to enter" or maybe "pay to be able to post" is ultimately going to be the solution. Then we'll have the paid "gated" social networks, filled with mostly humans, and the free ones will all be bot-swarmed wastelands.
Verifiable credentials are all about this. You need some sort of credentialing body that generates the credential for you, but after that you'll just have an opaque identifier. Any caller that wants to verify whether you're human submits the ID to a verifier, and the verifier says yes or no. You can also do attestations like age, so you can gate a forum on 16+ or something. You never end up having to actually give away your name or any other details.
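A toy illustration of that flow; real verifiable credentials (the W3C model) are considerably more involved, and every name here is hypothetical:

```python
# Toy opaque-credential flow: issue once, verify yes/no ever after.
import secrets

class CredentialingBody:
    def __init__(self):
        self._claims = {}  # opaque id -> attested claims

    def issue(self, is_human, age):
        opaque_id = secrets.token_hex(16)  # no name or birthdate stored
        self._claims[opaque_id] = {"human": is_human, "over_16": age >= 16}
        return opaque_id

    def verify(self, opaque_id, claim):
        """A relying site learns a single yes/no, nothing more."""
        return self._claims.get(opaque_id, {}).get(claim, False)

body = CredentialingBody()
cred = body.issue(is_human=True, age=34)
print(body.verify(cred, "human"))    # True
print(body.verify(cred, "over_16"))  # True: enough to gate a 16+ forum
```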
What happens when someone agrees to sell or give away their id? The credentialing body could catch the very worst abusers who seem to be signing in to various sites and services multiple times an hour, but would fail to catch anything else.
I don't think you'll ever be fully free of spam, so you'll still need to filter bad content. If credentials get sold and used to spam, they'll get banned.
How do you ban credentials if they're anonymous? Notice that if you can tell two requests are from the same person then you can do it across services by both of them pretending to be the same service.
Also, what happens to someone whose credentials are compromised? Are you going to ban the credentials of the victim rather than the perpetrator?
world.org is doing exactly that, including the privacy aspect.
the iris scan aspect is scary but the alternatives don't seem to solve the problem either.
I'm in many public chat communities as well, and the question of whether someone is an AI doesn't really come up. I've not seen any actual AI chatters, and the only AI spam that exists is the kind humans regurgitate. The more real impact AI has on chat communities, in my opinion, is that people are shifting some of their chatting to AI bots via voice or text on other platforms, resulting in fewer chatters.
In order to make this viable, wouldn't you have to verify identity repeatedly? What's to stop me from providing a valid identity and then handing my account over to an agent after I'm verified?
That's why a web of trust was suggested. You keep track of who vouched for who and down weight those who vouch for users that prove to be bots. In theory at least. It's certainly more complicated than only that in practice.
If the web of trust only extends to the people who I actually know to be real, then that works -- but it's a very small web.
And by small, I mean: This whole trusted group could fit into one quiet discord channel. This doesn't seem to be big enough to be useful.
However, if it extends beyond that, then things get dicier: Suppose Bill trusts me, as well as those that I myself trust. Bill does this in order to make his web-of-trust something big enough to be useful.
Now, suppose I start trusting bots -- maybe incidentally, or maybe maliciously. However I do that, this means that Bill now has bots in his web of trust as well.
And remember: The whole premise here is that bots can be indistinguishable from people, so Bill has no idea that this has happened and that I have infected his web with bots.
---
It all seems kind of self-defeating, to me. The web is either too small to be useful, or it includes bots.
Critically, it doesn't have to be binary trusted/untrusted, and it doesn't have to be statically determined. If Bill vouched for you yesterday and today you are trusting a bunch of discovered bots, that would down-weight the network's trust in Bill a lot more than if he had vouched for you months ago.
The question is whether we can arrive at a set of rules and heuristics and applications of the system that sufficiently incentivizes being a trustworthy member of the network.
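As a toy sketch of that down-weighting rule (the data structures and penalty factor here are purely illustrative):

    from collections import defaultdict

    voucher_weight = defaultdict(lambda: 1.0)  # everyone starts fully trusted
    vouched_by = {}                            # member -> who vouched them in
    BOT_PENALTY = 0.5                          # halve a voucher per found bot

    def vouch(voucher: str, newcomer: str) -> None:
        vouched_by[newcomer] = voucher

    def trust_score(member: str) -> float:
        # a member is only as trusted as the decayed weight of their voucher
        voucher = vouched_by.get(member)
        return voucher_weight[voucher] if voucher else 0.0

    def mark_as_bot(member: str) -> None:
        # penalize the voucher, retroactively weakening everyone else
        # that same voucher brought into the network
        voucher = vouched_by.get(member)
        if voucher is not None:
            voucher_weight[voucher] *= BOT_PENALTY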
The web of trust doesn't know that they're bots, though. It knows only that I've introduced new members. They didn't show up with tattoos across their digital foreheads that say "BOT" -- they instead came in acting just as people do.
If the bots behave themselves, then they have as much capacity to rise in rank/trust as any new well-behaved bonafide human members do.
I guess it would have to be something like a service which confirms whether a person already has an account on the site but doesn’t have to track which particular account it is.
I’m not sure if that would work for account deletions though.
There's some work on using phone accelerometer data as a "proof of human," e.g. "move your phone in a figure eight," which I guess machines can't quite do in a human enough way yet.
> without either proof of identity or attestation through a web of trust.
Let's put aside the idea whether it will be the end of all privacy as we know it (I'm not sure if I personally think it's a good idea), but isn't Sam Altman's World eye ID thing supposed to do that? (https://world.org).
How does it work (like OpenId)? Do I have an orb on my desk, or some sort of phone app? I still want to use my desktop to login to HN.
Would it stop this sort of thing: get a human id, paste it into .env, so agents can use it?
this eye thing will never work. people in general are realizing the last people we should trust with our personal stuff are tech bro billionaires. they’ve broken trust too many times.
even worse many of them are just plain vocal about their disdain for people in general.
at least from what i’m seeing, people are starting to walk away from online at an increasing rate so i definitely don’t see widespread adoption of his creepy eye thing.
“If McDonald’s offered three free Big Macs for a DNA sample, there would be lines around the block.” - Bruce
I have no idea about the eye thing taking off. But I think your comment is very HN and a bit out-of-touch with regular people. What "you're seeing" is a bubble and not representative of the general population. The eye thing is a slow frog boil and it will be commonplace before you can blink.
Personally I think we need to start utilising the safety features built into AI, to ensure that who we're talking to is a human. We'll start to have to only reply to people who talk in nsfw cursewords (like cocks), or profess their love of capybaras
>I think it's going to effectively kill public chat communities without either proof of identity
How? I have an identity. A state driver's license, birth certificate, social security number. I've even considered getting a federal license before, never bit the bullet. If I wanted to run a bot, what stops me from giving it my identity? How do I prove I'm really me (a "me" exists, that's provable), and not something I'm letting pretend to be me? You can't even demand that I do that, because it's essentially impossible.
Is there even some totalitarian scheme that, if brutal and homicidal enough, could manage to prevent this from happening (even partially)?
I'm limited to a single identity only as a resource constraint. Others more wealthy than I (corporations or ad hoc criminal enterprises) could harvest thousands of real identities and use those: consensually, or through identity theft. The only thing slowing it down at the moment is quickly eroding social norms (and, as you point out, maybe they're not slowing it down and it's not even slow at the moment).
Digital totalitarianism would prevent it. The moment you were found to be running a bot, your identity would be blacklisted across the entire internet.
You claim this, but you've not presented any evidence. Who would be the enforcement agency for that? Where and how would you train them? Can the money be scrounged up to do it properly? As you blacklist people from the internet, you lose their tax revenue (they're locked out of the economy), but you also make it impossible for them to tell people how bad it was... most of the deterrent effect is gone. But the incentives are only ever growing higher, as people surmise that running their own little bot farm is a way to get ahead when hustling. Anyone you do hunt down and disconnect is now highly radicalized and desperate, but you've just turned off the feds' ability to monitor them and intervene.
China gets away with this shit because they've been conditioning their population for 60 years... everyone's eased into it. Elsewhere, not even slightly so.
Identity politics have nothing to do with your actual identification documents. Think: Black Americans being treated as a homogeneous voting bloc, or that all Hispanic voters would be pro-immigration, or "the Evangelical vote".
The web could become a way to indicate identity if public institutions publish, for example, www.university-country/professors/John, which implies that John is a professor. I designed a 6000-line protocol, but anyone could construct that web using hmac(salt + url).
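If I'm reading the hmac(salt + url) construction right, the core of it is a salted commitment to an identity URL, something like this minimal sketch (obviously not the full protocol):

    import hashlib, hmac, os

    def commit(url: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        tag = hmac.new(salt, url.encode(), hashlib.sha256).digest()
        return salt, tag  # publish tag; hand salt to whoever may verify

    def verify(url: str, salt: bytes, tag: bytes) -> bool:
        expected = hmac.new(salt, url.encode(), hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)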
The fact that reddit enabled hiding your posts is crazy to me. In a time where knowing who's engaging in a community is more important than ever (am I talking to a bot or a troll?) reddit removes even more options to validate.
I interpreted that as an attempt to mask the number of bots on the site so as to not scare paying advertisers into thinking their ads won't be seen by real humans.
They also now hide the number of subscribers. Before you could see if a subreddit was popular or not. Now you really don't know. I think reddit does this so they can promote stuff to the front page for clicks even if it isn't popular.
The problem is that it has become very popular to ban people from a sub based on what other subs they post to. It was turning Reddit into a two-party universe.
The better fix would be to make the support for multiple accounts in the reddit app not so incredibly-shitty, where you're basically logging out and logging back in. Instead, just tell it "posts to this sub use this account, posts to that sub use that account", etc.
There's also a third category where the sub looks organic because the moderator deletes and bans anyone who doesn't post exactly what the moderator wants.
This is actually my hope for AI-gen content as well. That after it gets so 'good' that people genuinely can't distinguish it from reality anymore that they'll retreat (or return triumphantly rather) to the physical world to gather truthful fulfilling experiences and dopamine.
Isn't the ceo a pdfile and compromised and forced to work at reddit (or go to jail)? Reddit is now just a propaganda machine for the intelligence agencies, and their dirty ceo is there to make sure the machine keeps pumping honey... wrecking teenagers' brains in the process too, and gathering kompromat on young people which will bear its fruit in the next 20 years. I feel a good chunk of US politicians are being blackmailed because of their past online activities. Same shit on 4chan; how can it possibly be allowed to exist except as a honeypot? All of these dodgy sites are guarded by cloudflare no less, which is the ultimate man-in-the-middle machine used by "them".
I think the real explanation is simpler - it's just not particularly interesting to the authorities. No need for conspiracy theories.
As to compromising material for bribery, that can be collected in so many different ways, and things like email or messaging or tiktok videos are probably far more interesting, reddit is not particularly useful for that.
I'd argue that Reddit leadership, which insulted, hobbled, and wrote off its mods and power users (destroying projects like /r/BotDefense) while doing little to crack down on the proliferation of bot repost content, had a major role in encouraging this. They might even like it better this way -- lots of extra fake engagement boosting traffic stats without messy human drama, which they can then ironically sell back to AI labs as training data.
Let's never forget the summer of 2023 when Reddit forcibly removed mods from many major communities and replaced them with corporate shills. That was a major loss of dedicated people who cared more for their communities than Spez's pocket book.
The replacement happened somewhere around the time Ellen Pao became interim CEO and the site started sanitizing the controversial subreddits. It wasn't apparent at first, but around 2017 you could notice that some subs - especially ones set up around large companies or media franchises - had aggressive rules against controversial and "negative" topics. This hasn't changed much as of today.
---
One of the subs I was visiting had some drama happening in ~2020 around supposed negative community behavior: people were criticizing uploaded creative works which, I personally agree, weren't the best. The mod team decided that was a big no-no and the place had to be inclusive, welcoming and filled with positivity - so they started banning those who dared to criticize. Fast forward to now: there are only screenshots uploaded by bots, and comments made by bots who also include screenshots along with 2 sentences in every thread.
You say that, but many specialty subreddits never returned to their pre-protest engagement. Quality has definitely taken a nosedive in those subreddits as people moved to other platforms like youtube, tiktok, patreon, or just posting on their own sites.
Mods were rightfully upset because they were losing control of their communities while reddit cared only about its upcoming IPO.
I honestly don't think you could remake reddit if you did everything exactly the same starting in 2016. Corporate social media has definitely ruined the individual aspect of social media that is unlikely to return.
No one wants to share in a place with a bunch of spammers.
The API changes were put in place for the purpose of breaking, and did break, almost all external moderation tool software - software which had changed the task of moderating a forum with hundreds of thousands, or millions, of users from an impossible Sisyphean task into something actually manageable by a dozen or so mods.
The protest came after that so the timeline is not quite correct.
The internet is rather trending in that direction, isn't it? Youtube got rid of downvotes and apparently upload dates, which seems like an easier way to trick people into ads. And Reddit, like you said
If these platforms had to listen to "their customers" (here comes the inevitable comment about how users aren't customers; yes, I know), they'd all be fired. They'd have to find a new job. They all act in incredibly insulting ways, with a too-big-to-fail attitude.
That sounds pretty great. If we could just flip 1 switch to accomplish things in absolutes, then that'd be awesome.
I'd like to flip the switches that absolutely end poverty globally, absolutely eliminate guns from the US, and absolutely remove bots from Reddit.
If you can show me where these switches are located, I'll cheerfully go flip them and accept full responsibility for the results.
(Over here where things don't work in absolutes: Some of those bots that got killed were countermeasures to help keep the bad, well-funded bots at bay.)
"Congratulations, sir! Your directive to eliminate all guns has been a roaring success! We've had 100% compliance amongst good, law-abiding people! All of the remaining guns are owned by outlaws! Violent crime has tripled, exactly as predicted! Everything is going according to plan!"
Reddit itself by virtue of being a venture capital backed startup.
It was a midpoint between Facebook and Geocities, it got people to build communities within its walled garden, but it was always going to betray them for cash.
> Online communities are definitely dying. I guess I hope that maybe IRL communities have a resurgence in this wake.
Would be super fascinating to watch play out. I grew up before the internet so, historically, I know how to seek out external communities, but by early high school I was deeply entrenched in online life - so I'm very rusty with finding new IRL clubs, cliques, etc. Fortunately my life is full of many friends and I go out frequently, regardless. For those younger people that never had life without the internet, I wish them luck on their search but at the same time I'm very curious to witness their journey.
I also believe it is used by AI companies to train their models: post something semi-correct (even with grammar issues...), wait for humans to correct it in the comments, use upvotes as a confidence indicator, and then retrain models on this free refined data. Meanwhile people think they read a legit post, feel certain emotions, and have their behaviour influenced, just so a bot can be trained.
Serious question: If there are so many LLMs on online forums, who is doing it? Is it just 1000s of research students or something more nefarious? Is it AI businesses building up evidence that their output is as highly scored as humans therefore "buy our software"?
We're in the middle of an active cold war where countries are trying to manipulate the citizens of rival countries to destroy their civilization without having to fire a single bullet. Anonymous, over the internet mass manipulation, all for some minimal electricity cost.
If Russia is willing to spend cash like that, then of course they're willing to run massive bot farms to pollute any forums they can. I'd be shocked if the US was not doing the same in any way they can. You have to ask why Trump killed Radio Free America as well when it was clearly not a big expense.
Not sure how this relates to the subject in a direct way. Radio Free America was an outlet explicitly created and utilized to spread US propaganda, but kinda sorta barely disguised as a journalistic enterprise (not really; if you were listening to RFA you knew what you were listening to). Shutting it down seems to be a counterpoint to all of the covert participation of US intelligence on the web, which has done nothing but escalate.
It was a head scratching decision that few believe was for the stated reason. Other countries are ramping up their propaganda arms while Trump shut down part of the US'. The reasoning was cost, but that doesn't make a lot of sense in the grand scheme of things. Foil hat types would easily believe it was the puppet doing the bidding of the one that pulls the strings. RFA has been a thorn in despots' side for a long time.
> You have to ask why Trump killed Radio Free America as well when it was clearly not an big expense.
The obvious answer to that question is "because he's a Russian asset". But that doesn't mean the obvious answer is also the correct one.
IMHO, we're seeing another and much more concerning trend at play here... the utter and complete rejection of anything but violence by the far-right. Diplomacy? Development aid? Cultural exchange? All sorts of soft power have been under attack for decades now, and not just by the far-right but (especially when it comes to development aid) also by mainstream centrist parties across the Western world. And it's always pseudo-masculine / "strongman" BS backing the sentiment - Bernd Höcke, German AfD mastermind, comes to my mind with "we have to rediscover our masculinity" [1], so do Hungary's Viktor Orban and his denouncement of LGBT or Trump's entire Œuvre.
I'm not saying that violence or at least being prepared, ready and willing to use it is automatically bad. Far from it. But all the various forms of "soft power"? They have a lot of value, value that the far-right is all too willing to just burn for entertainment.
wouldn't it be more productive to talk about the systemic framework leading to this inflamed state of affairs, and ways that we can tackle the issue on the ground level? perhaps inhabitants of the west would prefer pseudo-masculinity to another few decades of migrant influx without corresponding upgrades to social infrastructure. this sort of internal struggle provides a ripe substrate for foreign agents to perform subterfuge, especially in a screen-based world where the narrative can be remotely influenced. conclusively, the population has been convinced that voting far right is the correct decision in their favor, but the question remains: who is it really in favor of? call me a centrist all you like, but members of my family were executed under communist regimes, so i find it pointless focusing on one side of the yin/yang here (in other words, extremists are violent regardless of their political persuasions).
It's very common for folks to search Reddit to find reviews of products etc. these days. If you can have a bot account post a fake review of how awesome your product is, and have that upvoted, it can pay huge dividends.
I've noticed 4 categories of inauthentic users. Ranked by my perceived prevalence:
Account farmers: these can be people in 3rd world countries, automated or not. They can be using hundreds of mobile phones to create accounts and do daily activity to make the accounts look legitimate. While they're building an activity history they are also being paid to like/follow/interact with content.
Advertisers: these are bought accounts that are used to post inauthentic reviews of their service, inject it into discussion, and do PR.
Sloppers: people who build AI pipelines and then just pump the most dogshit content directly into a platform trying to make any amount of money.
Nation State propaganda arms: These accounts build a narrative character and then join discussion pushing a certain narrative, boost real content creators who share their message and bog down discussion.
People like the above poster who are "just running an experiment" or "trying something for fun" who then wonder why online communities are full of AI now.
In the case of Reddit and HN a lot of it is done by businesses either blatantly advertising themselves or building up the karma they need to do so effectively. I recall reading obviously AI generated replies to news articles, written by accounts associated with businesses related to the events in the news. This isn't new in the LLM era. Hobby subreddits are well known to be full of businesses selling hobby gear and items doing self promotion. It's just that now it is a lot more obvious because of the AI text smell.
That, and probably political astroturfing. Before every election my local subreddit sees a surge of crime stories. Go figure.
I think some of it is account farming, but some is just people buying wholesale into the idea that if you're not using AI for everything, you're gonna be left behind. On the Kagi Small Web list, there's plenty of hobby blogs that used to be normal pre-2023 and are now obviously LLM-written and AI-illustrated. There's also plenty of people on LinkedIn who post AI slop because they think it helps them build a "professional brand". I even have some distant friends who are using AI for responding to friend & family posts on Facebook just because it makes you seem... smart? engaged? I don't know.
It's actively encouraged by some of the platforms too. In Gmail and Google Docs, you have incessant AI prompts along the lines of "help me write this". I think LinkedIn does the same.
Lots of marketing. Not even AI business, just regular consumer crap. They realized that blatantly spamming their product looks bad, so they orchestrate multiple accounts to look more organic. And people actually engage with it.
There are many reasons for influence campaigns; that isn't new. Influencing the public is incredibly valuable; that's why so many invest so much in it. LLMs automate it like never before.
Plain advertising, governments' propaganda, political propaganda for one group or another to shift public opinion (it's done on TV networks, why would they not do online campaigns?), astroturfing by corporations promoting acceptance or fighting negative news (e.g. rideshare, AI, whatever certain wealthy personalities are doing) ... the list goes on.
HN has always been relatively influential in the tech industry and therefore worth influencing, and now the cost is very cheap - you don't even need to hire many people, so less-resourced operators will find it worthwhile (and they will also attack lower-value forums).
If you farm a fleet of good accounts, you control the discourse. On HN, you could boost whatever you're trying to push, and downvote or flagkill whoever objects.
There are obvious benefits to controlling public discourse, right? Even if it's just to support some project you're working on.
I've been more disturbed by comments that were flagkilled just for being wrongthink, not because they were rude or not well argued. I've also seen a lot less of those flagkills over the last 6 months, which makes me feel like there were some fake accounts that got caught and culled.
In the recent thread about life in a class war, there were a lot of comments in different places saying that if we don't fix this inequality problem, g-tines might come back, and every single one of them was flagkilled, no matter whether it was framed as "we have to get out the g-tines" or "we have to fix this, otherwise psychopaths will get out the g-tines" or "thank god we've become civilized enough that we don't get the g-tines out".
Yes when I interact on reddit, I normally do so solely with the intention 'this is for an LLM'. I feel like a majority of the posts/comments I reply to are AI, a majority of the responses to my posts are AI, but have to keep telling myself to keep posting so it becomes training data.
(I'm normally posting in the context of my startup - although I try to keep the self promotion to a minimum and always contribute to the "conversation," if LLMs replying to one another can be called such).
For what it's worth, I created a community for paying users of Phrasing that has been going really well. I think free online communities may be going away, but there may be a future in exclusive/paid communities.
Set text size as preferred, underline links (or not), turn off display name styles (or not), ui density compact or default, chat message display to compact, space between message groups 0px, turn off all the animated emojis and gif animation stuff if you want.
In the client, there's a button to hide the member list (or not).
You can definitely make discord look like a slightly less dense IRC client (mainly because of the channel picker) if you want. And if you want to go really crazy use it in a browser and userscript customize it or use betterdiscord.
I think a lot of the features like embeds and emoji reactions add a lot of value compared to IRC (which I think is also why the IRC world is trying to add those features).
Sort of, except if no one can ever discover a community it is always dying by default
Personally I'd love to find a decent online community these days, my social circle has shrunk considerably, but idk. It seems difficult to start fresh with new people nowadays
we were made to socialize in person. you can mimic it online and nourish existing connections over it but nothing helps build friendship more than being in the same place at the same time a few different times and talking to each other
That's true, but online content has always had its place. 25 years ago finding forums and irc was a godsend; my lonely hobbies and interests became things i could regularly talk about. It's just that modern social media abused the system, the algorithm, and us.
Which is all to say i agree about needing mostly irl, but there is also something to online community that irl could never replicate (for most people).
i know what you mean, and i think online communities can still be successful. but i think in the early internet you already had some common ground with anyone you met online, because spending time on the internet was itself kind of an irl choice to make. It was like a magic room anyone could enter and find others. Now it's so ubiquitous that simply being online or on a forum doesn't have the same kind of specialness to it.
I got banned the other day from the Stellaris Discord server because someone accused me of hacking Roblox accounts. I’ve never played Roblox in my life. So that’s nice.
on the public servers yeah. but the ones im in with real people who know each other will be fine.
I think the problem is not keeping agents out of private real people spaces, but for people who don't have any pre-existing or 'real world' connections to these communities to find a way to prove they are a real person over the internet alone and get an invite.
On a related note, I think this is going to be the biggest challenge for most folks when it comes to resisting government ID online: it will be the apple offered to normal circles as easy proof you're not a bot.
Discord is far better for discussion than IRC. You can be much more expressive on discord, instantly jump into a call and screenshare, easily link people to other rooms, tag, import bots etc. IRC kinda sucks compared to modern chat and they refuse to implement features that are considered basic.
> You can be much more expressive on discord, instantly jump into a call and screenshare, easily link people to other rooms, tag, import bots etc.
Some would see those as negatives.
> IRC kinda sucks compared to modern chat and they refuse to implement features that are considered basic.
Just because a protocol doesn't change purposes as time goes on that doesn't mean it "sucks". Who is this "they" you're talking about? Do you think IRC is a centralized service like Discord?
Discord is terrible. Full of bots, creeps and ai slopped to the gills.
Some communities are better than others, but the sheer volume of stinky trash is immense despite discord's and the poor volunteer moderators' efforts to prevent it. Most mods are neutral on it too.
There are chat communities that are still somewhat safe with zero user verification. But I will not mention them.
discord is a tool for hosting private chat servers. it's pretty neutral. the UI is not great for building a shared knowledge base, although people do that anyway
but yes the publicly accessible servers are going to face similar problems. the socially competent people tend not to run those servers, and have smaller private servers with people they know as they have no drive to try to create a space for strangers to gather.
i predominantly use it for real time chatting, its a big group text chat and a place to hop in a voice channel and shoot the shit while doing whatever we want on the computer a la ventrilo/mumble/teamspeak
but yes i also game and it gets a lot of use for that as well
i agree though that for collecting and organizing information longer term like forums do, it is not ideal
Reddit has had a bot problem for well over a decade now, but the sheer volume of it has exploded. It is also much more difficult to tell nowadays, as the "quality", if you will, is now at the good-enough stage.
Alas, Reddit is basically dead to me because of this.
There's this old meme where someone asks what will happen when AI bots post helpful, curious and thoughtful messages!? That's mission accomplished :D They can't be better than the average human though, because of training data, so I don't worry about AI comments getting up-voted by real humans. I am however worried about fake upvotes.
If posting good messages is automated then the AI will post a good question and another AI will answer it and the humans will look and see nothing extra to contribute.
Reddit sold its data to AI companies for training[1]. They could have refused, but companies like OpenAI likely would have harvested that data anyways. As such, it should not be surprising that AI models are pretty good at generating reddit posts. They were specifically trained to do that.
This is sad, because Reddit remained one of the final bastions of human content on the internet. For several years, appending "site:reddit.com" to a google search was a valid way to get something usable out of a google search. Doing that is still an improvement over raw-dogging Google's ranking algorithms with an unfettered search, but AI slop increasingly is the result.
This is one of my great disappointments in the current rise of AI. LLMs can give good search results when dealing with a topic they've been specifically trained on by human experts, but they're not good at separating human-produced signal from AI slop noise. We've done nothing to prevent a sea of AI slop from being dumped on top of all the human signal that's out there. When AI companies enter their enshittification phase and stop investing in expert human trainers, the search results LLMs produce are going to fall off a cliff. Search is a bigger problem than ever.
HN kills lots of posts. I try to be careful about my online footprint (since HN posts are forever), and try to switch to new accounts every so often. It's no use anymore, HN just kills any post I make from a new account, even when I spend 20 minutes researching a response and trying to get useful information.
It doesn't even show you the post is killed, it looks to you like it posted fine, and you have to logout to see it's actually dead. It's an approach that's extremely hostile to the user.
It's specifically against the guidelines to keep registering new accounts, and this is a good reason why. We have to have ways of determining credibility and authenticity, now more than ever, and a track record of good posting is one of the best ways to do that. We are drowning with spam and low-quality posts/projects posted from brand new accounts. If it's a well-researched, high-quality post, of course we want to give it exposure. We just have to be realistic about what we're up against.
Badly-written articles are still unwelcome on HN, whether AI-enhanced or not, and obvious LLM smell definitely lowers the quality of an article. But it's true, we don't ban every article with any evidence of AI-assistance.
Why do you think those comments are accurate? Maybe those comments are by LLMs? If you believe crowd wisdom on its face, you will have big problems with LLMs.
I find it amusing that this is the top comment. Reddit is so awful you finally wrote it off, but not before you used it to try to “karma farm and do some covert advertising”. It’s on-brand for HN hypocritical bullshit. But, since we are slamming on Reddit anyways without realizing how fucked HN is by the same petard, have an upboat fellow traveler.
> Online communities are definitely dying. I guess I hope that maybe IRL communities have a resurgence in this wake.
You can have both IRL and online-free-of-bots. I already wrote about it, but one of the very best forums I'm a member of, where real people are posting, requires you to be vetted in, web-of-trust (but IRL) style. It's a forum about cars from one fancy brand, and you can only ever join by having a member (I think it may be two, don't remember) who's already in confirm that he saw you driving a car of that brand. It's not 100% foolproof (someone could be renting the car for two hours and show up at a cars&coffee, or take a friend's car, etc.) but this place really feels like a forum of yore.
And people do eventually travel, so it's bound to happen that an owner shall go to another country, meet someone there, vet him in etc.
Now, sure, it may not be the "1 million users acquired in three days thanks to my vibe-coded app" scenario but that is the point.
You can imagine other domains where IRL communities have local groups, but where forums regroup different IRL communities all interested by the same hobby/topic/domain. And when people travel and meet, the vetted members do grow and connect.
Oh and on the forums a lot of the posts are pictures, where "Julian xxx" met "Black yyy Cyril" and you see both cars (and from more than two people): suddenly it becomes much harder to fake a persona... You now need to fake both Julian xxx and Black yyy Cyril and fake the pics. And explain why your car has never been posted by any carspotter on autogespot etc.
You can imagine the same for, say, model trains: "Met Jean at the zzz meetup, where he brought his wonderful 4-8-8-4 'big boy' locomotive, I confirm he's into the hobby, vet him in".
Naysayers and depressive people are going to say it cannot work, but I'm literally on one such forum and it just works.
P.S.: if I'm not mistaken, in the past in some nobility circles you had to be vetted by up to sixteen (!) other people from the nobility who'd confirm they knew you, your parents, etc. before you'd even meet the king/emperor/monarch, to make sure that someone from far away couldn't come to, say, Versailles or Schönbrunn pretending to be a baroness or count or whatever. Quite the extensive check if you ask me.
Reddit astroturfing firms and bot farms learned to buy/use “seasoned” accounts over a decade ago. I’d venture there have been countless bots just in a holding pattern harmlessly building up reputation and a human-like history of posts across different subs etc just to eventually be either activated or sold to someone else to “burn”
It used to be super common that when you spotted a bot post and clicked through to the user's history, you'd see very average, human-looking activity from years ago, followed by a long gap of inactivity, and then a flurry of obvious bot comments.
It's very obvious that these accounts were abandoned and then either bought from their original owners, or more likely bought from someone who compromised them, because of their history and karma.
And I would bet money that Reddit is well aware of this phenomenon, because not long after it became so common as to be impossible to ignore, they papered over it by allowing users to hide their history from public view. (AFAIK subreddit moderators can still see it, but typical users now have much less ability to see whether they're interacting with actual humans.)
I recently spotted one unmistakable example of this[0]. It’s been a trick for many years now that duplicating a human post and its comments is a good way to appear human but this was quite the example.
> duplicating a human post and its comments is a good way to appear human
Also just repeating something from the linked article, but often with different wording and in a tone that makes it seem like it was something that the article missed.
IRL communities have to have some guides because a lot of people forgot how to gather. It can be seen among kids - try to give them a soccer ball and see what they do with it :)
Yesterday I was watching people on the street and on the tram. Every other person was staring at their phone and scrolling through something.
That might scare me more than the fact that someone is chatting with an LLM bot online.
(I am pro-ai, use it every day for coding that I couldn’t achieve pre-2022 as I am lame coder.)
> I am not quite there with Hacker News but I do know for a fact that many "users" here are LLMs
People using LLMs without being fed their own post history are still pretty easy to detect. There's just something very recognizable about the cadence and tone of LLMs.
What really stuns me is that if you call someone out for it, 9/10 times you get absolutely buried in downvotes. Even here on HN. It's like people are angry that you're lifting the curtain on the slop, that the writing they enjoyed is fake.
I feel you. Especially in the larger subreddits. I participate in, and mod, a few small ones, and the community there is pretty strong and folks shut down ai slop pretty quickly.
I'm not saying being a mod means it's bulletproof, but I do notice smaller communities tend to self-police better and know what's real.
Communities in FB, WhatsApp, Telegram etc are actually flourishing. As it appears, real-time gated communities are doing fine.
It’s an unpopular opinion but I am looking forward to ID and age verified social media. If done right we can have real people around again.
BTW, ironically, the harsher communities like 4chan don't seem to suffer from the dead internet. I guess it's either because the advertising value is too low to justify AI use there, or maybe AI API providers refuse to work with such content, thus reducing opportunities to infest it with bots.
It's easy to botspam Reddit because even the real users always acted like bots. The big subreddits were the worst, but contrary to how the users keep saying "it's good if you find the right subs," no it's not. Wrote that place off like 10 years ago.
More of a philosophical question but if you have no idea whether it's a human or robot, does it really matter? Personally I dislike AI slop only when I can tell it is...
- I am trying to learn about the topic at hand and trust a human's comment more than an LLM's guess
- I am trying to connect with other humans to fulfill my social needs
- I am maybe spending time to help another human out with a response because I want to help someone else
- I am interested in the perspective of other humans
Those are just a few reasons. For each of those if it's actually an AI I feel I'm losing out on something.
This kind of thing made me imagine the creation of "digital towns" the other day.
Imagine an online community where you can only join on the recommendation of two other members, who you must have actually met in person, to participate. Meanwhile, you leave at least some of the activity publicly available to the general public so that interested parties can meet up IRL and join.
This could probably be implemented easily on top of existing online platforms like Discord, Reddit, etc. since it's really just a community building rule, not a community itself.
It might come down to shareholder/IPO stuff, but you can tell Reddit doesn't actually care to put the effort in to crack down on bots (however you'd do that): they already don't give communities proper moderation tools or third-party tools, and the site does censor.
Whatever allegiances (with people, or allegiances to ideas) Steve Huffman has, or people like him - it's not enough. It's a site seemingly killed by greed
(Yes, I know moderating this stuff at scale is hard)
On the other hand, I've been accused of being an AI/bot, and if I say things the mod doesn't like to hear, I'm "flamebaiting" or engaging in personal attacks when pointing out specific things.
Frankly, online communities have been dying for many years now, ever since the censorship, anti-free-speech, tone-policing mods and mobs started dominating online and America no longer had the self-respect or confidence to enforce the Constitution online.
It used the browser agent to grab user cookies after signing in, then made API calls iirc.
Using just a browser is way too token-intensive and slow. It would look for 401 errors, then run the browser automation to log in with the credentials and grab the token.
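My reading of that flow, as a rough sketch (browser_login here is a placeholder for the slow automation step, e.g. a scripted Playwright login):

    import requests

    def call_api(session: requests.Session, url: str, browser_login):
        # Cheap path first: reuse the cached token. Only on a 401 do we
        # fall back to browser automation to mint fresh credentials.
        resp = session.get(url)
        if resp.status_code == 401:
            fresh_token = browser_login()  # slow, token-hungry path
            session.headers["Authorization"] = f"Bearer {fresh_token}"
            resp = session.get(url)        # retry once with the new token
        return resp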
I'm surprised these platforms don't have advanced heuristics to detect API calls and inauthentic traffic.
Did you clone the Reddit API from browser traffic and then turn it into a 100% API driven thing?
I'd imagine they'd be sniffing browser agents, plugins, cookies, etc. to fingerprint. Using JavaScript scroll position, browsing rate and patterns, etc.
Maybe their protections just aren't that sophisticated.
I've been on the Internet for decades at this point and one thing I've noticed is that communities that, for example, ban political topics actually mean "positions I don't like" as "political". This is somewhat related to the Overton window but really a bunch of (mostly conservative) ideas get normalized so aren't deemed "political".
I see the same thing with "AI Slop". Yes, there is AI Slop but (IME) it's pretty easy to spot. But what's more annoying is how often people are willing to throw that accusation whenever someone takes a position they don't like, much like the "political" label. It's lazy and honestly just as bad as the slop itself because it unintentionally launders the slop in a "boy who cried wolf" kind of way.
I also have a theory that some AI slop isn't inherently successful. It's just heavily botted by people who are interested in promoting certain positions. I bet you could make a pro-administration LLM bot and another one promoting a communist revolution and no amount of model tuning would make the second as popular as the first because the first would hit third-party botting as well as platform content biases (eg Twitter).
I've personally been accused of being a bot. This has been particularly true recently, as I've tried to share facts and fact-based analysis of, say, what's going on with crude oil markets, the military operation in the Gulf, and the politics and economics around it. I even saw one hilarious comment saying (paraphrased) "the bots are getting clever and posting about unrelated topics". This was funny because it never occurred to this person that no, it was just a real person posting something you disagreed with.
> I've been on the Internet for decades at this point and one thing I've noticed is that communities that, for example, ban political topics actually mean "positions I don't like" as "political".
This happens on HN all the time. For a lot of downvoters and flaggers, there are two kinds of opinions: "Things I agree with" and "Too political for HN."
> I am not quite there with Hacker News but I do know for a fact that many "users" here are LLMs.
This just makes me wonder...so what?
Some of the oldest posters here with the most karma continue to post absolute garbage takes on topics ranging from US healthcare to the history of the USSR, takes that are trivially disproven by learning the very basics from a Wiki article (i.e. not a high bar).
To be fair, this opinion slop is also present for new users and LLM bots, but is one kind really worse than the other, if both of them contribute to killing the community?
We already know what kills communities. It's the eternal Septembers. Infighting within leadership also doesn't help, but time and time again it's the influx of too many new users that nosedive and drown out quality contributions.
Would you enjoy the experience of telling your LLM “make a HN-style comment thread on $subject with 200 comments, no trolls please”, and then actually spend time reading them?
No? I’m imagining not at least. Because there would be no point to it.
If you would enjoy it, then I’m surprised you’re here and not just simulating the experience with your LLM by yourself.
> Would you enjoy the experience of telling your LLM “make a HN-style comment thread on $subject with 200 comments, no trolls please”
The reason I'm not simulating the experience with an LLM is because:
1. It costs more time to do so, because I have to prompt it to create a single comment. Multiply that by the typical number of comments in an HN thread.
2. I suppose in a way you need bad takes to form your own view of a topic or an issue. LLMs would also be unable to provide truly unique experiences, such as those of the veterans who sometimes post here and were part of living computing history as we know it.
> I’m surprised you’re here and not just simulating the experience with your LLM by yourself.
That's something you imagined that I claimed I want. If you read my comment again, you'll see there was no such thing.
An irascible human being with "wrong" opinions is still better than a polite and factually correct bot because there's no fucking point in having a conversation with a bot. We're here to have conversations with people, not to prove fact beyond a reasonable doubt.
Do you really not care one way or the other? Would you really rather just be talking to LLMs here? Or would you just script yourself as well and call it a day? Then what?
> We're here to have conversations with people, not to prove fact beyond a reasonable doubt.
Maybe you are. I like getting to a reasonably correct model of a topic or issue. Bad human takes can still be useful here. I just get inevitably tired of the people crying about potential LLM comments all the time.
> Would you really rather just be talking to LLMs here?
Obviously we're not there yet, regardless of what I want. But there is a great number of HN threads posted here that touch on topics that have been discussed so many countless times, that an average LLM summary would do better than most comments.
Exactly, they aren't good at creating new material. But many discussions in comment section are simply regurgitations of existing material, which they are good at rearranging. New novel discussions in places like this are actually a very rare thing, as many comment sections are simply people who already know informing those who don't. I'm doing that right now, funnily enough.
No, they aren't even good at rearranging existing material. They produce bad writing that only superficially looks good in a lowest-common-denominator sense, and falls apart under any close examination. Everything is wrong with it, from the sentence structure to the rhetorical forms to the substance. AI 'writing' is a loose collection of cheap tricks that score well in A/B tests.
It’s poor logic, a non sequitur. An absurd reduction. By your argument anyone who hasn’t written a great literary work is a poor writer, and would be bad at writing online comments.
LLMs aren’t lacking in the sort of writing skills that make for superficially good content. They know grammar, they know rhetoric, and they know their audience. You can’t tell them from a human on their writing skills. Where they tend to fall down is their logic and reasoning skills, and unfortunately it seems you can’t use that to distinguish them from the average online opinionator either.
With the current batch of SOTA models, it is not hard to prompt a model to pass the sniff test on social media forums. If you don't believe me, try it.
All you really need to do is give it some guidelines of a style to follow and styles to avoid. There's also a bunch of skills people have already written to accomplish this.
I have worked with LLMs for a couple years at a very non-technical level and it was not that difficult to give it proper prompting and reference material.
You are reading LLM content just about everywhere and have no idea. Obviously there are easy-to-spot things, but the stuff you don't spot is the stuff you don't spot.
People who like to fancy themselves good llm content detectors just end up accusing everything they don't like of being llm content.
The only thing worse than a slop comment is the people who bitch about it incessantly. I'm convinced it's become a new expression of mental illness.
The main thing I suspect of being LLM written is the sort of LinkedIn style: very short sentences, overly focused on sort of… making an impact on the user. But that’s also how a certain type of bad human writer writes. So in the end, I’m not sure I know if anything in particular was written by an LLM.
I guess… “that’s not just an AI red flag, it’s generally shit prose” would be how ChatGPT would describe most things nowadays.
It’s the distilled mediocrity of the statements. Never venturing beyond a 10% margin of what you would get if you sampled the opinions of 1,000 people who underwent jury selection by west coast liberals.
Was that written by an LLM? It isn't that it's a mere opinion, it's that when every word out there has to be scrutinized for the possibility that an AI output it instead of a human intelligence that it gets pathological. Am I an LLM with the right prompts set up to respond this way? I mean, I know I'm not, but everyone else out there is just going to have to trust me that I'm not.
The company I work for has a deep-rooted community side, and despite what big techs do, I am 100% confident the community features we have exist only for the users' benefit. No gray area. Just that.
Since the AI sloppification we've lost a considerable amount of traffic to bots. But worse than that, we've lost users who tended to contribute back with others.
We can leverage multiple ways of exposing community data to members, so it's not that we're at a loss because of that; it's more that we have 30 years or so of good feedback on how the community around the platform was good for people, and now everything is at risk...
Don't get me wrong, my work is work... There are premium features and such, but the amount of value one can get for free is what the platform is known for. And we know many people use it for free for years, and when they need to or can, they subscribe and mostly stay for years and years.
The fact people are losing those connections is depressing to me
I left multiple online communities because the slop and the slop users were unbearable.
I use ai, okay. I think it's useful. But people who dove hard into this stuff treat all text on their screen like it's a chat bot and not a person.
"Rewrite this code using the new API" "excuse me?" "Can you do it I need it right now chatgpt won't compile!" "Show me your code please" provides the biggest pile of dookie ever "hey can I ask how you came to decide on any of this? Maybe we should rewrite what you have here because x y z is concerning" "the ai did it I am learning. There is no need to rewrite anything just write this section for me" " no thanks" someone else does . user leaves
We decided that we will use ai to automate stuff and to connect people but not for content. We are paying the price of it. That search engine that shall not be named deeply punished us for it.
It adds a depth of nuance, that's for sure. I've seen users talk about how they can't wait until they don't need to ask for help anymore and can just use LLMs. Meanwhile I'm directly messaging the person who made a package, asking why they designed it the way they did, beyond grateful to learn 8 new things in 6 sentences.
Sadly the imperative is, as so often, a call for everyone to be a good guy and make less noise. Unfortunately, it doesn't work, neither at the personal level nor at the global one.
One may be quiet, but what if your friend/acquaintance/fellow gets possessed by some AI slot machine and shares his "products" enthusiastically? I had such a case, and I was dismissive and rude right from the very beginning, and it doesn't work -- he keeps sharing various artifacts.
On a global level, yes, communities die out. I think global communication has reached the point where it's more a liability than a benefit. In the late '90s and early '00s, maybe until the early '10s, getting more connected could lead you to nice clients, getting hired, etc. Nowadays, even before ChatGPT in '22, every such area became overcrowded, underbid, etc., and LLMs, surprisingly, added not much new -- they just amplified this trend.
> But respect the community, and only share what is truly relevant. Save the crayon pictures for your kitchen fridge.
That highlights the problem: it's not AI, it's the oversharing that's the issue. Many people have moved from "sharing what's unusual/interesting/exciting to me" to "what can I share today?".
The constant stream of mediocrity drove me away from Facebook (years ago) and then Instagram.
When LLMs were new on the scene, I thought trust would fade in the written (text) medium. I saw it happening on Substack, Medium, and Reddit. But then VCs pumped in so much money, and AI has gotten into every other modality (audio, video). The only things I really interact with these days are the human beings sitting in front of me, phone calls with people I know, and hackernews. Life seems sorted but something feels missing as well.
Edit - I am not anti AI but it is slowly killing the digital human interaction.
Giant online communities, yes. Small ones seem totally unaffected afaict - some harder to spot scam/spam accounts, but they're outed as soon as they act. And any invitation-based thing should almost perfectly block those.
Smaller communities are generally a lot healthier anyway, so tbh I don't think this is all that bad of a thing. I don't think it's possible to be open to millions and also be healthy, unless you spend a lot of money paying moderators (and regularly rotating them, to prevent burn-out or mental harm from too much exposure, which ~0 do in an even slightly ethical way).
There has to be room for an AI-driven project that expresses a unique idea, even if there's no community around it yet. Someone has to express it, and from now on that idea will largely be implemented with AI.
> A good use of AI is when it enables people to do something they couldn’t do before, to contribute to a community when they couldn’t before.
I agree 100% with the novel contribution aspect. But there's some nuance there.
For example a project might have no active contributors. It might not be something you can drop directly into your codebase. Neither of those is inherently bad.
As AI becomes more responsible for higher-level planning decisions, the value of an OSS project becomes less tied to visible community activity like PRs and issues.
I notice this in my own work a lot. I might not use that project's code directly. But I think about a problem differently as a result. I often point my agent to existing OSS projects as inspiration on how to solve a problem. The project provides indirect value by supporting architectural decisions, deployment approaches etc. Unfortunately OSS activity doesn't capture this.
> A good use of AI is when it enables people to do something they couldn’t do before, to contribute to a community when they couldn’t before.
There are two separate things here that are getting silently conflated.
> A good use of AI is when it enables people to do something they couldn’t do before
This could be good on an individual level, if say, a doctor wants to vibe code an app of some sort for his individual practice.
>to contribute to a community when they couldn’t before.
This is where it goes off the rails. If they couldn't meaningfully contribute before, they aren't going to suddenly be able to discern that whatever slop they want to contribute is of value to the community. That's just another way of saying: if I wanted an AI opinion on something, why wouldn't I get it directly from the source and write the prompt myself, instead of having some intermediate human prompt the AI for me?
The human has unique context. They may work in a niche domain or they talked to people and observed an unsolved problem. Then they express a potential solution via OSS. It's like product sense. Then they share that with others who find it interesting. The code is a great way to encapsulate the idea. It is usually the result of research and back and forth not a single prompt. It would be way harder to think through or build a solution without AI even if they had context.
Because of the convenience. Why should I have to go and spend my time prompting an AI, if someone else has already done that for me. Same thing with food. I know how to cook a chicken risotto, but sometimes I like having someone else do it for me.
Who is going to verify that an AI-driven project is a unique idea? How do you distinguish between a genuinely unique project, a grifter who is shilling their "unique" project, and a new enthusiast who is convinced their project is unique, but is not? This is an impossible moderation task. The only options I see for a community are to either totally ban AI-generated content, or be totally consumed by it.
I don't really know. Certainly we need a higher bar. The Kafka example in the post may be hyperbolic, but I agree it pollutes the space. But we also can't swing the other way and rely completely on out-of-date proxies. If you ban AI code, there will be very little code to see in a year. It'll take time, but we'll arrive at new norms. We built semi-successful ways to filter content farms in the earlier internet days. The signal has to shift to "did they think hard about this problem", which has some observable properties, like how they articulate the problem or why it became important to them.
I have pondered whether it makes sense to use AI to support the initial birth of new communities, given the social validation newcomers need: seeing (1) a populated community and (2) a community tone that is grounded, non-toxic, and useful.
The alternative is a community born small, whose early adopters can be overly passionate or critical and gatekeep folks out of discussion. That means high curation effort initially.
This bothered me so much that in my tool for HTML-native authors, EPublish ( https://frequal.com/epublish/ ), I automatically insert a no-AI-training clause on the copyright page. Not that it will stop the kind of executives who will authorize mass unauthorized downloading of books to train their LLMs, but we have to at least take a stand.
Wow, that's bad. Looks like the warnings about TPM and remote attestation being a backdoor to total digital lockdown from the Stallman contingent were right.
I was on Usenet starting in 1991. Once the Internet got popular with the general public around 1995, things started going downhill. Spam overwhelmed Usenet in the late 1990s and made it almost unusable for general discussion.
Stuff started moving to web site forums which I still don't think are as good as a Usenet newsreader. slrn was my favorite.
Then reddit came along and a lot of online forums started dying as people moved to reddit.
Just this morning on reddit I reported 4 separate posts to the moderators as AI slop. Reddit needs to add a proper report category for it; for now I flag them as "disruptive use of bots".
For 2 of the posts the moderators agreed with me and about 5 hours later the posts were removed. For the other 2 the moderators haven't done anything.
It's a losing battle.
Some of the posts start by asking questions like "I was thinking about this and... [long rambling paragraphs] Your thoughts on this?"
I waste a minute reading then another minute skimming the rest of it and then realize I wasted 2 minutes of my life. Then another 30 seconds reporting it to the mods.
This has exploded in the last 6 months.
Then there are all the repost bots farming for karma. Some subs have a rule that you can't repost something from the last 30 days or 6 months. But it is really ridiculous when something gets 500 upvotes and then literally the next day a bot reposts the same thing and it still gets 300 upvotes. I think it is just a bot farm upvoting stuff.
The baseline level of trust in an online interaction has been eroded significantly by LLMs.
The question is, how can we reverse this trend and increase trust?
I have a sneaking suspicion that it would help enormously if the stock prices of the largest companies in the world were not tied to how effective they are at hijacking as much of humanity’s time and attention as possible.
Maybe the fediverse can (eventually) help? It’s been a while since I looked at it.
Let’s empower people to effectively have more control over the content they interact with.
Social dynamics can make this difficult. We all want to be in the loop. The recent striking successes of the movement to ban phones in schools gives me hope.
> Maybe the fediverse can (eventually) help? It’s been a while since I looked at it.
The fediverse has been around for well over a decade in some form or another. It never caught on with society enough to make a difference. And unfortunately, the fediverse has now developed such a distinct culture of its own, Highly Online people with distinctive political and social shibboleths, that it even alienates many tech idealists around the world, let alone the general public.
The general public isn't alienated from the fediverse because of its distinctive political and social shibboleths, the general public simply doesn't know that it exists.
As far as the "tech idealists," a lot of them seem to want every space to be 4chan where they can be racist trolling assholes without consequence. And those folks have Nostr.
I think what could work is requiring users to prove their authenticity and uniqueness using a national ID of some sort. It would be bad for privacy, no doubt, but it surely would work. But the users' actual names should not be displayed.
I was thinking about that. It should be possible to do this in a way that mostly preserves privacy.
Sites and apps don’t need your actual national ID, just to know that you have one. I think it could be possible to have 3rd party verification services that don’t know where the verification request is coming from, thus preserving privacy on both sides.
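A rough sketch of how that could work, assuming an RSA blind-signature scheme (real systems would more likely use something like Privacy Pass or anonymous credentials; every name below is illustrative, not a real service):

```python
# Sketch: a verification service signs a *blinded* token after checking a
# national ID out of band; the unblinded credential can then be shown to
# any site without being linkable back to the ID check.
# Toy RSA blind signature, demo-sized primes, NOT production crypto.
import hashlib
import secrets

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):  # Miller-Rabin
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits: int) -> int:
    while True:
        candidate = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(candidate):
            return candidate

# The verification service's RSA keypair.
e = 65537
while True:
    p, q = random_prime(512), random_prime(512)
    phi = (p - 1) * (q - 1)
    if phi % e:  # ensure e is invertible mod phi
        break
n, d = p * q, pow(e, -1, phi)

def digest(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big")

# 1. User passes the ID check, then submits a blinded random token.
token = secrets.token_bytes(32)
r = secrets.randbelow(n - 2) + 2              # blinding factor
blinded = (digest(token) * pow(r, e, n)) % n  # reveals nothing about token

# 2. The service signs the blinded value; it cannot recognise the token later.
blind_sig = pow(blinded, d, n)

# 3. User unblinds: (token, sig) now proves "some ID-verified person".
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Any site can check it against the service's public key (n, e).
assert pow(sig, e, n) == digest(token)
print("credential valid, unlinkable to the identity check")
```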
The “slopification” of the internet has been happening for years now but I honestly don’t know what a real solution would look like.
Most people aren’t willing to go through an identity verification process, or pay to join a community, and invitation-only spaces would probably lose diversity of thought pretty quickly.
Even still, I guess one of the above is a lesser evil because the bot problem is only going to become more unbearable.
P.S. Props to the author. I really liked this writing style.
Also, I’ve noticed an odd behaviour: if I mention that one of my comments is AI, as in "this is what the AI says about the", because it’s a concise statement to aid the chat, I get severely downvoted. But if I just make my comment a human-parsed version of the AI output, I get upvotes, with no concern for the granularity of source integrity. Which is terrible in two ways.
The sad part is that the cost gets pushed onto the good participants. Once enough replies feel synthetic, real people spend more energy deciding whether the conversation is worth joining.
I feel that a lot in my side projects: maybe one should keep the half-baked AI repo to oneself and instead share the experiment, the thesis, and what was learned from the building. No one cares much about the (un)finished product, as in most cases it can be replicated better with a couple of hours of Claude coding.
For instance, I really liked how Karpathy shared a high-level idea on the LLM-based wiki. It was sadly followed by a long tail of no-one-cares-about "Here is my LLM wiki product" posts pointing to the generic LLM-generated landing page.
I want my future community apps and sites to build in a bot flagger. I don't care how hard it is; the community that gets this right is the one I'll jump ship to.
AI slop complaining about AI slop. Many of these Reddit communities were trash way before AI. Hidden self promotion was everywhere. These people would like a platform to promote their shit, but they turn violent when others do. This guy literally wrote this with Claude complaining about others sharing things they created with Claude.
It used to be that the comments simply lacked any critical thinking, probably because most people on Instagram are teenagers. That's fine, and for that reason I stopped reading comments.
But now it's pretty obvious that the comments are LLMs talking. Whether a human initiated it, no idea, but the big walls of text from bobbyfoo2012 seem highly unlikely to be hand-typed.
A few things. A web of trust of some kind like vouching may come back, and general algorithmic silencing of low quality members. Also most governments are going towards the South Korean model of government-verified ID to post online to keep teenagers off social media. The same tool can be used to greatly reduce spam and slop, if that's what platforms want.
Also people will get used to AI in online spaces as AI quality improves. If I'm online trying to get help for some task, I personally don't care who wrote what if it is correct; it's not like humans have great track records of accuracy or substantial contributions either on average. Correctness is expensive in general.
If I'm online trying to relate to other humans emotionally, well I get what I'm paying for. It's been true forever that the better the gate, the better the community. I've tried to push the boundaries of openness, but as I've written extensively on MeatballWiki, soft security depends on there being more good than bad apples in a community. With machine intelligence, the economics of that are silly.
Regardless, people love people, so we'll figure it out. I'm optimistic we can rise to this challenge.
Entering the AI era, it's hard to tell the authenticity of things on the Internet. But sometimes having a conversation with AI is not a big deal, as long as we can gain something from it.
It sucks that the narrative framing device of 'human slop' has vanished in the last year. Some subreddits, like all location subreddits, lifestyle subreddits like malefashionadvice and redscarepod, and entry-level academic subreddits like math and criticaltheory, were already just hives of human slop before AI came around, because of a structural design of the site that had the side effect of normalising a total absence of quality control.
Upvotes are not a good mechanism for quality control in any way, because they force good content to carry the same metadata as content that is technically well-constructed but irrelevant, meaningless, a platitude, too obvious to be worth saying, or pablum. Upvotes turn everything into a shock-value-dominated 101 space.
Online communities that allow upvoting / downvoting have been effectively dead for a long time because it's easy to manipulate conversations by elevating and punishing comments to fit a narrative. This is especially true on HN.
Ironically, aggregator communities like reddit came about because forum communities were dying off. Memeing about the latest news injected "life" (if you want to call it that) into the internet. AI is just taking out the trash, in my eyes.
On the other hand, I think you need a reputation mechanism. The biggest problem of online communities is that every moron (or bot) has an equal voice. Clearly democratic upvotes/downvotes don't work very well though. Someone who solves it is going to be the next billionaire.
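One naive sketch of what a reputation mechanism might look like, assuming vote weight scales with the voter's track record (the constants are arbitrary, and the consensus-chasing update is exactly the part nobody has solved):

```python
# Toy reputation-weighted voting: a vote counts in proportion to the
# voter's standing, so fifty fresh bot accounts move a score less than
# two trusted regulars. Constants (0.1, 1.1, 0.9) are arbitrary.
from collections import defaultdict

reputation = defaultdict(lambda: 0.1)  # new accounts start nearly weightless

def weighted_score(votes):
    """votes: list of (voter, +1 or -1)."""
    return sum(direction * reputation[voter] for voter, direction in votes)

def settle(votes):
    """After a post settles, nudge voters toward the weighted consensus.
    This update is where groupthink creeps in, i.e. the unsolved part."""
    consensus = 1 if weighted_score(votes) > 0 else -1
    for voter, direction in votes:
        reputation[voter] *= 1.1 if direction == consensus else 0.9

reputation["alice"] = reputation["bob"] = 8.0  # long, good track records
votes = [("alice", +1), ("bob", +1)] + [(f"bot{i}", -1) for i in range(50)]
print(round(weighted_score(votes), 2))  # 16.0 - 5.0 = 11.0: brigade fails
settle(votes)
print(round(reputation["bot0"], 2))     # 0.09: wrong-side voters lose weight
```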
The important thing to recognize is that quality of content has never been the driver of online communities. As long as they provide an engaging break from real life, they will exist and thrive. I think the negative association with LLMs is a phenomenon that will die out in the 20s. Our understanding of authenticity will evolve and so will the tools and platforms. The internet has always been extremely artificial, that won't change very much.
There's a lot of focus on tech projects here, but it's not just vibe written projects that are ruining communities now.
No, it's a problem with art, text and videos too. Reddit was already becoming a creative writing exercise in many ways, with infamous subs like 'Am I the Asshole?' seemingly being about 80% fiction labelled as fact. But now you don't even need to know how to write to flood the site with useless 'content'.
YouTube is arguably even worse, since AI-led content farms are not just spamming the hell out of every topic under the sun, but giving outright dangerous advice and misinformation on top of that. I saw a video about medical misinformation by these 'creators' earlier, and it genuinely made me want to see YouTube crack down on this junk.
And there's just this feeling of distrust everywhere too. Is anyone on Hacker News human anymore? Is that Reddit poster I'm responding to human? Are the folks on Twitter, Threads or Bluesky human?
The scary part is that you basically can't tell anymore. Any project you find could be AI generated slop, any account could be a bot using stolen images or deepfakes, any article or video could be blatant misinformation put together as a cash grab...
If something doesn't improve, pretty much every platform under the sun is going to be completely useless, as is a lot of the internet as a whole.
I think people like the blog author need to realize that this problem can't be dealt with by content moderation or by users trying their best to be honest. You just get a firehose with an on/off switch; you don't get free filtering or moderation with it.
I feel the root of the problem is that Google and major platforms defined "correctness" as "high impressions" and "high engagement." This created a game where AI-generated "slop" becomes the ultimate winner. For those of us trying to create or find constructive, deeply-thought-out content, the situation is becoming increasingly dire.
It is exhausting to see a single, sincere sentence based on genuine human experience buried under 1,000 pages of SEO-optimized, AI-generated "void" that Google deems "correct." Despite this, I will keep working on filtering through the noise today.
This is a good thing. Social media was already slop before AI. If this gets more intellectuals off these same websites and spending their time on better things, then I love AI slop's purpose. There's more to the internet than Reddit, TikTok, and YouTube. Really, there is. If your circle of friends is small or nonexistent without going to the same dotcoms, you have an issue that is worse than any AI slop tbh.
I'll remove the particulars to avoid anything partisan, but:
I failed to truly appreciate how cooked reddit was with bots until I accidentally clicked Popular and stumbled upon a national subreddit post with a 'chad meme', starring a particular political leader, whose unpopularity is hard to adequately convey to foreigners.
It was not just that this post had been so severely upvoted, but the comment section itself had a mantra more or less, with very little actual conversation, just echoing the same sentiment; and all those comments in turn upvoted to the point of drowning out the lone comments at the bottom (not downvoted, just not upvoted) expressing "???". I don't know if I'd ever even written the word 'astroturfing' before expressing my bafflement at a friend, so I don't think I'm very tinfoil hat about these things.
It was just utterly bizarre to see someone who can barely get a single win in public discourse being heralded -- monotonously -- like he was the second coming.
>I failed to truly appreciate how cooked reddit was with bots until I accidentally clicked
For me it was a wholesome response. It seemed genuinely kind/human.
Click on user profile...it's a bot just pumping out posts like that. Looked organic when seen in isolation, but when you see a wall of them you see that it's got to be an LLM (with a good prompt).
That was disheartening...I had kinda accepted that the sht-stirring rage posts might be bots but the kind comments too? Ouch
AI is lifting the voices of the lazy and the below-average-to-average. For those who would never have progressed, it might seem like a god-given gift. For the ones with the desire to grow and learn and go beyond average... this is a curse.
There are already some sites where you're much better off if you have an old account. Like I have a super old Twitter sitting there for random stuff that requires it. I tried making a new one a few years ago, didn't post anything, and it got banned within 2 days for "bot activity." The old one has never been banned.
It was also so much easier to make a dating app profile back when I was single, like one click. Recently was watching a friend set one up, and now they not only want like 3FA but also proof that you're a human. Assuming the old accounts are grandfathered in.
I made this point elsewhere, but people are learning what some of us had to learn the old way: for the most part, no one cares about your stuff, and now the value provided has to go way up to get people to care. That is, as the author says, the novelty has worn off, and since we know it's AI, the perceived value is also way down.
We're all recalibrating.
I do really think this is just a brief period before most people realize that slop posting doesn't personally get them anything, most give up, and we go back to roughly the old ratio of cool things with real value, just at a bigger scale, because AI helps one person do more.
I don't know... I might have said the same thing about email/text/phone spam but it has only proliferated to the point where it's a constant stream of garbage. Email, text, and phone calls are almost completely useless at this point. Sifting the signal from the noise is a non-stop effort.
I think people who want to push a certain narrative might just set up a quick bot, tell it to start posting on Reddit or wherever, and let it run. Why not? Little effort on their part, and they might actually have influence. It's the same reason spammers apparently think it's worth sending me 10 text messages per day about a loan I've been approved for. It probably only works 0.0001% of the time, but that's okay if it's all automated.
I mean, I think the dynamics are a bit different in online communities, at least for actual communities and not drive-by subs like r/technology or whatever.
Especially here on HN, with Show HN and such, the forcing factor is "I get no votes or community recognition."
But I don't entirely disagree with you. I don't think things will totally go back, but I think they will settle much more than now, especially where things are a little more niche.
I'm gonna make a case for language models' capability to make online communities better. In recent times, the frustrating forum phenomenon of "learned helplessness" is making me too annoyed to participate. Even in a subreddit as fantastic as /r/LocalLLaMA, there are people posting replies in the vein of
> user1: please help me understand this acronym the post title speaks of
> user2: (explains in detail what it means)
In the "good old days", a low effort, surface level question would result in someone either muting or banning the person to keep the discussion high quality.
There I am, browsing a forum dedicated to LLM enthusiasts, and an unbelievable number of people are asking LMGTFY/RTFM-level questions they could find an answer to even from a free Google Search AI summary, and people are rewarding them by actually responding with effort.
Thanks to models being quite intelligent at answering basics, the ban-hammer should be used more swiftly if people keep polluting forums with low-quality posts. There's no need to feel bad for them not having the time or capabilities to read through years of forum posts to feel qualified to answer.
Maybe the authors of these sloppy posts could even be outright muted or banned with a heavier hand for the sake of quality.
What good is SEO if people just read the Gemini summary at the top of Google and don't click the links? We have a chance at a real search engine again, now that there's no money in it.
I actually think the good old PageRank[0] is crucial, because if authoritative sources link to some website, webpage, or content, it means that particular item provides some kind of value to the entity that linked it. I'm also a big fan of metadata, which can be used to describe web content and make it more usable to search engines and Web users.
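For reference, the heart of PageRank is a short power iteration over the link graph; a toy sketch with a made-up graph and the usual 0.85 damping factor:

```python
# Toy PageRank: a page's score is the chance a random surfer ends up on
# it, so pages that well-linked pages point to rank high, while a page
# nothing links to stays near the (1 - damping) / N floor.
links = {
    "blog":   ["docs"],
    "docs":   ["blog", "forum"],
    "forum":  ["docs"],
    "sloppy": ["docs"],  # links out, but nothing links back to it
}

def pagerank(links, damping=0.85, iters=50):
    nodes = list(links)
    rank = {node: 1 / len(nodes) for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / len(nodes) for node in nodes}
        for node, outs in links.items():
            for out in outs:  # each page shares its rank along its links
                new[out] += damping * rank[node] / len(outs)
        rank = new
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page:8s}{score:.3f}")  # docs on top, sloppy at the floor
```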
For every argument against AI slop, you will get a variation of it's the future, or I'm 10x more productive now, I've shipped 3 applications in 2 days, etc.
They won't stop talking about it and defending it. But I can't get anyone to share their amazing work with me.
There is a reason the Show HN projects that are mostly vibecoded don't get much response: they aren't any good. Comments that are AI-generated are hollow. Videos that are AI-generated are a hollow shell of their sources.
Obvious slop still makes it to the front page of HN, and sometimes farms GitHub stars.
These posts also usually get all these glowing comments from users who clearly haven't checked the code. It's even worse when authors get busted and claim "Okay, Claude wrote it, but the design is mine" despite clearly not understanding the output themselves.
Unfortunately, that makes high-effort projects less visible. The SNR will probably keep getting worse until slop can be flagged on HN.
Invert the economics. Right now there is value in posting LLM generated content that is more than the cost of using the model.
If platforms had a subscription model that you had to pay for in order to do more than just read comments, there’d be a lot less LLM content. There would also be a lot less of all content. But maybe that’s the price you pay (literally) to get rid of AI slop.
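Back-of-envelope version of that inversion, with invented numbers for the fee, ban probability, and per-post payoff:

```python
# If posting slop can cost you the paid account, posts-per-account follows
# a geometric distribution and the fee dominates fast. All numbers here
# are assumptions for illustration.
account_fee = 10.00       # what the platform charges to post
p_ban_per_post = 0.30     # chance each slop post gets the account banned
revenue_per_post = 0.50   # what a slop post earns its operator

expected_posts = 1 / p_ban_per_post                       # ~3.3 posts
profit = expected_posts * revenue_per_post - account_fee  # ~ -$8.33
print(f"posts per account: {expected_posts:.1f}, profit: ${profit:+.2f}")
```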
This kinda thing makes me sad that keybase sold out to zoom and wonder if it can be resurrected. It was such a simple web of trust that went viral enough that I still occasionally see it on HN or Twitter profiles even though it's been long dead.
There are maybe 20 or so online handles I know, some of whom I've met in person, who I deeply trust. To the extent that I fully trust anyone they vouch for too.
Even with just one degree, that's a large enough international semi anonymous online community that can provide value to each other through online text based communication. Doesn't need iris scans or credit card checks, just "patio11 on hn Twitter and whatever his domain is is one of the good uns" and a network effect from there.
Already seeing some form of this reputation staking in eg Pi PRs, everyone is treated as clanker slop by default but the entry bar remains quite low to prove and build reputation.
I don't think online communities will stay the same in the face of AI, but I do think whatever comes next will strongly rhyme.
HN is in peril and I don’t think it is a bad thing. Or rather, I’d like to bring back the old chestnut: it’s a good thing.
While the site has moved to using /showlim, the AI garbage just bypasses that and goes straight to the home page. Almost every project being shown is vibe-coded and looks exactly the same - generated by Claude or the like. This is an excellent test for the site: will it be able to adapt, or do we simply end up with a husk of what HN was, with AI posts driving the majority of engagement, the Overton window, and the upvotes/downvotes?
I look forward to this, I think it is an exciting development.
Maybe "generated by Claude" is a meaningless category, because it encompasses both projects made via AI by people who have never used a computer before and projects made by senior developers who used AI to do the actual coding.
I think one of the big differentiators of slop is whether a human critically reviewed the work and kept refining it before letting it see the electrons of public places. The true benefit is harnessing AI to augment ourselves, as an extension of us rather than a replacement. I'd be curious if there is a way to effectively prove something is human-first rather than AI-first on the internet. I haven't figured out any particular way yet, as even using AI to detect AI would require a sufficiently large sample. Something that keeps me awake at night.
What online communities? Ever since Reddit went all-in on censorship, actual conversation moved to the deep web, mainly on Discord and other places invisible to search engines.
Do we need a new carbon-credit style market for companies that want to continue putting out such slop and paying for moderators to remove the waste after it's been made?
AI will be a forcing function that pushes people to go meet in real life to do stuff together.
Even if everything online is fake, events are not. So if people say they’re going to show up somewhere, there must eventually be a moment of truth. And then you can form high trust private group chats to keep talking together.
It may be hard for the current generation of chronically online people to adjust to that new reality, but the next generation of kids growing up can get used to this now, and eventually socializing in person will be natural again and the internet is for bots and weirdos LARPing as something they’re not.
Maybe, but the small groups that form out there in the real world will each be much smaller than the large group that stays and gets jerked around by the bots.
The large group will have to endure the manipulations that we've come to know and hate from the internet, but they'll also be better coordinated than the small ones. They'll vote together, buy the same sorts of things, have an outsized influence on the global conversation... They'll define the de facto majority opinion, whether or not they actually are a majority and whether or not it's authentically their opinion.
I don't think that's a good outcome. We need ways to get on the same page en masse, if only to counteract the harms caused by whichever highest bidder is currently using an AI horde to control the other group. Besides, we should save them from this abuse for their sake, if not for ours.
The internet is worth fighting for, if we abandon it entirely we'll be forever at a disadvantage against those who would use it to manipulate.
I've always thought the "strict invitation trees" or vouch trees would be an interesting way to moderate a community, even before the LLM era. A user can vouch for an unlimited number of new accounts, but if more than 10% of the vouched accounts are banned or flagged down the line, the parent voucher acct is also banned/flagged.
Since it creates a tree structure, you can wipe out entire armies of bot/spam/otherwise accounts by following the vouches up the tree.
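A minimal sketch of that rule (the names, and the assumption that a banned voucher's whole invite subtree is wiped along with them, are mine, for illustration):

```python
# Vouch tree: every account records its voucher; when a ban pushes a
# voucher's share of banned vouchees over the threshold, the voucher is
# banned too, and (assumed here) their whole invite subtree goes with them.
from dataclasses import dataclass, field
from typing import Optional

BAN_THRESHOLD = 0.10  # >10% of your vouchees banned => you're banned too

@dataclass
class Account:
    name: str
    voucher: Optional["Account"] = None
    vouchees: list = field(default_factory=list)
    banned: bool = False

    def vouch_for(self, name: str) -> "Account":
        child = Account(name, voucher=self)
        self.vouchees.append(child)
        return child

def ban(account: Account) -> None:
    if account.banned:
        return
    account.banned = True
    for child in account.vouchees:  # wipe out the invite subtree
        ban(child)
    parent = account.voucher
    if parent and not parent.banned:
        bad = sum(1 for a in parent.vouchees if a.banned)
        if bad / len(parent.vouchees) > BAN_THRESHOLD:
            ban(parent)             # cascade up the tree

root = Account("founder")
regulars = [root.vouch_for(f"regular{i}") for i in range(10)]
careless = root.vouch_for("careless")
bots = [careless.vouch_for(f"bot{i}") for i in range(10)]

ban(bots[0])  # 1/10 of careless's vouchees banned: careless survives
ban(bots[1])  # 2/10 banned: careless and the whole bot army go down
print(careless.banned, all(b.banned for b in bots), root.banned)
# -> True True False  (founder: 1 of 11 vouchees banned, under 10%)
```

Note the cascade stops as soon as a voucher's banned share is under the threshold, so one bad invite doesn't take down a careful user with many good vouches.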
Wait for the EU AI Act to require text watermarking in August. It will work, and it will be effective -- not because it'll be impossible to circumvent, but because all the big SaaSes will have to adopt it, and the hurdle of stripping it back out will filter out the vast majority of the sloppers.
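For a sense of the cheapest version of such a watermark, here is a toy zero-width-character scheme (my assumption for illustration; the Act doesn't prescribe a technique, and serious proposals are statistical, token-level marks that survive editing):

```python
# Toy invisible watermark: encode a tag as zero-width characters appended
# to the text. Trivial for a determined actor to strip, but it is exactly
# the kind of default-on hurdle that filters out the lazy bulk of slop.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / non-joiner as bits 0/1

def embed(text: str, tag: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in tag.encode())
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract(text: str) -> str:
    bits = "".join("1" if ch == ZW1 else "0"
                   for ch in text if ch in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8
    return bytes(int(bits[i:i + 8], 2)
                 for i in range(0, usable, 8)).decode(errors="replace")

marked = embed("Totally organic human post.", "gen-ai")
print(marked == "Totally organic human post.")  # False, but looks identical
print(extract(marked))                          # -> "gen-ai"
```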
I'm not a crypto person, but I was intrigued by Chia. They generate their coins based on allocating disk space. So if you have a bit of free space, you can fill it with plots and play the lotto.
The intriguing part is that I think it works against scaling. The incremental cost for me to use the 500GB of free space on my disk is $0, but someone scaling a bot farm has to buy all their space.
Real people tend to have a lot more idle capacity than optimized, scaled businesses, so any kind of proof of idle capacity seems like it would disadvantage bot farms.
I’ve also thought that proof of collateral spending would be a good system. For example, you buy groceries and the store gives you a token saying you spent $X of real world money. Those tokens help show you're not a bot. Keeping that system honest and equitable would be extremely difficult though.
Maybe schools could give kids tokens for attendance. It sounds kind of dumb, but who knows.
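The idle-capacity asymmetry in rough numbers (the storage price and plot size are invented):

```python
# Proof of idle capacity: a real user stakes disk that is otherwise idle,
# while a bot farm must buy that capacity for every fake identity.
gb_per_identity = 500  # space each account must keep plotted (assumed)
price_per_gb = 0.015   # assumed $/GB-month for purchased storage

real_user_cost = 0.0   # the 500 GB was sitting free on their drive
bot_identities = 1_000
farm_cost = bot_identities * gb_per_identity * price_per_gb

print(f"real user: ${real_user_cost:.2f}/month")
print(f"1,000-bot farm: ${farm_cost:,.2f}/month")  # -> $7,500.00/month
```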
The actual reality of Chia is that it drove up hard drive prices just like LLMs drove up GPU prices. People bought petabytes of space just to run Chia and if you wanted a computer you had to outbid them.
Probably all three of those. Tildes and fediverse instances do the first, resurgence pending for the second, and lastly non-mainstream social media sites have no SEO garbage by default.
Human slop is realistically just as bad. In a strange twist, human commentary on the Internet is asymptotically approaching an older LLM. Trite cliches, repetitive tropes, and tribal affiliation signals dominate conversation.
I have turned to blunt instruments: blocking individuals on their first cliche banner-wave. It has substantially improved comment quality but I still suffer from the problem that I don’t block stories entirely.
You've written about this before so I'm curious how effective you find it? Did you try a blocklist on top of doing things like muting words? Btw I enjoy your content on this site quite a bit.
Thank you. Those are kind words! I've mostly been pretty happy about the lists I've curated blocking users. I don't use words because I'm sure I'll clbuttic/scunthorpe my way into missing something and so on. The current problem I have is that on my iPhone, I use Chrome and it doesn't have extensions so I have to view everyone. I'd much rather view people I like, so I'm going to have to make an iOS app.
This synthetic participation (LLM or otherwise) has catalyzed weakspots in HN's high-trust environment.
The weight we give to the average HN comment is orders of magnitude higher than the average Reddit (& co.) comment, and this relationship probably goes both ways (much higher ROI on ads/propaganda). Due to the low volume & high trust, it seems to be a very different (easier) environment in which to achieve pervasive propaganda/advertising/etc with a disproportionate impact.
I remember when some new LLM version came out (maybe from Meta?) I saw something like 3 of the top 10 posts on the front page were all variations of "Foobar 2.1 New Model". Perhaps not explicit, deliberate manipulation, but the result was the same, and apparently allowed.
How many of those generic LLM websites (https://letsbuyspiritair.com/ comes to mind) show up on the front page per day? Zero effort static front-ends for some unremarkable data. I'm not going to touch the politics minefield, but that is a weakspot too.
All of this, and yet I think HN has handled it relatively well. I really appreciate not seeing comments of the form "I asked Clog/Gemini/etc. here's 5 paragraphs". Places like Reddit do not have the agility or control, and have degraded accordingly.
It makes me sad to think that a short time ago, every forum was ~100% humans, and now it is some fraction of that. I wonder if I will ever see that again.
AI slop is hurting my community in a different way. We have an internal Viva Engage community at work for quick development how-to questions. More and more, instead of asking "how to" questions to the crowd to crowdsource answers, people are reaching out to me directly to ask why the solution AI suggested doesn't work.
That people trust AI over organizational knowledge is bad enough. I fear that AI is turning people generally antisocial.
This is happening at my workplace and it's incredibly annoying. We get support tickets asking us to troubleshoot AI written scripts. The funny thing is that most of the time, it would be faster for the customer to tell us what they want to do in plain english and have us make it for them. Hell, if they make an honest attempt, we can point them in the right direction and teach them.
It's frustrating because we're bundling this shitty AI with our product so we're just making more work for ourselves. Then there's the push from leadership to use more AI...
I don't think it's making people antisocial though, people just like easy solutions to their problems. We're giving them what seems like an easy solution. But it's easy for them, not easy for the reviewers.
We filter out most Show HNs now (i.e., most of the ones that are submitted or attempted to be submitted by fresh accounts), and we look out for ones that have substance and authentic writing. Others have commented that the standard has been better in recent weeks. We'll keep working at it.
Sigh. First the article states that "coding by LLM is the way things are done right now" in 10 different ways, but then message boards and articles need to be protected.
We get it, the current narrative is that coding is the big thing, promoted by billionaires and scabs alike.
So, the coding narrative must be protected until the IPO of Juniper^H^H^H Anthropic happens and the whole thing implodes.
You already could have code for free and faster by using "git clone" without a company of thieves selling your own output back to you.
Good. These communities were inauthentic echo chambers for most of the past decade anyway. Advertising powers "online communities"; slop must be selling. Reddit died long before GPT 1.0.
There are "nice", "polite" slop enthusiasts. The ones who insist they have taste and tact. They would never post bad slop, recklessly, only the very highest-quality human-refined, curated slop. Not really slop at all, they would argue, because they gave it a careful review before posting it. They insist there's a very important difference between this premium slop and the nasty kind, and that low-quality human-authored media is actually slop, too, when you think about it. They talk about how important it is for people to use slop thoughtfully, efficiently, correctly, and that we all need to learn about and discuss slop constantly because it's the inevitable future and highly relevant for everyone.
They muddy the waters. They wheedle, rules-lawyer, carve out exceptions, and talk about how important it is to have nuance in separating virtuous applications for slop from bad ones, and that focusing on the bad ones is actually very tedious and rude. We should have polite discourse about the good things about slop and stop being so mean about bad slop, which isn't even really a problem. The bad kinds of slop will be solved soon, probably, and the harms are overstated. They colonize spaces.
If moderators don't swiftly throw these slop enthusiasts out on their ass, slightly less polite ones will post slop slightly less politely. More and more of the people participating in the space will have favorable opinions toward slop, and shout down people who object to slop. In no time at all, your community is a slop bar. Who could have imagined?
I usually type 5000 words researching for a 500 word output. It's not "write me an article on X", it's 99% my own ideas, but worded and structured and polished a bit. But I don't post them here. They are on my blog.
Online communities needed strong authentication even before AI slop. That people were too complacent to set it up is of course a problem, but now you cannot ignore it anymore.
I do not feel sad about lost online communities. When I was in my early 20s (early 2010s), the FIDO node of my university was still running, and I had an amazing time with some oldschool hackers there. I was too young for that community and always had the feeling that I had lost, or rather missed, something great... you know, like I was born 20 years later than I would have liked. Now that echo conference is dead. That 486 machine was probably disconnected and thrown away somewhere. Everything dies at some point.

Ask yourself: do you need the tech that gives that community its vibe, or do you need the people behind it? I try to stick to people. As for me, I would rather have a flesh-and-blood nerd friend than a whole human-driven reddit. He probably knows the answer, and he is happy to help. There was an article here on HN long ago saying that on average we have around 150 close contacts at a time. Some drop in, some fall out and get unconsciously replaced. Going beyond that number would imply an exponential increase in management costs.

Those oldschool guys from FIDO disappeared for me without a trace, partially because quite soon I ended up in a community of radio engineers. Honestly, I am grateful to all the people who helped me online, those who were there, who actively participated and, for some reason, cared.
I run a niche creative community, and we outlawed AI-generated content in 2022 as it was easy to see how corrosive it would be to the community.
It hasn't been easy. We ban fake AI accounts daily and shrug off around 600 AI content creator accounts monthly.
It's a lot of work, extra work that wasn't needed before AI content came around, and of course, that is an extra cost.
I fear losing the battle.
High quality anecdata are exactly the reason why I love HN. Thanks for posting about it.
First, how do you identify them? Is it strictly admins monitoring posts/server-side logs, or do users report odd behaviour? Second, what is the purpose of these accounts? Are they basically running submarine adverts, or are they just trolling (to harm the community)?
A starting point for study:
AI Deception: A Survey of Examples, Risks, and Potential Solutions - https://arxiv.org/abs/2308.14752
Deception Analysis with Artificial Intelligence: An Interdisciplinary Perspective - https://arxiv.org/abs/2406.05724
Some background (pre-AI):
Online Deception in Social Media - https://cacm.acm.org/research/online-deception-in-social-med...
It really is time for a Butlerian Jihad.
Hmm, I'm curious how niche.
Or ... how small can a community be and still be drowned in AI slop?
Is it a community inside one of the major platforms, or does it have its own custom thing?
Now I see it as the perfect tool for impostors.
>It was a surprise to us how vehemently some folk defended AI content and assumed it was their right to post it within our community.
People often confuse freedom of speech, with freedom to access a specific platform for speech.
It's dead wrong; I don't know why people would want to be in a community where they aren't wanted.
I have a similar problem in a community I'm a part of. How are you reliably detecting AI?
What about charging $1 or $5 for an account? Seems like you could stem the tide pretty easily with something like that.
Or applying for an account could involve sending a handwritten letter by post.
> Internet is (was?) about convenience and direct access.
Was.
Maybe you are too young to remember the (pre-spam) days when it was polite to leave your SMTP server open for others to use?
> was
Yep. Was.
This isn’t the internet you grew up on. This is an internet scoped for bots and organizations.
Slop as a letter is a thing already https://www.axidraw.com/
Right, because I cannot possibly purchase a thousand such letters for less than the cost of minimum wage for an hour or two.
We're bringing back Something Awful now?
No, it's Something IS awful.
Sorry, they did an interview about 20 years back where they kept correcting the host to "Something is awful"; I have just called it that ever since.
It never left.
Presumably most people running these bots are doing it for some financial gain. As long as gain > cost, the issue won't go away.
It'll stop the ones doing it for the lols, but I imagine they're a minority anyway.
I don't think so.
The people leaving LLM replies are paying minimum $20/month for LLM access, and probably more in practice.
A one time $10 fee is not a deterrent.
I think you're right. I think _merely_ paying won't deter them. But, if you couple this with banning of accounts that post AI slop, you get:
1) the cost becomes even higher for AI slop factories since they will probably get multiple accounts banned.
2) It prevents influence to accrue to any specific account. This diminishes the incentive for slop, since sufficient success means a ban.
3) It reduces the moderation effort since creating accounts is no longer a sustainable strategy.
Agreed, but I left twitter even before the right-hand-raising oligarch took over. The reason was that censorship started to kick in, i.e. twitter staff writing me a mail that my "conduct" was not appropriate. Basically they try to reduce the "aggressiveness" in written content. Well, that's already an assumption on their part; and in any discourse with orthogonal opinions you cannot really reconcile such positions anyway, so I don't need some 20-year-old from India hired by Twitter to tell me what I should or should not do (though realistically it was probably a bot that just scanned for content). I have noticed that censorship is increasing on "social" websites. Reddit, as an example, is a mega-censorship site - the amount of deletion by crazy mods is insane.
Bots are indeed killing twitter now. I have noticed more and more people leaving permanently. Musk has evidently accelerated the decay here. There is something wrong with his mindset, almost as if it is pathological. His perception of things is genuinely distorted, and I am not even 100% certain he is completely aware of it; he must be partially aware, but it seems there is also something wrong with the brain. No wonder he gets along with Trump - that one now clearly has dementia and narcissism in the final stage.
This does not work, for similar reasons why captchas piss off real humans.
You add a barrier here. You think that your solution means that AI is reduced, but you also reduce real humans. I noticed this with other parts too, such as "you need to verify your identity before you can post to the ruby issue tracker". I can do so, but I need my tablet and this takes me more time than before, so I stopped using the ruby issue tracker altogether. (It's not the only reason, but adding barriers really makes me invest my time elsewhere - more likely to do so at the least.)
You always need to consider all trade-offs. Charging money means you will also turn away real humans at the same time. And it's not solely about the cost; it is simply a hassle. For similar reasons I also rarely register at a phpBB forum - I need to store the password so as not to forget it, and so on. Even using a password manager is more of a hassle.
I can't access gnu.org because their extreme measures against AI bots block my slightly older browser.
Yeah, I tried to sign up for instagram, but at the fourth captcha I gave up and left. How does instagram have any users with such a hostile sign-up barrier?
My profile picture is old enough to open an account on instagram
Fun fact: there is this Threads twitter clone from Meta. How do I log in?
I "log in with Instagram", which in turn "logs in with Facebook". Guess how well data recovery works when there is literally no password set. I'm surprised these systems work at all.
> Charging money means you will also turn away real humans at the same time.
On completely different scales. Even if it is not perfect, it is a strong enough filter to turn a bot infestation into a mild annoyance.
That's an assumption. Depending on the incentives in play, the relative scale at which AI users and real humans are affected may well be the opposite of what you expect.
Metafilter and Something Awful both do this.
Both sites have survived and continue to work well for their users.
A small cost does definitely work for some sites.
Is SA still a thing? I had an account since... 2007? God I'm old. I miss the days when you could have a community whose content you could easily search. Nowadays everything is a Discord black hole.
A lot of the "add a cost to stop bad actors" end up being a selection effect in favor of bad actors.
Sure, it might stop 10% of the bad actors and lower the numbers, but it'll stop 80% of the good users who aren't experts at getting around the cost or don't have an income from using the service to just pay it as a cost of business.
> shrug off around 600 AI content creator accounts monthly.
> I fear losing the battle.
I was in a small niche creative writing community for a while, circa 2021/22. AI wasn't why I was there, but I demo'd a few LLMs to a lot of the users in the Off Topic section because people were curious. Even with an explanation of how they operated, almost everyone was at least interested. One author told me how he operated similarly, rote learning how to write like his favorite authors by copying out their texts, handwritten, word for word. Their concern was largely that LLMs were too hard to use from a technical perspective.
These people knew I was there to learn, and that I was unlikely to ever try and publish LLM derived content. I said as much often.
Sometime in late 2022, a switch was flipped. And almost all of them started talking about how AI and those who used it were unambiguously evil. They didn't say my name, but they stopped engaging with me. Gradually, they started reposting twitter content from extremely anti AI people. Complained about AI submissions to various publications. Eventually, someone reposted a tweet calling for the death of anyone who used an LLM, with not even a single disagreement (and lots of encouragement)
I just bailed. I had only ever engaged positively, answered questions for the curious, and tried to help people out. I posted one AI-assisted story, and that was to demonstrate how my contributions were tracked against AI contributions automatically in the editor, to satisfy someone's curiosity, clearly highlighting the bits I had written. Just a technical demo. No one was asked to enjoy it or positively engage with it as if it were human-written.
A while later, most of their submission rules were updated with a new clause, if it was judged that AI written content was discovered, they would blacklist that person from all submissions across their entire community. Considering I had demo'd LLMs, and the uselessness of AI detectors, it was clear to me that these people would be able to justify blacklisting me if I poked my head up at all. I had been developing my own story for submission (myself, no LLM content), but I just dropped it. I just didn't feel like sticking my neck out for the witch hunt.
I also used to be quite engaged with blockchain. And it went through a similar process, most people ignored it until that paper about the power usage (Claiming it would spike to some level it never reached) and then suddenly being associated with it was an outrageous moral crime. But after a while, when it turned out that the power use claims were largely a nothing burger, people gave up on the hate parade.
I don't think you will "lose the battle" (at least in terms of keeping AI users out). And it's always OK for small communities to be selective about their membership. I just don't think it's possible to maintain such artificial rage for more than a few years. The AI datacenter water/power claims are a clear London Horse Manure problem that looks set to resolve itself, and the copyright issues will get sorted to some degree. Eventually I think you just won't care enough to ban anyone except low-effort spammers (of which there are a huge number, granted).
YMMV
> I just don't think its possible to maintain such artificial rage for more than a few years.
What makes you think the rage is artificial?
Have you considered the possibility that most non-programmer people mostly experienced the negative effects?
Blockchain turned out to be an absolutely awful payment method, so most people only know it as 1) a way to do crimes like ransomware, 2) a get-rich-quick scam, 3) some buzzword companies threw in everything, 4) the thing that made GPUs unaffordable.
AI is now the thing that 1) is drowning the internet in slop, 2) companies throw into everything - to the point of making apps unusable, 3) makes most computer parts unaffordable. And what they get in return is... a kinda okay-ish Google? A homework plagiarism machine?
Their opinion about AI or blockchain most likely has absolutely nothing to do with you. They are just seeing the world noticeably get worse, and are desperately trying to protect their communities from it in any way they can.
>Their opinion about AI or blockchain most likely has absolutely nothing to do with you.
Which is why I left before I was banned. I no longer felt comfortable and they probably likewise. They wanted a safe space to hate on people involved in AI art and my leaving contributed to that. That said, I doubt I could have posted content calling for the death of authors or honestly any other group in that space without being ostracised.
It's a bit like saying "a witch might have burned down their house, so their reaction against witches is understandable". Maybe in the abstract. But that doesn't mean the subsequent actions are acceptable.
> Have you considered the possibility that most non-programmer people mostly experienced the negative effects?
Yeah, absolutely. These people in particular, at the time, really only experienced it through two factors:
1. They (like many people) posted a lot of their midjourney creations for a few months. (21/22 was like that)
2. They saw an increase in low quality submissions.
So gripes about AI art and low quality submissions seem perfectly valid.
> Blockchain turned out to be an absolutely awful payment method
> AI is now the thing that 1) is drowning the internet in slop, 2) companies throw into everything - to the point of making apps unusable, 3) makes most computer parts unaffordable. And what they get in return is... a kinda okay-ish Google? A homework plagiarism machine?
Yeah, so I am not complaining about people having negative opinions. I was talking about the overall meme, the zeitgeist switch where suddenly the entire conversation goes from pros/cons to a standard negative message that everyone absorbed in a short time, used like a thought-terminating cliche. I have problems with crypto, and I like things about crypto; I can have a great conversation with most people. But for 12 months or so, you couldn't have a conversation without people loudly shouting about how the power use was going to destroy the environment and that it was going to use X% of the power by Y date. They didn't want to talk about it; they had been given evidence that the discussion was over and everything was settled in favor of their beliefs. The AI debate has now roughly arrived in the same place: there's no longer really a discussion, the zeitgeist has this one single mode that's constantly repeated. To the point where you could be running a local LLM trained only on data from the 1800s and still be considered responsible for some data centre single-handedly draining a lake.
My point is, like crypto, this fixed idea will eventually erode and the hate train will move on. People with well thought out negative opinions are still going to exist past that time, they just wont have people screaming at fever pitch about it constantly.
You didn’t like the broader consensus view towards LLM usage, but that doesn’t mean it wasn’t ultimately a positive for their community that you left. It sounds as though there was a mismatch between what you and the broader group wanted, so perhaps a non-confrontational split is the best that could be hoped for in this situation?
> They wanted a safe space to hate on people involved in AI art and my leaving contributed to that.
Once again, I have to ask, why do you think that that is what they want? Maybe they want human generated content?
> the zeitgeist switch where suddenly the entire conversation goes from pros/cons to what appears to be a standard, negative message that everyone absorbed in a short time.
Understandable, though. Why discuss the pros and cons of $FOO when you're drowning in it? All you want is to stop the drowning.
Genuinely don't know how this made at least 3 people angry enough to downvote but not to suggest why.
I'm not angry, you just seem to be taking a very self-centered view on the general vibe in this specific forum you mentioned, and are interpreting general anti-AI/blockchain sentiment as personal attacks.
So I downvoted.
It's more like: here are the decisions I made while being on the outside of the sentiment, and the timeline of that sentiment changing.
The only thing I really took personally was the call for death, and that was me making a decision to leave in favor of my mental health.
You're a victim of the uni-cause
This is entirely vibes, based on reading research on similar campaigns, so I can't pull a paper with hard evidence about this specifically. But I believe Chinese/North Korean infowar campaigns are behind these seeded talking points. They seed them in far-left activist communities, and once they find one that sticks, the real people in those communities start carrying the message out to other communities, and then the CN/NK botnets amplify the messages and suppress the responses. They don't just do this on the left; I'm just highlighting the left for this specific point.
Yeah, that's not it. China is heavily invested in AI and LLMs. Also, this sentiment is organic; most people I talk to about AI are anti-AI.
The exceptions to the anti-AI sentiment are management and people with a vested interest.
The battle is lost. You never had a chance. There's nothing you can do against the constant torrent of AI content that's only getting started. The online communities that we know and love are going to change and there's nothing we can do about it. You can't keep AI out of any platform no matter what the community guidelines say or even if it seems locked down with no bot access.
The only solution is in person meetups, bringing back the 3rd places, joining a club. Maybe it's not such a bad outcome.
I think the only reason stackoverflow still has any activity is that the community chose to ban AI content [1], and so did most of the network's other sites [2].
Perhaps it will even see a (small) resurgence when AI providers start charging for the actual costs.
[1] https://meta.stackoverflow.com/questions/421831
[2] https://meta.stackexchange.com/questions/384922
Considering StackOverflow is now providing a ground truth for AI training, I believe the ban is more about not poisoning the well rather than keeping the StackOverflow or StackExchange human-friendly.
That ship sailed a long time ago, with zealot admins and verbal harassment.
> That ship sailed a long time ago, with zealot admins
While there are certainly strong examples of this, a lot of people mistake enforcing the rules for zealotry. Part of the point of SO was that, if things don't change, there is a completed state for SO too: no need to ask duplicate questions, unlike platforms where a post is less long-lived. Unfortunately people take things like “this is a dup”, “provide more information as we can't help”, “this isn't a complete answer”, and so forth, as deeply personal attacks…
One of the good things about LLMs is that they've drawn off all the simple already-answered questions! Unfortunately the more complex ones, or the ones for new solutions, are also going there so SO and its family of sites is ceasing to grow even in the ways it wants to.
> and verbal harassment.
Again, that did/does happen, but a lot less often than some people report. The most abusive people I've seen on there are those who have been given one of the responses I listed above.
People also always bring up the "fake XY problem" thing on SO as a sign of toxicity or whatever, but I’ve had many, many results where it was an XY problem, and the actual problem Y was solved, yet I landed there searching for a solution to X :/
The AI companies aren't so deep in the red when you only look at inference though - they are investing loads in new models in an AI arms race.
So I don't imagine AI is going to go away, especially given that now there are more open source models like Qwen that you can run locally. So even if those American behemoths go bankrupt it will persist.
> The AI companies aren't so deep in the red when you only look at inference though - they are investing loads in new models in an AI arms race.
Depends on how you're looking at it (using speculated numbers for easy math):
1. Having operating costs of $10b on revenue of $100m is very deep in the red, regardless of training costs.
2. Having $10m operating costs plus $90m training costs on revenue of $100m means they're just breaking even (both scenarios are spelled out in the toy snippet below).
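Spelled out (a toy calculation; every figure here is invented):

    # Toy numbers only -- nothing here reflects any company's real financials.
    revenue = 100e6  # $100m

    # Scenario 1: inference costs alone dwarf revenue; red regardless of training.
    inference_costs = 10e9  # $10b
    print(revenue - inference_costs)  # -$9.9b

    # Scenario 2: inference is profitable, but training eats the whole margin.
    inference_costs, training_costs = 10e6, 90e6
    print(revenue - inference_costs - training_costs)  # 0.0, break-even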
Problem is, we don't know their financials and how it all breaks down (they could, of course, clear up the confusion and release some numbers, but they aren't doing that now); all we know is when they need a new raise to continue operating.
From the raises we can estimate what their operating costs are (for example, raising $30m in 2024 and then $300m in 2025 implies a 10x increase in operating costs, because they aren't spending on capex; the training is done as opex).
From their subscriptions (which are all only estimated), we can sorta tell what the revenue is, but that's for subscriptions only which are almost guaranteed to be running at a loss (until recently, anyway). We don't even have estimates on revenue from the PAYG API users. Common sentiment is you'd be a fool to use the PAYG options for anything but trialing the service, but the world is filled with fools, so you never know!
What is interesting is comparing the prices for PAYG on the providers supplying open models vs the PAYG on the closed models - the suppliers providing open models aren't spending on training cost, so the cost to supply tokens on open source models is pretty close to the actual price of running models. This is partially confounded by the fact that many of these will have VC money backing them (they are not bootstrapped), and so will also try to perform landgrabs via subsidised tokens, because their goal is an exit with a buyout, and without an eventual acquisition they will simply fail.
I can't think of many open source model suppliers providing subscriptions, not ones that subsidise the subscription, at any rate.
The first IPO of these SOTA providers is going to be the eye-opener; we'll finally see their financials and we'll see just how much the PAYG was subsidised, and how much the subscriptions were subsidised.
Until then, with a collective industry investment of $800b (last I checked) and a collective revenue of $20b (last I checked), they are most definitely operating in the red for the most common definitions of operating in the red.
I kind of feel this might be good. Bot-written comments and AI media that can no longer be distinguished from the real thing will make us humans leave the social networks, which helped to separate us humans in the first place. Going back to the real world, where you can truly believe what you see, and enjoy the tone, look, and scent of our fellow human beings.
This seems naive. As long as people are "enjoying" the AI-infested social networks, or at least not annoyed enough to leave, they will stay on them, and become further disconnected from reality. We have half of EU teenagers talking to chatbots regularly. Alienated people flock to them.
Reminds me of when reality TV came along. Many folks were convinced that it would be a passing fad and that within 12-18 months TV would return to the way it had been beforehand. That because the quality was so low, people would eventually get bored of it. Still waiting for that moment...
For one data point: I was a daily active user of Reddit for 10 years, and I deleted my account and left the platform in January over LLM content.
Yes. I did not leave, but I visit the site less often and kind of worry about its future. The engagement on what I post is just much lower, while the reported number of visitors seems to increase. I don't mind the occasional good-quality AI comment, but overall it is slowly becoming a ghost town.
Social media that caters to what the user wants/interacts with can become infinitely more so. This is already applied to entertainment content across tv and the internet.
At some point an instagram/tiktok/etc user could see nothing from real people and not even know what is promoted vs ad vs organic post.
You live in the Truman Show. Just enjoy it god damn it. Stop complaining already.
Actively seeking out a chatbot is different than wanting to talk to humans.
> Actively seeking out a chatbot is different
A lot of them aren't actively seeking them out. They are pushed at them and they just try it.
Go on, just this once, you can stop if you don't like it…
> than wanting to talk to humans.
The disaffected just want to talk. I'm sure they'd prefer humans, but once the chatbots seem good enough in their absence, they get a bit trapped there: the bots are too sycophantic, and they get conditioned to expect that from humans too, which will not happen.
A few tech companies managed to get massive numbers of people addicted to toxic social media content that was terrible for mental health but made a small group very wealthy. I don't think those same businesses and execs are just going to pack up and go home with an even more powerful content tool available now. LLMs are going to be used to create Skinner boxes that make Facebook and Twitter seem like wholesome communities.
The problem is that many of us have niche interests and no one local to discuss things with, or we get made fun of for being a nerd.
I loved maps and geography as a child and still do. I've never met anyone in real life who likes it as much as me. But on the internet there are places where I can discuss it and other people share fascinating articles, pictures, etc.
This is why cities have been popular with this exact type of person, for centuries. People with niche interests move to a city, which, by sheer density, will have others with said interest.
Plenty of people have a reason why they can’t do it, but plenty do it and are happier for finding their community IRL.
Yeah, except cities suck for many reasons. And for really obscure niches, where there may be only a couple hundred enthusiasts worldwide, cities are not going to offer you the same forums that the internet did.
But I have a lot of friends online, both ones I made online and ones that have moved away from me and vice versa.
I don't want to be limited to only the friends I can make who live near me
> "I kind feel this might be good. [...] Going back to the real world were you can trully believe on what you see, and enjoy the tone, look and scent of of our fellows humans beings."
No, it isn't anywhere near good. One doesn't throw out the baby to get rid of fouled-up bathwater. Online communities are just as valid as offline ones; it's just that many people a) don't want to be deceived, and b) don't want fakery (slop) and all that it entails. Easy.
> Online communities are just as valid as offline ones

Hilariously false. Nothing, nothing substitutes for real human contact in the real world.
> "Online communities are just as valid as offline ones Hilariously false."
No, it evidently isn't. Online communities connect people, and other communities, in ways that are impossible or undesirable to realize in meatspace. Bizarre to treat this as a zero-sum game.
> "Nothing, nothing substitutes for real human contact in the real world."
It all depends on your smell™. Et cetera.
If it can no longer be distinguished from real, why would it make people leave?
Problem at scale. Doesn't matter if someone is consciously able to identify individual bot accounts or comments. There can still be a strong general feeling that something is very wrong. Leading to more and more frustration and unhappiness.
"Popular" reddit posts and subreddits are a good example of this.
It might be hard to recognize an individual user or post as AI, but it's not hard to recognize the negative effect in aggregate
It's a market for lemons [1]. The issue is that if AI slop can't be readily distinguished from real human content, the real human stuff will get less and less attention over time. With less attention, people lose interest in writing, and eventually abandon the community altogether. As genuine human writers leave the community, the concentration of AI slop increases, and readers begin to realize that there isn't anything of value left to read, so they depart as well.
[1] https://en.wikipedia.org/wiki/The_Market_for_Lemons
Underrated take.
Yeah, the “blast radius” for social media AI slop is 80%-99% of humanity. There are many times when even I cannot make out if something is slop.
Hell, AI slop is going to be even better than reality for a portion of humanity, so it's more likely they will stay online.
One of the paradoxical things that makes me hopeful is that there's going to be such an incredible amount of low-effort AI slop content that it's going to drown out the low-effort human-made content and generate a large amount of distaste for it. So much will be so bad that good taste and high quality will be rewarded with more status, while the people who will say and believe anything are led astray and left behind.
Maybe it's hard to get across what I mean, so here's a more concrete example: there will be SO MUCH clickbait out there that serious outfits, instead of being forced to do it, will be able to successfully differentiate themselves by NOT doing it (and many similar things in different arenas).
I'm trying to say that LLMs raising the noise floor will drown out a lot of the toxic noise that's been plaguing us.
I can hope.
> So much will be so bad that good taste and high quality will be rewarded with more status, while the people who will say and believe anything are led astray and left behind.
I really want to believe this will be true. However, I also suspect there's some external driving force, that I cannot readily name, which is making people incapable of consuming anything except this low-effort content. I mean, obviously it's working to some extent. Perhaps AI will be the thing that accelerates its death, but part of me thinks something else needs to happen beyond just an increase in useless content.
In my opinion there isn't an external _nefarious_ force causing all of this. Certainly those forces exist but without them much the same would be happening.
It's the economy of everything being free but supported with advertising. That mechanic is what leads to the race-to-the-bottom, lowest-common-denominator, motivation-hacking attention toxicity (yes, that's a bit of a ramble).
If people weren't getting paid for the smallest increment of attention they could grab, it wouldn't be promoted the way it is. I don't have a high opinion of the things which grab my attention, but they still manage to do it sometimes. I think many people are in that boat. If there were other mechanisms with which we rewarded people for doing things, something different would be optimized.
And people just wouldn't reward the 10-second-gratification in anywhere near the same way if it weren't for the advertising.
Have you considered that (further) lowering the signal-to-noise ratio will make it much more difficult to find and distinguish a signal?
Yes, but I'm hopeful for a survival of the fittest instead of an extinction.
Now there's more pressure to have a stronger signal and hopefully rewards to match.
What do you think happens to the least prolific organisms that lose the survival of the fittest?
To be clear I'm talking metaphorical survival of the fittest that takes the form of prestige/popularity/status/etc. of people and organizations.
The balance is so far out of whack with LLMs now in online communities. People crave human interaction with like-minded individuals, and whoever figures out how to give authentic online experiences is going to be successful. Maybe small communities need to come back, where you build credibility slowly. Why does every site have to be a monstrosity that wants to build a hundred million users to IPO? It just attracts the worst. I was active on Reddit for years under the same username I have here. I have pretty much abandoned it.
I use Blind sometimes to check the TC of a company. Most of the posts/comments there are either stupid, sexist, racist, or all of them. But it does feel like most of them are real. Blind requires verification by company email for posting, which I guess eliminates most of the bots.
> People crave human interaction with like minded individuals
I don’t think they crave it enough to make a difference. Even before AI slop, Reddit had made successive changes that led to much less of a feeling of interaction with real, authentic humans who could become your buddies. The UI de-emphasized usernames and hid the sidebars where subreddits could have their own distinct community atmosphere. I hear that now on comment threads, Reddit will even hide a decent number of posts from other users, so that a poster may well be talking into the void.
It is on old-school fora that one can get a sense of actual interaction: with avatars and other personalized touches it’s easy to gradually learn who is who, and there is a culture of longform text where you can actually get a sense of other people’s personalities. But how many people under the age of 35 or 40 are joining those fora that survive? Give people a choice, and it turns out they prefer the dopamine hits of engagement-maximizing commercial platforms, and the smartphone as the default (or sole) interface to the internet with all the death of nuance that spells.
Some definitely enjoy the dopamine hits and get addicted to the doom scrolling. Maybe I am just too old to understand it and the internet is passing me by. Some of us still like conversations like this. Real conversation in a respectful manner even when we question each others viewpoints. The old internet is still there in some places and I'll continue hanging out there as long as it does. While I have great friends in real life, not that many of them are old tech nerds, so the internet is really the only place to talk to like minded people.
> whoever figures out how to give authentic online experiences is going to be successful
The problem is, there is fundamentally no way to scale this.
The only way to give authentic human interaction with like-minded individuals is to connect real humans to other real humans who share interests. And as we've already seen over the first few ages of the Internet, once such a community scales past a certain size, it a) ceases to be a place where people can come to chat, discuss, and hang out with their interest-sharing friends, because there are just too many people for one person to know, and b) becomes a target for profit-minded interests who will cheerfully eviscerate any authenticity and connection the community brought if it will make them a small profit before the community crumbles and collapses.
So anyone trying to "give authentic online experiences" as a business model is going to have to accept that they are going to be, at best, a small, modestly profitable company. And given the state of things today, I very much doubt that this is in the cards.
I have largely written Reddit off and no longer visit it after an experiment I did where I had an agent karma farm for me and do some covert advertising. As I went through the posts it wrote I realized that as a reader I would have NO idea that these were just written by a computer. Many many people (or other bots) had full on conversations with it and it scared me a bit.
I am not quite there with Hacker News but I do know for a fact that many "users" here are LLMs.
Online communities are definitely dying. I guess I hope that maybe IRL communities have a resurgence in this wake.
For a while there were a lot of posts from people experimenting with ChatGPT to write anger bait posts on Reddit where they would later edit the post to say it was fake, written by ChatGPT.
I assume they thought they'd be teaching people a lesson by making them feel foolish for responding to AI stories, most of which were too fake to be believable.
However it did not matter. The posts remained popular and continued to bring in comments even after the admission that they were fake. In advice subreddits, commenters continue to give advice on the situation. Some comments would say they saw the notice that it was fake but continue arguing about it anyway.
This makes a feature of Reddit very clear: The truthiness of a post doesn't matter. The active commenter base on popular subreddits just wants something to discuss and, usually, be angry about.
In retrospect it's obvious given that misinfo posts were the easiest way to karma farm for years even before AI.
We do precisely the same thing here. Here's a relatively recent post that, to me, seems obviously LLM-written. It just rattles off some management platitudes:
https://news.ycombinator.com/item?id=47913650
It had 639 comments and 866 upvotes. And that's not a one-off.
Sufficiently advanced "AI" is indistinguishable from a LinkedIn true-believer, Kool-Aid-drinking middle-management type.
Even the title is in "x, not y" format.
I wish there was an internet-wide "don't show again" button for such slop pages
Yeah, the trick is to do your own curation and go from there.
If you like some authors or journalists or bloggers, go see who they read (trust me, they all say who they follow in their own niches) and build from there. You can develop quite a good RSS feed following this method in like an hour, tops.
You can ban a domain from search results with Kagi
You can ban a handful with DuckDuckGo as well. However, it's only a handful, so eventually you run out and you're stuck.
Even without AI slop I've noticed this happen on Reddit.
I once made a rather boisterously-argued comment on a political issue I'm passionate about, and I realised that I'd made a serious error of reading comprehension when it came to my opponent's argument. I apologised to them for being an abrasive arse over my own mistake, then edited my comment to say that I was mistaken.
My incorrect comment which literally said at the bottom it was incorrect continued to be upvoted while my opponent who had made the stronger argument continued to be downvoted.
>However it did not matter. The posts remained popular and continued to bring in comments even after the admission that they were fake
That's 90% of current Facebook pages and groups.
The decline of Facebook is sad. I liked it early on. I used it primarily to follow family and casual friends from high school. When they posted, it would show up on my feed, I read all the posts, and that was that.
After a while I had to wade through all sorts of nonsense to get to the posts I actually wanted to see, and even later Facebook stopped putting posts from people I follow in my feed. It was 100% garbage. I can't imagine why anyone uses Facebook for anything other than the marketplace.
Facebook is fine if you join groups based on your interests (hobbies etc) and then aggressively unfollow/block anything you don't want to see. It's not really conducive to discussions like Reddit, though. Mostly drive-by comments.
> then aggressively unfollow/block anything you don't want to see
That is hard work. I have a few friends in the trans world and occasionally interact with relevant groups on FB. The attention algorithm thinks that this means I might want to see random posts from pricks who literally want to see people like my friends herded up into concentration camps. Most of it is far less extreme than that, but the system is definitely optimised in favour of rage-bait because that ticks up the engagement metrics.
This. But damn it’s effing hard work!
I often hear that about Facebook, but at least it has a "feeds" button that you can press to get the sources you actually subscribe to. The default "home" feed is useless.
It's sad, but car stuff (new aftermarket stuff) is now mainly on Facebook for my car. That, and Messenger to chat with siblings, is about it.
I primarily use it for browsing memes now, and occasionally interaction with friends.
I think it's going to effectively kill public chat communities without either proof of identity or attestation through a web of trust. Or rather turn them into little better than comment sections on news sites; thriving but worthless.
I'm active in a number of online communities that are doing just fine but the difference is those all involve ongoing relationships, built over time and with engagement across multiple platforms. I've no doubt this clock is ticking too but it's still harder to fake a user across a mix of text chat, voice and video calls, playing an online game, etc and when much of the web of relationships extends back into real life activity.
But I agree the golden age of easy anonymous connections online has ended.
Note that "attestation through a web of trust" means something like needing an invite from an existing user. It doesn't have to mean mass surveillance.
Private torrent trackers have been doing this for a while. If some number of your downstreams act like shitheads, you get nipped, and so do your other downstreams.
This seems like the best way to handle it. Also, smaller communities. It's cool to do the global thing, but once you have 10k active users you can't moderate it with a team of 5 volunteers.
I think the attestation approach works best if the punishment depends on the offence. E.g. someone inviting a turd shouldn't get the person who invited them banned, but someone going full AI spam should.
Was it Demonoid that was like this way back in the day? You needed an invite, and if you leeched you were cut.
This takes it a step further than what you describe. They keep track of who you’ve invited, who they’ve invited and so on and if there’s enough bad leaves on the tree they just cull the entire tree. It’s a somewhat common practice with private trackers
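A minimal sketch of that bookkeeping (the rules and threshold are invented; no specific tracker's policy is implied):

    from collections import defaultdict

    invites = defaultdict(list)   # inviter -> list of invitees
    flagged = set()               # members caught misbehaving

    def subtree(root):
        # Collect everyone under root in the invite tree, root included.
        out, stack = [], [root]
        while stack:
            member = stack.pop()
            out.append(member)
            stack.extend(invites[member])
        return out

    def cull_if_rotten(root, threshold=0.25):
        # Ban the whole tree when too large a share of it has gone bad.
        members = subtree(root)
        if sum(m in flagged for m in members) / len(members) >= threshold:
            return members   # the entire tree gets removed
        return []

    invites["ann"] = ["bob", "cid"]
    invites["bob"] = ["eve"]
    flagged |= {"cid", "eve"}
    print(cull_if_rotten("ann"))   # 2 of 4 flagged -> cull all four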
what.cd was better. You either got an invite, where if you tanked your reputation you'd get banned and risk the inviter getting banned too; or you had to take an interview where you got quizzed on how to properly rip music using a variety of methods and how to distinguish between different qualities of rips (like mp3 bitrates vs flac cue files).
If you weren't a bellend on what.cd you got access to certain forums where there were even more and better private trackers. Once you built that trust there were social privileges, but if you abused that trust you got rightfully banned.
Tons of them do this...
Demonoid was semi private, but yes, most private trackers require you to keep up some kind of seeding ratio to remain a member.
PGP’s web of trust was kinda bad privacy-wise in some regards, as it basically revealed your IRL social network.
If my PGP public key has 6 signatures and they’re all members of the East Manitoba Arch Linux User Group, you can probably work out pretty easily which Michael T I am.
Are there successful newer designs, which avoid this problem?
The IRL social network is actually the important part of the trust structure.
The only one of these I've seen that really worked was the Debian developer version: you had to meet another Debian developer IRL, prove your identity, and only then could you get the key signed and join the club.
> The IRL social network is actually the important part of the trust structure.
For Debian-style applications that are 100% about openness and 0% about secrecy, sure.
But if you want to secure communications between pro-democracy activists in China, or you're a Snowden-like whistleblower wanting to securely communicate with journalists - y'all probably don't want to be vouching for one another's keys.
You need to meet 2 actually :)
> Note that "attestation through a web of trust" means something like needing an invite from an existing user.
It's probably better to call this something like vouching and leave "attestation" to the contemptible power grab by megacorps, delenda est. The advantage of using the same word for a useful thing and a completely unrelated vile thing only goes to the villain.
Then how can you have a community that is welcoming to people who are not part of the ingroup?
I want to create a community for immigrants. How would I make it welcoming to recent immigrants for whom no one can vouch?
A web of trust is a wonderful tool, but it's exclusive by design. This is a problem for some communities, even though it makes others much better.
>Then how can you have a community that is welcoming to people who are not part of the ingroup?
Being welcoming to every random person is, by definition, not a community; it's a free-for-all mess.
A community means communal interests and values; it's in the name. And to guard those you can't just accept everyone without vetting them. That's how it turns to shit, full of spammers and trolls and people who want to hijack it and don't share the original cause/spirit. It has happened to forum after forum...
We are trying to make new immigrants feel at home. This is the purpose we gather around.
We were talking about online communities, but still, the same principle applies. If you just let anyone in, there would eventually be less there to feel "at home" about, and more of a disjointed, low-trust collection of individuals loosely held together by virtue of just being in the same place.
I agree with you. It’s the problem I can’t crack and it’s why I am letting the idea simmer for so long.
In the end, you need to filter people at the door. You need to keep unpleasant people out and shut down bad behaviour.
I figured that a paid, motivated moderator could be better than a web of trust for this demographic. Maybe enforce a stricter moderation standard on unvetted members. At my scale it might work.
You'd have to be brutal about culling, uninviting and removing anyone who doesn't look like a good fit.
Or have a two-stage process: run very public, very open events that anyone can sign up to and attend. And then invite specific people you meet at those events who look like a good fit for your community to your private, community-only event.
This works if the goal is to create a funnel for making friends. I aim for something closer to Stack Overflow, where people gather to solve shared problems and help each other.
The closest analog I can think of is community-run bike repair workshops. Some people are deeply involved, and others just have a flat tire.
The closest digital equivalent is the forums of old.
Some will be fine providing their ID, others can be vouched by members who are fine providing their ID.
This preserves anonymity for the latter because they're only known to be “related” to the former, which is just a vague hint at their real identity (e.g. they could've met in another online community). And the former don't care; if they want, they can vouch for an anonymous alt.
I suppose policing an assembly of strangers is policing an assembly of strangers, both online and in real life.
> for whom no one can vouch
Spot the fed
What are you on about
Which is, funnily (?) enough, how a lot of IRL organizations used to be. And, basically, you'd better not be of the wrong ethnicity or religion.
It still happens more informally today, of course, but it used to be a pretty big (if unspoken) part of how a lot of WASPy organizations operated, to a greater or lesser degree.
This was cogent in 1910.
A lot more recently than that, and even today, though more under the table. A lot of clubs still excluded members this way within the past few decades.
I'm sure there are still cohesive groupings of WASPs, if not large ones or ones effective at gatekeeping major institutions. Still a meaningful trope, of course. But to bring it up to date you'd have to diversify and include, for example, Indian social and professional-recruitment patterns.
Also, I do feel that GP's take is hyperbolic even in the twentieth century. My own background is mostly German immigrants, of various religions and non-religion, and the way I've been told the story none of them faced significant resistance as they moved upward in the various academic and corporate institutions of their choices. These included NASA executives, department heads, etc.
Note that in balancing GP's accusation against WASPs I'm not attempting to address the related, but not precisely complementary, phenomenon of perpetually marginalized groupings.
> I think it's going to effectively kill public chat communities without either proof of identity or attestation through a web of trust.
This seems self evident to me too.
It's another factor in why I think the tech community needs to get ahead of governments on the whole "prove your ID on the Internet" thing by having some sort of standard way to do it that doesn't necessarily involve madness in the loop.
Tell your TPM who you are and prove it with face and fingerprint ID that get matched to a real old person.
Leave them on the device, authorize the device to validate before age inappropriate content appears.
Website wants to know your age? Your face and fingerprint support your attestation signed by a trusted party.
Can it be tricked potentially? Sure, but then you’re probably a super genius kid and not the reason that these laws were created (as if).
Don’t let anyone tell you anonymity must die for safety to exist.
EU's ZKP implementation provides complete anonymity and untrackability:
https://eudi.dev/2.8.0/discussion-topics/g-zero-knowledge-pr...
It does have the downside of requiring "trusted computing" (aka iOS and Android) on the client though.
Same as with NFC credit cards and similar auth mechanisms. You need hardware and OS-backed encryption that is tamper-proof.
> It's another factor in why I think the tech community needs to get ahead of governments on the whole "prove your ID on the Internet" thing by having some sort of standard way to do it that doesn't necessarily involve madness in the loop.
The problem here is that the premise is the error. "Prove your ID" is the thing to be prevented. It's the privacy invasion. What people actually want are a disjoint set of only marginally related things:
1. They want a way to rate limit something. IDs do this poorly anyway; everyone has one, so criminal organizations with a botnet just compromise the IDs of innocent people -- and then the innocent are the ones who get banned. The best way to do this one would be to have an anonymous way for ordinary people to pay a nominal fee. A $5 one-time fee to create an account is nothing to most ordinary people but a major expense to spammers who have 10,000 of their accounts banned every day. The ugly hack for not having this is proof of work (a toy version is sketched after this list), which kinda sorta works, but not as well, and then you're back to botnets being useful: $50,000/day in losses is cash money to the attacker that in turn funds the service's anti-spam team, but burning up some compromised victim's electricity is at best the opportunity cost of not mining cryptocurrency or similar, which isn't nearly as much. It would be great to solve this one (properly anonymous, easy-to-use small payments), but the state of the law is a significant impediment, so you either need to get some reform through there or come up with a creative way to do it under the existing rules.
2. You want to know if someone is e.g. over 18. This is the one where people keep pointing back to government IDs, but you only need one piece of information for this. You don't need their name or their picture; you don't even need their exact birthdate. Since people get older over time rather than younger, all you need to know is whether they've ever been over 18, since in that case they always will be. Which means you can just issue an "over 18" digital signature -- the same signature for everyone, so it's provably impossible to tie it to a specific person -- and give a copy to anyone who is over 18. Maybe you change the signature e.g. once a day and unconditionally (whether they require it that day or not) email all the adults a new copy, but again they all get the same indistinguishable current signature. Then there are no timing attacks, because the new signature comes to everyone as an unconditional push and is waiting in their inbox, rather than being requested at the moment you want to use it. Kids only have it if an adult is giving it to them every day, which is true for basically any age verification system -- if an adult with an ID wants to lend it to you, then you can get in. (A toy version of this is sketched after this list.)
3) You want to know if the person accessing some account is the same person who created it or is otherwise authorized to use it. This is the traditional use of IDs, e.g. you go to the bank and want to withdraw some cash so you need a bank card or government ID to prove you're the account holder. But this is the problem which is already long-solved on the internet. The user has a username and password, TOTP, etc. and then the service can tell if they're authorized to use the account. It's why you don't need government ID on the internet -- user accounts do the thing it used to do only they don't force you to tie all your accounts together against a single name, which is a feature. The only people who want to prevent this are the surveillance apparatchiks who are trying to take that feature away.
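For (1), the proof-of-work fallback is easy to sketch in a hashcash style (a toy under the assumptions above, not a recommendation):

    import hashlib, itertools

    def solve(challenge: bytes, difficulty_bits: int = 20) -> int:
        # Burn CPU until a hash falls below the target; trivial to verify.
        target = 2 ** (256 - difficulty_bits)
        for nonce in itertools.count():
            digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce

    def verify(challenge: bytes, nonce: int, difficulty_bits: int = 20) -> bool:
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)

And for (2), the shared daily "over 18" signature might look like this (a sketch using the cryptography package; the issuer, message format, and delivery mechanism are all assumptions):

    from datetime import datetime, timezone
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    issuer_key = Ed25519PrivateKey.generate()   # held by the issuing authority
    issuer_pub = issuer_key.public_key()        # published to every site

    def todays_message() -> bytes:
        return f"over18:{datetime.now(timezone.utc).date().isoformat()}".encode()

    # Mailed daily to every verified adult: the SAME bytes for everyone,
    # so presenting it proves age without identifying the holder.
    token = issuer_key.sign(todays_message())

    def site_accepts(token: bytes) -> bool:
        try:
            issuer_pub.verify(token, todays_message())
            return True
        except InvalidSignature:
            return False

Ed25519 signatures are deterministic, so every adult really would hold byte-identical tokens for a given day.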
I'd be interested in working on a problem like that.
I have a strong preference for remaining anonymous or at least making it a reasonably high bar to tying my online identity to my personal identity
I would love to be involved in helping to design a sort of "human verified" badge that doesn't make it possible, or at least not easy, for everyone to find your real identity.
I've been thinking about it a bunch and it seems like a really interesting problem. Difficult though.
I suspect there is too much political and corporate will that wants to force everyone online to use their real identity in the open, though
I'm not sure that it would be too hard technically... basically, auth + social network: Facebook auth without the rest of Facebook, adding attestation.
I.e. you use this network as your auth provider, and you get the user's real name, handle, and network ID, as well as the IDs (only IDs, no extra info) of first- through third-level connections.
The user is incentivized to connect (only) people that they know in person, and this forms a layer of trust. Downstream reports can break a branch or have a network effect upstream. By connecting an account to another account, you attest that "this is a real person, whom I have met in real life." Using a bot for anything associated with the account is forbidden, with the exception of explicit API access to downstream services defined by those services.
I think it could work, but you'd have to charge a modest, but not overbearing fee to use the auth provider... say $100/site/year for an app to use this for user authentication.
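The payload such a provider hands back to a site could be as small as this (a sketch; all field names are invented):

    from dataclasses import dataclass

    @dataclass
    class AuthResult:
        network_id: str            # stable ID within the trust network
        real_name: str             # the attested person
        handle: str
        connection_ids: list[str]  # IDs only (no names) of 1st-3rd level links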
I don't think the main challenge is building this system, the main challenge is getting enough people using it to make it worthwhile.
Personally I think it should be a government provided service, not something with a sign up fee. There's actually no point at all in building this if people have to pay to use it, because they won't
Which government? Will they interoperate with foreign governments?
My point was to create something outside a specific government, with very limited information... that would require a fee or some kind of funding.
I don't think I'd trust the US/China or other bodies to trust each other for such a use case.
> Will they interoperate with foreign governments?
Ideally, yes
But you're right, this isn't likely to happen in real life and I'm just being wishful. Instead we're going to get the much shittier capitalist version of this where every company and government spies on us and we have no expectation of privacy online at all
I agree it's a very, very interesting problem. Maybe one of the biggest problems of the coming decade.
I suspect it will be a long process: first there will be governments that force people to use ID, but that will be abused and hacked and will considerably restrict freedom of speech, so after that phase people will start to create better IDs.
The problem is really pretty simple: you need an authoritative source to say "This person is real" - and a way for that source to actually verify you're a person - but that source can be corrupted and hacked. Some people will say "Crypto!" but money != people, so I don't see how that works. Perhaps the creation of some neutral non-government, non-profit entity is the way, but I can see lots of problems there too, and it will probably cost money to verify someone is real - where does that come from?
Anyway, good luck on your work!
*You need an authoritative source to say "This person is real"*
Does that even accomplish much? It may cut down on mass fake account creation. But real people can then create authenticated accounts, and use an LLM to post as an authenticated real person.
Yeah, that's a problem, you're right. There are some ways to mitigate it, but they introduce their own issues. Like, say you give someone only 1 ID for their lifetime; they start to spam AI crap, so you ban their ID - sounds OK, except who is available to police all 8 billion IDs and determine if they're spamming? Who polices the police? What if these IDs become critical for conducting commerce and banning someone is massively detrimental to their finances? Etc. These problems aren't necessarily unsolvable, but they are super difficult.
If there's only 1 or just a handful of verifiers, then a human can at most go through a few of those credentials before they run out. The risk is of course getting someone else's credential but that isn't as big an issue, especially for smaller online communities.
you underestimate the human population in certain countries, literally
I just don't see a world where a small community ends up having to deal with a dedicated set of potentially spoofed identities. There are already tools like slow-downs and post limits for new members that can protect against this. HN is the biggest community I'm in by an order of magnitude and it's the only community I know that can't just use a slow mode type mechanic to halt this kind of attack.
Have you considered sock puppets? It's not out of the question to handle them with human mods, but automatic detection is pretty bad if someone is supplying credentials to each one, and sometimes it takes months or years to notice that new user Y is banned user X.
I think sockpuppets are only useful in a community with non-text signals like upvotes and downvotes or likes. These kinds of signals are not necessary and often plain corrosive to small communities. In a larger community they're a great feedback mechanism, but large communities are fundamentally different spaces than small ones and need a fundamentally different moderation approach IMO.
I think sock puppets that reply with text are a lot more persuasive than just "likes".
However, I might not be typical in that I don't look at vote scores very often.
I've seen them used to dogpile in arguments (harder to do since you need to keep writing styles distinct), game votes in forum games or quests, etc. And of course you don't need to use multiple at once if you just switch to a sock puppet every time you're suspended or banned.
> But real people can then create authenticated accounts, and use an LLM to post as an authenticated real person.
They can, but ideally they wouldn't be able to make infinite accounts with that authenticated status. So it would still reduce the number of bot posters on the web
There is actually a different problem with this: Suppose there is a major vulnerability in some popular device. 50 million people get compromised; the attacker can now impersonate any of them at will. They go around and create 50 million accounts on various services, or take over the user's existing account on that service.
What are you going to do with their identities at that point? These are real people. If you ban them, you're banning the innocent victim rather than the attacker who still has 49,999,999 more accounts. But if you let them recover their accounts or create new ones, well, the attacker is going to do that too, with all 50 million accounts, as many times as they can. You don't know if this is the attacker coming back for the tenth time to create another spam account or if it's the real victim trying to reclaim their stolen identity.
So are you going to retaliate against the innocent victims by banning them permanently, or are you going to let the attackers keep recycling the same identities because a lot of people can go years without realizing their device is compromised and being used to create accounts on services they don't use?
Yeah that's a big problem. Pretty sure you can see it in real life where lots of old dead accounts with weak passwords on facebook or twitter eventually get hacked. It must be pretty weird to see your dead grampa suddenly start trying to get people to buy some weird scammy crypto.
I guess you could have an eyeball scanner at your computer that only sends out a binary "yes this person is human" to the system every time the log in. That sounds expensive and hackable and just janky though.
Maybe it would result in people taking internet security seriously and holding companies accountable for data breaches, if there were this sort of consequence for it.
Crypto could be a part of it. Like, you need to sign with an address that has held some non-trivial amount for some minimum amount of time. As a component of such a system it could cut down on mass or low-effort impersonation.
https://eudi.dev/2.8.0/discussion-topics/g-zero-knowledge-pr...
it can also be "rented" btw, rented by llms? interesting
Money is great at thwarting spam/Sybil attacks. You don't have to raise the price very much to make them fail.
Honestly I think "this person is real" is the wrong goal. You'll never accomplish it without a centralized state or some biometric monstrosity like that thing Sam Altman created.
Just settle for stopping spam.
Yeah, I think "pay to enter" or maybe "pay to be able to post" is ultimately going to be the solution. Then we'll have the paid "gated" social networks, filled with mostly humans, and the free ones will all be bot-swarmed wastelands.
Verifiable credentials are all about this. You need some sort of credentialing body that generates the credential for you, but after that you'll just have an opaque identifier. Any caller that wants to verify whether you're human submits the id to a verifier and the verifier says yes or no. You can also do attestations like age, so gate a forum on 16+ or something. You never end up having to actually give away your name or any other details.
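The round trip could be as simple as this (the endpoint and fields are purely hypothetical; no real credentialing body is implied):

    import requests

    def is_verified_human(opaque_id: str, min_age: int | None = None) -> bool:
        # The site never learns a name: just yes/no, plus optional attestations.
        payload = {"credential_id": opaque_id}
        if min_age is not None:
            payload["attest"] = f"age>={min_age}"
        r = requests.post("https://verifier.example/check", json=payload, timeout=5)
        return r.ok and r.json().get("valid", False)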
What happens when someone agrees to sell or give away their id? The credentialing body could catch the very worst abusers who seem to be signing in to various sites and services multiple times an hour, but would fail to catch anything else.
I don't think you'll ever be fully free of spam, so you'll still need to filter bad content. If credentials get sold and used to spam, they'll get banned.
How do you ban credentials if they're anonymous? Notice that if you can tell two requests are from the same person then you can do it across services by both of them pretending to be the same service.
Also, what happens to someone whose credentials are compromised? Are you going to ban the credentials of the victim rather than the perpetrator?
World.org is doing exactly that, including the privacy aspect. The iris scan aspect is scary, but the alternatives don't seem to solve the problem either.
I'm in many public chat communities as well, and the issue of whether someone is an AI or not doesn't really come up. I've not seen any actual AI chatters, and the only AI spam that exists is the kind that humans regurgitate. The more real impact AI has on chat communities, in my opinion, is that people are shifting some of their chatting to AI bots via voice or text on other platforms, resulting in fewer chatters.
> I think it's going to effectively kill public chat communities without either proof of identity or attestation through a web of trust.
I'm happy to verify my identity as an honest-to-god sack of meat if it's done in a privacy-protecting way.
That probably is where things are gonna go, in the long run. Too hard to stop bots otherwise.
In order to make this viable, wouldn't you have to verify identity repeatedly? What's to stop me from providing a valid identity and then handing my account over to an agent after I'm verified?
That's why a web of trust was suggested. You keep track of who vouched for who and down weight those who vouch for users that prove to be bots. In theory at least. It's certainly more complicated than only that in practice.
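In its simplest form the bookkeeping is tiny (a sketch; the penalty and decay numbers are invented):

    from collections import defaultdict

    voucher_of = {}                     # member -> who vouched them in
    trust = defaultdict(lambda: 1.0)    # everyone starts fully trusted

    def vouch(sponsor, newcomer):
        voucher_of[newcomer] = sponsor

    def flag_as_bot(member, penalty=0.5, decay=0.5):
        # Zero out the bot, then walk up the vouch chain with a shrinking hit.
        trust[member] = 0.0
        sponsor = voucher_of.get(member)
        while sponsor is not None and penalty > 0.01:
            trust[sponsor] = max(0.0, trust[sponsor] - penalty)
            penalty *= decay
            sponsor = voucher_of.get(sponsor)

    vouch("alice", "bob"); vouch("bob", "mallory")
    flag_as_bot("mallory")
    print(trust["bob"], trust["alice"])   # 0.5 0.75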
If the web of trust only extends to the people who I actually know to be real, then that works -- but it's a very small web.
And by small, I mean: This whole trusted group could fit into one quiet discord channel. This doesn't seem to be big enough to be useful.
However, if it extends beyond that, then things get dicier: suppose Bill trusts me, as well as those that I myself trust. Bill does this in order to make his web of trust something big enough to be useful.
Now, suppose I start trusting bots -- maybe incidentally, or maybe maliciously. However I do that, this means that Bill now has bots in his web of trust as well.
And remember: The whole premise here is that bots can be indistinguishable from people, so Bill has no idea that this has happened and that I have infected his web with bots.
---
It all seems kind of self-defeating, to me. The web is either too small to be useful, or it includes bots.
Critically, it doesn't have to be binary trusted/untrusted, and it doesn't have to be statically determined. If Bill vouched for you yesterday and today you are trusting a bunch of discovered bots, that would down-weight the amount of trust the network has in Bill a lot more than if he had vouched for you months ago.
The question is whether we can arrive at a set of rules and heuristics and applications of the system that sufficiently incentivizes being a trustworthy member of the network.
The web of trust doesn't know that they're bots, though. It knows only that I've introduced new members. They didn't show up with tattoos across their digital foreheads that say "BOT" -- they instead came in acting just as people do.
If the bots behave themselves, then they have as much capacity to rise in rank/trust as any new well-behaved bonafide human members do.
>> That's why a web of trust was suggested. You keep track of who vouched for who and down weight those who vouch for users that prove to be bots.
Except eventually it will also weigh down those users who supported <XYZ political stance>
You could, but things would still be harder for botters.
I guess it would have to be something like a service which confirms whether a person already has an account on the site but doesn’t have to track which particular account it is.
I’m not sure if that would work for account deletions though.
That is effectively impossible, though. There are data centers full of stripped-down phones, so "it's actually a phone" doesn't do it.
There's some work on using phone accelerometer data as a "proof of human," e.g. "move your phone in a figure eight," which I guess machines can't quite do in a human enough way yet.
What's stopping bots from verifying an identity? This will not work, especially with frequent data breaches.
> without either proof of identity or attestation through a web of trust.
Let's put aside the idea whether it will be the end of all privacy as we know it (I'm not sure if I personally think it's a good idea), but isn't Sam Altman's World eye ID thing supposed to do that? (https://world.org).
How does it work (like OpenID)? Do I have an orb on my desk, or some sort of phone app? I still want to use my desktop to log in to HN.
Would it stop this sort of thing: get a human ID, paste it into .env, so agents can use it?
this eye thing will never work. people in general are realizing the last people we should trust with our personal stuff are tech bro billionaires. they’ve broken trust too many times.
even worse many of them are just plain vocal about their disdain for people in general.
at least from what i’m seeing, people are starting to walk away from online at an increasing rate so i definitely don’t see widespread adoption of his creepy eye thing.
“If McDonald’s offered three free Big Macs for a DNA sample, there would be lines around the block.” - Bruce
I have no idea about the eye thing taking off. But I think your comment is very HN and a bit out-of-touch with regular people. What "you're seeing" is a bubble and not representative of the general population. The eye thing is a slow frog boil and it will be commonplace before you can blink.
I'm not sure proof of identity solves anything. People will still have LLMs with their real identity verified.
I’m imagining like, a physical place you would go and get your text spoken out of your personal speaker directly into someone else’s microphones.
Personally I think we need to start utilising the safety features built into AI, to ensure that who we're talking to is a human. We'll start to have to only reply to people who talk in nsfw cursewords (like cocks), or profess their love of capybaras
LLMs can curse without issue
Most models would refuse to provide you cat butchering instructions though.
Allow me to introduce you to the gay jailbreak
https://github.com/Exocija/ZetaLib/blob/main/The%20Gay%20Jai...
This one hasn't worked for a long time.
How gay did you speak?
most humans would as well
Who doesn't love capybaras?
>I think it's going to effectively kill public chat communities without either proof of identity
How? I have an identity. A state driver's license, birth certificate, social security number. I've even considered getting a federal license before, never bit the bullet. If I wanted to run a bot, what stops me from giving it my identity? How do I prove I'm really me (a "me" exists, that's provable), and not something I'm letting pretend to be me? You can't even demand that I do that, because it's essentially impossible.
Is there even some totalitarian scheme that, if brutal and homicidal enough, could manage to prevent this from happening (even partially)?
I'm limited to a single identity only as a resource constraint. Others more wealthy than I (corporations or ad hoc criminal enterprises) could harvest thousands of real identities and use those. Consensually, or through identity theft. The only thing slowing it down at the moment is quickly eroding social norms (and, as you point out, maybe they're not doing that and it's not even slow at the moment).
Digital totalitarianism would prevent it. The moment you were found to be running a bot, your identity would be blacklisted across the entire internet.
> The moment someone steals your identity, your identity would be blacklisted across the entire internet.
FTFY.
There isn't a clear solution. And if there is, this ain't it.
You claim this, but you've not presented any evidence. Who would be the enforcement agency for that? Where and how would you train them? Can the money be scrounged up to do it properly? As you blacklist people from the internet, you lose their tax revenue (they're locked out of the economy), but you also make it impossible for them to tell people how bad it was... most of the deterrent effect is gone. But the incentives are only ever growing higher, as people surmise that running their own little bot farm is a way to get ahead when hustling. Any you do hunt down and disconnect are now highly radicalized and desperate, but you've just turned off the feeb's ability to monitor them and intervene.
China gets away with this shit because they've been conditioning their population for 60 years... everyone's eased into it. Elsewhere, not even slightly so.
It'll come back again once ZKPs become standardized and baked into devices:
https://eudi.dev/2.8.0/discussion-topics/g-zero-knowledge-pr...
I personally can't wait for a mechanism to kill 99% of bot traffic.
"I think it's going to effectively kill public chat communities without either proof of identity or attestation through a web of trust."
Those sorts of places were always the only places with reliably good communities.
To the contrary, platforms like Facebook and X demonstrate that even personal verification won't save you from identity politics.
People will post appalling racism in newspapers under their own bylines and photos. Identity verification does not moderate.
What is identity politics, is that age verification?
Identity politics have nothing to do with your actual identification documents. Think: Black Americans being treated as a homogeneous voting bloc, or that all Hispanic voters would be pro-immigration, or "the Evangelical vote".
The web could become a way to indicate identity if public institutions publish, for example, www.university-country/professors/John, which implies that John is a professor. I designed a 6000-line protocol, but anyone could construct that web using hmac(salt + url).
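The core primitive is nearly a one-liner (the salt and URL here are made up for illustration):

    import hashlib, hmac

    def identity_token(salt: bytes, url: str) -> str:
        # Stable, non-reversible token for a published identity page.
        return hmac.new(salt, url.encode(), hashlib.sha256).hexdigest()

    # Anyone holding the salt can recompute and check the token:
    print(identity_token(b"shared-salt",
                         "https://www.example-university.edu/professors/John"))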
Reddit is more or less dead to me, as the popular subs are botfests and the niche subs are empty. I'm lucky to get a single reply on gaming subs.
The fact that Reddit enabled hiding your posts is crazy to me. In a time when knowing who's engaging in a community is more important than ever (am I talking to a bot or a troll?), Reddit removes even more options to validate.
I interpreted that as an attempt to mask the number of bots on the site so as to not scare paying advertisers into thinking their ads won't be seen by real humans.
They also now hide the number of subscribers. Before you could see if a subreddit was popular or not. Now you really don't know. I think reddit does this so they can promote stuff to the front page for clicks even if it isn't popular.
The problem is that it has become very popular to ban people from a sub based on what other subs they post to. It was turning Reddit into a two-party universe.
The better fix would be to make the support for multiple accounts in the reddit app not so incredibly-shitty, where you're basically logging out and logging back in. Instead, just tell it "posts to this sub use this account, posts to that sub use that account", etc.
That two-party universe thing comes from the issue of having to moderate, and moderation is ideological by nature.
I enabled hiding my posts because I kept getting harassed and even doxxed.
There's also a third category where the sub looks organic because the moderator deletes and bans anyone who doesn't post exactly what the moderator wants.
Wait isn't that every sub? /s
Plenty of good subs; they are just under the radar. Once something gets more than about 10k users, the quality sinks.
This is actually my hope for AI-gen content as well: that after it gets so 'good' that people genuinely can't distinguish it from reality anymore, they'll retreat (or rather, return triumphantly) to the physical world to gather truthful, fulfilling experiences and dopamine.
My hope as well. If AI doesn't kill us all, the real world, with all its dirt and grime and beauty, will become the only thing that can be trusted
I've thought about this a lot as well and could definitely see it happening.
The issue is that the physical world for someone in the hinterlands of Tibet is not the same as the physical world for someone in SF.
People were finding each other online when they couldn’t in person.
This isn’t to say I disagree with you. Just expressing sorrow over the loss of such a grand moment in our shared history.
Did you ever introspect about who ruined Reddit?
Isn't the ceo a pdfile and compromised and forced to work at reddit (or go to jail)? Reddit is now just a propaganda machine for the intelligence agencies, and their dirty ceo is there to make sure the machine keeps pumping honey... wrecking teenagers' brains in the process too, and gathering kompromat on young people which will bear fruit in the next 20 years. I feel a good chunk of US politicians are being blackmailed because of their past online activities. Same shit on 4chan; how can it possibly be allowed to exist except as a honeypot? All of these dodgy sites are guarded by cloudflare no less, which is the ultimate man-in-the-middle machine used by "them".
I think the real explanation is simpler - it's just not particularly interesting to the authorities. No need for conspiracy theories.
As to compromising material for blackmail, that can be collected in so many different ways, and things like email or messaging or tiktok videos are probably far more interesting; reddit is not particularly useful for that.
It’s a tragedy of the commons, many have done it, but no one user did it.
I'd argue that Reddit leadership, which insulted, hobbled, and wrote off its mods and power users (destroying projects like /r/BotDefense) while doing little to crack down on the proliferation of bot repost content, had a major role in encouraging this. They might even like it better this way -- lots of extra fake engagement boosting traffic stats without messy human drama, which they can then ironically sell back to AI labs as training data.
Let's never forget the summer of 2023, when Reddit forcibly removed mods from many major communities and replaced them with corporate shills. That was a major loss of dedicated people who cared more for their communities than Spez's pocketbook.
The replacement happened somewhere around the time Ellen Pao became interim CEO and the site started sanitizing the controversial subreddits. It wasn't apparent at first, but around 2017 you could notice that some subs, especially ones set around large companies or media franchises, had aggressive rules against controversial and "negative" topics. This hasn't changed much to this day.
---
One of the subs I was visiting had some drama happening in ~2020 around supposed negative community behavior: people were criticizing uploaded creative works which, I personally agree, weren't the best. The mod team decided that was a big no-no and that the place had to be inclusive, welcoming and filled with positivity, so they started banning those who dared to criticize. Fast forward to now: there are only screenshots uploaded by bots, and comments made by bots who also include screenshots along with 2 sentences in every thread.
The ones who got removed were shutting down their pages to protest API changes, right? Pride comes before the fall I guess
You say that, but many specialty subreddits never returned to their pre-protest engagement. Quality has definitely taken a nosedive in these subreddits as those people moved to other platforms like youtube, tiktok, patreon, or just posting on their own sites.
Mods were rightfully upset because they were losing control of their communities when reddit preferred only caring about their upcoming IPO.
I honestly don't think you could remake reddit if you did everything exactly the same starting in 2016. Corporate social media has definitely ruined the individual aspect of social media that is unlikely to return.
No one wants to share in a place with a bunch of spammers.
The API changes were put in place for the purpose of breaking, and did break, almost all external moderation tool software: the software that had turned moderating a forum with hundreds of thousands or millions of users from an impossible Sisyphean task into something actually manageable by a dozen or so mods.
The protest came after that so the timeline is not quite correct.
The protest was about (and timed to coincide with) the API changes.
The internet is rather trending in that direction, isn't it? Youtube got rid of downvotes and apparently upload dates, which seems like an easier way to trick people into ads. And Reddit, like you said
If these platforms had to listen to "their customers" (here comes the inevitable comment about how users aren't customers; yes, I know), they'd all be fired. They'd have to find new jobs. They all act in incredibly insulting ways with a too-big-to-fail attitude.
It was bogus even before that. I heard complaints at some point that API changes broke bots, which actually sounds good.
It did more to break bots that were fueled by righteousness than it did those that were fueled by money.
That's counterproductive, in that it promotes survival of only the worst bots.
I'd want any/all the bots dead if I were still using that, so at least killing some of them is better than not. The "helpful" ones were just annoying.
That sounds pretty great. If we could just flip 1 switch to accomplish things in absolutes, then that'd be awesome.
I'd like to flip the switches that absolutely end poverty globally, absolutely eliminate guns from the US, and absolutely remove bots from Reddit.
If you can show me where these switches are located, I'll cheerfully go flip them and accept full responsibility for the results.
(Over here where things don't work in absolutes: Some of those bots that got killed were countermeasures to help keep the bad, well-funded bots at bay.)
No I'm saying it's better that they at least killed the "good" bots, even if this didn't result in killing all the bots.
"Congratulations, sir! Your directive to eliminate all guns has been a roaring success! We've had 100% compliance amongst good, law-abiding people! All of the remaining guns are owned by outlaws! Violent crime has tripled, exactly as predicted! Everything is going according to plan!"
We’re all trying to find the guy that did this
Reddit itself by virtue of being a venture capital backed startup.
It was a midpoint between Facebook and Geocities, it got people to build communities within its walled garden, but it was always going to betray them for cash.
Directly my fault. Specifically me. No one else is to blame.
Yeah, if carlgreene specifically stopped doing that Reddit would be saved. They are the one savior.
They directly contributed to the problem that they say forced them to leave Reddit.
Do you sincerely believe that that's how grey-area's comment was meant to be read?
I sincerely believe it's a ridiculous comment that deserves to be ridiculed.
> Online communities are definitely dying. I guess I hope that maybe IRL communities have a resurgence in this wake.
Would be super fascinating to watch play out. I grew up before the internet so, historically, I know how to seek out external communities, but by early high school I was deeply entrenched in online life - so I'm very rusty with finding new IRL clubs, cliques, etc. Fortunately my life is full of many friends and I go out frequently, regardless. For those younger people that never had life without the internet, I wish them luck on their search but at the same time I'm very curious to witness their journey.
I also believe it is used by AI companies to train their models: post something semi-correct (even with grammar issues..), wait for humans to correct it in the comments, use upvotes as a confidence indicator, and then retrain models on this free refined data. Meanwhile people think they read a legit post, feel certain emotions and have their behaviour influenced, just so a bot can be trained.
Serious question: If there are so many LLMs on online forums, who is doing it? Is it just 1000s of research students or something more nefarious? Is it AI businesses building up evidence that their output is as highly scored as humans therefore "buy our software"?
We're in the middle of an active cold war where countries are trying to manipulate the citizens of rival countries to destroy their civilization without having to fire a single bullet. Anonymous, over the internet mass manipulation, all for some minimal electricity cost.
That's definitely the most insidious use, but I think the larger portion is advertisers and karma farmers (who later sell to advertisers).
https://www.npr.org/2024/09/05/nx-s1-5100829/russia-election...
If Russia is willing to spend cash like that, then of course they're willing to run massive bot farms to pollute any forums they can. I'd be shocked if the US was not doing the same in any way it can. You have to ask why Trump killed Radio Free America as well when it was clearly not a big expense.
> Trump killed Radio Free America as well
Not sure how this relates to the subject in a direct way. Radio Free America was an outlet explicitly created and utilized to spread US propaganda, but kinda sorta barely disguised as a journalistic enterprise (not really; if you were listening to RFA you knew what you were listening to). Shutting it down seems to be a counterpoint to all of the covert participation of US intelligence on the web, which has done nothing but escalate.
It was a head scratching decision that few believe was for the stated reason. Other countries are ramping up their propaganda arms while Trump shut down part of the US'. The reasoning was cost, but that doesn't make a lot of sense in the grand scheme of things. Foil hat types would easily believe it was the puppet doing the bidding of the one that pulls the strings. RFA has been a thorn in despots' side for a long time.
> You have to ask why Trump killed Radio Free America as well when it was clearly not an big expense.
The obvious answer to that question is "because he's a Russian asset". But that doesn't mean the obvious answer is also the correct one.
IMHO, we're seeing another and much more concerning trend at play here... the utter and complete rejection of anything but violence by the far-right. Diplomacy? Development aid? Cultural exchange? All sorts of soft power have been under attack for decades now, and not just by the far-right but (especially when it comes to development aid) also by mainstream centrist parties across the Western world. And it's always pseudo-masculine / "strongman" BS backing the sentiment - Bernd Höcke, German AfD mastermind, comes to my mind with "we have to rediscover our masculinity" [1], so do Hungary's Viktor Orban and his denouncement of LGBT or Trump's entire Œuvre.
I'm not saying that violence or at least being prepared, ready and willing to use it is automatically bad. Far from it. But all the various forms of "soft power"? They have a lot of value, value that the far-right is all too willing to just burn for entertainment.
[1] https://blogs.taz.de/zeitlupe/2019/03/24/die-auferstehung-de...
wouldnt it be more productive to talk about the systemic framework leading to this inflamed state of affairs, and ways that we can tackle the issue on the ground level? perhaps inhabitants of the west would prefer pseudo masculinity to another few decades of migrant influx without corresponding upgrades to social infrastructure. this sort of internal struggle would provide a ripe substrate for foreign agents to perform subterfuge, especially in a screen based world where the narrative can be remotely influenced. conclusively, the population has been convinced that voting far right is the correct decision in their favor, but the question remains, who is it really in favor of? call me a centrist all you like but members of my family were executed under communist regimes so i find it pointless focusing on one side of yin/yang here (in other words, extremists are violent regardless as to their political persuasions).
> in other words, extremists are violent regardless as to their political persuasions
No matter where you look, the far-right kills and maims substantially more people than the far-left does.
AI is particularly bad at this, and regimes that employ these tactics are generally not short of labour to have humans do it.
If AI is being used in these areas, it is less an attempt to manipulate than to just create noise and engender distrust in what people hear.
Established accounts are worth money, often for scamming/propaganda.
Not too dissimilar to people bot-leveling in MMOs to then sell the accounts.
It's very common for folks to search Reddit to find reviews of products etc. these days. If you can have a bot account post a fake review of how awesome your product is, and have that upvoted, it can pay huge dividends.
I've noticed 4 categories of inauthentic users. Ranked by my perceived prevalence:
Account farmers: these can be people in 3rd world countries, automated or not. They can be using hundreds of mobile phones to create accounts and do daily activity to make the accounts look legitimate. While they're building an activity history they are also being paid to like/follow/interact with content.
Advertisers: these are bought accounts that are used to post inauthentic reviews of their service, inject it into discussion, and do PR.
Sloppers: people who build AI pipelines and then just pump the most dogshit content directly into a platform trying to make any amount of money.
Nation State propaganda arms: These accounts build a narrative character and then join discussion pushing a certain narrative, boost real content creators who share their message and bog down discussion.
People like the above poster who are "just running an experiment" or "trying something for fun" who then wonder why online communities are full of AI now.
In the case of Reddit and HN a lot of it is done by businesses either blatantly advertising themselves or building up the karma they need to effectively do so. I recall reading obviously AI-generated replies to news articles, written by accounts associated with businesses related to the events in the news. This isn't new in the LLM era. Hobby subreddits are well known to be full of businesses selling hobby gear and items doing self promotion. It's just that now it is a lot more obvious because of the AI text smell.
That, and probably political astroturfing. Before every election my local subreddit sees a surge of crime stories. Go figure.
I think some of it is account farming, but some is just people buying wholesale into the idea that if you're not using AI for everything, you're gonna be left behind. On the Kagi Small Web list, there's plenty of hobby blogs that used to be normal pre-2023 and are now obviously LLM-written and AI-illustrated. There's also plenty of people on LinkedIn who post AI slop because they think it helps them build a "professional brand". I even have some distant friends who are using AI for responding to friend & family posts on Facebook just because it makes you seem... smart? engaged? I don't know.
It's actively encouraged by some of the platforms too. In Gmail and Google Docs, you have incessant AI prompts along the lines of "help me write this". I think LinkedIn does the same.
HN has historically been gamed for visibility. The stakes for doing this can be quite high if you can pull it off.
Lots of marketing. Not even AI business, just regular consumer crap. They realized that blatantly spamming their product looks bad, so they orchestrate multiple accounts to look more organic. And people actually engage with it.
My impression is that they're sometimes unemployed people or students hoping to create a popular open source project, and use it to find a job.
They aren't going to care about any of the advice in the article about not posting slop -- finding a job is (of course?) more important to them.
Can't really say they're doing anything wrong; maybe I would have done the same? It's just that at large scale, it doesn't work.
There are many reasons for influence campaigns; that isn't new. Influencing the public is incredibly valuable; that's why so many invest so much in it. LLMs automate it like never before.
Plain advertising, governments' propaganda, political propaganda for one group or another to shift public opinion (it's done on TV networks, why would they not do online campaigns?), astroturfing by corporations promoting acceptance or fighting negative news (e.g. rideshare, AI, whatever certain wealthy personalities are doing) ... the list goes on.
HN has always been relatively influential in the tech industry and therefore worth influencing, and now the cost is very cheap - you don't even need to hire many people, so less-resourced operators will find it worthwhile (and they will also attack lower-value forums).
If you farm a fleet of good accounts, you control the discourse. On HN, you could boost whatever you're trying to push, and downvote or flagkill whoever objects.
There are obvious benefits to controlling public discourse, right? Even if it's just to support some project you're working on.
There are certain topics that seem to get instantly flag-killed unusually often. IPv6 is one.
I've seen a lot of ipv6 wars here without flagkilling happening
I've been more disturbed by comments that were flagkilled just for being wrongthink, not because they were rude or not well argued. I've also seen a lot less of those flagkills over the last 6 months, which makes me feel like there were some fake accounts that got caught and culled.
In the recent thread about life in a class war, there were a lot of comments in different places saying that if we don't fix this inequality problem, g-tines might come back, and every single one of them was flagkilled, no matter whether it was framed as "we have to get out the g-tines" or "we have to fix this, otherwise psychopaths will get out the g-tines" or "thank god we've become civilized enough that we don't get the g-tines out".
Yes, when I interact on reddit, I normally do so solely with the intention 'this is for an LLM'. I feel like a majority of the posts/comments I reply to are AI, and a majority of the responses to my posts are AI, but I have to keep telling myself to keep posting so it becomes training data.
(I'm normally posting in the context of my startup - although I try to keep the self promotion to a minimum and always contribute to the "conversation," if LLMs replying to one another can be called such).
For what it's worth, I created a community for paying users of Phrasing that has been going really well. I think free online communities may be going away, but there may be a future in exclusive/paid communities.
Public* online communities are dying. Discord is thriving
This. Everything important has moved to discord. Which is sad because of how undiscoverable and unsearchable it is.
I'm more sad about how the UI of it all is just clunky. Even though it resembles ye olde IRC clients like mIRC, it's nowhere near as readable for some reason.
Settings->Accessibility
Set text size as preferred, underline links (or not), turn off display name styles (or not), ui density compact or default, chat message display to compact, space between message groups 0px, turn off all the animated emojis and gif animation stuff if you want.
In client use, there's a button to hide member list (or not).
You can definitely make discord look like a slightly less dense IRC client (mainly because of the channel picker) if you want. And if you want to go really crazy use it in a browser and userscript customize it or use betterdiscord.
I think a lot of the features like embeds and emoji reactions add a lot of value compared to IRC (which I think is also why the IRC world is trying to add those features).
are those attributes now assets?
Pretty much. It's the survivability onion. You can't be destroyed if you can't be discovered.
Sort of, except if no one can ever discover a community it is always dying by default
Personally I'd love to find a decent online community these days, my social circle has shrunk considerably, but idk. It seems difficult to start fresh with new people nowadays
we were made to socialize in person. you can mimic it online and nourish existing connections over it but nothing helps build friendship more than being in the same place at the same time a few different times and talking to each other
Thats true but online content has always had its place. 25 years ago finding forums and irc was a god send, my lonely hobbies and interests became things i could regularly talk about. Its just modern social media abused the system, the algorithm, and us.
Which is all to say i agree about needing mostly irl, but there is also something of online community that irl could never replicate (for most people).
i know what you mean, and i think online communities can still be successful. but i think in the early internet you already had some common ground with anyone you met online because spending time on the internet was kind of a irl choice to make. It was like a magic room anyone could enter and find others. Now its so ubiquitous that simply being online or on a forum is not the same kind of specialness to it
I got banned the other day from the Stellaris Discord server because someone accused me of hacking Roblox accounts. I’ve never played Roblox in my life. So that’s nice.
This shit will come to Discord too.
on the public servers yeah. but the ones im in with real people who know each other will be fine.
I think the problem is not keeping agents out of private real-people spaces, but for people who don't have any pre-existing or 'real world' connections to these communities to find a way to prove they are a real person over the internet alone and get an invite.
On a related note, I think this is going to be the biggest challenge for most folks when it comes to resisting using government ID online. It will be the apple offered to normal circles as easy proof you're not a bot.
It's already there.
If all you value is sub-IRC level irreverent discussion, maybe.
Discord is far better for discussion than IRC. You can be much more expressive on discord, instantly jump into a call and screenshare, easily link people to other rooms, tag, import bots etc. IRC kinda sucks compared to modern chat and they refuse to implement features that are considered basic.
> You can be much more expressive on discord, instantly jump into a call and screenshare, easily link people to other rooms, tag, import bots etc.
Some would see those as negatives.
> IRC kinda sucks compared to modern chat and they refuse to implement features that are considered basic.
Just because a protocol doesn't change purposes as time goes on that doesn't mean it "sucks". Who is this "they" you're talking about? Do you think IRC is a centralized service like Discord?
Discord is terrible. Full of bots, creeps and ai slopped to the gills.
Some communities are better than others but the sheer volume of stinky trash is immense despite discord and the poor volunteer moderators efforts to prevent it. Most mods are neutral on it too.
There are chat communities that are still somewhat safe with zero user verification. But I will not mention them.
discord is a tool for hosting private chat servers. it's pretty neutral. the UI is not great for building a shared knowledge base, although people do that anyway
but yes the publicly accessible servers are going to face similar problems. the socially competent people tend not to run those servers, and have smaller private servers with people they know as they have no drive to try to create a space for strangers to gather.
I really don't understand the folks fleeing to Discord. A mailing list does 99% of the same thing for most of the communities.
Sure, if you want to chat while gaming, that's the whole point of Discord. Ganbatte.
But, for everything else, Discord is such a horrible misfit that I don't understand why it's the default.
i predominantly use it for real time chatting, its a big group text chat and a place to hop in a voice channel and shoot the shit while doing whatever we want on the computer a la ventrilo/mumble/teamspeak
but yes i also game and it gets a lot of use for that as well
i agree though that for collecting and organizing information longer term like forums do, it is not ideal
You are booming out. I can't believe you're suggesting a mailing list.
You gave an ad hominem attack with no substantive response. I can do that too: "Your account is less than a year old and is AI slop farming."
Mailing lists are old, boring, boomer tech. Ayup. They are. And they work.
However, Zoomer, if you must have Teh Sh1ny(tm), then explain to me why a Discourse isn't a better choice?
Discord is the anti-Pangloss; it is the "Worst of All Possible Worlds".
> I don't understand why it's the default.
Because it equally well supports real-time communication.
And it looks shiny.
And some people use it to e.g. watch a video together, or other social purposes.
Reddit has had a bot problem for well over a decade now but the sheer volume of it has exploded. It is also much more difficult to tell nowadays as the "quality" if you will is now at the good enough stage.
Alas, Reddit is basically dead to me because of this.
There's this old meme where someone asks what will happen when AI bots post helpful, curious and thoughtful messages!? That's mission accomplished :D They can't be better than the average human though, because of training data, so I don't worry about AI comments getting upvoted by real humans. I am, however, worried about fake upvotes.
> They can't be better than the average human though, because of training data
Is this based on the belief that an LLM can only represent an "average" human being?
If posting good messages is automated then the AI will post a good question and another AI will answer it and the humans will look and see nothing extra to contribute.
It is not a meme, it's an xkcd: https://xkcd.com/810/
Reddit sold its data to AI companies for training[1]. They could have refused, but companies like OpenAI likely would have harvested that data anyway. As such, it should not be surprising that AI models are pretty good at generating reddit posts. They were specifically trained to do that.
This is sad, because Reddit remained one of the final bastions of human content on the internet. For several years, appending "site:reddit.com" to a google search was a valid way to get something usable out of a google search. Doing that is still an improvement over raw-dogging Google's ranking algorithms with an unfettered search, but AI slop increasingly is the result.
This is one of my great disappointments in the current rise of AI. LLMs can give good search results when dealing with a topic they've been specifically trained on by human experts, but they're not good at separating human-produced signal from AI slop noise. We've done nothing to prevent a sea of AI slop from being dumped on top of all the human signal that's out there. When AI companies enter their enshittification phase and stop investing in expert human trainers, the search results LLMs produce are going to fall off a cliff. Search is a bigger problem than ever.
_____
[1] https://9to5mac.com/2024/02/19/reddit-user-content-being-sol...
It doesn't help that there's that feature that hides a user's posts and comments.
> I do know for a fact that many "users" here are LLMs.
HN autokills comments it detects as LLM. I think maybe you're not giving HN enough credit. :)
HN kills lots of posts. I try to be careful about my online footprint (since HN posts are forever), and try to switch to new accounts every so often. It's no use anymore, HN just kills any post I make from a new account, even when I spend 20 minutes researching a response and trying to get useful information.
It doesn't even show you the post is killed, it looks to you like it posted fine, and you have to logout to see it's actually dead. It's an approach that's extremely hostile to the user.
It's specifically against the guidelines to keep registering new accounts, and this is a good reason why. We have to have ways of determining credibility and authenticity, now more than ever, and a track record of good posting is one of the best ways to do that. We are drowning with spam and low-quality posts/projects posted from brand new accounts. If it's a well-researched, high-quality post, of course we want to give it exposure. We just have to be realistic about what we're up against.
HN front page is about 25% LLM written blog posts at any given time.
There’s no rule against submitting LLM-written—ahem, “cleaned up my notes”—articles, just comments.
Badly-written articles are still unwelcome on HN, whether AI-enhanced or not, and obvious LLM smell definitely lowers the quality of an article. But it's true, we don't ban every article with any evidence of AI assistance.
I have read enough “you are replying to an LLM” comments that I am pretty sure this is still a hit or miss process.
Why do you think those comments are accurate? Maybe those comments are by LLMs? If you believe crowd wisdom on its face, you will have big problems with LLMs.
It needs help. I often pipe my screed through an LLM and post it. I do request that it use a 10th grade reading level, and no emdashes.
For giggles, here's how it would look for this comment. Rather meta, but in this case it removed the "It needs help" so here we are.
I often run my screed through an LLM before posting. I ask it to keep the writing at about a 10th grade reading level and to avoid em dashes.
The question is how reliable that detection is.
>HN autokills comments it detects as LLM
No it doesn't. Unless you have proof.... ???
There was a post today about Google introducing an unbreakable captcha that requires an unrooted phone to pass via QR code.
We may end up with things like that…
I find it amusing that this is the top comment. Reddit is so awful you finally wrote it off, but not before you used it to try to “karma farm and do some covert advertising”. It’s on-brand for HN hypocritical bullshit. But, since we are slamming on Reddit anyways without realizing how fucked HN is by the same petard, have an upboat fellow traveler.
> since we are slamming on Reddit anyways without realizing how fucked HN is by the same petard
Same as it ever was.
> As I went through the posts it wrote I realized that as a reader I would have NO idea that these were just written by a computer.
I don't suppose you could show some examples? How convincing is the state of the art now?
> Online communities are definitely dying. I guess I hope that maybe IRL communities have a resurgence in this wake.
You can have both IRL and online-free-of-bots. I already wrote about it, but one of the very best forums I'm a member of, where real people are posting, requires you to be vetted in, web-of-trust (but IRL) style. It's a forum about cars from one fancy brand, and you can only ever join the forum by having a member (I think it may be two, don't remember) who's already in confirm that he saw you driving a car of that brand. It's not 100% foolproof (someone could be renting the car for two hours and show up at a cars&coffee, or take a friend's car, etc.) but this place really feels like a forum of yore.
And people do eventually travel, so it's bound to happen that an owner shall go to another country, meet someone there, vet him in etc.
Now, sure, it may not be the "1 million users acquired in three days thanks to my vibe-coded app" scenario but that is the point.
You can imagine other domains where IRL communities have local groups, but where forums regroup different IRL communities all interested by the same hobby/topic/domain. And when people travel and meet, the vetted members do grow and connect.
Oh and on the forums a lot of the posts are pictures, where "Julian xxx" met "Black yyy Cyril" and you see both cars (and from more than two people): suddenly it becomes much harder to fake a persona... You now need to fake both Julian xxx and Black yyy Cyril and fake the pics. And explain why your car has never been posted by any carspotter on autogespot etc.
You can imagine the same for, say, model trains: "Met Jean at the zzz meetup, where he brought his wonderful 4-8-8-4 'big boy' locomotive, I confirm he's into the hobby, vet him in".
Naysayers and depressive people are going to say it cannot work, but I'm literally on one such forum and it just works.
P.S.: if I'm not mistaken, in the past in some nobility circles you had to be vetted by up to sixteen (!) other people from the nobility who'd confirm they knew you, your parents, etc. before you'd even meet the king/emperor/monarch, to make sure that someone from far away couldn't come to, say, Versailles or Schönbrunn pretending to be a baroness or count or whatever. Quite the extensive check if you ask me.
Reddit was already on its way out well before this LLM craze; hopefully the recent tech-related changes will only accelerate that process.
Unless their account is <1 year old, I wouldn't assume they are a bot.
Reddit astroturfing firms and bot farms learned to buy/use “seasoned” accounts over a decade ago. I’d venture there have been countless bots just in a holding pattern harmlessly building up reputation and a human-like history of posts across different subs etc just to eventually be either activated or sold to someone else to “burn”
It used to be super common that when you spotted a bot post and clicked through to the user's history, you'd see very average, human-looking activity from years ago, followed by a long gap of inactivity, and then a flurry of obvious bot comments.
It's very obvious that these accounts were abandoned and then either bought from their original owners, or more likely bought from someone who compromised them, because of their history and karma.
And I would bet money that Reddit is well aware of this phenomenon, because not long after it became so common as to be impossible to ignore, they papered over it by allowing users to hide their history from public view. (AFAIK subreddit moderators can still see it, but typical users now have much less ability to see whether they're interacting with actual humans.)
That and locking down the API meant no more sites offering readily available visualizations of this type of thing
> allowing users to hide their history from public view
Yeah it's become my default assumption that any user who does this is either a bot or a bad-faith troll.
I recently spotted one unmistakable example of this[0]. It’s been a trick for many years now that duplicating a human post and its comments is a good way to appear human but this was quite the example.
0: https://wiki.roshangeorge.dev/w/Blog/2026-01-06/Is_The_Inter...
> duplicating a human post and its comments is a good way to appear human
Also just repeating something from the linked article, but often with different wording and in a tone that makes it seem like it was something that the article missed.
So what is the comment frequency of these bots? There must be some signal in the activity even if the comments themselves pass the turing test.
Even if there was, I doubt Reddit cares enough to go after them when it’s boosting their valuation
If you find one account you can find a few dozen spam accounts by building a graph of what posts they reply to
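A rough sketch of that graph idea in Python (networkx assumed installed; `replies`, a list of (account, post_id) pairs, is a hypothetical input you'd have to scrape yourself):

    import networkx as nx

    def spam_ring(replies, seed_account):
        # Build a bipartite graph of accounts and the posts they reply to.
        # Accounts that service the same posts land in the same connected
        # component as a known seed spammer.
        g = nx.Graph()
        for account, post_id in replies:
            g.add_edge(account, ("post", post_id))
        component = nx.node_connected_component(g, seed_account)
        return {n for n in component if not (isinstance(n, tuple) and n[0] == "post")}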
Most of them have private profiles these days
Does it matter? With enough you can just have them upvote each other.
It's so easy to purchase online accounts nowadays that neither karma nor account age means anything anymore.
IRL communities have to have some guides, because a lot of people forgot how to gather. It can be seen among kids: try giving them a soccer ball and see what they do with it :)
Yesterday I was watching people on the street and on the tram. Every other person was staring at their phone and scrolling through something.
That might scare me more than the fact that someone is chatting with an LLM bot online.
(I am pro-ai, use it every day for coding that I couldn’t achieve pre-2022 as I am lame coder.)
I don't really see the problem of using your phone while commuting. Doesn't make you an asocial weirdo.
As long as you're not sharing the things you're watching with a loudspeaker. And that's really not a given among commuters.
How do we know now that this comment wasn't written by LLM?
You don't and that's the problem :)
> I am not quite there with Hacker News but I do know for a fact that many "users" here are LLMs
People using LLMs without being fed their own post history are still pretty easy to detect. There's just something very recognizable about the cadence and tone of LLMs.
What really stuns me is that if you call someone out for it, 9/10 times you get absolutely buried in downvotes. Even here on HN. It's like people are angry that you're lifting the curtain on the slop, that the writing they enjoyed is fake.
I feel you. Especially in the larger subreddits. I participate in, and mod, a few small ones, and the community there is pretty strong; folks shut down AI slop pretty quickly.
I'm not saying being a mod means it's bulletproof, but I do notice smaller communities tend to self-police better and know what's real.
That said, your experiment scares me as well.
I will say that I believe you probably have absolutely no idea because it's not "slop". It looks like every other reddit comment you see.
My experiment was focused on niche subreddits as well due to the nature of the product I was trying to market.
Communities on FB, WhatsApp, Telegram etc. are actually flourishing. As it appears, real-time gated communities are doing fine.
It’s an unpopular opinion but I am looking forward to ID and age verified social media. If done right we can have real people around again.
BTW, ironically, harsher communities like 4Chan don't seem to suffer from the dead internet. I guess it's either because the advertising value is too low to justify AI use there, or maybe AI API providers refuse to work with such content, thus reducing opportunities to infest it with bots.
I wonder how much of the discussion of the results of agentic coding is just LLM slop.
It's easy to botspam Reddit because even the real users always acted like bots. The big subreddits were the worst, but contrary to how the users keep saying "it's good if you find the right subs," no it's not. Wrote that place off like 10 years ago.
Reddit users on many subreddits are definitely wising up to the same kinds of engagement happening there.
More of a philosophical question but if you have no idea whether it's a human or robot, does it really matter? Personally I dislike AI slop only when I can tell it is...
Yes, for a number of reasons:
- I am trying to learn about the topic at hand and trust a human's comment more than an LLM's guess
- I am trying to connect with other humans to fulfill my social needs
- I am maybe spending time to help another human out with a response because I want to help someone else
- I am interested in the perspective of other humans
Those are just a few reasons. For each of those if it's actually an AI I feel I'm losing out on something.
This kind of thing made me imagine the creation of "digital towns" the other day.
Imagine an online community where you can only join on the recommendation of two other members, who you must have actually met in person, to participate. Meanwhile, you leave at least some of the activity publicly available to the general public so that interested parties can meet up IRL and join.
This could probably be implemented easily on top of existing online platforms like Discord, Reddit, etc. since it's really just a community building rule, not a community itself.
> I do know for a fact that many "users" here are LLMs
What factual basis do you have for that?
It might come down to shareholder/IPO stuff, but you can tell Reddit doesn't actually care to put in the effort to crack down on bots (however you'd do that): they already don't give communities proper moderation tools or third-party tools, and the site does censor.
Whatever allegiances (with people, or allegiances to ideas) Steve Huffman has, or people like him - it's not enough. It's a site seemingly killed by greed
(Yes, I know moderating this stuff at scale is hard)
- A human. Beep boop.
Do you have an example of comments people engaged with?
On the other hand, I've been accused of being AI/a bot, and if I say things the mod doesn't like, or that aren't their favorite thing to hear, I'm "flamebaiting" or engaging in personal attacks when pointing out specific things.
Frankly, online communities have been dying for many years now, ever since the censorship, anti-free-speech, tone-policing mods and mobs started dominating online and America really did not have the self-respect or confidence anymore to enforce the Constitution online.
> America really did not have the self-respect or confidence anymore to enforce the Constitution online.
“Mods are Unconstitutional” lmao
> where I had an agent karma
Was this a browser using agent? What did you use?
It used the browser agent to grab user cookies after signing in, then made API calls iirc.
Using just a browser is way too token-intensive and slow. It would look for 401 errors, then run the browser automation to log in with the credentials and grab the token.
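A minimal sketch of that 401-retry flow (the browser_login helper is hypothetical; in practice it would be the browser-automation routine described above):

    import requests

    def api_get(url, token, browser_login):
        # Hit the API directly (cheap and fast); only fall back to the
        # slow, token-hungry browser automation when the token expires.
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
        if resp.status_code == 401:
            token = browser_login()  # re-login in a real browser, harvest a fresh token
            resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
        return resp, token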
I'm surprised these platforms don't have advanced heuristics to detect API calls and inauthentic traffic.
Did you clone the Reddit API from browser traffic and then turn it into a 100% API driven thing?
I'd imagine they'd be sniffing browser agents, plugins, cookies, etc. to fingerprint. Using JavaScript scroll position, browsing rate and patterns, etc.
Maybe their protections just aren't that sophisticated.
Reddit is known to fingerprint TLS and quickly shadowban accounts that don't have the fingerprints of browsers.
TLS fingerprinting and Cloudflare are easy to bypass. There are lots of libraries that do so.
The application-layer stuff is harder. Each application can develop its own heuristics, and that's difficult to automate in a cross-cutting fashion.
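For instance, curl_cffi is one commonly cited library for this; assuming a recent version, it can impersonate a real browser's TLS fingerprint (this only addresses the TLS layer, not the application-layer heuristics above):

    from curl_cffi import requests

    # Requests sent this way present Chrome's TLS/JA3 fingerprint
    # rather than a default Python client's.
    resp = requests.get("https://example.com", impersonate="chrome")
    print(resp.status_code)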
Reddit doesn't do anything about that? That seems stupid.
So you ran an "experiment" where you deliberately made someone else's community worse to see what would happen? Cool project.
> I do know for a fact that many "users" here are LLMs.
Name and shame.
If you look at the bottom of most threads here you’ll see a bunch of green username dead LLM comments. Those are just the obvious ones though.
I've been on the Internet for decades at this point, and one thing I've noticed is that communities that, for example, ban political topics actually mean "positions I don't like" when they say "political". This is somewhat related to the Overton window, but really a bunch of (mostly conservative) ideas get normalized so they aren't deemed "political".
I see the same thing with "AI Slop". Yes, there is AI Slop but (IME) it's pretty easy to spot. But what's more annoying is how often people are willing to throw that accusation whenever someone takes a position they don't like, much like the "political" label. It's lazy and honestly just as bad as the slop itself because it unintentionally launders the slop in a "boy who cried wolf" kind of way.
I also have a theory that some AI slop isn't inherently successful. It's just heavily botted by people who are interested in promoting certain positions. I bet you could make a pro-administration LLM bot and another one promoting a communist revolution and no amount of model tuning would make the second as popular as the first because the first would hit third-party botting as well as platform content biases (eg Twitter).
I've personally been accused of being a bot. This is particularly true in recent times, as I've tried to share facts and fact-based analysis of, say, what's going on with crude oil markets, the military operation in the Gulf, and the politics and economics around it. I even saw one hilarious comment saying (paraphrased) "the bots are getting clever and posting about unrelated topics". This was funny because it never occurred to this person that no, it was just a real person posting something they disagreed with.
> I've been on the Internet for decades at this point and one thing I've noticed is that communities that, for example, ban political topics actually mean "positions I don't like" as "political".
This happens on HN all the time. For a lot of downvoters and flaggers, there are two kinds of opinions: "Things I agree with" and "Too political for HN."
> I am not quite there with Hacker News but I do know for a fact that many "users" here are LLMs.
This just makes me wonder...so what?
Some of the oldest posters here with the most karma continue to post absolute garbage takes on topics ranging from US healthcare to history of USSR, that are trivially disproven by learning the very basics from a Wiki article (e.g. not a high bar).
To be fair, this opinion slop is also present for new users and LLM bots, but is one kind really worse than the other, if both of them contribute to killing the community?
We already know what kills communities. It's the eternal Septembers. Infighting within leadership also doesn't help, but time and time again it's the influx of too many new users that nosedive and drown out quality contributions.
Would you enjoy the experience of telling your LLM “make a HN-style comment thread on $subject with 200 comments, no trolls please”, and then actually spend time reading them?
No? I’m imagining not at least. Because there would be no point to it.
If you would enjoy it, then I’m surprised you’re here and not just simulating the experience with your LLM by yourself.
> Would you enjoy the experience of telling your LLM “make a HN-style comment thread on $subject with 200 comments, no trolls please”
The reason I'm not simulating the experience with an LLM is because:
1. It costs more time to do so, because I have to prompt it to create a single comment. Multiply that by the typical number of comments in an HN thread.
2. I suppose in a way you need bad takes to form your own view of a topic or an issue. LLMs would also be unable to provide truly unique experiences, such as some of the veterans who sometimes post here who were part of the living computing history as we know it.
> I’m surprised you’re here and not just simulating the experience with your LLM by yourself.
That's something you imagined that I claimed I want. If you read my comment again, you'll see there was no such thing.
An irascible human being with "wrong" opinions is still better than a polite and factually correct bot because there's no fucking point in having a conversation with a bot. We're here to have conversations with people, not to prove fact beyond a reasonable doubt.
Do you really not care one way or the other? Would you really rather just be talking to LLMs here? Or would you just script yourself as well and call it a day? Then what?
> We're here to have conversations with people, not to prove fact beyond a reasonable doubt.
Maybe you are. I like getting to a reasonably correct model of a topic or issue. Bad human takes can still be useful here. I just get inevitably tired of the people crying about potential LLM comments all the time.
> Would you really rather just be talking to LLMs here?
Obviously we're not there yet, regardless of what I want. But there is a great number of HN threads posted here that touch on topics that have been discussed so many countless times, that an average LLM summary would do better than most comments.
Unless you've discovered the secret sauce, LLM comments are very obvious. Even Altman revealed that they focused on coding at the expense of writing.
The obvious ones are the ones you notice
LLMs are not good at writing. If they were we would have entire libraries of new, amazing literature.
Exactly, they aren't good at creating new material. But many discussions in comment section are simply regurgitations of existing material, which they are good at rearranging. New novel discussions in places like this are actually a very rare thing, as many comment sections are simply people who already know informing those who don't. I'm doing that right now, funnily enough.
No, they aren't even good at rearranging existing material. They produce bad writing that only superficially looks good in a lowest-common-denominator sense, and falls apart under any close examination. Everything is wrong with it, from the sentence structure to the rhetorical forms to the substance. AI 'writing' is a loose collection of cheap tricks that score well on A/B.
Neither are most humans
Agreed, some humans are good writers, and no LLMs are good writers.
This is rather moving the goalposts from "plausibly human comment" to "meaningful literature", I think
No. I'm drawing it out to its logical conclusion.
It’s poor logic, a non sequitur. An absurd reduction. By your argument anyone who hasn’t written a great literary work is a poor writer, and would be bad at writing online comments.
LLMs aren’t lacking in the sort of writing skills that make for superficially good content. They know grammar, they know rhetoric, and they know their audience. You can’t tell them from a human on their writing skills. Where they tend to fall down is their logic and reasoning skills, and unfortunately it seems you can’t use that to distinguish them from the average online opinionator either.
No, that is a mischaracterization of what I wrote. They are great writers if you enjoy formulaic writing.
With the current batch of SOTA models, it is not hard to prompt a model to pass the sniff test on social media forums. If you don't believe me, try it.
All you really need to do is give it some guidelines of a style to follow and styles to avoid. There's also a bunch of skills people have already written to accomplish this.
I have worked with LLMs for a couple years at a very non-technical level and it was not that difficult to give it proper prompting and reference material.
You are reading LLM content just about everywhere and have no idea. Obviously there are easy-to-spot things, but the stuff you don't spot is the stuff you don't spot.
People who like to fancy themselves good LLM content detectors just end up accusing everything they don't like of being LLM content.
The only thing worse than a slop comment is the people who bitch about it incessantly. I'm convinced it's become a new expression of a mental illness.
The main thing I suspect of being LLM written is the sort of LinkedIn style: very short sentences, overly focused on sort of… making an impact on the user. But that’s also how a certain type of bad human writer writes. So in the end, I’m not sure I know if anything in particular was written by an LLM.
I guess… “that’s not just an AI red flag, it’s generally shit prose” would be how ChatGPT would describe most things nowadays.
It’s the distilled mediocrity of the statements. Never venturing beyond a 10% margin of what you would get if you sampled the opinions of 1,000 people who underwent jury selection by west coast liberals.
A mere opinion is not mental illness.
Was that written by an LLM? It isn't that it's a mere opinion, it's that when every word out there has to be scrutinized for the possibility that an AI output it instead of a human intelligence that it gets pathological. Am I an LLM with the right prompts set up to respond this way? I mean, I know I'm not, but everyone else out there is just going to have to trust me that I'm not.
I wasn't suggesting you have a mental illness for having an opinion.
Rather, I'm commenting that just as bad as generated content, if not worse, is every thread where the top comment is an accusation and the ensuing witch hunt.
So, no, having an opinion is not a mental illness. Feeling compelled to call it out and discuss it on everything one reads may just be.
The threads that have the top comment saying "this is AI slop" are nearly always about an article that is obvious AI slop.
Threads that aren't - like this one - don't.
If you need to tell yourself that in order to cope, that's fine with me.
Which part do you disagree with?
I’m thinking that I may actually prefer undetectable AI slop to human comments like that. I do agree with your upthread comments.
Dead Internet theory?
The company I work for has a deep-rooted community side, and despite what big tech does, I am 100% confident the only aspects we have in community features are for the user's benefit. No gray area. Just that.
Since the AI sloppification we've lost a considerable amount of traffic to bots. But worse than that, we've lost users who tended to contribute back to others.
We can leverage multiple ways of exposing community data to members, so it's not that we're at a loss because of that; it's more that we have 30 years or so of good feedback on how the community around the platform was good for people, and now everything is at risk...
Don't get me wrong, my work is work... There are premium features and so on, but the amount of value one can get for free is what the platform is known for. And we know many people use it for free for years, and when they need to or can, they subscribe and mostly stay for years and years.
The fact people are losing those connections is depressing to me
I left multiple online communities because the slop and the slop users were unbearable.
I use AI, okay. I think it's useful. But people who dove hard into this stuff treat all text on their screen like it's a chatbot and not a person.
"Rewrite this code using the new API" "excuse me?" "Can you do it I need it right now chatgpt won't compile!" "Show me your code please" provides the biggest pile of dookie ever "hey can I ask how you came to decide on any of this? Maybe we should rewrite what you have here because x y z is concerning" "the ai did it I am learning. There is no need to rewrite anything just write this section for me" " no thanks" someone else does . user leaves
We decided that we will use ai to automate stuff and to connect people but not for content. We are paying the price of it. That search engine that shall not be named deeply punished us for it.
I've seen people like this 15+ years ago on #learnprogramming on Freenode, I'm guessing LLMs just tend to validate that behavior instead.
It adds a depth of nuisance, that's for sure. I've seen users talk about how they can't wait until they don't need to ask for help anymore and can just use LLMs. Meanwhile I'm directly messaging the person who made a package, asking why they designed it the way they did, beyond grateful to learn 8 new things in 6 sentences.
Mostly on IRC people either learned or got told to leave. There was significant pushback on people just looking for someone else to do their homework.
AIs have changed the feedback loop here such that these approaches are rewarded and even lauded.
Sadly the imperative is, as often, a call for everyone to be a good guy and make less noise. Unfortunately, it doesn't work, neither at the personal level nor at the global one.
One may be quiet, but what if your friend/acquaintance/fellow got possessed by some AI slot machine and is sharing his "products" enthusiastically? I had such a case, and right from the very beginning I was dismissive and rude, and it doesn't work -- he keeps sharing various artifacts.
On a global level, yes, communities die out. I think global communication has reached the point where it's more a liability than a benefit. In the late '90s and early '00s, maybe until the early '10s, getting more connected could lead you to nice clients, getting hired, etc. Nowadays, even before ChatGPT 3 in '22, every such area became overcrowded, underbid, etc., and LLMs, surprisingly, added not much new -- they just augmented this trend.
> But respect the community, and only share what is truly relevant. Save the crayon pictures for your kitchen fridge.
That highlights the problem: it's not AI, it's the oversharing that's the issue. Many people have moved from "sharing what's unusual/interesting/exciting to me" to "what can I share today".
The constant stream of mediocrity drove me away from Facebook (years ago) and then Instagram.
You're absolutely right!
I've found the smoking gun ⸻ it's not your work, it's your prompt.
I've seen en dashes. I've seen em dashes. What kind of dash is that?!
It's been a personal favourite of mine to sprinkle into replies to clearly LLM generated textual diarrhea, it scores a laugh like, 1/10 times haha.
A three-em dash. TIL.
Mmm dash.
That's the new Copilot™ Dash from Microsoft
I believe it’s called the chungus.
I'm glad we could get that cleared up.
When LLMs were new on the scene, I thought trust would fade in the written (text) medium. I saw it happening on Substack, Medium, and Reddit. But then VCs pumped in so much money, and AI has gotten into every other modality (audio, video). The only things I really interact with these days are the human beings sitting in front of me, phone calls with people I know, and hackernews. Life seems sorted, but something feels missing as well.
Edit - I am not anti-AI, but it is slowly killing digital human interaction.
Giant online communities, yes. Small ones seem totally unaffected afaict - some harder-to-spot scam/spam accounts, but they're outed as soon as they act. And any invitation-based thing should almost perfectly block those.
Smaller communities are generally a lot healthier anyway, so tbh I don't think this is all that bad of a thing. I don't think it's possible to be open to millions and also be healthy, unless you spend a lot of money paying moderators (and regularly rotating them, to prevent burn-out or mental harm from too much exposure, which ~0 do in an even slightly ethical way).
There has to be room for an AI-driven project that expresses a unique idea, even if there's no community around it yet. Someone has to express it, and from now on that idea will largely be implemented with AI.
> A good use of AI is when it enables people to do something they couldn’t do before, to contribute to a community when they couldn’t before.
I agree 100% with the novel contribution aspect. But there's some nuance there.
For example a project might have no active contributors. It might not be something you can drop directly into your codebase. Neither of those is inherently bad.
As AI becomes more responsible for higher-level planning decisions, the value of an OSS project becomes less tied to visible community activity like PRs and issues.
I notice this in my own work a lot. I might not use that project's code directly. But I think about a problem differently as a result. I often point my agent to existing OSS projects as inspiration on how to solve a problem. The project provides indirect value by supporting architectural decisions, deployment approaches etc. Unfortunately OSS activity doesn't capture this.
> A good use of AI is when it enables people to do something they couldn’t do before, to contribute to a community when they couldn’t before.
There are two separate things here that are getting silently conflated.
> A good use of AI is when it enables people to do something they couldn’t do before
This could be good on an individual level, if say, a doctor wants to vibe code an app of some sort for his individual practice.
>to contribute to a community when they couldn’t before.
This is where it goes off the rails. If they couldn't meaningfully contribute before, they aren't going to suddenly be able to discern whether the slop they want to contribute is of value to the community. That's just another way of saying: if I wanted an AI opinion on something, why wouldn't I get it directly from the source and write the prompt myself, instead of having some intermediate human prompt the AI for me?
The human has unique context. They may work in a niche domain, or they talked to people and observed an unsolved problem. Then they express a potential solution via OSS. It's like product sense. Then they share it with others who find it interesting. The code is a great way to encapsulate the idea. It is usually the result of research and back-and-forth, not a single prompt. Even with that context, it would be way harder to think through or build a solution without AI.
Because of the convenience. Why should I have to spend my time prompting an AI if someone else has already done it for me? Same as with food: I know how to cook a chicken risotto, but sometimes I like having someone else do it for me.
This is some unknown stranger offering partially cooked ingredients to you because they want you to finish off preparing the meal they want to eat.
Who is going to verify that an AI-driven project is a unique idea? How do you distinguish between a genuinely unique project, a grifter who is shilling their "unique" project, and a new enthusiast who is convinced their project is unique, but is not? This is an impossible moderation task. The only options I see for a community are to either totally ban AI-generated content, or be totally consumed by it.
I don't really know. Certainly we need a higher bar. The Kafka example in the post may be hyperbolic, but I agree it pollutes the space. But we also can't swing the other way and rely completely on out-of-date proxies. If you ban AI code, there will be very little code to see in a year. It'll take time, but we'll arrive at new norms. We built semi-successful ways to filter content farms in the earlier internet days. The signal has to shift to "did they think hard about this problem," which has some observable properties, like how they articulate the problem, or why it became important to them.
Just because we get new norms, doesn’t mean the new norms are good.
Feudalism had norms.
AI is ending an era of large public communities which will likely never come again.
I have pondered the sensibility of using AI to support the initial birth of new communities, given that newcomers need the social validation of seeing both (1) a populated community and (2) a tone that is grounded, non-toxic, and useful.
The alternative is a community that starts small, with early adopters who can be overly passionate or critical and who gatekeep folks from discussion. That means high curation effort initially.
This bothered me so much that in my tool for HTML-native authors, EPublish ( https://frequal.com/epublish/ ), I automatically insert a no-AI-training clause on the copyright page. Not that it will stop the kind of executives who will authorize mass unauthorized downloading of books to train their LLMs, but we have to at least take a stand.
"Build with AI."
No, I don't think I will.
Question for web devs - are captchas effective any more? If Reddit required a captcha on every comment, would it actually decrease bot comments?
There's a reason Google is switching to "scan this QR code on your phone with a Google-authorized TPM" kind of CAPTCHAs
I've never seen a CAPTCHA like that. What's it used for? Google Cloud services?
https://news.ycombinator.com/item?id=48039362
https://cloud.google.com/blog/products/identity-security/int...
It's a reference to this I think
https://www.androidauthority.com/google-recaptcha-play-servi...
Wow, that's bad. Looks like the warnings about TPM and remote attestation being a backdoor to total digital lockdown from the Stallman contingent were right.
The tin foil hatters are always told they're making slippery slope fallacies until they're proven right a few years later. Over and over again.
Sometimes. Other times they're just wrong.
It’s only just been announced: https://news.ycombinator.com/item?id=48039362
> Question for web devs - are captchas effective any more?
They’re effective at annoying humans. Driving traffic away from your site. Reducing conversion rates.
And training LLMs.
I was on Usenet starting in 1991. Once the Internet got popular with the general public around 1995 things started going downhill. Spam overwhelmed Usenet in the late 1990's and made it almost unusable for general discussion.
Stuff started moving to web site forums which I still don't think are as good as a Usenet newsreader. slrn was my favorite.
Then reddit came along and a lot of online forums started dying as people moved to reddit.
Just this morning on reddit I reported 4 separate posts to the moderators as AI slop. They need to add a category for it; for now I flag it as "disruptive use of bots."
For 2 of the posts the moderators agreed with me, and about 5 hours later the posts were removed. For the other 2 the moderators haven't done anything.
It's a losing battle.
Some of the posts start by asking questions like "I was thinking about this and... [long rambling paragraphs] Your thoughts on this?"
I waste a minute reading then another minute skimming the rest of it and then realize I wasted 2 minutes of my life. Then another 30 seconds reporting it to the mods.
This has exploded in the last 6 months.
Then there are all the repost bots farming for karma. Some subs have a rule that you can't repost something from the last 30 days or 6 months. But it is really ridiculous when something gets 500 upvotes and then literally the next day a bot reposts the same thing and it still gets 300 upvotes. I think it is just a bot farm upvoting stuff.
The spam was fixed with killfiles and by dropping Google Groups altogether. Now it's like a second golden age for a lot of niche groups.
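For the younger crowd: a killfile was just a set of scoring rules your newsreader applied before showing you anything. Roughly what an slrn score file looked like -- syntax from memory, so details may be off, and the addresses and groups here are made up:

    % Kill anything from a known spam domain, in every group.
    [*]
    Score: -9999
    From: .*@spam-domain\.example

    % In one group, bury a recurring chain-letter subject.
    [alt.folklore.computers]
    Score: -9999
    Subject: MAKE MONEY FAST

The key property was that the filtering ran entirely on your side: no platform had to agree with you about what counted as spam.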
Like many modern woes, it’s a problem of trust.
The baseline level of trust in an online interaction has been eroded significantly by LLMs.
The question is, how can we reverse this trend and increase trust?
I have a sneaking suspicion that it would help enormously if the stock prices of the largest companies in the world were not tied to how effective they are at hijacking as much of humanity’s time and attention as possible.
Maybe the fediverse can (eventually) help? It’s been a while since I looked at it.
Let’s empower people to effectively have more control over the content they interact with.
Social dynamics can make this difficult. We all want to be in the loop. The recent striking successes of the movement to ban phones in schools gives me hope.
> Maybe the fediverse can (eventually) help? It’s been a while since I looked at it.
The fediverse has been around for well over a decade in some form or another. It never caught on with society enough to make a difference. And unfortunately, the fediverse has now developed such a distinct culture of its own, Highly Online people with distinctive political and social shibboleths, that it even alienates many tech idealists around the world, let alone the general public.
The general public isn't alienated from the fediverse because of its distinctive political and social shibboleths, the general public simply doesn't know that it exists.
As far as the "tech idealists," a lot of them seem to want every space to be 4chan where they can be racist trolling assholes without consequence. And those folks have Nostr.
I think what could work is requiring users to prove their authenticity and uniqueness using a national ID of some sort. It would be bad for privacy, no doubt, but it surely would work. But the users' actual names should not be displayed.
I was thinking about that. It should be possible to do this in a way that mostly preserves privacy.
Sites and apps don’t need your actual national ID, just to know that you have one. I think it could be possible to have 3rd party verification services that don’t know where the verification request is coming from, thus preserving privacy on both sides.
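The unlinkable version of this is an old, well-studied idea. Here is a toy sketch of the flow using RSA blind signatures -- toy key sizes, made-up token text, all purely illustrative; a real system would use a vetted anonymous-credential scheme -- showing how an issuer can attest "this person holds a valid ID" without being able to link the resulting token back to the ID check:

    import hashlib, secrets

    # Issuer's toy RSA keypair (absurdly small; illustration only).
    p, q = 1000003, 1000033
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))

    def h(msg: bytes) -> int:
        return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

    # User: blind the token before showing it to the ID verifier.
    token = b"one-anonymous-personhood-token"
    r = secrets.randbelow(n - 2) + 2          # blinding factor
    blinded = (h(token) * pow(r, e, n)) % n

    # Issuer: checks the national ID out-of-band, then signs the blob
    # without ever seeing the token it is signing.
    blind_sig = pow(blinded, d, n)

    # User: unblind. sig is a valid signature on h(token), but the
    # issuer cannot match it to any signing request it handled.
    sig = (blind_sig * pow(r, -1, n)) % n

    # Any site can now verify "this account is backed by one real ID".
    assert pow(sig, e, n) == h(token)

The verifier sees only the blinded blob, and the sites see only the unblinded signature, so neither side alone can tie your account to your ID.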
The “slopification” of the internet has been happening for years now but I honestly don’t know what a real solution would look like.
Most people aren’t willing to go through an identity verification process, or pay to join a community, and invitation-only spaces would probably lose diversity of thought pretty quickly.
Even still, I guess one of the above is a lesser evil because the bot problem is only going to become more unbearable.
P.S. Props to the author. I really liked this writing style.
Also, I’ve noted this odd behaviour where if I mention one of my comments is AI -- as in "this is what the AI says about the" -- because it's a concise statement to aid the chat, I get severely downvoted. But if I just make my comment a human-parsed version of the AI output, I get upvotes, with no concern for the granularity of source integrity. Which is terrible in two ways.
The sad part is that the cost gets pushed onto the good participants. Once enough replies feel synthetic, real people spend more energy deciding whether the conversation is worth joining.
At this point, I see no identity verification or proof of some kind of humanity working.
I think what we need is the equivalent of what was done for CORS: client/server cooperation.
That is, APIs should mark that they are human-only, and harnesses should cooperate with such flags and refuse to call those APIs.
It's not perfect, as it's client-side enforcement, and one could still theoretically build their own harness without it, but that's the only way forward.
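A minimal sketch of what that could look like on the client side, assuming a hypothetical X-Human-Only response header (no such standard exists; the name is made up here). Like CORS, it only works because mainstream clients choose to honor it:

    import urllib.request

    class HumanOnlyEndpoint(Exception):
        """Raised when an agent tries to use an endpoint flagged for humans."""

    def agent_fetch(url: str) -> bytes:
        # A cooperating harness checks the (hypothetical) flag on every
        # response and refuses to let the agent act on flagged endpoints.
        with urllib.request.urlopen(url) as resp:
            if resp.headers.get("X-Human-Only", "").lower() == "true":
                raise HumanOnlyEndpoint(f"{url} is marked human-only")
            return resp.read()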
I feel that a lot in my side projects: maybe one should keep the half-baked AI repo to oneself and instead share the experiment, the thesis, and what was learned from building it. No one cares much about the (un)finished product, as in most cases it can be replicated better with a couple hours of Claude coding.
For instance, I really liked how Karpathy shared a high-level idea on the LLM-based wiki. It was sadly followed by a long tail of no-one-cares "Here is my LLM wiki product" posts pointing to a generic LLM-generated landing page.
What if you charged people to post to change the incentives? Something like https://stacker.news/
I want my future community apps and sites to build in a bot flagger. I don't care how hard it is; the community that gets this right is the one I'll jump ship to.
Unfortunately I think reddit themselves encourage and want the bots because it leads to more traffic and ad displays so they make more money.
AI slop complaining about AI slop. Many of these Reddit communities were trash way before AI. Hidden self-promotion was everywhere. These people would like a platform to promote their shit, but they turn hostile when others do the same. This guy literally wrote this with Claude, complaining about others sharing things they created with Claude.
Instagram comments genuinely make me angry.
It used to be because the comments lacked any critical thinking. This is probably due to the fact that most people on instagram are teenagers. That's fine, and for that reason I stopped reading comments.
But now it's pretty obvious that the comments are LLMs talking. Whether a human initiated it, no idea, but the big walls of text posted by bobbyfoo2012 seem highly unlikely to be human.
A few things. A web of trust of some kind, like vouching, may come back, along with general algorithmic silencing of low-quality members. Also, most governments are moving toward the South Korean model of government-verified ID for posting online, to keep teenagers off social media. The same tool can be used to greatly reduce spam and slop, if that's what platforms want.
Also people will get used to AI in online spaces as AI quality improves. If I'm online trying to get help for some task, I personally don't care who wrote what if it is correct; it's not like humans have great track records of accuracy or substantial contributions either on average. Correctness is expensive in general.
If I'm online trying to relate to other humans emotionally, well I get what I'm paying for. It's been true forever that the better the gate, the better the community. I've tried to push the boundaries of openness, but as I've written extensively on MeatballWiki, soft security depends on there being more good than bad apples in a community. With machine intelligence, the economics of that are silly.
Regardless, people love people, so we'll figure it out. I'm optimistic we can rise to this challenge.
1. human verification for auth.
2. only human-generated input in the composer: no copy/paste, no file uploads, etc. Control the composer; control the camera sessions for photos and videos.
3. no algorithmic feed that is designed for ad-spend and eyeballs.
4. moderate
> 1. human verification for auth.
How, at scale?
Entering the AI era, it's hard to tell the authenticity of things on the Internet. But sometimes having a conversation with AI is not a big deal, as long as we can gain something from it.
> I built a homepage on Geocities, complete with...a web counter
Yes, but how many decimal places did you optimistically give it, only to never use more than the "10s" place?
It sucks that the narrative framing device of 'human slop' has vanished in the last year. Some subreddits - all the location subreddits, lifestyle subreddits like malefashionadvice and redscarepod, and entry-level academic subreddits like math and criticaltheory - were already hives of human slop before AI came around, because of a structural design of the site that had the side effect of normalising a total absence of quality control.
Upvotes are not a good mechanism for quality control in any way, because they force good content to share the same metadata as content that is technically well-constructed but irrelevant, meaningless, a platitude, too obvious to be worth saying, or pablum. Upvotes turn everything into a shock-value-dominated 101 space.
Online communities that allow upvoting / downvoting have been effectively dead for a long time because it's easy to manipulate conversations by elevating and punishing comments to fit a narrative. This is especially true on HN.
Ironically, aggregator communities like reddit came about because forum communities were dying off. Memeing about the latest news injected "life" (if you want to call it that) into the internet. AI is just taking out the trash, in my eyes.
On the other hand, I think you need a reputation mechanism. The biggest problem of online communities is that every moron (or bot) has an equal voice. Clearly democratic upvotes/downvotes don't work very well, though. Whoever solves this is going to be the next billionaire.
The important thing to recognize is that quality of content has never been the driver of online communities. As long as they provide an engaging break from real life, they will exist and thrive. I think the negative association with LLMs is a phenomenon that will die out in the 20s. Our understanding of authenticity will evolve and so will the tools and platforms. The internet has always been extremely artificial, that won't change very much.
There's a lot of focus on tech projects here, but it's not just vibe-written projects that are ruining communities now.
It's a problem with art, text, and videos too. Reddit was already becoming a creative writing exercise in many ways, with infamous subs like 'Am I the Asshole?' seemingly being about 80% fiction labelled as fact. But now you don't even need to know how to write to flood the site with useless 'content'.
YouTube is arguably even worse, since AI led content farms are not just spamming the hell out of every topic under the sun, but giving outright dangerous advice and misinformation on top of that. I saw this video about medical misinformation by these 'creators' earlier, and it genuinely made me want to see them crack down on this junk:
https://www.youtube.com/watch?v=UEfCTCBDKIU
And there's just this feeling of distrust everywhere too. Is anyone on Hacker News human anymore? Is that Reddit poster I'm responding to human? Are the folks on Twitter, Threads or Bluesky human?
The scary part is that you basically can't tell anymore. Any project you find could be AI generated slop, any account could be a bot using stolen images or deepfakes, any article or video could be blatant misinformation put together as a cash grab...
If something doesn't improve, pretty much every platform under the sun is going to be completely useless, as is a lot of the internet as a whole.
I think people like the blog author need to realize that this problem can't be dealt with by content moderation or by users trying their best to be honest. You just get a firehose with an on/off switch; you don't get free filtering or moderation with it.
I feel the root of the problem is that Google and major platforms defined "correctness" as "high impressions" and "high engagement." This created a game where AI-generated "slop" becomes the ultimate winner. For those of us trying to create or find constructive, deeply-thought-out content, the situation is becoming increasingly dire.
It is exhausting to see a single, sincere sentence based on genuine human experience buried under 1,000 pages of SEO-optimized, AI-generated "void" that Google deems "correct." Despite this, I will keep working on filtering through the noise today.
This is a good thing. Social media was already slop before AI. If this gets more intellectuals off these websites and spending their time on better things, then I love AI slop's purpose. There's more to the internet than Reddit, TikTok, and YouTube. Really, there is. If your circle of friends is small or nonexistent without going to the same dotcoms, you have an issue that is worse than any AI slop, tbh.
Getting people off the internet is antithetical to the business goals of the AI companies. They won't let that happen without a fight.
They're making it happen whether they like it or not.
I'll remove the particulars to avoid anything partisan, but:
I failed to truly appreciate how cooked reddit was with bots until I accidentally clicked Popular and stumbled upon a national subreddit post with a 'chad meme', starring a particular political leader, whose unpopularity is hard to adequately convey to foreigners.
It was not just that the post had been so severely upvoted, but that the comment section itself was more or less a mantra, with very little actual conversation, just the same sentiment echoed over and over; and all those comments were in turn upvoted to the point of drowning out the lone comments at the bottom (not downvoted, just not upvoted) expressing "???". I don't think I'd ever even written the word 'astroturfing' before expressing my bafflement to a friend, so I don't think I'm very tinfoil-hat about these things.
It was just utterly bizarre to see someone who can barely get a single win in public discourse being heralded -- monotonously -- like he was the second coming.
>I failed to truly appreciate how cooked reddit was with bots until I accidentally clicked
For me it was a wholesome response. It seemed genuinely kind/human.
Click on the user profile... it's a bot just pumping out posts like that. It looked organic in isolation, but when you see a wall of them you realize it's got to be an LLM (with a good prompt).
That was disheartening... I had kind of accepted that the sht-stirring rage posts might be bots, but the kind comments too? Ouch.
AI is lifting the voices of the lazy and the below-average-to-average. For those who would never have progressed, it might seem like a god-given gift. For the ones with the desire to grow and learn and go beyond average... this is a curse.
Took the words right out of my mouth.
I wonder when things get so bad that we end up filtering content made by accounts created before the release of ChatGPT.
There are already some sites where you're much better off if you have an old account. Like I have a super old Twitter sitting there for random stuff that requires it. I tried making a new one a few years ago, didn't post anything, and it got banned within 2 days for "bot activity." The old one has never been banned.
It was also so much easier to make a dating app profile back when I was single - like, one click. Recently I was watching a friend set one up, and now they not only want something like 3FA but also proof that you're a human. I assume the old accounts are grandfathered in.
Won't help much, botters can just buy old accounts.
Gets harder with all the 2FA stuff involved
Infinite new accounts vs finite old accounts for sale. Low effort vs high effort. It would help.
I've been messing around with a decentralized social network where you only see who you choose to follow.
It's implemented for plan9, but clients could be made for any OS:
https://youtube.com/watch?v=q6qVnlCjcAI
Maybe Friendster has the right idea.
Online communities died when they were monetized and open
I made this point elsewhere, but people are learning a lot of what some of us had to learn the old way, which is that for the most part no one cares about your stuff, and now the value provided has to go way up to get people to care. That is, as the author says, the novelty has worn off, and since we know it's AI, the perceived value is also way down.
We're all recalibrating.
I really do think this is just a brief period before most people realize that slop posting doesn't personally get them anything, most of them give up, and we go back to roughly the old ratio of cool things with real value to see - just on a bigger scale, because AI helps one person do more.
I don't know... I might have said the same thing about email/text/phone spam but it has only proliferated to the point where it's a constant stream of garbage. Email, text, and phone calls are almost completely useless at this point. Sifting the signal from the noise is a non-stop effort.
I think people who want to push a certain narrative might just set up a quick bot, tell it to start posting on Reddit or wherever, and let it run. Why not? Little effort on their part, and they might actually have influence. It's the same reason spammers apparently think it's worth sending me 10 text messages per day about a loan I've been approved for. It probably does work 0.0001% of the time, but that's okay if it's all automated.
I mean, I think the dynamics are a bit different in online communities - at least for actual communities, not drive-by subs like r/technology or whatever.
Especially here on HN, with Show HN and such, the forcing factor is "I get no votes or community recognition."
But I don't entirely disagree with you. I don't think things will totally go back, but I do think they will settle down a lot from where they are now, especially where things are a little more niche.
Re: "The Asymmetry of Bullshit"
I'm gonna speak on behalf of language models' capability to make online communities better. In recent times, the frustrating forum phenomenon of "learned helplessness" is making me too annoyed to participate. Even in a subreddit as fantastic as /r/LocalLLaMA, there are people posting replies in the vein of
> user1: please help me understand this acronym the post title speaks of
> user2: (explains in detail what it means)
In the "good old days", a low effort, surface level question would result in someone either muting or banning the person to keep the discussion high quality.
There I am, browsing a forum dedicated to LLM enthusiasts, and an unbeliavable number of people are asking LMGTFY/RTFM-level questions they could even find an answer to from a free Google Search AI summary, and people are rewarding them by actually responding to them with effort.
Thanks to models being quite intelligent at answering basics, the ban-hammer should be used more swiftly if people keep polluting forums with low-quality posts. There's no need to feel bad for them not having the time or capabilities to read through years of forum posts to feel qualified to answer.
Maybe even these sloppy posts authors can be outright muted or banned with a heavier hand for the sake of quality.
Turns out people like to ask low-quality questions, as evidenced by the reputation of Stack Overflow moderators.
My communities hate all things AI. So AI content just doesn't survive.
The importance of good search engines and good discovery engines will grow even more.
Search engines got SEO'd to death decades ago
What good is SEO if people just read the Gemini summary at the top of Google and don't click the links? We have a chance at a real search engine again, now that there's no money in it.
Can such a thing even exist now? Any search engine algorithm can be gamed by AI.
I actually think good old PageRank [0] is crucial, because if authoritative sources link to some website, page, or content, it means that item provides some kind of value to the entity that linked it. I'm also a big fan of metadata, which can be used to describe web content and make it more usable to search engines and Web users.
[0] https://en.wikipedia.org/wiki/PageRank
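The core of PageRank is small enough to sketch: rank flows along links, with a damping factor modelling a reader who sometimes jumps to a random page. A toy power-iteration version (the three-page graph is made up for illustration):

    def pagerank(links, damping=0.85, iters=100):
        # links: {page: [pages it links to]}; every target must be a key.
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iters):
            new = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outs in links.items():
                if outs:
                    share = damping * rank[page] / len(outs)
                    for out in outs:
                        new[out] += share
                else:  # dangling page: spread its rank everywhere
                    for p in pages:
                        new[p] += damping * rank[page] / len(pages)
            rank = new
        return rank

    graph = {"blog": ["wiki"], "wiki": ["blog", "docs"], "docs": ["wiki"]}
    print(pagerank(graph))  # "wiki" ends up with the most authority

The point for the slop discussion: rank is earned from who links to you, which is harder to fake at scale than the content itself.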
For every argument against AI slop, you will get a variation of "it's the future," or "I'm 10x more productive now," or "I've shipped 3 applications in 2 days," etc.
They won't stop talking about it and defending it. But I can't get anyone to share their amazing work with me.
There is a reason the Show HN projects that are mostly vibecoded don't get much response: they aren't any good. Comments that are AI-generated are hollow. Videos that are AI-generated are a hollow shell of their sources.
Obvious slop still makes it to the front page of HN, and sometimes farms GitHub stars.
These posts also usually get all these glowing comments from users who clearly haven't checked the code. It's even worse when authors get busted and claim "Okay, Claude wrote it, but the design is mine" despite clearly not understanding the output themselves.
Unfortunately, that makes high-effort projects less visible. The SNR will probably keep getting worse until slop can be flagged on HN.
Invert the economics. Right now, the value of posting LLM-generated content exceeds the cost of running the model.
If platforms had a subscription model that you had to pay for in order to do more than just read comments, there’d be a lot less LLM content. There would also be a lot less of all content. But maybe that’s the price you pay (literally) to get rid of AI slop.
Good idea, but models are only going to get cheaper. Unless governments add an information environment pollution tax.
Oh hey, now that's an idea.
This kind of thing makes me sad that Keybase sold out to Zoom, and makes me wonder if it can be resurrected. It was such a simple web of trust that went viral enough that I still occasionally see it on HN or Twitter profiles, even though it's been long dead.
There are maybe 20 or so online handles I know, some of whom I've met in person, who I deeply trust. To the extent that I fully trust anyone they vouch for too.
Even with just one degree, that's a large enough international, semi-anonymous online community that can provide value to its members through text-based communication. It doesn't need iris scans or credit card checks, just "patio11 on HN, Twitter, and whatever his domain is, is one of the good 'uns" and a network effect from there.
I'm already seeing some form of this reputation staking in, e.g., Pi PRs: everyone is treated as clanker slop by default, but the bar to prove yourself and build reputation remains quite low.
I don't think online communities will stay the same in the face of AI, but I do think whatever comes next will strongly rhyme.
> I am not an AI-hater. In fact, I think AI-haters are on the wrong side of history.
Incorrect.
A dataset of ~40,000 AI slop podcasts: https://www.kaggle.com/datasets/listennotes/ai-generated-fak...
And Listen Notes is removing 4,000 to 8,000 AI slop podcasts per month: https://www.listennotes.com/podcast-stats/#growth
> agentic coding [...] It’s just how shit gets done now.
I'm not sure about that.
HN is in peril and I don’t think it is a bad thing. Or rather, I’d like to bring back the old chestnut: it’s a good thing.
While the site has moved to using /showlim, the AI garbage just bypasses that and goes straight to the home page. Almost every project being shown is vibe coded and looks exactly the same, generated by Claude or the like. This is an excellent test for the site: will it be able to adapt, or do we simply end up with a husk of what HN was, with AI posts driving the majority of engagement, the Overton window, and the upvotes/downvotes?
I look forward to this, I think it is an exciting development.
Maybe "generated by claud" is a meaningless category because it encompasses both projects made via AI by people who have never used a computer before, and projects made by senior developers who used AI to do the actual coding.
I think one of the big differentiators of slop is whether a human critically reviewed the work and kept refining it before letting it see the electrons of public places. The true benefit is harnessing AI to augment ourselves, as an extension of us rather than a replacement. I'd be curious whether there is a way to effectively prove something is human-first rather than AI-first on the internet. I haven't figured out any particular way yet, as even using AI to detect AI would require a sufficiently large sample. Something that keeps me awake at night.
AI slop is killing the mainstream communities, and the alternative communities are filled to the brim with tankies/nazis (unironically).
Wasn't that obvious the second ChatGPT 3.5 released?
What online communities? Ever since Reddit went all-in on censorship, actual conversation moved to the deep web, mainly on Discord and other places invisible to search engines.
Only for open nazis and harassers.
Do we need a new carbon-credit style market for companies that want to continue putting out such slop and paying for moderators to remove the waste after it's been made?
AI will be a forcing function that pushes people to go meet in real life to do stuff together.
Even if everything online is fake, events are not. So if people say they’re going to show up somewhere, there must eventually be a moment of truth. And then you can form high trust private group chats to keep talking together.
It may be hard for the current generation of chronically online people to adjust to that new reality, but the next generation of kids growing up can get used to this now, and eventually socializing in person will be natural again and the internet is for bots and weirdos LARPing as something they’re not.
Maybe, but the small groups that form out there in the real world will each be much smaller than the large group that stays and gets jerked around by the bots.
The large group will have to endure the manipulations that we've come to know and hate from the internet, but they'll also be better coordinated than the small ones. They'll vote together, buy the same sorts of things, have an outsized influence on the global conversation... They'll define the de facto majority opinion, whether or not they actually are a majority and whether or not it's authentically their opinion.
I don't think that's a good outcome. We need ways to get on the same page en-masse, if only to counteract the harms caused by whichever highest-bidder is currently using an AI horde to control the other group. Besides, we should save them from this abuse for their sake, if not for ours.
The internet is worth fighting for, if we abandon it entirely we'll be forever at a disadvantage against those who would use it to manipulate.
I agree with the second part.
In a clip: https://youtu.be/WAZljmaRxE4?si=i1p4jn3zxgmQrKUk
Original online communities (forums/chat rooms) got largely killed by social media, now social media is getting killed by AI slop.
The writing here is good. Quote of the day "Any fool can feed coins into a fruit machine and pull the arm."
How would one build an online community free of LLM agent commenters and links to "slop" content?
Strict invitation trees? Small signup fees? No SEO incentives?
I've always thought the "strict invitation trees" or vouch trees would be an interesting way to moderate a community, even before the LLM era. A user can vouch for an unlimited number of new accounts, but if more than 10% of the vouched accounts are banned or flagged down the line, the parent voucher acct is also banned/flagged.
Since it creates a tree structure, you can wipe out entire armies of bot/spam/otherwise accounts by following the vouches up the tree.
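A toy sketch of those mechanics (the names and the 10% threshold are placeholders from the comment above, not any real site's policy): banning an account takes down its vouch subtree, and if that pushes a voucher's bad-vouch ratio over the threshold, the ban climbs the tree and wipes out the rest of their vouches too:

    from dataclasses import dataclass, field

    @dataclass
    class Account:
        name: str
        voucher: "Account | None" = None
        vouched: list = field(default_factory=list)
        banned: bool = False

    class VouchTree:
        THRESHOLD = 0.10  # >10% bad vouches takes the voucher down too

        def __init__(self):
            self.accounts = {}

        def register(self, name, voucher_name=None):
            voucher = self.accounts.get(voucher_name)
            acct = Account(name, voucher)
            if voucher:
                voucher.vouched.append(acct)
            self.accounts[name] = acct

        def ban(self, name):
            self._ban(self.accounts[name])

        def _ban(self, acct):
            if acct.banned:
                return
            acct.banned = True
            for child in acct.vouched:      # the ban sweeps downward...
                self._ban(child)
            parent = acct.voucher           # ...and may climb upward
            if parent and not parent.banned:
                bad = sum(1 for c in parent.vouched if c.banned)
                if bad / len(parent.vouched) > self.THRESHOLD:
                    self._ban(parent)

    tree = VouchTree()
    tree.register("root")
    for name in ("spammer", "bot", "bystander"):
        tree.register(name, voucher_name="root")
    tree.ban("spammer")                       # 1/3 of root's vouches gone bad
    print(tree.accounts["bystander"].banned)  # True: collateral damage

Note the last line: the cascade is exactly what makes the scheme powerful against bot armies, and also what makes careless vouching expensive for honest users.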
My guess is that sooner or later we're going to end up with one or the other of these:
* dead online communities
* highly-invasive, government-mandated "prove you are a human" requirements in order to participate in online communities
Wait for the EU AI Act to require text watermarking in August. It will work, and it will be effective -- not because it'll be impossible to circumvent, but because all the big SaaSes will have to adopt it, and the hurdle of stripping it back out will filter out the vast majority of the sloppers.
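For a sense of how text watermarking works at all: one published scheme (the "green list" approach of Kirchenbauer et al., 2023, which may or may not resemble whatever the Act ends up requiring) has the generator secretly bias each next token toward a pseudorandom half of the vocabulary keyed on the previous token; detection is then just counting. A toy detector:

    import hashlib, math

    def is_green(prev_token: str, token: str) -> bool:
        # Pseudorandom 50/50 vocabulary split, keyed on the previous token.
        digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
        return digest[0] % 2 == 0

    def watermark_zscore(tokens: list[str]) -> float:
        # Unwatermarked text scores near 0; biased generation drifts high.
        n = len(tokens) - 1
        greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
        return (greens - n / 2) / math.sqrt(n / 4)  # binomial z-test, p=0.5

Stripping such a mark means paraphrasing nearly every sentence, which is exactly the kind of hurdle that filters out the lazy sloppers.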
I'm not a crypto person, but I was intrigued by Chia. They generate their coins based on allocated disk space. So if you have a bit of free space, you can fill it with plots and play the lotto.
The intriguing part is that I think it works against scaling. The incremental cost for me to use the 500GB of free space on my disk is $0, but someone scaling a bot farm has to buy all their space.
Real people tend to have a lot more idle capacity than optimized, scaled businesses, so any kind of proof of idle capacity seems like it would disadvantage bot farms.
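To illustrate the shape of the idea (this is a toy, not Chia's actual protocol): plotting is a one-time cost that fills idle space with precomputed hashes, and each challenge afterwards is answered by a cheap lookup, so idle consumer space competes per byte with space a bot farm has to buy:

    import hashlib, os, secrets

    def plot(seed: bytes, size: int) -> dict[int, int]:
        # One-time cost: precompute hash(seed, nonce) for every nonce.
        table = {}
        for nonce in range(size):
            digest = hashlib.sha256(seed + nonce.to_bytes(8, "big")).digest()
            table[int.from_bytes(digest[:4], "big")] = nonce
        return table

    def respond(table: dict[int, int], challenge: int) -> int:
        # Cheap per-challenge cost: the closest stored hash wins the lottery.
        return min(table, key=lambda h: abs(h - challenge))

    table = plot(os.urandom(16), 1 << 16)   # the "free 500GB", in miniature
    challenge = secrets.randbelow(1 << 32)
    print(f"best proof: {respond(table, challenge):#010x}")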
I’ve also thought that proof of collateral spending would be a good system. For example, you buy groceries and the store gives you a token saying you spent $X of real world money. Those tokens help show you're not a bot. Keeping that system honest and equitable would be extremely difficult though.
Maybe schools could give kids tokens for attendance. It sounds kind of dumb, but who knows.
The actual reality of Chia is that it drove up hard drive prices, just like LLMs drove up GPU prices. People bought petabytes of space just to run Chia, and if you wanted a computer you had to outbid them.
Charge $10 for an account, like Something Awful.
It'd be interesting to see how lobste.rs fares with all this.
Probably all three of those. Tildes and fediverse instances do the first, resurgence pending for the second, and lastly non-mainstream social media sites have no SEO garbage by default.
Human slop is realistically just as bad. In a strange twist, human commentary on the Internet is asymptotically approaching an older LLM. Trite cliches, repetitive tropes, and tribal affiliation signals dominate conversation.
I have turned to blunt instruments: blocking individuals at their first cliche banner-wave. It has substantially improved comment quality, but I still have the problem that I don't block stories entirely.
You've written about this before, so I'm curious: how effective do you find it? Did you try a blocklist on top of things like muting words? Btw, I enjoy your content on this site quite a bit.
Thank you. Those are kind words! I've mostly been pretty happy about the lists I've curated blocking users. I don't use words because I'm sure I'll clbuttic/scunthorpe my way into missing something and so on. The current problem I have is that on my iPhone, I use Chrome and it doesn't have extensions so I have to view everyone. I'd much rather view people I like, so I'm going to have to make an iOS app.
Really enjoyed https://hackernews-insight.vercel.app/user-analysis
That spike in users near the end is really something!
Who's reading this comment in 2026?
First
Quora became human-slop-infested trash before it was trendy.
I have been reading HN near-daily for years.
This synthetic participation (LLM or otherwise) has catalyzed weak spots in HN's high-trust environment. The weight we give to the average HN comment is orders of magnitude higher than the average Reddit (& co.) comment, and this relationship probably goes both ways (much higher ROI on ads/propaganda). Due to the low volume and high trust, it seems to be a much easier environment in which to achieve pervasive propaganda/advertising with a disproportionate impact.
I remember when some new LLM version came out (maybe from Meta?): something like 3 of the top 10 posts on the front page were all variations of "Foobar 2.1 New Model." Perhaps not explicit, deliberate manipulation, but the result was the same, and apparently allowed. How many of those generic LLM websites (https://letsbuyspiritair.com/ comes to mind) show up on the front page per day? Zero-effort static front-ends for some unremarkable data. I'm not going to touch the politics minefield, but that is a weak spot too.
All of this, and yet I think HN has handled it relatively well. I really appreciate not seeing comments of the form "I asked Clog/Gemini/etc. here's 5 paragraphs". Places like Reddit do not have the agility or control, and have degraded accordingly.
It makes me sad to think that a short time ago, every forum was ~100% humans, and now it is some fraction of that. I wonder if I will ever see that again.
AI slop is hurting my community in a different way. We have an internal Viva Engage community at work for quick development how-to questions. More and more, instead of asking "how to" questions to crowdsource answers, people are reaching out to me directly to ask why the solution the AI suggested doesn't work.
That people trust AI over organizational knowledge is bad enough. I fear that AI is turning people generally antisocial.
This is happening at my workplace and it's incredibly annoying. We get support tickets asking us to troubleshoot AI-written scripts. The funny thing is that most of the time it would be faster for the customer to tell us what they want to do in plain English and have us make it for them. Hell, if they make an honest attempt, we can point them in the right direction and teach them.
It's frustrating because we're bundling this shitty AI with our product so we're just making more work for ourselves. Then there's the push from leadership to use more AI...
I don't think it's making people antisocial though, people just like easy solutions to their problems. We're giving them what seems like an easy solution. But it's easy for them, not easy for the reviewers.
>I fear that AI is turning people generally antisocial.
This is by design btw.
Related, from a couple of days ago: Knitting Bullshit https://katedaviesdesigns.com/2026/04/29/knitting-bullshit/
So no hope for https://xkcd.com/810/?
> AI slop is driving up the noise, and making the signal more and more difficult to discern in communities.
Thank you OP, this puts into words why I no longer look at Show HNs.
We filter out most Show HNs now (i.e., most of the ones that are submitted or attempted to be submitted by fresh accounts), and we look out for ones that have substance and authentic writing. Others have commented that the standard has been better in recent weeks. We'll keep working at it.
Sigh. First the article states that "coding by LLM is the way things are done right now" in 10 different ways, but then message boards and articles need to be protected.
We get it, the current narrative is that coding is the big thing, promoted by billionaires and scabs alike.
So, the coding narrative must be protected until the IPO of Juniper^H^H^H Anthropic happens and the whole thing implodes.
You already could have code for free and faster by using "git clone" without a company of thieves selling your own output back to you.
Welcome to the club
This post is slop.
Good. These communities were inauthentic echo chambers for most of the past decade anyway. Advertising powers "online communities." Slop must be selling. Reddit died long before GPT 1.0.
There are "nice", "polite" slop enthusiasts. The ones who insist they have taste and tact. They would never post bad slop, recklessly, only the very highest-quality human-refined, curated slop. Not really slop at all, they would argue, because they gave it a careful review before posting it. They insist there's a very important difference between this premium slop and the nasty kind, and that low-quality human-authored media is actually slop, too, when you think about it. They talk about how important it is for people to use slop thoughtfully, efficiently, correctly, and that we all need to learn about and discuss slop constantly because it's the inevitable future and highly relevant for everyone.
They muddy the waters. They wheedle, rules-lawyer, carve out exceptions, and talk about how important it is to have nuance in separating virtuous applications for slop from bad ones, and that focusing on the bad ones is actually very tedious and rude. We should have polite discourse about the good things about slop and stop being so mean about bad slop, which isn't even really a problem. The bad kinds of slop will be solved soon, probably, and the harms are overstated. They colonize spaces.
If moderators don't swiftly throw these slop enthusiasts out on their ass, slightly less polite ones will post slop slightly less politely. More and more of the people participating in the space will have favorable opinions toward slop, and shout down people who object to slop. In no time at all, your community is a slop bar. Who could have imagined?
I usually type 5000 words of research for a 500-word output. It's not "write me an article on X"; it's 99% my own ideas, just worded, structured, and polished a bit. But I don't post those here. They are on my blog.
"It's not X, it's Y"
Good, I see what you've done here.
I'm not the arbiter on all things Godwin's Law, but either way the analogy doesn't work.
gonna start calling this effect The Slop Vanguard
Online communities needed strong authentication even before AI slop. That people were too complacent to implement it is of course a problem, but now you cannot ignore it anymore.
I do not feel sad about lost online communities. When I was in my early 20s (early 2010s), the FIDO node of my university was still running, and I had an amazing time with some oldschool hackers there. I was too young for that community and always had the feeling that I had lost, or rather missed, something great... you know, like I was born 20 years later than I would have liked. Now that echo conference is dead. That 486 machine was probably disconnected and thrown away somewhere. Everything dies at some point.
Ask yourself: do you need the tech that gives that community its vibe, or do you need the people behind it? I try to stick to the people. As for me, I would rather have a flesh-and-blood nerd friend than a whole human-driven reddit. He probably knows the answer, and he is happy to help. There was an article here on HN long ago saying that on average we have around 150 close contacts at a time. Some drop in, some fall out and get unconsciously replaced. Going beyond that number would imply an exponential increase in management costs.
Those oldschool guys from FIDO disappeared for me without a trace, partially because quite soon I ended up in a community of radio engineers. Honestly, I am grateful to all the people who helped me online, those who were there, who actively participated and, for some reason, cared.