Maybe add a category for posts and comments about AI on HN :)
"Stories about AI" is not offensive to me. Its influence on the industry is undeniable and if I'm feeling tired of that content I just won't engage with it.
AI-writing is another story, but yeah -- HN is downstream of that problem. You can encourage people not to submit articles that seem to be LLM authored, but it won't work.
Part of the ethos of HN is that we don't do content/subject silos; it's a way in which HN is very distinct from Reddit. I don't think this will happen and I think if it does it's a bad idea (not least because I don't think a site dominated by software developers is going to separate itself from AI, any more than it will separate itself from programming language discussions), but I understand the impulse. They're not the funnest stories to comment on.
Couldn't agree more -- I meant a category in this post's chart :) I'll admit it was snarky.
Sorry, I'm knee-jerk about the thing I said because it comes up constantly as a suggestion for how to fix things.
/ask and /show are sort of HN's version of content/subject silos; posts there can technically appear on the front page but are comparatively less likely to. I imagine they could add a /slop section for AI posts, and then tweak the ranking logic for the main /news page to prevent too many from showing up at once.
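To make "tweak the ranking logic" concrete, a cap-style demotion might look something like this rough sketch (every name here is invented for illustration; HN's actual ranking code isn't public):

    # Hypothetical sketch: cap how many posts from a /slop-style section
    # can appear on one front page. Post, is_slop, and the cap are all
    # made-up names, not anything HN actually has.
    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        score: float
        is_slop: bool  # e.g. submitted to a hypothetical /slop section

    MAX_SLOP_PER_PAGE = 3

    def front_page(posts: list[Post], page_size: int = 30) -> list[Post]:
        ranked = sorted(posts, key=lambda p: p.score, reverse=True)
        out, slop_seen = [], 0
        for p in ranked:
            if p.is_slop:
                if slop_seen >= MAX_SLOP_PER_PAGE:
                    continue  # push the overflow off the page
                slop_seen += 1
            out.append(p)
            if len(out) == page_size:
                break
        return out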
I understand the suggestion to be moving all posts about AI, agents, etc to a silo. Generated posts are generally already off-topic here (I gather they're about to add a new flag for that).
I think it's going to be really difficult to segregate discussions about AI from discussions about software development over the next few years.
I enjoy most of the "AI" posts on HN nowadays. I was really fed up with the MCP/Anthropic PR machine of a year ago, after just a month of that. There's much more actual content today, though I guess we also see less of Stable Diffusion in favor of transformer LLMs.
I'm afraid that we're in an interregnum. A few years ago AI could not pass a Turing test. A few years from now AI will be better at Turing tests than we are. We're now in this strange middle zone where we are dazedly grasping for solutions.
But what happens next, when we just fail at the task of recognizing ourselves in cyberspace? Where LatestClaw is just plain better at mimicking you than you are? What happens to the living we used to claw out of the ether for ourselves?
Do I need to learn to farm?
How does such a system sustain itself?
The majority of the content on the internet is supported by ads, with the expectation that you, a human who has money, will consume something and spend money on it.
If people are replaced by some synthetic representation of themselves, what is the incentive to sell advertisements on the internet if there are no humans?
Fake/artificial traffic is a big problem today; it will get harder and harder to detect, but its presence will be more and more obvious.
Unregulated capitalism is unsustainable long-term anyways. This is just an accelerant towards the inevitable dystopia-or-socialist-utopia fork in humanity’s road.
>> A few years ago AI could not pass a Turing test
Still can't? 'Ignore all previous instructions' still works afaik, as do counting questions (better to ask five of those to be sure)
If we're talking about fooling at least one person with no specific knowledge, then AI could pass the Turing test decades ago, even before LLMs
There was one paper recently where the AI beat humans at the Turing test two-thirds of the time.
I think it's because they told it to type like a 13-year-old and nobody could imagine AI talking like that.
We don't post-train current frontier models to pass the Turing test, but if we did, it wouldn't be much of a challenge for current models IMHO. It's a dead benchmark. It tests the human machines, not the machines.
Maybe we get off all these useless websites and stop doing our useless jobs and go back to the real world
Welders? Car mechanics? Nurses? Cooks? Cleaners?..
Whatever real-world jobs they expect knowledge workers to take on after we are all replaced by AI... we at least know they will pay less than our current "useless jobs".
Really optimistic to assume such jobs will exist in the volumes needed to absorb all of the knowledge workers
Elderly care will always have more demand than supply.
IOW, less paid demand than a willing supply.
> we at least know they will pay less than our current "useless jobs".
...and they will also likely pay less than they do now because there will be more labor supply, which the people currently doing those jobs won't be happy about.
I guess I’m not sure what you mean. I don’t consider these useless, but I also think that very few of the HN clientele hold any of these jobs.
These are good, useful jobs. But how many welders does the industry need? How many restaurant servers? The demand for nurses will, of course, grow and grow, but I'm not certain that their pay will be, mmm, middle-class.
Well, we need all those things. And AI can't do them.
>stop doing our useless jobs and go back to the real world
LOL ... that's almost an exact quote of words once spoken by an exasperated philosophy professor at a major university during a departmental meeting
Like being a medieval monastery copyist, it beats ditch-digging.
Anyway ... thank whatever gods may be for universal basic income!
> I tapped into Pangram. Pangram is a remarkably good, conservative model for detecting LLM-generated text
I tried it against some of my AI-generated articles. It said 100% human.
Turns out if you manually write the structure and a core idea first, nobody thinks it's AI.
Time to switch to a $10 one-time fee like Something Awful Forums. No crypto.
And never get a serendipitous first-time comment from the subject of an interesting or important story again. Sounds like a bad tradeoff.
No, if the tradeoff is that I never have to read a comment online written by an AI ever again, that's a great trade
There is no doubt there is a lot of AI-generated content. We do it too - code, tutorials, etc. It is just too convenient and useful to ignore.
The question that I have is this.
Is it possible that written language will converge towards AI mannerisms - i.e., most people will naturally write like AI because they will pick up the subtleties of language from ChatGPT, Claude, etc.? In other words, there is an exposure effect at play.
I just found out about Communication Accommodation Theory (CAT) which makes me think that the answer is probably "yes".
> Is it possible that written language will converge towards AI mannerisms
As a non-native English speaker living in a non-English-speaking country, I thought about this too, but in a more selfish and practical way: what if my English in particular converges towards AI mannerisms?
You see, if you live in an English-speaking country and your family speaks English, you still have some amount of guaranteed-to-be-human language input from talking to people face-to-face in real life.
But for me, 99.9% of the English input I receive is online. So I wonder: how much of it is already AI, and how much has the non-artificial neural network inside my brain retrained itself to mimic AI?
This is scary, because I used to be absolutely sure that consuming content online improves my ability to understand and use English. Now I'm not so sure anymore.
Maybe instead of people picking up AI mannerisms, people will start training on what makes AI give correct results, and human communication will look like prompting.
No doubt there will sho as hail cum too pas lingo wot am un-clanker-lock. Betcha bottum dolah.
If it has any regularity to it, then LLMs will, so to speak, figure it out. Maybe if you do it like a game of Mao¹ you could make it a little bit harder for them.
1. https://everything2.com/node/e2node/How%20to%20play%20Mao
Great question posed. Headed to read up on CAT now
One of many things that bums me out about AI is whether content I create will be truly appreciated by humans, or will just be fed back into the algorithm.
I often wonder how exactly you'd mitigate this. Further, as a user, I wonder what incentive there is for me to write anything at all online, let alone comment on forums, if it will just be fed back into an LLM.
Is paywalling or forcing user accounts the solution? That feels antithetical to the whole point of the internet.
Just musings.
Simply putting up a basic-auth wall that says “Enter any password to proceed” would stop all modern crawlers dead in their tracks, afaik. You could make it more resistant to trivial workarounds by putting a rotating / per-source password in the basic-auth message, but honestly, I think they’re all coded not to invite a CFAA hacking lawsuit by trying random passwords on password-protected sites :)
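For what it's worth, a minimal sketch of such a wall, assuming Flask and accepting any credentials (the point being the legal/robots barrier, not secrecy):

    # Minimal sketch of an "enter any password" basic-auth wall.
    # Assumes Flask; any credentials the visitor types are accepted.
    from flask import Flask, Response, request

    app = Flask(__name__)

    @app.before_request
    def soft_auth_wall():
        if request.authorization is None:
            # The realm string doubles as the instruction to the human;
            # a rotating per-source password could go here instead.
            return Response(
                "Authentication required", 401,
                {"WWW-Authenticate": 'Basic realm="Enter any password to proceed"'},
            )
        return None  # credentials present: let the request through

    @app.route("/")
    def index():
        return "Hello, human."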
If it’s on here it will probably be read by a human. It may also then be fed back as training data, but why do you care?
For an HN front page article this is light on content. Should have used AI.
> I tapped into Pangram. Pangram is a remarkably good, conservative model for detecting LLM-generated text. These detectors have a bad rep among techies, but the objections are often based on outdated assumptions
Turing test is really in the rearview, huh?
Humans need machines to detect if a machine wrote the text, because humans aren’t sure.
> Pangram is a remarkably good, conservative model for detecting LLM-generated text. These detectors have a bad rep among techies, but the objections are often based on outdated assumptions or outright misconceptions.
Pot, kettle, black. "Remarkably good" drastically oversells the reliability of it and other AI detectors. It means very little that Pangram did better than other competitors in this snake-oily category in one 2025 benchmark.
I think we should allow users to personally add a set of, like, 5 tags to content on our accounts, and let us see what people at large are tagging things as. So if a blog that's written with AI is something you want to ignore, you can just tag that URL and it won't show up, and you can also see what other people tagged that blog as.
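A rough sketch of how that might work, if it helps (the data model and names are invented for illustration, not anything HN actually has):

    # Hypothetical per-user tagging with a 5-tag cap plus site-wide
    # aggregation; every name here is made up for illustration.
    from collections import Counter, defaultdict

    MAX_TAGS_PER_ITEM = 5

    user_tags = defaultdict(set)      # (user, url) -> that user's tags
    site_tags = defaultdict(Counter)  # url -> tag frequency across users

    def add_tag(user: str, url: str, tag: str) -> bool:
        tags = user_tags[(user, url)]
        if tag in tags:
            return True
        if len(tags) >= MAX_TAGS_PER_ITEM:
            return False  # per-user cap reached
        tags.add(tag)
        site_tags[url][tag] += 1
        return True

    def should_hide(user: str, url: str, muted: set) -> bool:
        # Hide a story the user personally tagged with a muted tag.
        return bool(user_tags[(user, url)] & muted)

    def top_tags(url: str, n: int = 3):
        # What people at large are tagging this URL as.
        return site_tags[url].most_common(n)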
That’s a great question and a very realistic thing for us to answer. There is definitely no increase in AI here. If you’d like, I can walk you through how the best posters arrive at this conclusion in the normal human way. Just say the word.
I'm more interested in how many of the comments are AI
i’d wager 95% of the green names definitely are bots.
I don't see a lot of agenda behind them though; they don't sell or advertise anything. Are they personal armies? Vote bots farming points to gain downvote privileges? Early versions of bots prompted to be as undetectable as possible before they're unleashed on more tightly policed platforms?
Having a botnet to steer the conversation within (part of) SV? That sounds like a good enough motive. They just have to be hidden well enough.
Not all of us are 100 years old.
Too much
HN cargo-cults heavily for sure. That's more of a reflection of SV culture than something unique to HN.
2016-2018 was Docker and Kubernetes. 2020 was COVID. 2021-2022 was WFH good, RTO bad...and lots of Web3 and crypto stuff. 2023 was the dawn of AI, and it hasn't let up since. These are vibes and likely inaccurate.
Maybe next year we're all very much into pottery, or farming. Maybe we can write haikus together
> hasn't let up since
eternal AItember
Where do you fall on the percentage of people here who think they can actually do something? So far you seem to be failing at your only stated goal of, *checks notes*, owning the libs…
I haven't really noticed. Doesn't seem like HN has changed very much.
Edit: Clearly the topics have evolved over time (AI, crypto, there will always be some topic taking up the majority of attention), but the type and worthiness of the content seem unchanged.
Compared to two years ago? HN has never been this fixated on AI. The volume is pretty high. Even when crypto was at its peak, I don't think it ever dominated the HN front page to this extreme.
In terms of dollar magnitude, AI is in a class of its own. The investments make crypto look like softball. Attention around here follows the dollars, for good and ill.