Current status:
> SlopStop is getting ready! Report processing has not started yet at scale. We are reviewing the initial wave of reports and finalizing our systems for handling them.
> We will start processing reports officially in January. Please continue submitting reports as you find more content!
The section "What is considered 'Slop'?" conspicuously fails to answer this question. It simply describes the recognition of material as "AI-generated".
From "SlopStop is Kagi’s community-driven feature for reporting low-quality, mass‑generated AI content (‘AI slop’) found in web, image and video search results," one might conclude slop is low-quality, mass‑generated content, but why limit opposition to the subset that's from "AI"?
It's correct. Anything "AI-generated" is slop in my eyes, and I'm not interested in it.
If I wanted to read what ChatGPT would tell me about a subject, I would have been on ChatGPT.
> If the page is AI‑generated but the domain is mixed (not mostly AI), we flag the page as AI‑generated but do not downrank it.
> If a domain is found to be mostly AI‑generated (typically more than 80% across its pages), that domain is flagged as AI slop and downranked in web search results.
I think that's pretty clear, no? One AI item is merely AI generated, a trough of AI items is AI slop.
Edited as I think I misunderstood: there's more slop of the AI kind than of whatever other low-effort content, and I think Kagi is already doing a good job of keeping a neat little index that avoids content farms, AI or otherwise. AI slop just happens to be a little harder to evaluate than regular slop (and in my experience is now more pervasive because it's cheaper to produce).
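To make that page-vs-domain distinction concrete, here is a minimal sketch of the quoted rule. Only the 80% cutoff comes from the quote above; the function name, the per-page detection inputs, and the labels are hypothetical:

```python
# Minimal sketch of the page/domain rule quoted above. The 80%
# threshold is from Kagi's description; everything else is assumed.

def classify_domain(page_is_ai: list[bool], threshold: float = 0.8) -> str:
    """Classify a domain from per-page AI-detection results."""
    if not page_is_ai:
        return "unknown"
    ai_fraction = sum(page_is_ai) / len(page_is_ai)
    if ai_fraction > threshold:
        return "ai_slop"  # whole domain flagged and downranked
    return "mixed"        # AI pages flagged individually, no downranking

# e.g. 9 of 10 sampled pages detected as AI-generated -> "ai_slop"
print(classify_domain([True] * 9 + [False]))
```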
> It simply describes the recognition of material as "AI-generated".
That works as a good definition for me. Whether or not you want to call it "slop", anything that helps to filter out AI-generated stuff could be helpful.
My only concern about this is that it seems to rely on user reporting, and if that reporting includes (mistakenly or otherwise) sites that don't have AI-generated content, that could make the tool less useful.
Further info at https://blog.kagi.com/slopstop indicates a key attribute is "deceptive".
I can see this being weaponised against controversial sites made by humans, with no way to prove they are human.
Reports being weaponised is a big issue with asymmetric (report-only) systems, but at least here there seems to be a “report as not slop” button.
“Symmetric” user reporting is dearly needed on some websites; as you say, something can be mass-reported with no real recourse.
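As a toy illustration of what symmetric reporting could look like: the "report as not slop" button is from the thread, but the net-score aggregation below is purely an assumption, not how Kagi says it works:

```python
from collections import Counter

# Purely illustrative: tally symmetric reports as a net score, so a
# mass-report brigade can be offset by "not slop" reports.
net_reports: Counter[str] = Counter()

def report(url: str, is_slop: bool) -> None:
    net_reports[url] += 1 if is_slop else -1

report("https://example.com/article", is_slop=True)
report("https://example.com/article", is_slop=True)
report("https://example.com/article", is_slop=False)
print(net_reports["https://example.com/article"])  # net score: 1
```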
There may be low-quality human slop, but AI slop is the most common slop on the internet.
I think this really needs to be framed as a "report low-quality content" feature, not a "report AI slop" feature. Otherwise, it just incentivizes people to hide their process, and it risks turning into a witch hunt where everything gets judged on whether it "looks AI" rather than whether it’s actually bad content.
I would disagree. I would never activate a feature that down-ranks or hides results based on some ominous judgement on quality by a Kagi team. For AI slop it's pretty easy to determine, because it is always low-quality, useless content that provides zero value and is a waste of time. The guidelines as specified also allow for some margin of error here, I would argue.
Just my take: I don’t think “AI” automatically equals “slop.” There’s plenty of human-made slop too, and some AI-assisted content is genuinely useful. I’d rather see this framed as “report low-value/spammy content” than “report AI slop,” since the AI label tends to turn into “this looks AI” witch-hunting. That said, our baseline assumptions seem pretty different here, so we probably won’t fully agree.
Previous discussion on the blog post: https://blog.kagi.com/slopstop (https://news.ycombinator.com/item?id=45919067)