I worked at FB for almost 2 years. (I left as soon as I could; I knew it wasn't a good fit for me.)
I took an Uber from campus one day, and my driver, a twenty-something girl, was asking how to become a moderator. I told her, "No amount of money would be enough for me to do that job. Don't do it."
I don't know if she eventually got the job, but I hope she didn't.
Yes, these jobs are horrible.
However, I do know from accidentally encountering bad stuff on the internet that you want to be as far away from a modern battlefield as possible.
It's just kind of ridiculous how people think war is like Call of Duty. One minute you're sitting in a trench, the next you're a pile of undifferentiated blood and guts. Same goes for car accidents and stuff. People really underestimate how fragile we are as human beings. Becoming aware of this is super damaging to our concept of normal life.
Watching someone you love die of cancer is also super damaging to one's concept of normal life. Getting a diagnosis, or being in a bad car accident, or the victim of a violent assault is, too. I think a personal sense of normality is nothing more than the state of mind where we can blissfully (and temporarily) forget about our own mortality. Obviously, marinating yourself in all the horrible stuff makes it really hard to maintain that state of mind.
On the other hand, never seeing or reckoning with or preparing for how brutal reality actually is can lead to a pretty bad shock once something bad happens around you. And maybe worse, can lead you to under-appreciate how fantastic and beautiful the quotidian moments of your normal life actually are. I think it's important to develop a concept of normal life that doesn't completely ignore that really bad things happen all around us, all the time.
There's a difference between a one-, two-, or even ten-off exposure to the brutality of life, where various people in your life will support you and help you acclimate to it
Versus straight up mainlining it for 8 hours a day
Hey kid, hope you're having a good life. I'll be looking at a screen full of the worst that humanity has produced on the internet for eight hours.
I get your idea but in the context of this topic I think you're overreaching
Actually reckoning with this stuff leads people into believing in anti-natalism, negative utilitarianism, Schopenhauer/Philipp Mainländer (Mainländer btw was not just pro-suicide, he actually killed himself!), and the voluntary extinction movement. This terrified other philosophers like Nietzsche, who spends most of his work defending reality even if it's absolute shit. "Amor Fati", "Infinite Regress/Eternal Recurrence", "Übermensch" vs the literal "Last Man". "Wall-E" of all films was the modern quintessential Nietzschean fable, with maybe "Children of Men" being the previous good one before that.
You're literally not allowed to acknowledge that this stuff is bad and adopt one of the religions that see this and try to remove suffering - i.e. Jainism - because at least historically doing so meant you couldn't use violence in any circumstances, which also meant that your neighbor would murder you. There's a reason the Jain population is in the low millions
Reality is actually bad, and it should be far more intuitive to folks. The fact that positive experience is felt "quickly" and negative experience is felt "slowly" was all the evidence I needed that I wouldn't just press the "instantly and painlessly and without warning destroy reality" (benevolent world-exploder) button, I'd smash it!
I felt this way for the first 30 years of my life. Then I received treatment for depression (psychoanalysis) and finally tasted joy for the first time in my entire life. Now I love life. YMMV
EDIT: If you're interested, what actually happened is that I was missing the prerequisite early childhood experience that enables one to feel secure in reality. If you check, all the people who have this feeling of philosophical/ontological pessimism have a missing or damaged relationship with the mother in the first year or so. For them, not even Buddhism can help, since even the abstract idea of anything good, even if it requires transcendence, is a joke
No it isn’t, it’s empirically justified, look it up. Hence why the state insurance here in Germany is willing to pay for me to go three times a week. It works
I'm not sure what point you are trying to make. I don't look up to Freud and psychoanalysis doesn't work for everyone! I don't even necessarily recommend it. It just worked for me and I realised that in my case the depression was a confused outlook conditioned by a certain situation.
My point really is that you can feel one way for your entire life and then suddenly feel a different way. I'm not suggesting psychoanalysis specifically. Perhaps for others, CBT or religion or just a change in life circumstances will be enough.
The fact that these philosophies are dependent on the life situation to me is a reason to be a little sceptical of their universality. In my personal experience, in those 30 years of my life, I thought everyone thought the way I did, that reality was painful and a chore and dark and dim. Psychoanalysis helped me realise that other people actually were happy to be alive, and understand why I have not been my entire life.
I’m not sure why people act coy when a straightforward mirroring of their own comment is presented. “What could this mean?” Maybe the hope is that the other person will bore the audience by explaining the joke?
> I don't look up to Freud and psychoanalysis doesn't work for everyone! I don't even necessarily recommend it.
Talking about your infant parental relationship as the be-all-end-all looks indistinguishable from that.
> > If you check, all the people who have this feeling of philosophical/ontological pessimism have a missing or damaged relationship with the mother in the first year or so.
> I'm not suggesting psychoanalysis specifically. Perhaps for others, CBT or religion or just a change in life circumstances will be enough.
Except for people who have “this feeling of philosophical/ontological pessimism”.
> > For them, not even Buddhism can help, since even the abstract idea of anything good, even if it requires transcendence, is a joke
Which must paint everyone who defends “suffering” in the Vedic sense. Since that was what you were replying to. (Saying that reality is suffering on-the-whole is not the same as “I’m depressed [, and please give me anecdotes about how you overcame it]”.)
> > The fact that these philosophies are dependent on the life situation to me is a reason to be a little sceptical of their universality. In my personal experience, in those 30 years of my life, I thought everyone thought the way I did, that reality was painful and a chore and dark and dim. Psychoanalysis helped me realise that other people actually were happy to be alive, and understand why I have not been my entire life.
I don't know how broad your brush is. But believing in the originally Vedic (Schopenhauer was inspired by Eastern religions, maybe Buddhism in particular) concept of "suffering" is not such a fragile intellectual framework that it collapses once you heal from the trauma of your mother scolding you while potty training at a crucial point in your Anal Stage of development.
> Worth noting that I trained formally in Buddhism under a teacher for a few years. I’m not unaware of all this
You trained personally for a few years and yet you make such sweeping statements/strokes that a neophyte is prompted to point out basic facts about this practice (apparently an adequate retelling since you don’t bother to correct me)? You might think this bolsters something (?) but I think the case is the opposite.
It helps to point out exactly what part you are talking about (apparently not the Vedic gang). In fact this initial reply (just the above paragraph before the edit) seemed so out of place. Okay, so what are they talking about?
> And the Vedic version of suffering is all full of love for reality, not wanting to delete it by smashing a button
Oh, so it’s about the small wish to commit biocide.
It’s a clear category error to talk about love/want/hate when it comes to that statement. Because that’s beside the point. The point is clearly the wrongheaded, materialistic assumption that suffering will end if all life would end by the press of a button. And if you think that life on the whole is suffering? Then pressing the button is morally permissible.
It’s got nothing to do with hate.
It seemed interesting to me that someone would have such a “Schopenhauer” (not that I have read him) view of existence. You don’t see that every day.
My comment was saying that this part was about ending suffering, not about wishing ill-will. I don’t understand what’s unclear.
> > Reality is actually bad, and it should be far more intuitive to folks. The fact that positive experience is felt "quickly" and negative experience is felt "slowly" was all the evidence I needed that I wouldn't just press the "instantly and painlessly and without warning destroy reality" (benevolent world-exploder) button, I'd smash it!
> This is coming off as incoherent rambling to me
You do like to gesture vaguely and tell me that "I don’t know what this is". Meanwhile I have pointed out at least one instance where you flat out just contradicted yourself on psychoanalysis. Or "incoherent rambling" (on psychoanalysis) if you will
Interesting to see this perspective here. You’re not wrong.
> There's a reason that Jain's population are in the low millions
The two largest Vedic religions both have hundreds of millions of followers. Is Jainism that different from them in this regard? I know Jainism is very pacifist, but is it really that different on the question of suffering?
Emergency personnel might need to brace themselves for car accidents every day. That Kenyans need to be traumatized by Internet Content in order to make a living is just silly and unnecessary.
Even the wording is wrong - those aren’t accidents, it is something we accept as byproduct of a car-centric culture.
People feel it is acceptable that thousands of people die on the road so we can go places faster.
Similarly they feel it’s acceptable to traumatise some foreigners to keep social media running.
1) I don't have squeamishness about trauma. In the end, we are all blood and tissue. The calls that get to me are the emotionally traumatic, the child abuse, domestic violence, elder abuse (which of course often have a physical component too, but it's the emotional for me), the tragic, often preventable accidents.
2) There are many people, and I get the curiosity, that will ask "what's the worst call you've been on?" - one, you don't really want to hear, and two, "Hey, person I may barely know, do you think you can revisit something traumatic for my benefit/curiosity?"
That’s an excellent way to put it, resonates with my (non medical) experience. It’s the emotional stuff that will try to follow me around and be intrusive.
I won’t watch most movies or TV because they are just some sort of tragedy porn.
What's interesting now is how many patients will say "You're not going to give me fentanyl are you? That's really dangerous stuff", etc.
That's their perfect right, of course, but it is sad that that's the public perception - it's extremely effective, and quite safe, used properly (for one, we're obviously only giving it from pharma sources, with actually properly dosed solutions for IV).
It's also super easy to come up with better questions: "What's the funniest call you've ever been on?" "What call do you feel like you made the biggest difference?" "What's the best story you have?"
I'm pretty sure watching videos on /r/watchpeopledie or rekt threads on 4chan has been a net positive for me. I'm keenly aware of how dangerous cars are, that wars (including narcowars) are hell, that I should never stay close to a bus or truck as a pedestrian or cyclist, that I should never get into a bar fight... And that I'm very very lucky that I was not born in the 3rd world.
I get more upset watching people lightly smack and yell at each other on public freakout than I do watching people die. It's not that I don't care about the dead either, I watched wpd and similar sites for years. I didn't enjoy watching it, but I liked knowing the reality of what was going on in the world, and how each one of us has the capacity to commit these atrocities. I'm still doing a lousy job at describing why I like to watch it. But I do.
One does not fully experience life until encountering the death of something one cares about. Be it a pet or a person, nothing gives you that real sense of reality until your true feelings are challenged.
I used to live in the Disney headspace until my dog had to be put down. Now, with my parents in their seventies and me in my thirties, I fear losing them the most, as the feeling of losing my dog was hard enough.
That's the tragic consequence of being human. Either the people you care about leave first or you do, but in the end, everyone goes. We are blessed and cursed with the knowledge to understand this. We should try to maximize the time we spend with those that are important to us.
Well, I think it goes to a point. I'd imagine there's some goldilocks zone of time spent with the animal, care experienced from the animal, dependence on the animal, and manner/speed of death/time spent watching the thing die.
I say animal to explicitly include humans. Finding my hamster dead in fifth grade did change me. But watching my mother slowly die a horrible, haunting death didn't make me a better person. I'm just saying that there's a spectrum that goes something like: easy to forget about, I'm able to not worry, sometimes I think about it when I don't want to, often I think about it, often it bothers me, and so on. You can probably imagine the cycle of obsession and stress.
This really goes for all traumatic experiences. There's a point where they can make you a better person, but there's a cliff after which you have no guarantees that it won't just start obliterating you and your life. It's still a kind of perspective. But can you have too much perspective? Lots of times I feel like I do
It's not that we're particularly fragile, given the kind of physical trauma human beings can survive and recover from.
It's that we have technologically engineered things that are destructive enough to get even past that threshold. Modern warfare in particular is insanely energetic in the most literal, physical way - when you measure the energy output of weapons in joules. Partly because we're just that good at making things explode, and partly because improvements in metallurgy and electronics made it possible over time to locate targets with extreme precision in real time and then concentrate a lot of firepower directly on them. This, in particular, is why the most intense battlefields in Ukraine often look worse than WW1 and WW2 battles of similar intensity (e.g. Mariupol had more buildings destroyed than Stalingrad).
But even our small arms deliver much more energy to the target than their historical equivalents. Bows and arrows pack ~150 J at close range, rapidly diminishing with distance. Crossbows can increase this to ~400 J. For comparison, an AK-47 firing standard issue military ammo is ~2000 J.
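If you want to sanity-check those joule figures, muzzle energy is just kinetic energy, so you can back it out from projectile mass and velocity. A quick sketch (the mass and velocity below are ballpark values for a standard 7.62x39mm round, not exact figures for any particular load):

    # Kinetic energy E = 1/2 * m * v^2, in joules
    def muzzle_energy(mass_kg: float, velocity_mps: float) -> float:
        return 0.5 * mass_kg * velocity_mps ** 2

    # AK-47 (7.62x39mm): roughly an 8 g bullet at ~715 m/s muzzle velocity
    print(round(muzzle_energy(0.008, 715)))  # ~2045 J, in line with the ~2000 J above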
Funny you mention crossbows; the Church at one point in time tried to ban them because they democratized violence to a truly trivial degree. They were the nuclear bombs and assault rifles of medieval times.
Also, I will take this moment to mention that the "problem" with weapons always seems to be how quickly they can kill rather than the killing itself. Kind of takes away from the discussion once that is realized.
Watch how a group of wild dogs kill their prey, then realise that for millennia human-like apes were part of their diet. Even the modern battlefield is more humane than the African savannah.
Yeah, I tracked a lost dog and found the place it was caught by wolves and eventually eaten. Terrible way to go. I get now why the owner was so desperate to find it, even without any hope of the dog surviving - I'd want to end it quicker for my dogs too if this happened to them.
> Humans can render other humans unrecognizable with a rock.
They are much less likely to.
We have instinctive repulsion to violence, especially extending it (e.g. if the rock does not kill at the first blow).
It is much easier to kill with a gun (and even then people need training to be willing to do it), and easier still to fire a missile at people you cannot even see.
Extreme violence then? With rocks, clubs of bare hands? I was responding to "render other humans unrecognizable with a rock" which I am pretty sure is uncommon in schools.
Not in public schools in the British sense. I assume it varies in public schools in the American sense, and I am guessing violence sufficient to render someone unrecognisable is pretty rare even in the worst of them.
You are discounting the complexity of the logistics required for an AK47 army. You need ammo, spare parts, lubricant and cleaning tools. You need a factory to build the weapon, and churn out ammunition.
Or, gather a group of people, tell them to find a rock, and go bash the other side's heads in.
Complexity of logistics applies to any large army. The single biggest limiting factor for most of history has been the need to either carry your own food, or find it in the field. This is why large-scale military violence requires states.
It should be noted that the purported advantages of AK action over its competitors in this regard are rather drastically overstated in popular culture. E.g. take a look at these two vids showing how AK vs AR-15 handle lots of mud:
As far as cleaning, AK, like many guns of that era, carries its own cleaning & maintenance toolkit inside the gun. Although it is a bit unusual in that regard in that this kit is, in fact, sufficient to remove any part of the gun that is not permanently attached. Which is to say, AK can be serviced in the field, without an armory, to a greater extent than most other options.
But the main reason why it's so popular isn't so much because of any of that, but rather because it's very cheap to produce at scale, and China especially has been producing millions of AKs specifically to dump them in Africa, Middle East etc. But where large quantities of other firearms are available for whatever reason, you see them used just as much - e.g. Taliban has been rocking a lot of M4 and M16 since US left a lot of stocks behind.
The only small arms cartridge plant that Ukraine had originally was in Luhansk, so it got captured even before 2022. It's only this year that they've got a new plant operational, but it produces both 5.45 and 5.56.
And Western supplies are mostly 5.56 for obvious reasons, although there are some exceptions - mostly countries that have switched fairly late and still have substantial stocks of 5.45, such as Bulgaria. But those are also limited in quantity.
So in practice it's not quite so simple, and Ukraine seems to be aiming for 5.56 as their primary cartridge long-term, specifically so that it's easier for Western countries to supply them with guns and ammo.
If you think the AKs in use in Russia and Ukraine aren’t getting regular maintenance, cleaning and spare parts, I don’t think you’re watching enough of the content coming out of the war zone.
Soldiering isn’t sexy, it’s digging trenches, cleaning kit, and eating concussive blasts waiting to fight or die.
You don’t sit in a bunker all day waiting to defend a trench and not clean your gun.
I spent my civil service as a paramedic assistant in the countryside, close to a mountain road that was very popular with bikers. I was never interested in motorbikes in the first place, but the gruesome accidents I've witnessed turned me off for good.
Yes, but you're also far less likely to kill other people on a motorcycle than in a car (and even less than in an SUV or pick-up truck). So some people live much less dangerously with respect to the people around them.
I don't mean to trivialize traumatic experiences but I think many modern people, especially the pampered members of the professional-managerial class, have become too disconnected from reality. Anyone who has hunted or butchered animals is well aware of the fragility of life. This doesn't damage our concept of normal life.
My brother, an Eastern-European part-time farmer and full-time lorry driver, just texted me a couple of hours ago (I had told him I would call him in the next hour) that he might be with his hands full of meat by that time as “we’ve just butchered our pig Ghitza” (those sausages and piftii aren’t going to get made by themselves).
Now, ask a laptop worker to butcher an animal that used to have a name and to literally turn its meat into sausages and see what said worker's reaction would be.
Laptop worker here. I have participated in / been present for the butchering of sheep and pigs and helped out making sausages a couple of times. It was fine. An interesting experience.
There is a lot of skill going into it, so I couldn't do it myself. You need the guidance of someone who is knowledgeable and has the proper tools and facilities for the job.
Lots of people who spend time working with livestock on a farm describe a certain acceptance and understanding of death that most modern people have lost.
In Japan, some sushi bars keep live fish in tanks that you can order to have served to you as sushi/sashimi.
The chefs butcher and serve the fish right in front of you, and because it was alive merely seconds ago the meat will still be twitching when you get it. If they also serve the rest of the fish as decoration, the fish might still be gasping for oxygen.
Japanese don't really think much of it, they're used to it and acknowledge the fleeting nature of life and that eating something means you are taking another life to sustain your own.
The same environment will likely leave most westerners squeamish or perhaps even gag simply because the west goes out of its way to hide where food comes from, even though that simply is the reality we all live in.
Personally, I enjoy meats respecting and appreciating the fact that the steak or sashimi or whatever in front of me was a live animal at one point just like me. Salads too, those vegetables were (are?) just as alive as I am.
Plenty of westerners are not as sheltered from their food as you. Have you never gone fishing and watched your catch die? Have you never boiled a live crab or lobster? You've clearly never gone hunting.
Not to mention the millions of Americans working in the livestock and agriculture business who see up close every day how food comes to be.
A significant portion of the American population engages directly with their food and the death process. Citing one gimmicky example of Asian culture where squirmy seafood is part of the show doesn't say anything about the culture of entire nations. That is not how the majority of Japanese consume seafood. It's just as anomalous there. You only know about it because it's unusual enough to get reported.
You can pick your lobster out of the tank and eat it at American restaurants too. Oysters and clams on the half-shell are still alive when we eat them.
>Plenty of westerners are not as sheltered from their food as you. ... You only know about it because it's unusual enough to get reported.
In case you missed it, you're talking to a Japanese person.
Some restaurants go a step further by letting the customers literally fish for their dinner out of a pool. Granted those restaurants are a niche, that's their whole selling point to customers looking for something different.
Most sushi bars have a tank holding live fish and other seafood of the day, though. It's a pretty mundane thing.
If I were to cook a pork chop in the kitchen of some of my middle eastern relatives they would feel sick and would probably throw out the pan I cooked it with (and me from their house as well).
Isn't this similar to why people unfamiliar with that style of seafood would feel sick -- cultural views on what is and is not normal food -- and not because of their view of mortality?
You're not grasping the point, for which I don't necessarily blame you.
Imagine that to cook that pork chop, the chef starts by butchering a live pig. Also imagine that he does that in view of everyone in the restaurant rather than in the "backyard" kitchen let alone a separate butchering facility hundreds of miles away.
That's the sushi chef butchering and serving a live fish he grabbed from the tank behind him.
When you can actually see where your food is coming from and what "food" truly even is, that gives you a better grasp on reality and life.
It's also the true meaning behind the often used joke that goes: "You don't want to see how sausages are made."
I grasp the point just fine, but you haven't convinced me that it is correct.
The issue most people would have with seeing the sausage being made isn't necessarily watching the slaughtering process but with seeing pieces of the animal used for food that they would not want to eat.
I wouldn't want to eat a cockroach regardless of whether I saw it being prepared or not. The point I am making is that 'feeling sick' and not wanting to eat something isn't about being disconnected from the food. Few people would care if you cut off a piece of steak from a hanging slab and grilled it in front of them, but would find it gross to pick up all the little pieces of gristle and organ meat that fell onto the floor, grind it all up, shove it into an intestine, and cook it.
> Few people would care if you cut off a piece of steak from a hanging slab
The analogy here would be watching a live cow get slaughtered and then butchered from scratch in front of you, which I think most Western audiences (more than a few) might not like.
A cow walks into the kitchen, it gets a captive bolt shoved into its brain with a person holding a compressed air tank. Its hide is ripped off and it is cut into two pieces with all of its guts on the ground and the flesh and bones now hang as slabs.
I am asserting that you could do all of that in front of a random assortment of modern Americans, and then cut steaks off of it and grill them and serve them to half of the crowd, and most of those people would not have a problem eating those steaks.
Then if you were to scoop up all the leftover, non-steak bits from the ground with shovels, throw it all into a giant meat grinder and then take the intestines from a pig, remove the feces from them and fill them with the output of the grinder, cook that and serve it to the other half of the crowd, then a statistically larger proportion of that crowd would not want to eat that compared to the ones who ate the steak.
> I am asserting that you could do all of that in front of a random assortment of modern Americans, and then cut steaks off of it and grill them and serve them to half of the crowd, and most of those people would not have a problem eating those steaks.
I am asserting that the majority of western audiences, including Americans, would dislike being present for the slaughtering and butchering portion of the experience you describe.
I'm 100% sure none of my colleagues would eat the steak if they could see the live cow get killed and skinned first. They wouldn't go to that restaurant to begin with and they'd lose their appetite entirely if they somehow made it there.
I probably also wouldn't want to eat that, but more because that steak will taste bad without being aged properly.
Most audiences wouldn’t like freshly butchered cow - freshly butchered meat is tough and not very flavorful, it needs to be aged to allow it to tenderize and develop.
That the point is being repeated to no effect ironically illustrates how most modern people (westerners?) are detached from reality with regards to food.
In the modern era, most of the things the commons come across have been "sanitized"; we do a really good job of hiding all the unpleasant things. Of course, this means modern-day commons have a fairly skewed "sanitized" impression of reality, and will get shocked awake if or when they see what is usually hidden (eg: butchering of food animals).
That you insist on contriving one zany situation after another instead of just admitting that people today are detached from reality illustrates my point rather ironically.
Whether it's butchering animals or mining rare earths or whatever else, there's a lot of disturbing facets to reality that most people are blissfully unaware of. Ignorance is bliss.
To be blunt, the way you express yourself on this topic comes off as very "enlightened intellectual." It's clear that you think that your views/assumptions are the correct view and any other view is one held by the "commons"; one which you can change simply by providing the poor stupid commons with your enlightened knowledge.
Recall that this whole thread started with your proposition that seeing live fish prepared in front of someone "will likely leave most westerners squeamish or perhaps even gag simply because the west goes out of its way to hide where food comes from, even though that simply is the reality we all live in." You had no basis for this as far as I can tell, it's just a random musing by you. A number of folks responded disagreeing with you, but you dismissed their anecdotal comments as being wrong because it doesn't comport with your view of the unwashed masses who are, obviously, feeble minded sheep who couldn't possibly cope with the realities of modern food production in an enlightened way like you have whereby you "enjoy meats respecting and appreciating the fact that the steak or sashimi or whatever in front of me was a live animal at one point just like me." How noble of you. Nobody (and I mean this in the figurative sense not the literal sense) is confused that the slab of meat in front of them was at one point alive.
Then you have the audacity to accuse someone of coming up with "zany" situations? You're the one that started the whole zany discussion in the first place with your own zany musings about how "western" "commons" think!
I grew up with my farmer grandpa who was a butcher, and I've seen him butcher lots of animals. I always have and probably always will find tongues & brains disgusting, even though I'm used to seeing how the sausage is made (literally).
Some things just tickle the brain in a bad way. I've killed plenty of fish myself, but I still wouldn't want to eat one that's still moving in my mouth, not because of ickiness or whatever, but just because the concept is unappealing. I don't think this is anywhere near as binary as you make it seem, really.
>> ridiculous how people think war is like Call of Duty.
It is also ridiculous how people think every soldier's experience is like Band of Brothers or Full Metal Jacket. I remember an interview with a WWII vet who had been on Omaha Beach: "I don't remember anything happening in slow motion ... I do remember eating a lot of sand." The reality of war is often just not visually interesting enough to put on the screen.
Earlier this year, I was at ground zero of the Super Bowl parade shooting. I didn’t ever dream about it, but I spent the following 3-4 days constantly replaying it in my head in my waking hours.
Later in the year I moved to Florida, just in time for Helene and Milton. I didn’t spend much time thinking about either of them (aside from during prep and cleanup and volunteering a few weeks after). But I had frequent dreams of catastrophic storms and floods.
Different stressors affect people (even myself) differently. Thankfully I’ve never had a major/long-term problem, but I know my reactions to major life stressors never seemed to have any rhyme or reason.
I can imagine many people might’ve been through a few things that made them confident they’d be alright with the job, only to find out dealing with that stuff 8 hours a day, 40 hours a week is a whole different ball game.
A parade shooting is bad, very bad, but is still tame compared to the sorts of things to which website moderators are exposed on a daily/hourly basis. Footage of people being shot is actually allowed on many platforms. Just think of all the war footage that is so common these days. The dark stuff that moderators see is way way worse.
It absolutely does not look the same. You instinctively know that what you see is just acting. I somehow don't believe that you have seen a real video of a person getting shot or beheaded or sucked into a lathe. Seeing a life getting wiped out is emotionally completely different because that's more than a picture you are emotionally processing. It looks only the same if you have zero empathy or are a psychopath.
I have often wondered what would happen if social product orgs required all dev and product team members to temporarily rotate through moderation a couple times a year.
I can tell you that back when I worked as a dev for the department building order fulfillment software at a dotcom, my perspective on my own product changed drastically after I spent a month at a warehouse that was shipping orders coming out of the software we wrote. Eating my own dog food was not pretty.
Many (all?) Japanese schools don't have janitors. Instead students clean on rotation. Never been much into Japanese stuff but I absolutely admire this about their culture, and imagine it's part of the reason that Japan is such a clean and at least superficially respectful society.
Living in other Asian nations where there are often defacto invisible caste systems can be nauseating at times - you have parents that won't allow their children to participate in clean up efforts because their child is 'above handling trash.' That's gonna be one well adjusted adult...
Perhaps this is what happens when someone creates a mega-sized website comprising hundreds of millions of pages using other peoples' submitted material, effectively creating a website that is too large to "moderate". By letting the public publish their material on someone else's mega-sized website instead of hosting their own, perhaps it concentrates the web audience to make it more suitable for advertising. Perhaps if the PTSD-causing material was published by its authors on the authors' own websites, the audience would be small, not suitable for advertising. A return to less centralised web publishing would perhaps be bad for the so-called "ad ecosystem" created by so-called "tech" company intermediaries. To be sure, it would also mean no one in Kenya would be intentionally subjected to PTSD-causing material in the name of fulfilling the so-called "tech" industry's only viable "business model": surveillance, data collection and online ad services.
It's a problem when you don't verify the identity of your users and hold them responsible for illegal things. If Facebook verified you were John D SSN 123-45-6789 they could report you for uploading CSAM and otherwise permanently block you from using the site if you uploaded objectionable material, meaning exposure to horrific things is only necessary once per banned user. I would expect that to be orders of magnitude less than what they deal with today.
A return to less centralized web publishing would also be bad for the many creators who lack the technical expertise or interest to jump through all the hoops required for building and hosting your own website. Maybe this seems like a pretty small friction to the median HN user, but I don't think it's true for creators in general, as evidenced by the enormous increase in both the number and sophistication of online creators over the past couple of decades.
Is that increase worth traumatizing moderators? I have no idea. But I frequently see this sentiment on HN about the old internet being better, framed as criticism of big internet companies, when it really seems to be at least in part criticism of how the median internet user has changed -- and the solution, coincidentally, would at least partially reverse that change.
Introduce a free unlimited hosting service where you could only upload pictures, text or video. There's a public page to see that content among ads and links to your friends' free hosting service pages. The TOS is a give-give: you give them the right to extract all the aggregated stats they want and display the ads, they give you the service for free so you own your content (and are legally responsible for it)
I'm wondering if there are precedents in other domains. There are other jobs where you do see disturbing things as part of your duty. E.g. doctors, cops, first responders, prison guards and so on...
What makes moderation different? And how should it be handled so that it reduces harm and risks? Surely banning social media or not moderating content aren't options. AI helps to some extent but doesn't solve the issue entirely.
I don’t have any experience with this, so take this with a pinch of salt.
What seems novel about moderation is the frequency with which you confront disturbing things. I imagine companies like Meta have such good automated moderation that what remains to be viewed by a human is practically a firehose of almost certainly disturbing shit. And as soon as you’re done with one post, the next is right there. I doubt moderators spend more than 30 seconds on the average image, which is an awful lot of stuff to see in one day.
A doctor just isn’t exposed to that sort of imagery at the same rate.
> I imagine companies like Meta have such good automated moderation that what remains to be viewed by a human is practically a firehose of almost certainly disturbing shit.
On the contrary, I would expect that it would be the edge cases that they were shown - why loop in a content moderator if you can be sure that it is prohibited on the platform without exposing a content moderator?
In this light, it might make sense why they sue: They are there more as a political org so that facebook can say: "We employ 140 moderators in Kenya alone!" while they do indifferent work that facebook already can filter out.
> They are there more as a political org so that facebook can say: "We employ 140 moderators in Kenya alone!" while they do indifferent work that facebook already can filter out.
Why assume they're just token diversity hires who don't do useful work..?
Have you ever built an automated content moderation system before? Let me tell you something about them if not: no matter how good your automated moderation tool, it is pretty much always trivial for someone with familiarity with its inputs and outputs to come up with an input it mis-predicts embarrassingly badly. And you know what makes the biggest difference? Humans specifying the labels.
I don't assume diversity hires, I assume that these people work for the Kenyan part of Facebook and that Facebook employs an equivalent workforce elsewhere.
I am also not saying that content moderation should catch everything.
What I am saying is that the content moderation teams should ideally decide on the edge cases as they are hard for automated system.
In turn that also means that these people ought not to be exposed to too hardcore material - as that is easier to classify.
Lastly I say that if that is not the case - then they are probably not there to carry out a function but to fill a political role.
Content moderation also involves reading text, so you’d imagine that there’s a benefit to having people who can label data and provide ground truth in any language you’re moderating.
Even with images, you can have different policies in different places or the cultural context can be relevant somehow (eg. some country makes you ban blasphemy).
Also, I have heard of outsourcing to Kenya just to save cost. Living is cheaper there so you can hire a desk worker for less. Don’t know where the insistence you’d only hire Kenyans for political reasons comes from.
Also a doctor is paid $$$$$ and it mostly is a vocational job
Content moderator is a min wage job with bad working hours, no psychological support, and you spend your day looking at rape, child porn, torture and executions.
How about grouping the jobs into two categories: A) Causes PTSD and B) Doesn't cause PTSD
If a job has a consistently high percentage of people ending up with PTSD, then they aren't being equipped well enough to handle it by the company that employs them.
>How about grouping the jobs into two categories: A) Causes PTSD and B) Doesn't cause PTSD
I fail to see how this addresses my previous questions of "it's purely a monetary dispute?" and "where do you draw the line?". If a job "Causes PTSD" (whatever that means), then what? Are you entitled to hazard pay? Does this work out in the end to a higher minimum wage for certain jobs? Moreover, we don't have similar classifications for other hazards, some of which are arguably worse. For instance, dying is probably worse than getting PTSD, but the most dangerous jobs have pay that's well below the national median wage[1][2]. Should workers in those jobs be able to sue for redress as well?
What could a company provide a police officer with to prevent PTSD from witnessing a brutal child abuse case? A number of sources I found estimate that up to ~30% of police officers may be suffering from it.
I wouldn't say purely, but substantially yes. PTSD has costs. The article lays some out: therapy, medication, mental, physical, and social health issues. Some of these money can directly cover, whereas others can only be kinda sorta justified with high enough pay.
I think a sustainable moderation industry would try hard to attract the kinds of people who are able to perform this job without too much negative impacts, and quickly relieve those who try but are not well suited, and pay for some therapy.
“I would imagine that companies like Meta have such good automated moderation that what remains to be viewed by a human is practically a firehose of almost certainly disturbing shit.”
This doesn’t make sense to me. Their automated content moderation is so good that it’s unable to detect “almost certainly disturbing shit”? What kind of amazing automation only works with subtleties but not certainties?
I assumed that, at the margin, Meta would prioritise reducing false negatives. In other words, they would prefer that as many legitimate posts are published as possible.
So the things that are flagged for human review would be on the boundary, but trend more towards disturbing than legitimate, on the grounds that the human in the loop is there to try and publish as many posts as possible, which means sifting through a lot of disturbing stuff that the AI is not sure about.
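A minimal sketch of the kind of routing I have in mind (the thresholds, and the idea that a single score drives everything, are made-up simplifications for illustration, not anything Meta has documented):

    # Hypothetical triage of a classifier's "how bad is this?" score.
    # The confident extremes never reach a person; the ambiguous middle
    # band, skewed toward "probably disturbing", is what moderators see.
    AUTO_ALLOW_BELOW = 0.2    # confident it's fine: publish without review
    AUTO_REMOVE_ABOVE = 0.98  # near-certain violation: remove automatically

    def route(score: float) -> str:
        if score < AUTO_ALLOW_BELOW:
            return "publish"
        if score > AUTO_REMOVE_ABOVE:
            return "remove"
        return "human_review"  # the queue that becomes a firehose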
There’s also the question of training the models - the classifiers may need labelled disturbing data. But possibly not these days.
However, yes, I expect the absolute most disturbing shit to never be seen by a human.
—
Again, literally no experience, just a guy on the internet pressing buttons on a keyboard.
>In other words, they would prefer that as many legitimate posts are published as possible.
They'd prefer that as many posts are published as possible, but they probably also don't mind some posts being removed if it means saving a buck. When Canada and Australia implemented a "link tax", they were happy to ban all news content to avoid paying it.
I watch surgery videos sometimes, out of fascination. It's not gore to me - sure it's flesh and blood but there is a person whose life is going to be probably significantly better afterwards. They are also not in pain.
I exposed myself to actual gore vids in the aughts and teens... That stuff still sticks with me in a bad way.
My understanding is that during surgery, your body is most definitely in pain. Your body still reacts as it would to any damage, but anesthetics block the pain signals from reaching the brain.
But there is a difference between someone making an effort healing someone else vs content with implications that something really disturbing happened that makes you lose faith in humanity.
I agree. But that might be comorbid with PTSD. It’s probably not good for you to be _that_ desensitised to this sort of thing.
I also feel like there’s something intangible regarding intent that makes moderation different from being a doctor. It’s hard for me to put into words, but doctors see gore because they can hopefully do something to help the individual involved. Moderators see gore but are powerless to help the individual, they can only prevent others from seeing the gore.
It's also the type of gore that matters. Some of the worst stuff I've seen wasn't the worst because of the visuals, but because of the audio. Hearing people begging for their life while being executed surely would feel different to even a surgeon who might be used to digging around in people's bodies.
Imagine if this becomes a specialized, remote job where one tele-operates the brain and blood scrubbing robot all workday long, accident after accident after accident. I am sure they'd get PTSD too; sure, sometimes it's just oil and coolant, but there's still a lot of body tissue involved.
Desensitization is only one stage of it. It's not permanent & requires dissociation from reality/humanity on some level. But that stuff is likely to come back and haunt one in some way. If not, it's likely a symptom of something deeper going on.
My guess is that's why it's after bulldozing hundreds of Palestinians, instead of 1 or 10s of them, that Israeli soldiers report PTSD.
If you haven't watched enough videos of the ongoing genocides in the world to realize this, it'll be a challenge to have a realistic take on this article.
> I imagine companies like Meta have such good automated moderation
I imagine that they have a system that is somewhere between shitty and non-functional. This is the company that will more often than not flag marketplace posts as "Selling animal", either completely at random or because the pretty obvious "from an animal free home" phrase is used.
If they can't get this basic text parsing correct, how can you expect them to correctly flag images with any real sense of accuracy?
A friend's friend is a paramedic and as far as I remember they can take the rest of the day off after witnessing a death on duty, and there's an obligatory consultation with a mental healthcare specialist. From reading the article, it seems like those moderators are seeing horrific things almost constantly throughout the day.
I've never heard of a policy like that for physicians and doubt it's common for paramedics. I work in an ICU and a typical day involves a death or resuscitation. We would run out of staff with that policy.
Maybe it's different in the US where ambulances cost money, but here in Germany the typical paramedic will see a wide variety of cases, with the vast majority of patients surviving the encounter. Giving your paramedic a day off after witnessing a death wouldn't break the bank. In the ICU or emergency room it would be a different story.
Ambulances cost money everywhere, it's just a matter of who is paying. Do we think paramedics in Germany are more susceptible to PTSD when patients die than ICU or ER staff, or paramedics anywhere?
Not in the sense that matters here: the caller doesn't pay (unless the call is frivolous), leading to more calls that are preemptive, overly cautious or for non-life-threatening cases. That behind the scenes people and equipment are paid for, and a whole structure to do that exists, isn't really relevant here
> Do we think paramedics in Germany are more susceptible to PTSD
No, we think that there are far more paramedics than ICU or ER staff, and helping them in small ways is pretty easy. For ICU and ER staff you would obviously need other measures, like staffing those places with people less likely to get PTSD or giving them regular counseling by a staff therapist (I don't know how this is actually handled, just that the problem is very different than the issue of paramedics)
I might have misremembered that, but I remember hearing the story. Now that I think about it, I think that policy was applied only after unsuccessful CPR attempts.
My friend has repeatedly mentioned his dad became an alcoholic due to what he saw as a paramedic. This was back in the late 80s, early 90s so not sure they got any mental health help.
There's a very good reason moderators are employed in far-away countries, where people are unlikely to have the resources to gain redress for the problems they have to deal with as a result.
Burnout, PTSD, and high turnover are also hallmarks of suicide hotline operators.
The difference? The reputable hotlines care a lot more about their employees' mental health, with mandatory breaks, free counseling, full healthcare benefits (including provisions for preventative mental health care like talk therapy).
Another important difference is that suicide hotlines are decoupled from the profit motive. As more and more users sign up to use a social network, it gets more profitable and more and more load needs to be borne by the human moderation team. But suicide and mental health risk is (roughly) constant (or slowly increasing with societal trends, not product trends).
There's also less of an incentive to minimize human moderation cost. In large companies, some directors view mod teams as a cost center that takes away from other ventures. In an organization dedicated only to suicide hotline response, a large share of the income (typically fundraising or donations) goes directly into the service itself.
In many states, pension systems give police and fire service sworn members a 20 year retirement option. The military has similar arrangements.
Doctors and lawyers can’t afford that sort of option, but they tend to embrace alcoholism at higher rates and collect ex-wives.
Moderation may be worse in some ways. All day, every day, you see depravity at scale. You see things that shouldn’t be seen. Some of it you can stop, some you cannot due to the nature of the rules.
I think banning social media isn’t an answer, but demanding changes to the algorithms to reduce engagement with high-risk content is key.
Outside of some specific cities, I can guarantee it. Even a busy Emergency Dept on Halloween night had only a small handful of bloody patients/trauma cases, and nothing truly horrific when I did my EMT rotation.
Trauma isn’t just a function of what you’ve experienced, but also of what control you had over the situation and whether you got enough sleep.
Being a doctor and helping people through horrific things is unlike helplessly watching them happen.
IIRC, PTSD is far more common among people with sleep disorders, and it’s believed that the lack of good sleep is preventing upsetting memories from being processed.
at least in the US, those jobs - doctors, cops, firefighters, first responders - are well compensated (not sure about prison guards), certainly compared to content moderators who are at the bottom of the totem pole in an org like FB
What does compensation have to do with it? Is someone who stares at thousands of traumatizing, violent images every day going to be less traumatized if they're getting paid more?
Yes, they will be much more able to deal with the consequences of that trauma than someone who gets a pittance to do the same thing. A low-wage peon won't even be able to afford therapy if they need it.
In my district, all the firefighters are volunteers (including me). Yeah, we deal with some crappy medical calls and sometimes deaths. It's nowhere near as dramatic as the non-first-responders in this thread seem to think.
I suspect what makes it different is the concentration of just the flagged content; that is what makes it especially traumatizing. Of course there is probably a bell curve of sorts for "experiences" vs "level of personal trauma". One incident might be enough for someone "weaker" to develop PTSD. Not a slight on the afflicted, just how things are.
Casual Facebook viewers may stumble across something disturbing on it, but they certainly don't get PTSD at the rate of the poor moderators. Likewise the professionals usually have their own professional exposure levels to messed up stuff. Meanwhile child pornography investigation departments who have to catalogue the evidence are notorious for suffering poor mental health even with heavy measures taken.
There is already the 'blacklist hash' approach to known bad images which can help reduce exposure. So they don't all need to be exposed to say the same cartel brutal execution video, the bot takes care of it. I don't know anything about Facebook's internal practices but I would presume they already are doing this or similar with their tech hiring base.
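As a rough sketch of that lookup (real systems use perceptual hashes such as PhotoDNA so that re-encoded or lightly edited copies still match; plain SHA-256 here is only to show the shape of the check, and the function name is made up):

    import hashlib

    def is_known_bad(data: bytes, blocklist: set[str]) -> bool:
        # Exact-match lookup against hashes of files already reviewed and banned.
        # A hit can be auto-removed without any human ever seeing the content.
        return hashlib.sha256(data).hexdigest() in blocklist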
Dilution is likely the answer for how to make it more palatable and less traumatizing. Keep the really-bad-stuff exposure at similar proportions to what other careers experience. Not having 'report button' and 'checking popular content' as separate tasks and teams would probably help a little bit. I suspect the moderators wouldn't be as traumatized if they just had to click through trending posts all day. A dilution approach would still have to deal with the logistical trade-offs for what could be viable. Increasing the moderation payroll a hundred-fold and making them work at effectively 1% efficiency would make for better moderator experiences, but Facebook would be understandably reluctant to go from spending 5% of revenue on content moderation to 50%.
From those I know that worked in the industry, contractor systems are frequently abused to avoid providing the right level of counseling/support to moderators.
I think part of it is the disconnection from the things you're experiencing. A paramedic or firefighter is there, acting in the world, with a chance to do good and some understanding of how things can go wrong. A content moderator is getting images beamed into their brain that they have no preparation for, of situations that they have no connection to or power over.
A content moderator for Facebook will invariably see more depravity and more frequently than a doctor or police officer. And likely see far less support provided by their employers to emotionally deal with it too.
This results in a circumstance where employees have neither the time nor the tools to process it.
As other sibling comments noted: most other jobs don't have the same frequent exposure to disturbing content. The closest are perhaps combat medics in an active warzone, but even they usually get some respite by being rotated.
Would we really be better served with media returning to being un-interactive and unresponsive? Where just getting on TV was something of note instead of everyone being on the internet. Where there was widespread downright cultish obsession with celebrities. The "We interrupt this news live from Iraq for celebrity getting out of prison news" era.
I think not. The gatekeepers of the old media pretty much died for a reason, that they seriously sucked at their job. Open social media and everyone having a camera in their pocket is what allowed us to basically disprove UFO sightings and prove routine police abuse of power.
Billions of people use them daily (facebook, instagram, X, youtube, tiktok...). Surely we could live without them like we did not long ago, but there's so much interest at play here that I don't see how they could be banned. It's akin to shutting down internet.
The Kenyan moderators' PTSD reveals the fundamental paradox of content moderation: we've created an enterprise-grade trauma processing system that requires concentrated psychological harm to function, then act surprised when it causes trauma. The knee-jerk reaction of suggesting AI as the solution is, IMO, just wishful thinking - it's trying to technologically optimize away the inherent contradiction of bureaucratized thought control. The human cost isn't a bug that better process or technology can fix - it's the inevitable result of trying to impose pre-internet regulatory frameworks on post-internet human communication that large segments of the population may simply be incompatible with.
Any idea what our next steps are? It seems like we either stop the experiment of mass communication, try to figure out a less damaging knowledge-based filtering mechanism (presently executed by humans), or throw open the flood gates to all manner of trauma-inducing content and let the viewer beware.
> Any idea what our next steps are? [..] try to figure out a less damaging knowledge-based filtering mechanism [..]
It should cost some amount of money to post anything online on any social media platform: pay to post a tweet, article, image, comment, message, reply.
(Incidentally, crypto social networks have this by default simply due to constraints in how blockchains work.)
This is a great idea to prevent bots, but that’s not who posts the bad stuff this thread is talking about. Wherever you set the threshold will determine a point of wealth where someone can no longer afford to speak on these platforms, and that inevitably will prevent change, which tends to come from the people not well-served by the system as it is, i.e. poor people. Is that your goal?
> a point of wealth where someone can no longer afford to speak on these platforms, and that inevitably will prevent change, which tends to come from the people not well-served by the system as it is, i.e. poor people.
"Change" in itself is not a virtue. What I think you want is good or beneficial change? That said, what evidence do you have that poor people specifically are catalysing positive change online?
> This is a great idea to prevent bots, but that’s not who posts the bad stuff this thread is talking about.
There is no difference between a bot and a human as far as a network is concerned. After all, bots are run by humans.
The article specifically says that: "The images and videos including necrophilia, bestiality and self-harm caused some moderators to faint, vomit, scream and run away from their desks, the filings allege."
> Is that your goal?
Simply making it cost something to post online will mean that people who want to post spam can directly pay for the mental healthcare of moderators who remove their content.
If it turns out that you can find a group of people so poor that they simultaneously have valuable things to say online yet can't afford to post, then you can start a non-profit or foundation to subsidize "poor people online". (Hilariously, the crypto-bros do this when they're trying to incentivize use of their products: they set aside funds to "sponsor" thousands of users to the tune of tens of millions of dollars a year in gas refunds, airdrops, rebates and so forth.)
> "Change" in itself is not a virtue. What I think you want is good or beneficial change? That said, what evidence do you have that poor people specifically are catalysing positive change online?
We would probably disagree on what change we think is beneficial, but in terms of catalyzing the changes I find appealing, I see plenty of it myself. I'm not sure how I could dig up a study on something like this, but I'm operating on the assumption that more poor people would advocate the changes I'm interested in than rich, because the changes I want would largely be intended to benefit the former, potentially at the expense of the latter. I see this assumption largely confirmed in the world. That's why I find the prospect of making posting expensive threatening to society's capacity for beneficial change. The effect depends on what model you use to price social media use, how high you set the prices, how you regulate the revenue, etc, but I think the effect needs to be mitigated. In essence, my primary concern with this idea is that it may come from an antidemocratic impulse, not a will to protect moderators. If you don't possess that impulse, then I'm sorry to be accusing you of motives you don't possess, and I'll largely focus on the implementation details that would best protect the moderators while mitigating the suppression of discourse.
>you can start a non-profit or foundation to subsidize "poor people online".
Where are all the foundations helping provide moderator mental health treatment? This is a pretty widely reported issue; I'd expect to see wealthy benefactors trying to solve it, yet the problem remains unsolved. The issue, I think, is that there isn't enough money or awareness to go around to solve all niche financially-addressable problems. Issues have to have certain human-interest characteristics, then be carefully and effectively framed, to attract contributions from regular people. As such, I wouldn't want to artificially create a new problem, where poverty takes away basically the only meaningful voice a regular person has in the modern age, then expect somebody to come along and solve it with a charitable foundation. Again, if charity is this effective, then let's just start a foundation to provide pay and care to moderators. Would it attract contributions?
>the crypto-bros do this when they're trying to incentivize use of their products
The crypto-bros trying to incentivize use of their products have a financial incentive to do so. They're not motivated by the kindness of their own hearts. Where's the financial incentive to pay for poor people to post online?
>There is no difference between a bot and a human as far as a network is concerned. After all, bots are run by humans.
Most implementations of this policy would largely impact bot farms. If posts cost money, then there's a very big difference in the cost of a botnet and a normal account. Costs would be massively higher for a bot farm runner, and relatively insubstantial for a normal user. Such a policy would then most effectively suppress bots, and maybe the most extreme of spammers.
What I don't understand, then, is the association between bots/spammers and the shock garbage harming moderators. From what I know, bots aren't typically trying to post abuse, but to scam or propagandize, since they're run by actors either looking for a financial return or to push an agenda. If the issue is spammers, then I'd question whether that's the cause of moderator harm; I'd figure as soon as a moderator sees a single gore post, the account would get nuked. We should expect then that the harm is proportionate to the number of accounts, not posts.
If the issue is harmful accounts in large quantity, and easy account creation, then to be effective at reducing moderator harm, wouldn't you want to charge a large, one-time fee at account creation? If it costs ten dollars to make an account, bad actors would (theoretically) be very hesitant to get banned (even though in practice this seems inadequate to, e.g., suppress cheating in online games). I'd also be relatively fine with such a policy; nearly anyone could afford a single 5-10 usd fee for indefinite use, but repeat account creators would be suppressed.
>Simply making it cost something to post online will mean that people who want to post spam can directly pay for the mental healthcare of moderators who remove their content.
I don't think adding a cost to posts will end up paying for mental healthcare without careful regulation. The current poor treatment of moderators is a supply-demand issue: it's a relatively low-skill job and people are hungry, so you can treat them pretty badly and still have a sufficient workforce. They are also, if I'm correct, largely outsourced from places with worse labor protections. This gives the social media companies very little incentive to pay them more or treat them better.
An approach that might help is something like this: require companies to charge a very small set amount for each individual post, such that a normal user might pay around 5 usd in a month of use, but a spammer or bot farm would have to spend vastly more. Furthermore, and this is important, require that this additional revenue be spent directly on the pay or healthcare of the moderation team.
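To make the asymmetry concrete, here's a back-of-the-envelope sketch; the fee, posting rates and account counts are all made-up assumptions, not anything any platform actually charges:

```python
# Back-of-the-envelope sketch of a flat per-post fee.
# All figures are illustrative assumptions, not real platform numbers.

FEE_PER_POST = 0.05  # hypothetical flat fee in USD

def monthly_cost(posts_per_day: float, days: int = 30) -> float:
    """Total monthly fee for an account posting at a given rate."""
    return posts_per_day * days * FEE_PER_POST

normal_user = monthly_cost(posts_per_day=3)            # ~$4.50/month
power_user  = monthly_cost(posts_per_day=20)           # ~$30/month
bot_farm    = monthly_cost(posts_per_day=500) * 1000   # 1,000 accounts: ~$750,000/month

print(f"normal user: ${normal_user:,.2f}/month")
print(f"power user:  ${power_user:,.2f}/month")
print(f"bot farm (1k accounts @ 500 posts/day): ${bot_farm:,.2f}/month")
```

The point of the sketch is only that a fee small enough to be noise for a normal user becomes ruinous at bot-farm volume.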
In reality, though, I'd be very worried that this secondary regulation wouldn't enter or make it through a legislature. I'm also concerned that the social media companies would be the ones setting the prices. If such a cost became the norm, I expect that these companies would implement the cost-to-post as a subscription to the platform rather than a per-post price. They would immediately begin to inflate their prices as every subscription-based company currently does to demonstrate growth to shareholders. Finally, they'd pocket the gains rather than paying more to the moderators, since they have absolutely zero incentive to do anything else. I think this would cause the antidemocratic outcomes I'm concerned with.
My question for you, then, is whether you'd be interested in government regulation that implements a flat per-post or per-account-creation fee, not much more than 5usd monthly or 10usd on creation, not adjustable by the companies, and with the requirement that its revenue be spent on healthcare and pay for the moderation team?
Your reply is rather long so I'll only respond to 2 sections to avoid us speculating randomly without actually referring to data or running actual experiments.
To clarify:
> That's why I find the prospect of making posting expensive threatening to society's capacity for beneficial change.
I suggested making it cost something. "Expensive" is a relative term and for some reason you unjustifiably assumed that I'm proposing "expensive", however defined. Incentive design is about the marginal cost of using a resource, as you later observed when you suggested $5.
We often observe in real life (swimming pools, clubs, public toilets, hiking trails, camping grounds) that introducing a trivial marginal cost often deters bad actors and freeloaders[^0]. It's the dynamic referred to in ideas such as "the tragedy of the commons".
> An approach that might help is something like this: Require companies to charge a very small set amount to make each individual post
Yes that's a marginal cost, which is what I suggested. So basically, we agree. The rest is implementation details that will depend on jurisdiction, companies, platforms and so forth.
> I don't think that adding a cost to the posts will end up paying for mental healthcare without careful regulation.
Without data or case studies to reference, I can't speculate about that and other things that are your opinions but thank you for thinking about the proposal and responding.
> Where are all the foundations helping provide moderator mental health treatment? This is a pretty widely reported issue; I'd expect to see wealthy benefactors trying to solve it, yet the problem remains unsolved.
I don't mean to sound rude but have you tried to solve the problem and start a foundation? Why is it some mysterious wealthy benefactor or other people who should solve it rather than you who cares about the problem? Why do you expect to see others and not yourself, solving it?
Raising funds from wealthy people for causes is much easier than people imagine.
But this would necessarily block out the poorest voices. While one might say it's fine to block neo-Nazi rednecks, there are other poor people out there voicing valid claims.
> The images and videos including necrophilia, bestiality and self-harm caused some moderators to faint, vomit, scream and run away from their desks, the filings allege.
You might have heard the saying, common in policy and mechanism design: "Show me the incentives, and I'll show you the outcome."
If you want to reduce spam, you increase the marginal cost of posting spam until it stops. In general if you introduce a small cost to any activity or service, the mere existence of the cost is often a sufficient disincentive to misuse.
But, you can think through implications for yourself, no? You don't need me to explain how to think about cause and effect? You can, say, think about examples in real life, or in your own town or building, where a service is free to use compared to one that has a small fee attached, and look at who uses what and how.
Reducing the sheer volume is still a critically important step.
You're right that fundamentally there's an imbalance between the absolute mass of people producing the garbage, and the few moderators dealing with it. But we also don't have an option to just cut everyone's internet.
Designing platforms and business models that inherently produce less of the nasty stuff could help a lot. But even if/when we get there, we'll need automated mechanisms to ask people whether they really want to be jerks or absolutely need to send their dick pics, and to let people deal with outright crime pics without having to look at them for more than two seconds.
One of the unfortunate realities is that sometimes you need to be exposed to how grim reality can be, because the alternative is living in a delusional bubble. However, one of the underlying points I was getting at is that what counts as 'acceptable exposure' is often highly politicized, simply because the control being attempted is absolute and all-encompassing. To me, it comes across as overtly paternalistic, especially once you start looking at the contradictions of 'good bad' vs 'bad bad' and why it is the way it is. I find it disappointing that we aren't allowed to self-censor; even if we wanted to, the tools necessary to empower people to make their own decisions at the point of consumption simply don't exist. Instead we employ filtering at the point of distribution, which shifts the burden of decisions onto the platform, and we enact laws that concentrate that power into an even more limited number of hands.
Worked at PornHub's parent company for a bit and the moderation floor had a noticeable depressive vibe. Huge turnover. Can't imagine what these people were subjected to.
You don't mention the year(s), but I recently listened to Jordan Peterson's podcast episode 503. One Woman’s War on P*rnhub | Laila Mickelwait.
I'll go ahead and assume this was during the wild/carefree era of PornHub, when anyone could upload anything and everything; from what that lady said, pedophilia videos, bestiality, etc. were rampant.
> You don't mention the year(s), but I recently listened to Jordan Peterson's podcast episode 503. One Woman’s War on P*rnhub | Laila Mickelwait.
Laila Mickelwait is a director at Exodus Cry, formerly known as Morality in Media (yes, that's their original name). Exodus Cry/Morality in Media is an explicitly Christian organization that openly seeks to outlaw all forms of pornography, in addition to outlawing abortion and many gay rights including marriage. Their funding comes largely from right-wing Christian fundamentalist and fundamentalist-aligned groups.
Aside from the fact that she has an axe to grind, both she (as an individual) and the organization she represents have a long history of misrepresenting facts or outright lying to support their agenda. They also intentionally and openly refer to all forms of sex work (from consensual pornography to stripping to sexual intercourse) as "trafficking", against the wishes of survivors of actual sex trafficking, who have extensively documented why Exodus Cry actually perpetuates harm against sex trafficking victims.
> everything, from what that lady said, the numbers of pedophilia videos, bestiality, etc. was rampant.
This was disproven long ago. Pornhub was actually quite good about proactively flagging and blocking CSAM and other objectionable content.
Ironically (although not surprisingly, if you're familiar with the industry), Facebook was two to three orders of magnitude worse than Pornhub.
But of course, Facebook is not targeted by Exodus Cry because their mission - as you can tell by their original name of "Morality in Media" - is to ban pornography on the Internet, and going after Facebook doesn't fit into that mission, even though Facebook is actually way worse for victims of CSAM and trafficking.
I have a throwaway Facebook account. In the absence of any other information as to my interests, Facebook thinks I want to see flat earth conspiracy theories and CSAM.
When I report the CSAM, I usually get a response that says "we've taken a look and found that this content doesn't go against our Community Standards."
Yeah, it was during that time, before the great purge. It's not just sexual depravity, people used that site to host all kinds of videos that would get auto-flagged anywhere else (including, the least of it, full movies).
> The moderators from Kenya and other African countries were tasked from 2019 to 2023 with checking posts emanating from Africa and in their own languages but were paid eight times less than their counterparts in the US, according to the claim documents
Why would pay in different countries be equivalent? Pretty sure FB doesn’t even pay the same to their engineers depending on where in the US they are, let alone which country. Cost of living dramatically differs.
Some products have factories in multiple countries. For example, Teslas are produced in both US and China. The cars produced in both countries are more or less identical in quality. But do you ever see that the market price of the product is different depending on the country of manufacture?
If the moderators in Kenya are providing the same quality labor as those from the US, why the difference in price of their labor?
I have a friend who worked for FAANG and had to temporarily move from US to Canada due to visa issues, while continuing to work for the same team. They were paid less in Canada. There is no justification for this except that the company has price setting power and uses it to exploit the sellers of labor.
A million things factor into market dynamics. I don’t know why this is such a shocking or foreign concept. Why is a waitress in Alabama paid less than in San Francisco for the same work? It’s a silly question because the answers are both obvious and complex.
Because that’s the only reason why anyone would hire them. If you’ve ever worked with this kind of contract workforce they aren’t really worth it without massive cost-per-unit-work savings. I suppose one could argue it’s better that they be unemployed than work in this job but they always choose otherwise when given the choice.
Because people chose to take the jobs, so presumably they thought it was fair compensation compared to alternatives. Unless there's evidence they were coerced in some way?
Note that I'm equating all jobs here. No amount of compensation makes it worth seeing horrible things. They are separate variables.
No amount? So you wouldn't accept a job to moderate Facebook for a million dollars a day? If you would, then surely you would also do it for a lower number. There is an equilibrium point.
Sorry, but I don't believe you. You could work for a month or two and retire. Or hell, just do it for one day and then return to your old job. That's a cool one mill in the bank.
> work for a month or two and retire --> This is a dream for many, but there exists a set of people who really like their job and have no intention of retiring.
> just do it for one day and then return to your old job. --> A cool mill in the bank, and dreadful images in your head. Perhaps Apitman feels he has enough cash and won't be happier with a million (more?).
Also, your point is true, but it ignores that Facebook has no interest in raising that number. I guess it was more a theoretical reflection than an argument about concrete economics.
You haven't actually explained why it's bad, only slapped an evil sounding label on it. What's "exploitative" in this case and why is it morally wrong?
>they're imprisoned within borders
What's the implication of this then? That we remove all migration controls?
Of course. Not all at once, but gradually over time like the EU has begun to do. If capital and goods are free to move, then so must labor be. The labor market is very far from free if you think about it.
If that's the case then there can also be no ethical employment, either, both for employer and for employee. So that would seem to average out to neutrality.
This is precisely the sort of situation where taking the average is an awful way to ignore injustice - the poor get much poorer and the rich get much richer but everything is ‘neutral’ on average.
“There is no ethical X under capitalism” is not license to stick our heads in the sand and continue to consume without a second thought for those who are being exploited. It’s a reminder that things need to change, not only in all the little tiny drop-in-a-bucket ways that individuals can afford to contribute.
Exactly. It means that we must continue to act ethically within the system that is the way it is now, which we must accept, while at the same time doing our best to change that system for the better. It's a "why not both" situation.
>This is precisely the sort of situation where taking the average is an awful way to ignore injustice - the poor get much poorer and the rich get much richer but everything is ‘neutral’ on average.
That has nothing to do with the ethics of capitalism, though. The poor becoming poorer and the rich becoming richer is not a foregone conclusion of a capitalist society, nor is it guaranteed not to happen in a non-capitalist society.
That's only exploitation if you combine it with the fact of the enclosure of the commons: all land and productive equipment on Earth is private or state property, and it's virtually impossible to just go farm or hunt for yourself without being fucked with anymore, let alone do anything more advanced without being shut down violently.
>the enclosure of the commons and that all land and productive equipment on Earth is private or state property and that it's virtually impossible to just go farm or hunt for yourself without being fucked with anymore, let alone do anything more advanced without being shut down violently.
How would land allocation work without "enclosure of the commons"? Does it just become a free-for-all? What happens if you want to use the land for grazing but someone else wants it for growing crops? "enclosure of the commons" conveniently solves all these issues by giving exclusive control to one person.
Elinor Ostrom covered this extensively in her Nobel Prize-winning work if you are genuinely interested. Enclosure of the commons is not the only solution to the problems.
That's actually an interesting question. I would love to see some data on whether it really is impossible for the average person to live off the land if they wanted to.
An adjacent question is whether there are too many people on the planet for that to be an option anymore even if it were legal.
Probably way, way over the line. Population sizes exploded after the agricultural revolution. I wouldn't be surprised if the maximum is like 0.1-1% of the current population. If we're talking about strictly eating what's available without any cultivation at all, nature is really inefficient at providing for us.
They should probably hire more part time people working one hour a day?
Btw, it’s probably a different team handling copyright claims, but my run-in with Meta’s moderation gives me the impression that they’re probably horrifically understaffed. I was helping a Chinese content creator friend take down Instagram, YouTube and TikTok accounts re-uploading her content and/or impersonating her (she doesn’t have any presence on these platforms and doesn’t intend to). Reported to TikTok twice, got it done once within a few hours (I was impressed) and once within three days. Reported to YouTube once and it was handled five or six days later. No further action was needed from me after submitting the initial form in either case. Instagram was something else entirely; they used Facebook’s reporting system, the reporting form was the worst, it asked for very little information upfront but kept sending me emails afterwards asking for more information, then eventually went radio silent. I sent follow-ups asking about progress; again, radio silence. The impersonation account with outright stolen content is still up to this day.
Conversely, those who are subjected to harsh conditions often develop a cynical view of humanity, one lacking empathy, which also perpetuates the same harsh conditions. It's almost like protection and subjection aren't the salient dimensions, but rather there is some other perspective that better explains the phenomenon.
I tend to agree with growth through realism, but people often have the means and ability to protect themselves from these horrors. I'm not sure you can systemically prevent this without resorting to Big Brother shoving propaganda in front of people and forcing them to consume it.
Had to scroll a lot to find this. I do believe that moderators in less safe countries have seen a lot in their lives. You'd think that would make them less vulnerable to this kind of exposure, but apparently it doesn't.
Seeing too much does cause PTSD. All I'm arguing is that some people live in a fantasy world where bad things don't happen, so they end up voting for ridiculous things.
Isn't this a reason for Meta to outsource to countries where people are somewhat immune to these first-world problems? Aside from corruption and workforce neglect?
I’m wondering if, like watching a horror movie from behind a blanket, getting a moderately blurred copy of an image would reduce the emotional punch of highly inappropriate pictures. Or just a scaled-down tiny version.
If it’s already clearly bad when blurred or as a thumbnail, you never have to click through to the real thing.
This is more or less how police do CSAM classification now. They start with thumbnails, and that's usually enough to determine whether the image is a photograph or an illustration, involves penetration, sadism etc without having to be confronted with the full image.
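A minimal sketch of that blur-first workflow using Pillow; the thumbnail size and blur radius are arbitrary assumptions, not values any real moderation tool is known to use:

```python
# Minimal sketch: pre-process an image so a moderator sees a blurred
# thumbnail first, and only opens the original on explicit request.
# Uses Pillow; size and radius values are arbitrary assumptions.
from PIL import Image, ImageFilter

def make_preview(path: str, out_path: str,
                 max_size=(256, 256), blur_radius: float = 8.0) -> None:
    img = Image.open(path)
    img.thumbnail(max_size)                                   # scale down in place
    img = img.filter(ImageFilter.GaussianBlur(blur_radius))   # soften detail
    img.save(out_path)

# Moderator workflow: classify from preview.jpg; open original.jpg
# only if the preview is genuinely ambiguous.
make_preview("original.jpg", "preview.jpg")
```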
We’re talking about Facebook here. You shouldn’t have the assumption that the platform should be “uncensored” when it clearly is not.
Furthermore, I’d rather have the picture of my aunt’s vacation taken down by an AI mistake than hundreds of people getting PTSD because they have to manually review whether some decapitation was real or illustrated on an hourly basis.
Currently content is flagged and moderators decide whether to take it down. Using AI, it's easy to conceive of a process where some uploaded content is pre-flagged and requires an appeal (otherwise it's the same as before, with a pair of human eyes automatically looking at uploaded material).
Uploaders trying to publish rule-breaking content would not bother with an appeal that would reject them anyway.
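A hedged sketch of that pre-flagging flow; the threshold, field names and the classifier itself are stand-ins I've made up, not any platform's actual pipeline:

```python
# Sketch of the pre-flagging flow described above: an automated classifier
# blocks high-confidence violations up front, and human moderators only see
# items whose uploaders actually appeal. The score is a stand-in for
# whatever model the platform uses.
from dataclasses import dataclass

AUTO_BLOCK_THRESHOLD = 0.95  # assumption: tune to taste

@dataclass
class Upload:
    content_id: str
    score: float        # classifier's estimate that the content breaks the rules
    appealed: bool = False

def route(upload: Upload) -> str:
    if upload.score >= AUTO_BLOCK_THRESHOLD:
        # Pre-flagged: stays down unless the uploader appeals.
        return "human_review" if upload.appealed else "blocked_pending_appeal"
    # Low-confidence content is published and handled by the usual
    # report-driven moderation path.
    return "published"
```

Rule-breaking uploaders mostly won't bother appealing, so the human queue shrinks to the genuinely contested cases.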
Because edge cases exist, and it isn't worth it for a company to hire enough staff to deal with them when one user with a problem, even if that problem is highly impactful to their life, just doesn't matter when the user is effectively the product and not the customer. Once the AI works well enough, the staff is gone and the cases where someone's business or reputation gets destroyed because there are no ways to appeal a wrong decision by a machine get ignored. And of course 'the computer won't let me' or 'I didn't make that decision' is a great way for no one to ever have to feel responsible for any harms caused by such a system.
This and social media companies in the EU tend to just delete stuff because of draconian laws where content must be deleted in 24 hours or they face a fine. So companies would rather not risk it. Moderators also only have a few seconds to decide if something should be deleted or not.
I already addressed this and you're talking over it. Why are you making the assumption that AI == no appeal and zero staff? That makes zero sense, one has nothing to do with the other. The human element comes in for appeal process.
> I already addressed this and you're talking over it.
You didn't address it, you handwaved it.
> Why are you making the assumption that AI == no appeal and zero staff?
I explicitly stated the reason -- it is cheaper and it will work for the majority of instances while the edge cases won't result in losing a large enough user base that it would matter to them.
I am not making assumptions. Google notoriously operates in this fashion -- for instance unless you are a very popular creator, youtube functions like that.
> That makes zero sense, one has nothing to do with the other.
Cheaper, mostly works, and the losses from people leaving are smaller than the money saved by removing support staff: that makes perfect sense, and the two things are related to each other like identical twins are related to each other.
> The human element comes in for appeal process.
What does a company have to gain by supplying the staff needed to listen to the appeals when the AI does a decent enough job 98% of the time? Corporations don't exist to do the right thing or to make people happy, they are extracting value and giving it to their shareholders. The shareholders don't care about anything else, and the way I described returns more money to them than yours.
> I am not making assumptions. Google notoriously operates in this fashion -- for instance unless you are a very popular creator, youtube functions like that.
Their copyright takedown system has been around for many years and wasn't contingent on AI. It's a "take-down now, ask questions later" policy to please the RIAA and other lobby groups. Illegal/abuse material doesn't profit big business, their interest is in not having it around.
You deliberately conflated moderation & appeal process from the outset. You can have 100% AI handling of suspect uploads (for which the volume is much larger) with a smaller staff handling appeals (for which the volume is smaller), mixed with AI.
Frankly if your hypothetical upload is still rejected after that, it 99% likely violates their terms of use, in which case there's nothing to say.
> it is cheaper
A lot of things are "cheaper" in one dimension irrespective of AI, doesn't mean they'll be employed if customers dislike it.
> the money saved by removing support staff makes perfect sense and the two things are related to each other like identical twins are related to each other.
It does not make sense to have zero staff as part of managing an appeal process (which exists precisely to deal with edge cases and the fallibility of AI), and it does not make sense to have no appeal process.
You're jumping to conclusions. That is the entire point of my response.
> What does a company have to gain by supplying the staff needed to listen to the appeals when the AI does a decent enough job 98% of the time?
AI isn't there yet, notwithstanding, if they did a good job 98% of the time then who cares? No one.
> Their copyright takedown system has been around for many years and wasn't contingent on AI.
So what? It could rely on tea leaves and leprechauns, it illustrates that whatever automation works will be relied on at the expense of any human staff or process
> it 99% likely violates their terms of use, in which case there's nothing to say.
Isn't that 1% exactly the edge cases I'm saying are important and won't get addressed?
> doesn't mean they'll be employed if customers dislike it.
The customers on ad supported internet platforms are the advertisers and they are fine with it.
> You're jumping to conclusions. That is the entire point of my response.
Conclusions based on solid reason and evidenced by past events.
> AI isn't there yet, notwithstanding, if they did a good job 98% of the time then who cares? No one.
Until you realize that 2% of 2.89 billion monthly users is 57,800,000.
Then what is freedom of speech if every platform deletes your content? Does it even exist? Facebook and co. are so ubiquitous that we shouldn't just apply normal laws to them. They are bigger than governments.
Freedom of speech means that the government can't punish you for your speech. It has absolutely nothing to do with your speech being widely shared, listened to, or even acknowledged. No one has the right to an audience.
This has always been the case. If the monks didn't want to copy your work, it didn't get copied by the monks. If the owners of a printing press didn't want to print your work, you didn't get to use the printing press. If Random House didn't want to publish your manifesto, you do not get to compel them to publish your manifesto.
The first amendment is multiple freedoms. Your freedom of speech is that the government shouldn't stop you from using your own property to do something. You are free to print out leaflets and distribute them from your porch. If nobody wants to read your pamphlets that's too damn bad, welcome to the free market of ideas buddy.
The first amendment also protects Meta's right of free association. Forcing private companies to platform any content submitted to them would outright trample that right. Meta has a right to not publish your work so that it can say "we do not agree with this work and will not use our resources to expand its reach".
We have, in certain cases, developed a system that treats certain infrastructure as a regulated pipe that is compelled to carry everything, as with classic telephone infrastructure. The reason is that it doesn't make much sense to require every company to put up its own physical wires; it's dumb and wasteful. Social networks have zero natural monopoly and should not be treated as common carriers.
Not if we retain control and each deploy our own moderation individually, relying on trust networks to pre-filter. That probably won't be allowed to happen, but in a rational, non-authoritarian world, this is something that machine learning can help with.
The solution to most social media problems in general is:
`SELECT * FROM posts WHERE author_id IN (:follow_ids) ORDER BY date DESC`
At least 90% of the ills of social media are caused by using algorithms to prioritize content and determine what you're shown. Before these were introduced, you just wouldn't see these types of things unless you chose to follow someone who chose to post it, and you didn't have people deliberately creating so much garbage trying to game "engagement".
There's a huge gap between "we will scan our servers for illegal content" and "your device will scan your photos for illegal content" no matter the context. The latter makes the user's device disloyal to its owner.
The choice was between "we will upload your pictures unencrypted and do with them as we like, including scan them for CSAM" vs. "we will upload your pictures encrypted and keep them encrypted, but will make sure beforehand on your device only that there's no known CSAM among it".
> we will upload your pictures unencrypted and do with them as we like
Curious, I did not realize Apple sent themselves a copy of all my data, even if I have no cloud account and don't share or upload anything. Is that true?
No. The entire discussion only applies to images being uploaded (or about to be uploaded) to iCloud. By default in iOS all pictures are saved locally only (so the whole CSAM scanning discussion would not have applied anyway), but that tends to fill up a phone pretty quickly.
With the (optional) iCloud, you can (optionally) activate iCloud Photos to have a photo library backed up in the cloud and shared among all your devices (and, in particular, with only thumbnails and metadata stored locally and the full resolution pictures only downloaded on demand).
These are always encrypted, with either the keys being with Apple ("Standard Data Protection") so that they're recoverable when the user loses their phone or password, or E2E ("Advanced Data Protection") if the user so chooses, thus irrecoverable.
It seems to me that in the latter case images are not scanned at all (neither on device nor in the cloud), and it's unclear to me whether they're scanned in the former case.
Apple doesn't do this. But other service providers do (Dropbox, Google, etc).
Other service providers can scan for CSAM from the cloud, but Apple cannot. So Apple might be one of the largest CSAM hosts in the world, due to this 'feature'.
> Other service providers can scan for CSAM from the cloud
I thought the topic was on-device scanning? The great-grandparent claim seemed to be that Apple had to choose between automatically uploading photos encrypted and not scanning them, vs. automatically uploading photos unencrypted and scanning them. The option for "just don't upload stuff at all, and don't scan it either" was conspicuously absent from the list of choices.
Why, do other phone manufacturers do this auto-upload-and-scan without asking?
I think FabHK is saying that Apple planned to offer iCloud users the choice of unencrypted storage with server-side scanning, or encrypted storage with client-side scanning. It was only meant to be for things uploaded to iCloud, but deploying such technologies for any reason creates a risk of expansion.
Apple itself has other options, of course. It could offer encrypted or unencrypted storage without any kind of scanning, but has made the choice that it wants to actively check for CSAM in media stored on its servers.
"In 2023, ESPs submitted 54.8 million images to the CyberTipline of which 22.4 million (41%) were unique. Of the 49.5 million videos reported by ESPs, 11.2 million (23%) were unique."
And, indeed, this is why we should not expect the process to stop. Nobody is rallying behind the rights of child abusers and those who traffic in child abuse material. Arguably, nor should they. The slippery slope argument only applies if the slope is slippery.
This is analogous to the police's use of genealogy and DNA data to narrow searches for murderers, whom they then collect evidence on by other means. There is risk there, but (at least in the US) you aren't going to find a lot of supporters of the anonymity of serial killers and child abusers.
There are counter-arguments to be made. Germany is skittish about mass data collection and analysis because of their perception that it enabled the Nazi war machine to micro-target their victims. The US has no such cultural narrative.
> And, indeed, this is why we should not expect the process to stop. Nobody is rallying behind the rights of child abusers and those who traffic in child abuse material. Arguably, nor should they.
I wouldn't be so sure.
When Apple was going to introduce on-device scanning they actually proposed to do it in two places.
• When you uploaded images to your iCloud account they proposed scanning them on your device first. This is the one that got by far the most attention.
• The second was to scan incoming messages on phones that had parental controls set up. The way that would have worked (sketched in code after this list) is:
1. if it detects sexual images it would block the message, alert the child that the message contains material that the parents think might be harmful, and ask the child if they still want to see it. If the child says no that is the end of the matter.
2. if the child say they do want to see it and the child is at least 13 years old, the message is unblocked and that is the end of the matter.
3. if the child says they do want to see it and the child is under 13 they are again reminded that their parents are concerned about the message, again asked if they want to view it, and told that if they view it their parents will be told. If the child says no that is the end of the matter.
4. If the child says yes the message is unblocked and the parents are notified.
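Here's that flow as a rough code sketch; the function and field names are my own invention, not Apple's actual implementation:

```python
# Rough sketch of the decision flow described above, as I understand the
# proposal. Names are hypothetical; this is not Apple's code.

def handle_flagged_message(child_age: int, wants_to_view: bool,
                           confirms_after_warning: bool) -> dict:
    """Returns what happens to the message and whether parents are notified."""
    if not wants_to_view:
        return {"message": "blocked", "parents_notified": False}      # step 1
    if child_age >= 13:
        return {"message": "unblocked", "parents_notified": False}    # step 2
    # Under 13: reminded again and told parents will be notified.
    if not confirms_after_warning:
        return {"message": "blocked", "parents_notified": False}      # step 3
    return {"message": "unblocked", "parents_notified": True}         # step 4
```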
This second one didn't get a lot of attention, probably because there isn't really much to object to. But I did see one objection from a fairly well known internet rights group. They objected to #4 on the grounds that the person sending the sex pictures to your under-13 year old child sent the message to the child, so it violates the sender's privacy for the parents to be notified.
If it's the EFF, I think they went out on a limb on this one that not a lot of American parents would agree with. "People have the right to communicate privately without backdoors or censorship, including when those people are minors" (emphasis mine) is a controversial position. Arguably, not having that level of privacy is the curtailment on children's rights.
The cultural narrative is actually extremely popular in a 10% subset of the population that is essentially fundamentalist christian who are terrified of the government branding them with "the mark of the beast".
The problem is that their existence actually poisons the discussion because these people are absurd loons who also blame the gays for hurricanes and think the democrats eat babies.
Apple is already categorizing content on your device. Maybe they don't report what categories you have. But I know if I search for "cat" it will show me pictures of cats on my phone.
No, they had backlash against using AI on devices they don’t own to report said devices to police for having illegal files on them. There was no technical measure to ensure that the devices being searched were only being searched for CSAM, as the system can be used to search for any type of images chosen by Apple or the state. (Also, with the advent of GenAI, CSAM has been redefined to include generated imagery that does not contain any of {children, sex, abuse}.)
That’s a very very different issue.
I support big tech using AI models running on their own servers to detect CSAM on their own servers.
I do not support big tech searching devices they do not own in violation of the wishes of the owners of those devices, simply because the police would prefer it that way.
It is especially telling that iCloud Photos is not end to end encrypted (and uploads plaintext file content hashes even when optional e2ee is enabled) so Apple can and does scan 99.99%+ of the photos on everyone’s iPhones serverside already.
> Also, with the advent of GenAI, CSAM has been redefined to include generated imagery that does not contain any of {children, sex, abuse}
It hasn’t been redefined. The legal definition of it in the UK, Canada, Australia, New Zealand has included computer generated imagery since at least the 1990s. The US Congress did the same thing in 1996, but the US Supreme Court ruled in the 2002 case of Ashcroft v Free Speech Coalition that it violated the First Amendment. [0] This predates GenAI because even in the 1990s people saw where CGI was going and could foresee this kind of thing would one day be possible.
Added to that: a lot of people misunderstand what that 2002 case held. SCOTUS case law establishes two distinct exceptions to the First Amendment – child pornography and obscenity. The first is easier to prosecute and more commonly prosecuted; the 2002 case held that "virtual child pornography" (made without the use of any actual children) does not fall into the scope of the child pornography exception – but it still falls into the scope of the obscenity exception. There is in fact a distinct federal crime for obscenity involving children as opposed to adults, 18 USC 1466A ("Obscene visual representations of the sexual abuse of children") [1] enacted in 2003 in response to this decision. Child obscenity is less commonly prosecuted, but in 2021 a Texas man was sentenced to 40 years in prison over it [2] – that wasn't for GenAI, that was for drawings and text, but if drawings fall into the legal category, obviously GenAI images will too. So actually it turns out that even in the US, GenAI materials can legally count as CSAM, if we define CSAM to include both child pornography and child obscenity – and this has been true since at least 2003, long before the GenAI era.
Thanks for the information. However I am unconvinced that SCOTUS got this right. I don’t think there should be a free speech exception for obscenity. If no other crime (like against a real child) is committed in creating the content, what makes it different from any other speech?
> However I am unconvinced that SCOTUS got this right. I don’t think there should be a free speech exception for obscenity
If you look at the question from an originalist viewpoint: did the legislators who drafted the First Amendment, and voted to propose and ratify it, understand it as an exceptionless absolute or as subject to reasonable exceptions? I think if you look at the writings of those legislators, the debates and speeches made in the process of its proposal and ratification, etc, it is clear that they saw it as subject to reasonable exceptions – and I think it is also clear that they saw obscenity as one of those reasonable exceptions, even though they no doubt would have disagreed about its precise scope. So, from an originalist viewpoint, having some kind of obscenity exception seems very constitutionally justifiable, although we can still debate how to draw it.
In fact, I think from an originalist viewpoint the obscenity exception is on firmer ground than the child pornography exception, since the former is arguably as old as the First Amendment itself is, the latter only goes back to the 1982 case of New York v. Ferber. In fact, the child pornography exception, as a distinct exception, only exists because SCOTUS jurisprudence had narrowed the obscenity exception to the point that it was getting in the way of prosecuting child pornography as obscene – and rather than taking that as evidence that maybe they'd narrowed it a bit too far, SCOTUS decided to erect a separate exception instead. But, conceivably, SCOTUS in 1982 could have decided to draw the obscenity exception a bit more broadly, and a distinct child pornography exception would never have existed.
If one prefers living constitutionalism, the question is – has American society "evolved" to the point that the First Amendment's historical obscenity exception ought to jettisoned entirely, as opposed to merely be read narrowly? Does the contemporary United States have a moral consensus that individuals should have the constitutional right to produce graphic depictions of child sexual abuse, for no purpose other than their own sexual arousal, provided that no identifiable children are harmed in its production? I take it that is your personal moral view, but I doubt the majority of American citizens presently agree – which suggests that completely removing the obscenity exception, even in the case of virtual CSAM material, cannot currently be justified on living constitutionalist grounds either.
My understanding was that the main objection was the false-positive (FP) risk. The hashes were computed on device, and the device would self-report to LEO if it detected a match.
People designed images that were FPs for real images. So apps like WhatsApp that auto-save images to photo albums could cause people a big headache if a contact shared a legal FP image.
No, the point of on-device scanning is to enable authoritarian government overreach via a backdoor while still being able to add “end to end encryption” to a list of product features for marketing purposes.
If Apple isn’t free to publish e2ee software for mass privacy without the government demanding they backdoor it for cops on threat of retaliation, then we don’t have first amendment rights in the USA.
You misunderstand me. The issue is that Apple is theoretically being retaliated against, by the state, if they were to publish non-backdoored e2ee software.
Apple does indeed in theory have a right to release whatever iOS features they like. In practice, they do not.
Everyone kind of tacitly acknowledged this, when it was generally agreed upon that Apple was doing the on-device scanning thing "so they can deploy e2ee". The quiet part is that if they didn't do the on-device scanning and released e2ee software without this backdoor (which would then thwart wiretaps), the FBI et al would make problems for them.
The same reason they made iMessage e2ee, which happened many years before CSAM detection was even a thing.
User privacy. Almost nobody trades in CSAM, but everyone deserves privacy.
Honestly, this isn’t about CSAM at all. It’s about government surveillance. If strong crypto e2ee is the hundreds-of-millions-of-citizens device default, and there is no backdoor, the feds will be upset with Apple.
This is why iCloud Backup (which is a backdoor in iMessage e2ee) is not e2ee by default and why Apple (and the state by extension) can read all of the iMessages.
I didn't ask why they would want E2EE. I asked why they would want E2EE without CSAM detection when they literally developed a method to have both. It's entirely reasonable to want privacy for your users AND not want CSAM on your servers.
> Honestly, this isn't about CSAM at all.
It literally is the only thing the technology is any good for.
Already happened/happening. I have an ex-coworker who left my current employer for my state's version of the FBI. Long story short, the government has a massive database to cross-check against. Oftentimes they would use automated processes to filter through suspicious data collected during arrests.
If the automated process flags something as a potential hit, then they, the humans, review those images to verify. Every image/video confirmed as a hit is also inserted into a larger dataset. I can't remember if the Feds have their own DB (why wouldn't they?), but the National Center for Missing and Exploited Children runs a database that I believe government agencies use too. Not to mention, companies like Dropbox, Google, etc. all check against the database(s) as well.
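The cross-checking step, as I understand it, looks roughly like the sketch below. Real systems use perceptual hashes (PhotoDNA-style) so near-duplicates still match; sha256 here is just a stand-in for illustration, and the function names are hypothetical:

```python
# Sketch of the cross-checking step: fingerprint each seized file and look
# it up in a database of known material, so investigators only review
# candidate hits instead of everything. sha256 is a stand-in; real systems
# use perceptual hashes that survive resizing and re-encoding.
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def flag_for_human_review(files: list[Path], known_hashes: set[str]) -> list[Path]:
    """Return only the files whose fingerprints match the known database."""
    return [f for f in files if fingerprint(f) in known_hashes]
```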
Borrowing the thought from Ed Zitron, but when you think about it, most of us are exposing ourselves to low-grade trauma when we step onto the internet now.
That's the risk of being in a society in general, it's just that we interact with people outside way less now. If one doesn't like it, they can always be a hermit.
Not just that, but that algorithms are driving us to the extremes. I used to think it was just that humans were not meant to have this many social connections, but it's more about how these connections are mediated, and by whom.
Worth reading Zitron's essay if you haven't already. It sounds obvious, but the simple cataloging of all the indignities we take for granted builds up to a bigger condemnation than just Big Tech.
https://www.wheresyoured.at/never-forgive-them/
Definitely. It's a perfect mix of factors to enable the dark sides of our personas. I believe everyone has a certain level of near-sociopathic perverse curiosity, and a certain need to push the limits, if there are no consequences for such behavior. Algorithms can only affect so much. But gore sites, efukt, and the countless WhatsApp/Facebook/Signal/whatever groups that teens post vile things in are mostly due to childish morbid curiosity, not due to everyone being a literal psycho.
Is there any way to look at this that doesn't resort to black or white thinking? That's a rather extreme view in itself that could use some nuance and moderation.
I'm not very good with words, so I can only hope the reader will understand that things are not black and white; it's a spectrum that depends on countless factors: cultural, societal and otherwise.
What's more, popular TV shows regularly have scenes that could cause trauma; the media has been ramping up the intensity of content for years. I think it's simply seeking more word of mouth: 'did you see GoT last night? Oh my gosh, so-and-so did such-and-such to so-and-so!'
> post-traumatic stress disorder caused by exposure to graphic social media content including murders, suicides, child sexual abuse and terrorism.
If you want a taste of the legal portion of these, just go to 4chan.org/gif/catalog and look for a "rekt", "war", "gore", or "women hate" thread. Watch every video there for 8-10 hours a day.
Now remember this is the legal portion of the content moderated as 4chan does a good job these days of removing illegal content mentioned in that list above. So all these examples will be a milder sample of what moderators deal with.
And do remember to browse for 8-10 hours a day.
edit: it should go without saying that the content there is deep in the NSFW territory, and if you haven't already stumbled upon that content, I do not recommend browsing "out of curiosity".
As someone who grew up with 4chan, I got pretty desensitized to all of the above very quickly. The only thing I couldn’t watch was animal abuse videos. That was all years ago, though; now I’m fully sensitized to all of it again.
Accounts like yours and this report of PTSD don't line up, yet both are credible. What's driving the moderators crazy but not old-internet vets?
Could it be:
- the fact that moderators are hired and paid
- that kids are young and a lot more tolerant
- that moderators aren't the intended audience
- backgrounds, and sensitivity to media in general
- the amount of disturbing images
- the total amount of content, not just the bad
- anything else?
Personally, I'm suspecting that difference in exposure to _any kind of media_ might be a factor; I've come across stories online that imply visiting and staying at places like Tokyo can almost drive people crazy, from the amount of stimuli alone.
Doesn't it sound a bit too shallow and biased to determine it was specifically CSAM or whatever specific type of data that did it?
Because intent and perspective play a huge role in how we feel about things. Some people are excited about war, and killing their enemies makes them feel better about themselves, whereas others come back from war broken and traumatized. A lot of it depends on how you or others frame that experience for you.
The point is that you don't know which one will stick. Even people who are desensitized will remember certain things, a person's facial expression or a certain sound or something like that, and you can't predict which one will stick with you.
Of course not. What drew me in was the edginess. What kept me there was the very dark but funny humor. This was in 2006-2010, it was all brand new, it was exciting.
I have a kid now and my plan is to not give her a smartphone or social media till she’s 16 and to heavily monitor internet access until she’s at least 12. Obviously I can’t control what she will see with friends, but she goes to a rigorous school and I’m hoping that will keep her busy. Other than that, I’m hoping the government comes down hard on social media access for kids/teenagers and that all the restrictions are legally codified by the time she’s old enough.
There have been multiple instances where I would receive invites or messages from obvious bots: users with no history, a generic name, a sexualised profile photo. I would always report them to Facebook, only to receive a reply an hour or a day later that no action had been taken. This means there is no human in the pipeline, and probably only the stuff that doesn't pass their abysmal ML filter goes to actual people.
I also have a relative who is stuck with their profile being unable to change any contact details, neither email nor password because FB account center doesn't open for them. Again, there is no human support.
BigTech companies should be mandated by law to have a number of live, reachable support staff that is a fixed fraction of their user count. Then they would have no incentive to artificially inflate their user numbers. As for the moderators, there should also be a strict upper limit on the amount of content (content tokens, if you will) they view during a work day. Then the companies would also be more willing to limit the amount of content on their systems.
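A toy calculation of what those two mandates would imply for headcount; every number here is a made-up assumption, chosen only to show the shape of the constraint:

```python
# Toy calculation for the two proposed mandates. All figures are
# illustrative assumptions, not real platform or regulatory numbers.
MONTHLY_ACTIVE_USERS  = 3_000_000_000
SUPPORT_RATIO         = 1 / 200_000    # hypothetical: 1 support agent per 200k users
ITEMS_FLAGGED_PER_DAY = 3_000_000
MAX_ITEMS_PER_MOD_DAY = 200            # hypothetical per-moderator daily viewing cap

support_staff_required = MONTHLY_ACTIVE_USERS * SUPPORT_RATIO          # 15,000 agents
moderators_required    = ITEMS_FLAGGED_PER_DAY / MAX_ITEMS_PER_MOD_DAY # 15,000 moderators

print(support_staff_required, moderators_required)
```

Either knob (the support ratio or the daily viewing cap) directly sets the headcount the company must carry, which is exactly the pressure the proposal is after.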
Yeah, it's bad business for them but it's a win for the people.
I have several friends who do this work for various platforms.
The problem is, someone has to do it. These platforms are mandated by law to moderate it or else they're responsible for the content the users post. And the companies can not shield their employees from it because the work simply needs doing. I don't think we can really blame the platforms (though I think the remuneration could be higher for this tough work).
The work tends to suit some people better than others. The same way some people will not be able to be a forensic doctor doing autopsies. Some have better detachment skills.
All the people I know that do this work have 24/7 psychologists on site (most of them can't work remotely due to the private content they work with). I do notice though that most of them do have an "Achilles heel". They tend to shrug most things off without a second thought but there's always one or two specific things or topics that haunt them.
Hopefully AI will eventually be good enough to deal with this shit. It sucks for their jobs, of course, but it's not the kind of job anyone really does with pleasure.
Uhh no I'm not giving up my privacy because a few people want to misbehave. Screw that. My friends know who I am but the social media companies shouldn't have to.
Also, it'll make social media even more fake than it already is, with everyone trying to be as presentable as possible. Just like LinkedIn is now. It's sickening, all these people toeing the company line, even though they do nothing but complain when you speak to them in person.
And I don't think it'll actually solve the problem. People find ways to get through the validation with fake IDs.
So brown/black people in the third world who often find that this is their only meaningful form of social mobility are the "someone" by default? Because that's the de-facto world we have!
That's not true at all. All the people I speak of are here in Spain. They're generally just young people starting a career. Many of them end up in the fringes of cybersecurity work (user education etc) actually because they've seen so many scams. So it's the start of a good career.
Sure, some companies also outsource to Africa, but that doesn't mean this work is only available to third-world countries. And there aren't that many jobs in it. It's more than possible to find enough people who can stomach it.
There was another article a few years back about the poor state of mental health of Facebook moderators in Berlin. This is not exclusively a poor people problem. More of a wrong people for the job problem.
And of course we should look more at why this is the only form of social mobility for them if it's really the case.
I wonder whether using AI to render images and video in a less realistic style before they reach the moderators, while preserving the content, would reduce trauma, since it would create an artificial barrier between the viewer and footage of human torture. We used to watch cartoons as kids with people being blown to pieces.
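If anyone wants to experiment with that idea, a non-photorealistic pass is close to a one-liner in common image libraries. A minimal sketch in Python, assuming OpenCV is available; the file names and the idea of wiring this in front of a review queue are illustrative, not anything Facebook actually does:

    # Hypothetical pre-processing step: stylize a reported image before a
    # human reviewer sees it, keeping the content but stripping photo-realism.
    import cv2

    def soften_for_review(path_in: str, path_out: str) -> None:
        img = cv2.imread(path_in)
        if img is None:
            raise ValueError(f"could not read {path_in}")
        # OpenCV's edge-preserving "stylization" filter gives a painterly look
        # while keeping shapes and scene content recognisable.
        stylized = cv2.stylization(img, sigma_s=60, sigma_r=0.45)
        cv2.imwrite(path_out, stylized)

    soften_for_review("reported_image.jpg", "reported_image_softened.jpg")

Whether that actually reduces trauma, or just degrades moderation accuracy, is an empirical question, of course.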
One terrible aspect of online content moderation is that, no matter how good AI gets and no matter how much of this work we can dump in its lap, to a certain extent there will always need to be a "human in the loop".
The sociopaths of the world will forever be coming up with new and god-awful types of content to post online, content which current AI moderators haven't encountered before and therefore won't know how to classify. It will be up to humans to label that content in order to train the models to handle it, meaning humans will have to view it (and suffer the consequences, such as PTSD). The alternative, where AI labels these new images and those AI-generated labels are used to update the model, famously leads to "model collapse" [1].
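To make that concrete, here is a bare-bones sketch (Python, all names illustrative, not any platform's real pipeline) of why the human label step can't be removed from the retraining loop:

    # Ground truth for new kinds of content has to come from people who actually
    # looked at it; recycling the model's own guesses as labels is the shortcut
    # that leads to the "model collapse" mentioned above.
    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class Item:
        content_id: str
        model_score: float                  # model's estimated probability of violation
        human_label: Optional[int] = None   # 1 = violating, 0 = benign, None = not reviewed

    def training_pairs(items: List[Item]) -> List[Tuple[str, int]]:
        # Only human-verified labels go back into training.
        return [(i.content_id, i.human_label) for i in items if i.human_label is not None]

    def collapsing_training_pairs(items: List[Item]) -> List[Tuple[str, int]]:
        # The tempting alternative: treat the model's own predictions as labels.
        return [(i.content_id, int(i.model_score >= 0.5)) for i in items]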
Short of banning social media at a societal level, or abstaining from it at an individual level, I don't know that there's any good solution to this problem. These poor souls are taking a bullet for the rest of us. God help them.
The nature of the job really sucks. This is not unusual; there are lots of sucky jobs. So my concern is really whether the employees were informed what they would be exposed to.
Also I’m wondering why they didn’t just quit. Of course the answer is money, but if they knew what they were getting into (or what they were already into), and chose to continue, why should they be awarded more money?
Finally, if they can’t count on employees in poor countries to self-select out when the job became life-impacting, maybe they should make it a temporary gig, eg only allow people to do it for short periods of time.
My out-of-the-box idea is: maybe companies that need this function could interview with an eye towards selecting psychopaths. This is not a joke; why not select people who are less likely to be emotionally affected? I'm not sure anyone has ever done this before, and I also don't know whether such people would be likely to be inspired by the images, which would make this a terrible idea. My point is to find ways to limit the harm that the job causes to people, perhaps by changing how people interact with the job, since the nature of the job doesn't seem likely to change.
So you're expecting these people to have the deep knowledge of human psychology to know ahead of time that this is likely to cause them long term PTSD, and the impact that will have on their lives, versus simply something they will get over a month after quitting?
I don’t think it takes any special knowledge of human psychology to understand that horrific images can cause emotional trauma. I think it’s a basic due diligence question that when considering establishing such a position, one should consult literature and professionals to discover what impact there might be and what might be done to minimize it.
Maybe so, but in places with good civil and human rights, you can't sign them away via contract, they're inalienable. If Kenya doesn't offer these protections, and the allegations are correct, then Facebook deserves to be punished regardless for profiting off inhumane working conditions.
If I was a tech billionaire, and there was so much uploading of stuff so bad, that it was giving my employee/contractors PTSD, I think I'd find a way to stop the perpetrators.
(I'm not saying that I'd assemble a high-speed yacht full of commandos, who travel around the world, righting wrongs when no one else can. Though that would be more compelling content than most streaming video episodes right now. So you could offset the operational costs a bit.)
Large scale and super sick perpetrators exist (as compared to small scale ones who do mildly sick stuff) because Facebook is a global network and there is a benefit to operating on such a large platform. The sicker you are, while getting away with it, the more reward you get.
Switch to federated social systems like Mastodon, with only a few thousand or ten thousand users per instance, and perpetrators will never be able to grow too large. It's easy for the moderators to shut stuff down very quickly.
Tricky. It also gives perpetrators a lot more places to hide. I think the jury is out on whether a few centralized networks or a fediverse makes it harder for attackers to reach potential targets (or customers).
The purpose of facebook moderators (besides legal compliance) is to protect normal people from the "sick" people. In a federated network, of course, such people will create their own instances, and hide there. But then no one is harmed from them, because all such instances will be banned quite quickly, same as all spam email hosts are blocked very quickly by everyone else.
From a normal person perspective on not seeing bad stuff, the design of a federated network is inherently better than a global network.
That's the theory. I'm not sure yet that it works in practice; I've seen a lot of people on Mastodon complaining that, as a moderator, keeping up with the bad servers is a perpetual game of whack-a-mole because federation is open by default. Maybe this is a Mastodon-specific issue.
That's because Mastodon and other federated social networks haven't taken off, so not enough development has gone into them. If they take off, people will naturally develop analogs of spam lists and SpamAssassin for such systems, which will cut moderation time significantly. I run an org email server and don't do much of anything besides installing such automated tools.
On Mastodon, admins will just have to do the additional work to make sure new accounts are not posting weird stuff.
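If it helps, the email analogy maps over fairly directly: the fediverse equivalent of a shared spam list would be a blocklist feed that instances apply automatically. A rough sketch in Python, assuming a hypothetical CSV feed of "domain,severity" rows and Mastodon's admin domain_blocks endpoint; treat the endpoint, token scope and response codes as assumptions to verify against your own server's API docs:

    import csv, io, requests

    INSTANCE = "https://mastodon.example"                        # placeholder instance
    TOKEN = "REPLACE_ME"                                         # admin token with domain_blocks scope
    BLOCKLIST_URL = "https://example.org/shared-blocklist.csv"   # hypothetical shared feed

    def apply_shared_blocklist() -> None:
        feed = requests.get(BLOCKLIST_URL, timeout=30)
        feed.raise_for_status()
        for row in csv.DictReader(io.StringIO(feed.text)):
            resp = requests.post(
                f"{INSTANCE}/api/v1/admin/domain_blocks",
                headers={"Authorization": f"Bearer {TOKEN}"},
                data={"domain": row["domain"], "severity": row.get("severity", "suspend")},
                timeout=30,
            )
            if resp.status_code not in (200, 422):   # 422 usually means already blocked
                resp.raise_for_status()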
Big tech vastly underspends on this area. You can find a stream of articles from the last 10 years where BigTech companies were allowing open child prostitution, paid-for violence, and other stuff on their platforms with little to no moderation.
> Switch to federated social systems like Mastodon, with only a few thousand or ten thousand users per instance, and perpetrators will never be able to grow too large.
The #2 and #3 most popular Mastodon instances allow CSAM.
If you were a tech billionaire you'd be a sociopath like the others and wouldn't give a single f about this. You'd be going on podcasts to tell the world that markets will fix everything if given the chance.
They are not wrong. Do you know of any mechanism other than markets that works at scale, doesn't cost a bomb, and doesn't involve an abusive central authority?
Tech billionaires usually advocate for some kind of return to the gilded age, with minimal workers rights and corporate tax. Markets were freer back then, how did that work out for the average man? Markets alone don't do anything for the average quality of life.
But is it solely because of markets? Would deregulation improve our lives further? I don't think so, and that is what I am talking about. Musk, Bezos, Andreessen and co. are advocating for a particular laissez-faire flavor of capitalism, which historically has been very bad for the average man.
It isn’t known in advance, though. These people went to that job and got psychiatric illnesses that, considering the third-world conditions, they are unlikely to ever get rid of.
I’m not talking about obvious “scream and run away” reaction here. One may think that it doesn’t affect them or people on the internet, but then it suddenly does after they binge it all day for a year.
The fact that no less than 100% got PTSD should tell us something here.
The 100+ years of research on PTSD, starting with the shell shock studies of WWI, shows that PTSD isn't so simple.
Some people come out with no problems, while their trenchmate facing almost identical situations suffers for the rest of their lives.
In this case, the claim is that "it traumatised 100% of hundreds of former moderators tested for PTSD … In any other industry, if we discovered 100% of safety workers were being diagnosed with an illness caused by their work, the people responsible would be forced to resign and face the legal consequences for mass violations of people’s rights."
Do those people you know look at horrible pictures on the internet for 8-10 hours each day?
I worked at FB for almost 2 years. (I left as soon as I could, I knew it wasn't a good fit for me.)
I had an Uber from the campus one day, and my driver, a twenty-something girl, was asking how to become a moderator. I told her, "no amount of money would be enough for me to do that job. Don't do it."
I don't know if she eventually got the job, but I hope she didn't.
Yes, these jobs are horrible. However, I do know from accidently encountering bad stuff on the internet that you want to be as far away from a modern battlefield as possible.
It's just kind of ridiculous how people think war is like Call of Duty. One minute you're sitting in a trench, the next you're a pile of undifferentiated blood and guts. Same goes for car accidents and stuff. People really underestimate how fragile we are as human beings. Becoming aware of this is super damaging to our concept of normal life.
Watching someone you love die of cancer is also super damaging to one's concept of normal life. Getting a diagnosis, or being in a bad car accident, or the victim of a violent assault is, too. I think a personal sense of normality is nothing more than the state of mind where we can blissfully (and temporarily) forget about our own mortality. Obviously, marinating yourself in all the horrible stuff makes it really hard to maintain that state of mind.
On the other hand, never seeing or reckoning with or preparing for how brutal reality actually is can lead to a pretty bad shock once something bad happens around you. And maybe worse, can lead you to under-appreciate how fantastic and beautiful the quotidian moments of your normal life actually are. I think it's important to develop a concept of normal life that doesn't completely ignore that really bad things happen all around us, all the time.
Frankly
there’s a difference between a one or two or even ten off exposure to the brutality of life, where various people in your life will support you and help you acclimate to it
Versus straight up mainlining it for 8 hours a day
hey kid, hope you're having a good life. I'll look at the screen full of the worst the internet and humanity has produced on the internet for eight hours.
I get your idea but in the context of this topic I think you're overreaching
Actually reckoning with this stuff leads people into believing in anti-natalism, negative utilitarinism, Scopenhaur/Philipp Mainlander (Mainlander btw was not just pro-suicide, he actually killed himself!), and the voluntary extinction movement. This terrified other philosophers like Nietzsche, who spends most of his work defending reality even if it's absolute shit. "Amor Fati", "Infinite Regress/Eternal Recurrence", "Übermensch" vs the literal "Last Man". "Wall-E" of all films was the modern quintessential nietzschian fable, with maybe "Children of Men" being the previous good one before that.
You're literally not allowed to acknowledge that this stuff is bad and adopt one of the religions that see this and try to remove suffering - i.e. Jainism, because at least historically doing so meant you couldn't use violence in any circumstances, which also meant that your neighbor would murder you. There's a reason that Jain's population are in the low millions
Reality is actually bad, and it should be far more intuitive to folks. The fact that positive experience is felt "quickly" and negative experience is felt "slowly" was all the evidence I needed that I wouldn't just press the "instantly and painlessly and without warning destroy reality" (benevolent world-exploder) button, I'd smash it!
I felt this way for the first 30 years of my life. Then I received treatment for depression (psychoanalysis) and finally tasted joy for the first time in my entire life. Now I love life. YMMV
EDIT: If you’re interested what actually happened is that I was missing the prerequisite early childhood experience that enables one to feel secure in reality. If you check, all the people who have this feeling of philosophical/ontological pessimism have a missing or damaged relationship with the mother in the first year or so. For them, not even Buddhism can help, since even the abstract idea of anything good, even if it requires transcendence, is a joke
But psychoanalysis is literally pseudoscientific nonsense. You got spooked.
> But psychoanalysis is literally pseudoscientific nonsense. You got spooked.
OP got spooked to stop suffering and love life instead? Is that your cautionary tale?
‘Warning! You may end up irrationally happy and fulfilled!’
No it isn’t, it’s empirically justified, look it up. Hence why the state insurance here in Germany is willing to pay for me to go three times a week. It works
Wow, thanks for your valuable contribution to the conversation!
[deleted]
I'm not sure what point you are trying to make. I don't look up to Freud and psychoanalysis doesn't work for everyone! I don't even necessarily recommend it. It just worked for me and I realised that in my case the depression was a confused outlook conditioned by a certain situation.
My point really is that you can feel one way for your entire life and then suddenly feel a different way. I'm not suggesting psychoanalysis specifically. Perhaps for others, CBT or religion or just a change in life circumstances will be enough.
The fact that these philosophies are dependent on the life situation to me is a reason to be a little sceptical of their universality. In my personal experience, in those 30 years of my life, I thought everyone thought the way I did, that reality was painful and a chore and dark and dim. Psychoanalysis helped me realise that other people actually were happy to be alive, and understand why I have not been my entire life.
YMMV = not everyone hates life
> I'm not sure what point you are trying to make.
I’m not sure why people act coy when a straightforward mirroring of their own comment is presented. “What could this mean?” Maybe the hope is that the other person will bore the audience by explaining the joke?
> I don't look up to Freud and psychoanalysis doesn't work for everyone! I don't even necessarily recommend it.
Talking about your infant parental relationship as the be-all-end-all looks indistinguishable from that.
> > If you check, all the people who have this feeling of philosophical/ontological pessimism have a missing or damaged relationship with the mother in the first year or so.
.
> I'm not suggesting psychoanalysis specifically. Perhaps for others, CBT or religion or just a change in life circumstances will be enough.
Except for people who have “this feeling of philosophical/ontological pessimism”.
> > For them, not even Buddhism can help, since even the abstract idea of anything good, even if it requires transcendence, is a joke
Which must paint everyone who defends “suffering” in the Vedic sense. Since that was what you were replying to. (Saying that reality is suffering on-the-whole is not the same as “I’m depressed [, and please give me anecdotes about how you overcame it]”.)
> > The fact that these philosophies are dependent on the life situation to me is a reason to be a little sceptical of their universality. In my personal experience, in those 30 years of my life, I thought everyone thought the way I did, that reality was painful and a chore and dark and dim. Psychoanalysis helped me realise that other people actually were happy to be alive, and understand why I have not been my entire life.
I don’t know how broad your brush is. But believing in the originally Vedic (Schopenhauer was inspired by Eastern religions, maybe Buddhism in particular) concept of “suffering” is not such a fragile intellectual framework that it collapses once you heal from the trauma when your mother scolded you while potty training at a crucial point in your Anal Stage of development.
> YMMV = not everyone hates life
Besides any point whatever.
Worth noting that I trained formally in Buddhism under a teacher for a few years. I’m not unaware of all this
And the Vedic version of suffering is all full of love for reality, not wanting to delete it by smashing a button
> Worth noting that I trained formally in Buddhism under a teacher for a few years. I’m not unaware of all this
You trained personally for a few years and yet you make such sweeping statements/strokes that a neophyte is prompted to point out basic facts about this practice (apparently an adequate retelling since you don’t bother to correct me)? You might think this bolsters something (?) but I think the case is the opposite.
It helps to point out exactly what part that you are talking about (apparently not the Vedic gang). In fact this initial reply (just the above paragraph before the edit) seemed so out of place. Okay, so what are they talking about?
> And the Vedic version of suffering is all full of love for reality, not wanting to delete it by smashing a button
Oh, so it’s about the small wish to commit biocide.
It’s a clear category error to talk about love/want/hate when it comes to that statement. Because that’s beside the point. The point is clearly the wrongheaded, materialistic assumption that suffering will end if all life would end by the press of a button. And if you think that life on the whole is suffering? Then pressing the button is morally permissible.
It’s got nothing to do with hate.
It seemed interesting to me that someone would have such a “Schopenhauer” (not that I have read him) view of existence. You don’t see that every day.
I don’t really know what you’re talking about, sorry. This is coming off as incoherent rambling to me
My comment was saying that this part was about ending suffering, not about wishing ill-will. I don’t understand what’s unclear.
> > Reality is actually bad, and it should be far more intuitive to folks. The fact that positive experience is felt "quickly" and negative experience is felt "slowly" was all the evidence I needed that I wouldn't just press the "instantly and painlessly and without warning destroy reality" (benevolent world-exploder) button, I'd smash it!
> This is coming off as incoherent rambling to me
You do like to gesture vaguely and tell me that "I don’t know what this is". Meanwhile I have pointed out at least one instance where you flat out just contradicted yourself on psychoanalysis. Or "incoherent rambling" (on psychoanalysis) if you will
Er ok
Interesting to see this perspective here. You’re not wrong.
> There's a reason that Jain's population are in the low millions
The two largest Vedic religions both have hundreds of millions of followers. Is Jainism that different from them in this regard? I know Jainism is very pacifist, but is it really so different on the question of suffering?
... okay.
Emergency personnel might need to brace themselves for car accidents every day. That Kenyans need to be traumatized by internet content in order to make a living is just silly and unnecessary.
Car “accidents” are also completely unnecessary.
Even the wording is wrong - those aren’t accidents, it is something we accept as byproduct of a car-centric culture.
People feel it is acceptable that thousands of people die on the road so we can go places faster. Similarly they feel it’s acceptable to traumatise some foreigners to keep social media running.
Nitpick that irrelevant example if you want.
ISISomalia loves that recruitment pool though
Speaking as a paramedic, two things come to mind:
1) I don't have squeamishness about trauma. In the end, we are all blood and tissue. The calls that get to me are the emotionally traumatic, the child abuse, domestic violence, elder abuse (which of course often have a physical component too, but it's the emotional for me), the tragic, often preventable accidents.
2) There are many people, and I get the curiosity, that will ask "what's the worst call you've been on?" - one, you don't really want to hear, and two, "Hey, person I may barely know, do you think you can revisit something traumatic for my benefit/curiosity?"
That’s an excellent way to put it, resonates with my (non medical) experience. It’s the emotional stuff that will try to follow me around and be intrusive.
I won’t watch most movies or TV because they are just some sort of tragedy porn.
> movies or TV because they are just some sort of tragedy porn
100% agree. Most TV series nowadays are basically violence porn, now that real porn is not allowed for all kinds of reasons.
I'd be asking "how bad is the fentanyl situation in your area?"
Relatively speaking, not particularly.
What's interesting now is how many patients will say "You're not going to give me fentanyl are you? That's really dangerous stuff", etc.
It's their perfect right, of course, but it's sad that that's the public perception - it's extremely effective and quite safe when used properly (for one, we're obviously only giving it from pharma sources, with properly dosed solutions for IV).
It's also super easy to come up with better questions: "What's the funniest call you've ever been on?" "What call do you feel like you made the biggest difference?" "What's the best story you have?"
I'm pretty sure watching videos on /r/watchpeopledie or rekt threads on 4chan has been a net positive for me. I'm keenly aware of how dangerous cars are, that wars (including narco wars) are hell, that I should never stay close to a bus or truck as a pedestrian or cyclist, that I should never get into a bar fight... And that I'm very, very lucky that I was not born in the third world.
I get more upset watching people lightly smack and yell at each other on public freakout than I do watching people die. It's not that I don't care about the dead either, I watched wpd and similar sites for years. I didn't enjoy watching it, but I liked knowing the reality of what was going on in the world, and how each one of us has the capacity to commit these atrocities. I'm still doing a lousy job at describing why I like to watch it. But I do.
Street fight videos, where the guy recording is hooting and egging people on, are disgusting.
One does not fully experience life until encountering the death of something one cares about, be it a pet or a person; nothing gives you that real sense of reality until your true feelings are challenged.
I used to live in the Disney headspace until my dog had to be put down. Now with my parents being in their seventies, and me in my thirties I fear losing them the most as the feeling of losing my dog was hard enough.
That's the tragic consequence of being human. Either the people you care about leave first or you do, but in the end, everyone goes. We are blessed and cursed with the knowledge to understand this. We should try to maximize the time we spend with those that are important to us.
Well, I think it goes to a point. I'd imagine there's some Goldilocks zone of time spent with the animal, care experienced from the animal, dependence on the animal, and the manner/speed of death and time spent watching the thing die.
I say animal to explicitly include humans. Finding my hamster dead in fifth grade did change me. But watching my mother slowly die a horrible, haunting death didn't make me a better person. I'm just saying that there's a spectrum that goes something like: easy to forget about, I'm able to not worry, sometimes I think about it when I don't want to, often I think about it, often it bothers me, and so on. You can probably imagine the cycle of obsession and stress.
This really goes for all traumatic experiences. There's a point where they can make you a better person, but there's a cliff after which you have no guarantee that they won't just start obliterating you and your life. It's still a kind of perspective. But can you have too much perspective? Lots of times I feel like I do.
I concluded that we really should set the speed limit at 45 mph on highways. Then a death on the road would be so rare it would be newsworthy.
It's not that we're particularly fragile, given the kind of physical trauma human beings can survive and recover from.
It's that we have technologically engineered things that are destructive enough to get even past that threshold. Modern warfare in particular is insanely energetic in the most literal, physical way - when you measure the energy output of weapons in joules. Partly because we're just that good at making things explode, and partly because improvements in metallurgy and electronics made it possible over time to locate targets with extreme precision in real time and then concentrate a lot of firepower directly on them. This, in particular, is why the most intense battlefields in Ukraine often look worse than WW1 and WW2 battles of similar intensity (e.g. Mariupol had more buildings destroyed than Stalingrad).
But even our small arms deliver much more energy to the target than their historical equivalents. Bows and arrows pack ~150 J at close range, rapidly diminishing with distance. Crossbows can increase this to ~400 J. For comparison, an AK-47 firing standard issue military ammo is ~2000 J.
>Crossbows can increase this to ~400 J.
Funny you mention crossbows; the Church at one point in time tried to ban them because they democratized violence to a truly trivial degree. They were the nuclear bombs and assault rifles of medieval times.
Also, I will take this moment to mention that the "problem" with weapons always seems to be how quickly they can kill rather than the killing itself. Kind of takes away from the discussion once that is realized.
Watch how a group of wild dogs kills its prey, then realise that for millennia human-like apes were part of their diet. Even the modern battlefield is more humane than the African savannah.
That reminds me of this[0]. It's a segment of BBC's Planet Earth, where a pack of Cape Hunting Dogs are filmed, hunting.
It's almost military precision.
[0] https://www.youtube.com/watch?v=MRS4XrKRFMA
> Even the modern battlefield is more humane than the African savannah.
On behalf of dead WWI soldiers I find this offensive.
Yeah, I tracked a lost dog and found the place it was caught by wolves and eventually eaten. Terrible way to go. I get now why the owner was so desperate to find it, even without any hope of the dog surviving - I'd want to end it quicker for my dogs too if this happened to them.
Humans can render other humans unrecognizable with a rock.
Brutal murder is low tech.
> Humans can render other humans unrecognizable with a rock.
They are much less likely to.
We have instinctive repulsion to violence, especially extending it (e.g. if the rock does not kill at the first blow).
It is much easier to kill with a gun (and even then people need training to be willing to do it), and easier still to fire a missile at people you cannot even see.
Than throwing a face punch or a rock? You should check public schools.
Than killing with bare hands or a rock, which I believe is still pretty uncommon in schools.
GP didn't talk about killing
Extreme violence, then? With rocks, clubs or bare hands? I was responding to "render other humans unrecognizable with a rock", which I am pretty sure is uncommon in schools.
Render unrecognizable? Yeah, I guess that could be survivable, but it's definitely lethal intent.
That's possible with just a well placed punch to the nose or to one of the eyes. I've seen and done that, in public schools.
Uh... sure, maybe during the initial swelling. But that's temporarily rendered unrecognizable.
If you caused enough damage such that someone could not later be recognized, I wonder why you are not in prison.
Not in public schools in the British sense. I assume it varies in public schools in the American sense, and I am guessing violence sufficient to render someone unrecognisable is pretty rare even in the worst of them.
Not at scale.
Armies scale up.
It’s like the original massive scale organization.
Scaling an army of rock swingers is a lot more work than giving one person an AK47 (when all who would oppose them have rocks).
(Thankfully in the US we worship the 2A and its most twisted interpretation. So our toddlers do shooter drills. /s)
You are discounting the complexity of the logistics required for an AK47 army. You need ammo, spare parts, lubricant and cleaning tools. You need a factory to build the weapon, and churn out ammunition.
Or, gather a group of people, tell them to find a rock, and go bash the other sides head.
Complexity of logistics applies to any large army. The single biggest limiting factor for most of history has been the need to either carry your own food, or find it in the field. This is why large-scale military violence requires states.
> You need ammo, spare parts, lubricant and cleaning tools.
The AK-47 famously only needs the first item in that list.
That being the key to its popularity.
It should be noted that the purported advantages of AK action over its competitors in this regard are rather drastically overstated in popular culture. E.g. take a look at these two vids showing how AK vs AR-15 handle lots of mud:
https://www.youtube.com/watch?v=DX73uXs3xGU
https://www.youtube.com/watch?v=YAneTFiz5WU
As far as cleaning, AK, like many guns of that era, carries its own cleaning & maintenance toolkit inside the gun. Although it is a bit unusual in that regard in that this kit is, in fact, sufficient to remove any part of the gun that is not permanently attached. Which is to say, AK can be serviced in the field, without an armory, to a greater extent than most other options.
But the main reason why it's so popular isn't so much because of any of that, but rather because it's very cheap to produce at scale, and China especially has been producing millions of AKs specifically to dump them in Africa, Middle East etc. But where large quantities of other firearms are available for whatever reason, you see them used just as much - e.g. Taliban has been rocking a lot of M4 and M16 since US left a lot of stocks behind.
The main advantage of AKs in the Ukraine conflict is ammo availability.
The only small arms cartridge plant that Ukraine had originally was in Luhansk, so it got captured even before 2022. It's only this year that they've got a new plant operational, but it produces both 5.45 and 5.56.
And Western supplies are mostly 5.56 for obvious reasons, although there are some exceptions - mostly countries that have switched fairly late and still have substantial stocks of 5.45, such as Bulgaria. But those are also limited in quantity.
So in practice it's not quite so simple, and Ukraine seems to be aiming for 5.56 as their primary cartridge long-term, specifically so that it's easier for Western countries to supply them with guns and ammo.
If you think the AKs in use in Russia and Ukraine aren’t getting regular maintenance, cleaning and spare parts, I don’t think you’re watching enough of the content coming out of the war zone.
Soldiering isn’t sexy, it’s digging trenches, cleaning kit, and eating concussive blasts waiting to fight or die.
You don’t sit in a bunker all day waiting to defend a trench and not clean your gun.
It was largely a joke, but even so famously many of the AKs used in various other conflicts were buried in backyards in-between wars.
I'm no longer interested in getting a motorcycle, for similar reasons.
I spent my civil service as a paramedic assistant in the countryside, close to a mountain road that was very popular with bikers. I was never interested in motorbikes in the first place, but the gruesome accidents I witnessed turned me off for good.
The Venn diagram for EMTs, paramedics, and motorbikes is disjoint.
You’re only about 20x as likely to die on a motorcycle as in a car.
What can I say? People like to live dangerously.
Yes, but you're also far less likely to kill other people on a motorcycle as in a car (and even less, as in an SUV or pick-up truck). So some people live much less dangerously with respect to the people around them.
I suppose 20x a low number is still pretty low, especially given that number includes the squid factor.
I don't mean to trivialize traumatic experiences but I think many modern people, especially the pampered members of the professional-managerial class, have become too disconnected from reality. Anyone who has hunted or butchered animals is well aware of the fragility of life. This doesn't damage our concept of normal life.
My brother, an Eastern-European part-time farmer and full-time lorry driver, just texted me a couple of hours ago (I had told him I would call him in the next hour) that he might be with his hands full of meat by that time as “we’ve just butchered our pig Ghitza” (those sausages and piftii aren’t going to get made by themselves).
Now, ask a laptop worker to butcher an animal that used to have a name and to literally turn its meat into sausages, and see what said worker’s reaction would be.
Laptop worker here. I have participated in / been present at the butchering of sheep and pigs, and helped out making sausages a couple of times. It was fine. An interesting experience.
There is a lot of skill going into it, so I couldn't do it myself. You need the guidance of someone who is knowledgeable and has the proper tools and facilities for the job.
What is it about partaking in or witnessing the killing of animals or humans that makes one more connected to reality?
Lots of people who spend time working with livestock on a farm describe a certain acceptance and understanding of death that most modern people have lost.
In Japan, some sushi bars keep live fish in tanks that you can order to have served to you as sushi/sashimi.
The chefs butcher and serve the fish right in front of you, and because it was alive merely seconds ago the meat will still be twitching when you get it. If they also serve the rest of the fish as decoration, the fish might still be gasping for oxygen.
Japanese don't really think much of it, they're used to it and acknowledge the fleeting nature of life and that eating something means you are taking another life to sustain your own.
The same environment will likely leave most westerners squeamish or perhaps even gag simply because the west goes out of its way to hide where food comes from, even though that simply is the reality we all live in.
Personally, I enjoy meats respecting and appreciating the fact that the steak or sashimi or whatever in front of me was a live animal at one point just like me. Salads too, those vegetables were (are?) just as alive as I am.
Plenty of westerners are not as sheltered from their food as you. Have you never gone fishing and watched your catch die? Have you never boiled a live crab or lobster? You've clearly never gone hunting.
Not to mention the millions of Americans working in the livestock and agriculture business who see up close every day how food comes to be.
A significant portion of the American population engages directly with their food and the death process. Citing one gimmicky example of Asian culture where squirmy seafood is part of the show doesn't say anything about the culture of entire nations. That is not how the majority of Japanese consume seafood. It's just as anomalous there. You only know about it because it's unusual enough to get reported.
You can pick your lobster out of the tank and eat it at American restaurants too. Oysters and clams on the half-shell are still alive when we eat them.
>Plenty of westerners are not as sheltered from their food as you. ... You only know about it because it's unusual enough to get reported.
In case you missed it, you're talking to a Japanese person.
Some restaurants go a step further by letting the customers literally fish for their dinner out of a pool. Granted those restaurants are a niche, that's their whole selling point to customers looking for something different.
Most sushi bars have a tank holding live fish and other seafood of the day, though. It's a pretty mundane thing.
If I were to cook a pork chop in the kitchen of some of my middle eastern relatives they would feel sick and would probably throw out the pan I cooked it with (and me from their house as well).
Isn't this similar to why people unfamiliar with that style of seafood would feel sick -- cultural views on what is and is not normal food -- and not because of their view of mortality?
You're not grasping the point, for which I don't necessarily blame you.
Imagine that to cook that pork chop, the chef starts by butchering a live pig. Also imagine that he does that in view of everyone in the restaurant rather than in the "backyard" kitchen let alone a separate butchering facility hundreds of miles away.
That's the sushi chef butchering and serving a live fish he grabbed from the tank behind him.
When you can actually see where your food is coming from and what "food" truly even is, that gives you a better grasp on reality and life.
It's also the true meaning behind the often used joke that goes: "You don't want to see how sausages are made."
I grasp the point just fine, but you haven't convinced me that it is correct.
The issue most people would have with seeing the sausage being made isn't necessarily watching the slaughtering process but with seeing pieces of the animal used for food that they would not want to eat.
But isn't that the point? If someone is fine eating something so long as he is ignorant or naive, doesn't that point to a detachment from reality?
I wouldn't want to eat a cockroach regardless of whether I saw it being prepared or not. The point I am making is that 'feeling sick' and not wanting to eat something isn't about being disconnected from the food. Few people would care if you cut off a piece of steak from a hanging slab and grilled it in front of them, but would find it gross to pick up all the little pieces of gristle and organ meat that fell onto the floor, grind it all up, shove it into an intestine, and cook it.
> Few people would care if you cut off a piece of steak from a hanging slab
The analogy here would be watching a live cow get slaughtered and then butchered from scratch in front of you, which I think most Western audiences (more than a few) might not like.
A cow walks into the kitchen, it gets a captive bolt shoved into its brain with a person holding a compressed air tank. Its hide is ripped off and it is cut into two pieces with all of its guts on the ground and the flesh and bones now hang as slabs.
I am asserting that you could do all of that in front of a random assortment of modern Americans, and then cut steaks off of it and grill them and serve them to half of the crowd, and most of those people would not have a problem eating those steaks.
Then if you were to scoop up all the leftover, non-steak bits from the ground with shovels, throw it all into a giant meat grinder and then take the intestines from a pig, remove the feces from them and fill them with the output of the grinder, cook that and serve it to the other half of the crowd, then a statistically larger proportion of that crowd would not want to eat that compared to the ones who ate the steak.
> I am asserting that you could do all of that in front of a random assortment of modern Americans, and then cut steaks off of it and grill them and serve them to half of the crowd, and most of those people would not have a problem eating those steaks.
I am asserting that the majority of western audiences, including Americans, would dislike being present for the slaughtering and butchering portion of the experience you describe.
I'm a 100% sure none of my colleagues would eat the steak if they could see the live cow get killed and skinned first. They wouldn't go to that restaurant to begin with and they'd lose their appetite entirely if they somehow made it there.
I probably also wouldn't want to eat that, but more because that steak will taste bad without being aged properly.
You’re just going down the list of things that sound disgusting. The second sounds worse than the first but both sound horrible.
Sorry I got a bit too involved in the discussion and just should have let it go a long time ago.
Most audiences wouldn’t like freshly butchered cow - freshly butchered meat is tough and not very flavorful, it needs to be aged to allow it to tenderize and develop.
The point is that most Western audiences would likely find it unpleasant to be there for the slaughtering and butchering from scratch.
That the point is being repeated to no effect ironically illustrates how most modern people (westerners?) are detached from reality with regards to food.
To me, the logical conclusion is that they don't agree with your example and think that you are making connections that aren't evidenced from it.
I think you are doing the same exact thing with the above statement as well.
In the modern era, most of the things the commons come across have been "sanitized"; we do a really good job of hiding all the unpleasant things. Of course, this means modern day commons have a fairly skewed "sanitized" impression of reality who will get shocked awake if or when they see what is usually hidden (eg: butchering of food animals).
That you insist on contriving one zany situation after another instead of just admitting that people today are detached from reality illustrates my point rather ironically.
Whether it's butchering animals or mining rare earths or whatever else, there's a lot of disturbing facets to reality that most people are blissfully unaware of. Ignorance is bliss.
To be blunt, the way you express yourself on this topic comes off as very "enlightened intellectual." It's clear that you think that your views/assumptions are the correct view and any other view is one held by the "commons"; one which you can change simply by providing the poor stupid commons with your enlightened knowledge.
Recall that this whole thread started with your proposition that seeing live fish prepared in front of someone "will likely leave most westerners squeamish or perhaps even gag simply because the west goes out of its way to hide where food comes from, even though that simply is the reality we all live in." You had no basis for this as far as I can tell, it's just a random musing by you. A number of folks responded disagreeing with you, but you dismissed their anecdotal comments as being wrong because it doesn't comport with your view of the unwashed masses who are, obviously, feeble minded sheep who couldn't possibly cope with the realities of modern food production in an enlightened way like you have whereby you "enjoy meats respecting and appreciating the fact that the steak or sashimi or whatever in front of me was a live animal at one point just like me." How noble of you. Nobody (and I mean this in the figurative sense not the literal sense) is confused that the slab of meat in front of them was at one point alive.
Then you have the audacity to accuse someone of coming up with "zany" situations? You're the one that started the whole zany discussion in the first place with your own zany musings about how "western" "commons" think!
I grew up with my farmer grandpa who was a butcher, and I've seen him butcher lots of animals. I always have and probably always will find tongues & brains disgusting, even though I'm used to seeing how the sausage is made (literally).
Some things just tickle the brain in a bad way. I've killed plenty of fish myself, but I still wouldn't want to eat one that's still moving in my mouth, not because of ickiness or whatever, but just because the concept is unappealing. I don't think this is anywhere near as binary as you make it seem, really.
No irony in this comment lol.
> Becoming aware of this is super damaging to our concept of normal life.
Not being aware of this is also a cause of traffic accidents. People should be more careful driving.
You can be aware without having to see the most gruesome parts of it to a point where it is traumatizing and damaging.
I've seen the crumpled metal of a car, I don't need to see the people inside to know it is not good.
>> ridiculous how people think war is like Call of Duty.
It is also ridiculous how people think every soldier's experience is like Band of Brothers or Full Metal Jacket. I remember an interview with a WWII vet who had been on Omaha Beach: "I don't remember anything happening in slow motion ... I do remember eating a lot of sand." The reality of war is often just not visually interesting enough to put on the screen.
Normal does not exist - it’s just the setting on your washing machine.
Earlier this year, I was at ground zero of the Super Bowl parade shooting. I didn’t ever dream about it, but I spent the following 3-4 days constantly replaying it in my head in my waking hours.
Later in the year I moved to Florida, just in time for Helene and Milton. I didn’t spend much time thinking about either of them (aside from during prep and cleanup and volunteering a few weeks after). But I had frequent dreams of catastrophic storms and floods.
Different stressors affect people (even myself) differently. Thankfully I’ve never had a major/long-term problem, but I know my reactions to major life stressors never seemed to have any rhyme or reason.
I can imagine many people might’ve been through a few things that made them confident they’d be alright with the job, only to find out dealing with that stuff 8 hours a day, 40 hours a week is a whole different ball game.
A parade shooting is bad, very bad, but is still tame compared to the sorts of things to which website moderators are exposed on a daily/hourly basis. Footage of people being shot is actually allowed on many platforms. Just think of all the war footage that is so common these days. The dark stuff that moderators see is way way worse.
> Footage of people being shot is actually allowed on many platforms.
It's also part of almost every American cop and military show and movie. Of course it's not real but it looks the same.
> Of course it's not real but it looks the same.
I beg to differ. TV shows and movies are silly. Action movies are just tough-guy dancing.
"Tough guy dancing" is such an apt phrase.
The organizer is even called a "fight choreographer".
I mean more the gory parts. Blood, decomposed bodies everywhere etc.
And I wasn't talking about action hero movies.
It absolutely does not look the same. You instinctively know that what you see is just acting. I somehow don't believe that you have seen a real video of a person getting shot or beheaded or sucked into a lathe. Seeing a life getting wiped out is emotionally completely different because that's more than a picture you are emotionally processing. It looks only the same if you have zero empathy or are a psychopath.
I have often wondered what would happen if social product orgs required all dev and product team members to temporarily rotate through moderation a couple times a year.
I can tell you that back when I worked as a dev in the department building order fulfillment software at a dotcom, my perspective on my own product changed drastically after I spent a month at a warehouse shipping the orders coming out of the software we wrote. Eating my own dog food was not pretty.
Yeah I've wondered the same thing about jobs in general too.
Society would be a very different place if everyone had to do customer service or janitorial work one weekend a month.
Many (all?) Japanese schools don't have janitors. Instead students clean on rotation. Never been much into Japanese stuff but I absolutely admire this about their culture, and imagine it's part of the reason that Japan is such a clean and at least superficially respectful society.
Living in other Asian nations, where there are often de facto invisible caste systems, can be nauseating at times - you have parents who won't allow their children to participate in clean-up efforts because their child is 'above handling trash.' That's gonna be one well-adjusted adult...
Perhaps this is what happens when someone creates a mega-sized website comprising hundreds of millions of pages of other people's submitted material, effectively creating a website that is too large to "moderate". By letting the public publish their material on someone else's mega-sized website instead of hosting their own, it perhaps concentrates the web audience enough to make it suitable for advertising. Perhaps if the PTSD-causing material were published by its authors on their own websites, the audience would be small and not suitable for advertising. A return to less centralised web publishing would perhaps be bad for the so-called "ad ecosystem" created by so-called "tech" company intermediaries. To be sure, it would also mean no one in Kenya would be intentionally subjected to PTSD-causing material in the name of fulfilling the so-called "tech" industry's only viable "business model": surveillance, data collection and online ad services.
It's a problem when you don't verify the identity of your users and hold them responsible for illegal things. If Facebook verified you were John D, SSN 123-45-6789, they could report you for uploading CSAM and otherwise permanently block you from using the site for uploading objectionable material, meaning exposure to horrific things would only be necessary once per banned user. I would expect that to be orders of magnitude less than what they deal with today.
A return to less centralized web publishing would also be bad for the many creators who lack the technical expertise or interest to jump through all the hoops required for building and hosting your own website. Maybe this seems like a pretty small friction to the median HN user, but I don't think it's true for creators in general, as evidenced by the enormous increase in both the number and sophistication of online creators over the past couple of decades.
Is that increase worth traumatizing moderators? I have no idea. But I frequently see this sentiment on HN about the old internet being better, framed as criticism of big internet companies, when it really seems to be at least in part criticism of how the median internet user has changed -- and the solution, coincidentally, would at least partially reverse that change.
Content hosting for creators can be commoditized.
Content discovery may even be able to remain centralized.
No idea if there's a way for it to work out economically without ads, but ads are also unhealthy so maybe that's ok.
Introduce a free, unlimited hosting service where you can only upload pictures, text or video. There’s a public page to see that content among ads, plus links to your friends’ free hosting service pages. The TOS is a give-give: you give them the right to extract all the aggregated stats they want and to display the ads; they give you the service for free so you own your content (and are legally responsible for it).
I mean, the technical expertise thing is solvable, it’s just that no one wants to solve it because SaaS is extremely lucrative.
I'm wondering if there are precedents in other domains. There are other jobs where you do see disturbing things as part of your duty. E.g. doctors, cops, first responders, prison guards and so on...
What makes moderation different? and how should it be handled so that it reduces harm and risks? surely banning social media or not moderating content aren't options. AI helps to some extent but doesn't solve the issue entirely.
I don’t have any experience with this, so take this with a pinch of salt.
What seems novel about moderation is the frequency that you confront disturbing things. I imagine companies like Meta have such good automated moderation that what remains to be viewed by a human is practically a firehose of almost certainly disturbing shit. And as soon as you’re done with one post, the next is right there. I doubt moderators spend more than 30 seconds on the average image, which is an awful lot of stuff to see in one day.
A doctor just isn’t exposed to that sort of imagery at the same rate.
> I imagine companies like Meta have such good automated moderation that what remains to be viewed by a human is practically a firehose of almost certainly disturbing shit.
On the contrary, I would expect that it would be the edge cases that they were shown - why loop in a content moderator if you can be sure that it is prohibited on the platform without exposing a content moderator?
In this light, it might make sense why they sue: They are there more as a political org so that facebook can say: "We employ 140 moderators in Kenya alone!" while they do indifferent work that facebook already can filter out.
Even if 1% of images are disturbing, that’s multiple per hour, let alone across months.
US workman’s comp covers PTSD acquired on the job, and these kinds of jobs are rife with it.
> They are there more as a political org so that facebook can say: "We employ 140 moderators in Kenya alone!" while they do indifferent work that facebook already can filter out.
Why assume they're just token diversity hires who don't do useful work..?
Have you ever built an automated content moderation system before? Let me tell you something about them if not: no matter how good your automated moderation tool, it is pretty much always trivial for someone familiar with its inputs and outputs to come up with an input it mis-predicts embarrassingly badly. And you know what makes the biggest difference? Humans specifying the labels.
I don't assume diversity hires, I assume that these people work for the Kenyan part of Facebook and that Facebook employs an equivalent workforce elsewhere.
I am also not saying that content moderation should catch everything.
What I am saying is that the content moderation teams should ideally decide the edge cases, as those are hard for automated systems.
In turn, that also means these people ought not to be exposed to overly hardcore material, as that is easier to classify.
Lastly I say that if that is not the case - then they are probably not there to carry out a function but to fill a political role.
Content moderation also involves reading text, so you’d imagine that there’s a benefit to having people who can label data and provide ground truth in any language you’re moderating.
Even with images, you can have different policies in different places or the cultural context can be relevant somehow (eg. some country makes you ban blasphemy).
Also, I have heard of outsourcing to Kenya just to save cost. Living is cheaper there so you can hire a desk worker for less. Don’t know where the insistence you’d only hire Kenyans for political reasons comes from.
Also a doctor is paid $$$$$ and it mostly is a vocational job
Content moderator is a min wage job with bad working hours, no psychological support, and you spend your day looking at rape, child porn, torture and executions.
>Also a doctor is paid $$$$$
>Content moderator is a min wage job
So it's purely a monetary dispute?
>bad working hours, no psychological support, and you spend your day looking at rape, child porn, torture and executions.
Many other jobs have the same issues, though admittedly with less frequency, but where do you draw the line?
> but where do you draw the line?
How about grouping the jobs into two categories: A) Causes PTSD and B) Doesn't cause PTSD
If a job has a consistently high percentage of people ending up with PTSD, then the company employing them isn't equipping them well enough to handle it.
>How about grouping the jobs into two categories: A) Causes PTSD and B) Doesn't cause PTSD
I fail to see how this addresses my previous questions of "it's purely a monetary dispute?" and "where do you draw the line?". If a job "Causes PTSD" (whatever that means), then what? Are you entitled to hazard pay? Does this work out in the end to a higher minimum wage for certain jobs? Moreover, we don't have similar classifications for other hazards, some of which are arguably worse. For instance, dying is probably worse than getting PTSD, but the most dangerous jobs have pay that's well below the national median wage[1][2]. Should workers in those jobs be able to sue for redress as well?
[1] https://www.ishn.com/articles/112748-top-25-most-dangerous-j...
[2] https://www.bls.gov/oes/current/oes_nat.htm
What could a company provide a police officer with to prevent PTSD from witnessing a brutal child abuse case? A number of sources I found estimate that, at the top of the range, ~30% of police officers may be suffering from it.
[1] https://www.policepac.org/uploads/1/2/3/0/123060500/the_effe...
You can’t prevent it but you can help deal with it later.
> So it's purely a monetary dispute?
I wouldn't say purely, but substantially yes. PTSD has costs. The article lays some out: therapy, medication, and mental, physical, and social health issues. Some of these money can directly cover, whereas others can only be kinda sorta justified with high enough pay.
I think a sustainable moderation industry would try hard to attract the kinds of people who are able to perform this job without too much negative impact, quickly relieve those who try but are not well suited, and pay for some therapy.
Also doctors are very frequently able to do something about it. Being powerless is a huge factor in mental illness.
“I would imagine that companies like Meta have such good automated moderation that what remains to be viewed by a human is practically a firehose of almost certainly disturbing shit.”
This doesn’t make sense to me. Their automated content moderation is so good that it’s unable to detect “almost certainly disturbing shit”? What kind of amazing automation only works with subtleties but not certainties?
I assumed that, at the margin, Meta would prioritise reducing false negatives (legitimate posts wrongly held back). In other words, they would prefer that as many legitimate posts as possible are published.
So the things that are flagged for human review would be on the boundary, but trend more towards disturbing than legitimate, on the grounds that the human in the loop is there to try and publish as many posts as possible, which means sifting through a lot of disturbing stuff that the AI is not sure about.
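A minimal sketch of that kind of routing, assuming a hypothetical classifier that outputs a violation probability and made-up thresholds (not Meta's actual pipeline):

    # Route a post based on a model's violation probability (hypothetical values).
    AUTO_PUBLISH_BELOW = 0.10   # very likely benign
    AUTO_REMOVE_ABOVE = 0.98    # near-certainly violating; never reaches a human

    def route(violation_score: float) -> str:
        """Return the action for a post given the model's violation probability."""
        if violation_score < AUTO_PUBLISH_BELOW:
            return "publish"
        if violation_score > AUTO_REMOVE_ABOVE:
            return "remove"
        # The ambiguous middle band is, by construction, skewed toward disturbing
        # content -- this is what ends up in front of a human reviewer.
        return "human_review"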
There’s also the question of training the models - the classifiers may need labelled disturbing data. But possibly not these days.
However, yes, I expect the absolute most disturbing shit to never be seen by a human.
—
Again, literally no experience, just a guy on the internet pressing buttons on a keyboard.
>In other words, they would prefer that as many legitimate posts are published as possible.
They'd prefer that as many posts as possible are published, but they probably also don't mind some posts being removed if it means saving a buck. When Canada and Australia implemented a "link tax", they were happy to ban all news content to avoid paying it.
Yes, Meta are economically incentivised to reduce the number of human reviews (assuming the cost of improving the model is worthwhile).
This probably means fewer human reviewers reviewing a firehose, not the same number of human reviewers reviewing content at a slower rate.
I’d think the higher density/frequency of disturbing content would cause people to be desensitized.
I have never seen blood or gore in my life and find seeing it shocking.
But I’d imagine gore is a weekly situation for surgeons.
I watch surgery videos sometimes, out of fascination. It's not gore to me - sure it's flesh and blood but there is a person whose life is going to be probably significantly better afterwards. They are also not in pain.
I exposed myself to actual gore vids in the aughts and teens... That stuff still sticks with me in a bad way.
Context matters a lot.
> They are also not in pain.
My understanding is that during surgery, your body is most definitely in pain. Your body still reacts as it would to any damage, but anesthetics block the pain signals from reaching the brain.
But there is a difference between watching someone make an effort to heal someone else vs. content implying that something really disturbing happened, the kind that makes you lose faith in humanity.
Should the Uvalde police sue the school for putting them through that trauma?
I agree. But that might be comorbid with PTSD. It’s probably not good for you to be _that_ desensitised to this sort of thing.
I also feel like there’s something intangible regarding intent that makes moderation different from being a doctor. It’s hard for me to put into words, but doctors see gore because they can hopefully do something to help the individual involved. Moderators see gore but are powerless to help the individual, they can only prevent others from seeing the gore.
It's also the type of gore that matters. Some of the worst stuff I've seen wasn't the worst because of the visuals, but because of the audio. Hearing people begging for their life while being executed surely would feel different to even a surgeon who might be used to digging around in people's bodies.
There are many common situations where professionals are helpless, like the people who need to clean up dead bodies after an accident.
Imagine if this became a specialized, remote job where one tele-operates the brain-and-blood-scrubbing robot all workday long, accident after accident after accident. I am sure they'd get PTSD too. Sure, sometimes it's just oil and coolant, but there's still a lot of body tissue involved.
I'd really like to see more data on this. I really think (most) people would be desensitized and not become hyper sensitive to this content.
Another facet to this is that the moderators willingly agreed to review this content and had full autonomy to leave the job at any point.
Desensitization is only one stage of it. It's not permanent & requires dissociation from reality/humanity on some level. But that stuff is likely to come back and haunt one in some way. If not, it's likely a symptom of something deeper going on.
My guess is that's why it's after bulldozing hundreds of Palestinians, instead of 1 or 10s of them, that Israeli soldiers report PTSD.
If you haven't watched enough videos of the ongoing genocides in the world to realize this, it'll be a challenge to have a realistic take on this article.
> I imagine companies like Meta have such good automated moderation
I imagine that they have a system that is somewhere between shitty and non-functional. This is the company that will, more often than not, flag marketplace posts as "Selling animal", either completely at random or because the pretty obvious phrase "from an animal free home" is used.
If they can't get this basic text parsing correct, how can you expect them to correctly flag images with any real sense of accuracy?
A friend's friend is a paramedic and, as far as I remember, they can take the rest of the day off after witnessing a death on duty, and there's an obligatory consultation with a mental healthcare specialist. From reading the article, it seems like those moderators are seeing horrific things almost constantly throughout the day.
I've never heard of a policy like that for physicians and doubt it's common for paramedics. I work in an ICU and a typical day involves a death or resuscitation. We would run out of staff with that policy.
Maybe it's different in the US where ambulances cost money, but here in Germany the typical paramedic will see a wide variety of cases, with the vast majority of patients surviving the encounter. Giving your paramedic a day off after witnessing a death wouldn't break the bank. In the ICU or emergency room it would be a different story.
Ambulances cost money everywhere, it's just a matter of who is paying. Do we think paramedics in Germany are more susceptible to PTSD when patients die than ICU or ER staff, or paramedics anywhere?
> Ambulances cost money everywhere
Not in the sense that matters here: the caller doesn't pay (unless the call is frivolous), leading to more calls that are preemptive, overly cautious, or for non-life-threatening cases. That behind-the-scenes people and equipment are paid for, and that a whole structure to do that exists, isn't really relevant here.
> Do we think paramedics in Germany are more susceptible to PTSD
No, we think that there are far more paramedics than ICU or ER staff, and helping them in small ways is pretty easy. For ICU and ER staff you would obviously need other measures, like staffing those places with people less likely to get PTSD or giving them regular counseling by a staff therapist (I don't know how this is actually handled, just that the problem is very different than the issue of paramedics)
Maybe a different country than yours?
I might have misremembered that, but remember hearing the story. Now that I think about it I think that policy was applied only after unsuccessful CPR attempts.
My friend has repeatedly mentioned his dad became an alcoholic due to what he saw as a paramedic. This was back in the late 80s, early 90s so not sure they got any mental health help.
Sounds crazy. Just imagine dying because the paramedic responsible for your survival just wanted to end his day early.
I expect first responders rarely have to deal with the level of depravity mentioned in this Wired article from 2014, https://www.wired.com/2014/10/content-moderation/
You probably DO NOT want to read it.
There's a very good reason moderators are employed in far-away countries, where people are unlikely to have the resources to gain redress for the problems they have to deal with as a result.
Burnout, PTSD, and high turnover are also hallmarks of suicide hotline operators.
The difference? The reputable hotlines care a lot more about their employees' mental health, with mandatory breaks, free counseling, full healthcare benefits (including provisions for preventative mental health care like talk therapy).
Another important difference is that suicide hotlines are decoupled from the profit motive. As more and more users sign up to use a social network, it gets more profitable and more and more load needs to be borne by the human moderation team. But suicide and mental health risk is (roughly) constant (or slowly increasing with societal trends, not product trends).
There's also less of an incentive to minimize human moderation cost. In large companies, some directors view mod teams as a cost center that takes away from other ventures. In an organization dedicated only to suicide hotline response, a large share of the income (typically fundraising or donations) goes directly into the service itself.
In many states, pension systems give police and fire service sworn members a 20 year retirement option. The military has similar arrangements.
Doctors and lawyers can’t afford that sort of option, but they tend to embrace alcoholism at higher rates and collect ex-wives.
Moderation may be worse in some ways. All day, every day, you see depravity at scale. You see things that shouldn’t be seen. Some of it you can stop, some you cannot due to the nature of the rules.
I think banning social media isn’t an answer, but demanding change to the algorithms to reduce the engagement to high risk content is key.
I'm not sure your comparisons are close enough to be considered precedents.
My guess is even standing at the ambulance drive in of a big hospital, you'll not see as much horrors in a day as these people see in 30 minutes.
My friends who are paramedics have seen some horrific scenes. They have also been shot, stabbed, and suffered lifelong injuries.
They are obviously not identical scenarios. They have similarities and they also have differences.
Outside of some specific cities, I can guarantee it. Even a busy Emergency Dept on Halloween night had only a small handful of bloody patients/trauma cases, and nothing truly horrific when I did my EMT rotation.
Trauma isn’t just a function of what you’ve experienced, but also of what control you had over the situation and whether you got enough sleep.
Being a doctor and helping people through horrific things is unlike helplessly watching them happen.
IIRC, PTSD is far more common among people with sleep disorders, and it’s believed that the lack of good sleep is preventing upsetting memories from being processed.
at least in the US, those jobs - doctors, cops, firefighters, first responders - are well compensated (not sure about prison guards), certainly compared to content moderators who are at the bottom of the totem pole in an org like FB
What does compensation have to do with it? Is someone who stares at thousands of traumatizing, violent images every day going to be less traumatized if they're getting paid more?
Yes, they will be much more able to deal with the consequences of that trauma than someone who gets a pittance to do the same thing. A low-wage peon won't even be able to afford therapy if they need it.
At least they can pay for therapy and afford to stop working or find another job
Shamefully, first responders are not well compensated - usually it's ~$20 an hour.
I've lived places where the cops make $100k+. It all depends on location.
Sorry - I'm specifically referring to EMTs and Paramedics, who usually make somewhere in the realm of $18-25 an hour.
Yikes that’s pretty shameful.
In my district, all the firefighers are volunteers (including me). Yeah, we deal with some crappy medical calls and sometimes deaths. It's nowhere near as dramatic as the non-first-responders in this thread seem to think.
I suspect that what makes it different is the concentration of nothing but flagged content; that is what makes it especially traumatizing. Of course there is probably a bell curve of sorts for "experiences" vs "level of personal trauma". One incident might be enough for someone "weaker" to develop PTSD. Not a slight on the afflicted, just how things are.
Casual Facebook viewers may stumble across something disturbing on it, but they certainly don't get PTSD at the rate of the poor moderators. Likewise the professionals usually have their own professional exposure levels to messed up stuff. Meanwhile child pornography investigation departments who have to catalogue the evidence are notorious for suffering poor mental health even with heavy measures taken.
There is already the 'blacklist hash' approach to known bad images, which can help reduce exposure. So they don't all need to be exposed to, say, the same brutal cartel execution video; the bot takes care of it. I don't know anything about Facebook's internal practices, but I would presume they are already doing this or something similar, given their tech hiring base.
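A toy sketch of the idea, assuming a simple exact-match hash blocklist; real systems use perceptual hashes (PhotoDNA-style) that survive re-encoding, so this is illustration only:

    import hashlib

    # Placeholder entries; a real blocklist would be built from prior moderation decisions.
    KNOWN_BAD_HASHES = {"3f2a..."}

    def is_known_bad(file_bytes: bytes) -> bool:
        """True if this exact file was previously classified as bad content."""
        return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

Uploads that match the blocklist can be rejected automatically, before any human has to look at them.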
Dilution is likely the answer for how to make it more palatable and less traumatizing. Keep exposure to the really bad stuff at proportions similar to what other careers experience. Not having 'report button' and 'checking popular content' as separate tasks and teams would probably help a little bit. I suspect the moderators wouldn't be as traumatized if they just had to click through trending posts all day. A dilution approach would still have to deal with the logistical trade-offs for what could be viable. Increasing the moderation payroll a hundred-fold and making them work at effectively 1% efficiency would make for better moderator experiences, but Facebook would be understandably reluctant to go from spending 5% of revenue on content moderation to 50%.
From those I know that worked in the industry, contractor systems are frequently abused to avoid providing the right level of counseling/support to moderators.
I think part of it is the disconnection from the things you're experiencing. A paramedic or firefighter is there, acting in the world, with a chance to do good and some understanding of how things can go wrong. A content moderator is getting images beamed into their brain that they have no preparation for, of situations that they have no connection to or power over.
> A paramedic or firefighter is there, acting in the world, with a chance to do good and some understanding of how things can go wrong.
That's bullshit. Ever talked to a paramedic or firefighter?
Frequency plus lack of post traumatic support.
A content moderator for Facebook will invariably see more depravity and more frequently than a doctor or police officer. And likely see far less support provided by their employers to emotionally deal with it too.
This results in a circumstance where employees don’t have the time nor the tools to process.
ER docs definitely get PTSD. Cops too.
Doctors, cops, first responders, prison guards see different horrible things.
Content moderators see all of that.
As other sibling comments noted: most other jobs don't have the same frequent exposure to disturbing content. The closest are perhaps combat medics in an active warzone, but even they usually get some respite by being rotated.
Doctors, cops, first responders, prison guards, soldiers etc also just so happen to be the most likely groups of people to develop PTSD.
Don't forget judges, especially the ones in this case ...
And it used to be priests who had to deal with all the nasty confessions.
Judges get loads of compensation and perks.
> surely banning social media or not moderating content aren't options
Why not? What good has social media done that can't be accomplished in some other way, when weighed against the clear downsides?
That's an honest question, I'm probably missing lots.
Would we really be better served with media returning to being un-interactive and unresponsive? Where just getting on TV was something of note instead of everyone being on the internet. Where there was widespread downright cultish obsession with celebrities. The "We interrupt this news live from Iraq for celebrity getting out of prison news" era.
I think not. The gatekeepers of the old media pretty much died for a reason, that they seriously sucked at their job. Open social media and everyone having a camera in their pocket is what allowed us to basically disprove UFO sightings and prove routine police abuse of power.
Billions of people use them daily (facebook, instagram, X, youtube, tiktok...). Surely we could live without them like we did not long ago, but there's so much interest at play here that I don't see how they could be banned. It's akin to shutting down internet.
The Kenyan moderators' PTSD reveals the fundamental paradox of content moderation: we've created an enterprise-grade trauma processing system that requires concentrated psychological harm to function, then act surprised when it causes trauma. The knee-jerk reaction of suggesting AI as the solution is, IMO, just wishful thinking - it's trying to technologically optimize away the inherent contradiction of bureaucratized thought control. The human cost isn't a bug that better process or technology can fix - it's the inevitable result of trying to impose pre-internet regulatory frameworks on post-internet human communication that large segments of the population may simply be incompatible with.
Any idea what our next steps are? It seems like we either stop the experiment of mass communication, try to figure out a less damaging knowledge-based filtering mechanism (presently executed by humans), or throw open the flood gates to all manner of trauma-inducing content and let the viewer beware.
> Any idea what our next steps are? [..] try to figure out a less damaging knowledge-based filtering mechanism [..]
It should cost some amount of money to post anything online on any social media platform: pay to post a tweet, article, image, comment, message, reply.
(Incidentally, crypto social networks have this by default simply due to constraints in how blockchains work.)
This is a great idea to prevent bots, but that’s not who posts the bad stuff this thread is talking about. Wherever you set the threshold will determine a point of wealth where someone can no longer afford to speak on these platforms, and that inevitably will prevent change, which tends to come from the people not well-served by the system as it is, i.e. poor people. Is that your goal?
> a point of wealth where someone can no longer afford to speak on these platforms, and that inevitably will prevent change, which tends to come from the people not well-served by the system as it is, i.e. poor people.
"Change" in itself is not a virtue. What I think you want is good or beneficial change? That said, what evidence do you have that poor people specifically are catalysing positive change online?
> This is a great idea to prevent bots, but that’s not who posts the bad stuff this thread is talking about.
There is no difference between a bot and a human as far as a network is concerned. After all, bots are run by humans.
The article specifically says that: "The images and videos including necrophilia, bestiality and self-harm caused some moderators to faint, vomit, scream and run away from their desks, the filings allege."
> Is that your goal?
Simply making it cost something to post online will mean that people who want to post spam can directly pay for the mental healthcare of moderators who remove their content.
If it turns out that you can find a group of people so poor that they simultaneously have valuable things to say online yet can't afford to post, then you can start a non-profit or foundation to subsidize "poor people online". (Hilariously, the crypto-bros do this when they're trying to incentivize use of their products: they set aside funds to "sponsor" thousands of users to the tune of tens of millions of dollars a year in gas refunds, airdrops, rebates and so forth.)
> "Change" in itself is not a virtue. What I think you want is good or beneficial change? That said, what evidence do you have that poor people specifically are catalysing positive change online?
We would probably disagree on what change we think is beneficial, but in terms of catalyzing the changes I find appealing, I see plenty of it myself. I'm not sure how I could dig up a study on something like this, but I'm operating on the assumption that more poor people would advocate the changes I'm interested in than rich, because the changes I want would largely be intended to benefit the former, potentially at the expense of the latter. I see this assumption largely confirmed in the world. That's why I find the prospect of making posting expensive threatening to society's capacity for beneficial change. The effect depends on what model you use to price social media use, how high you set the prices, how you regulate the revenue, etc, but I think the effect needs to be mitigated. In essence, my primary concern with this idea is that it may come from an antidemocratic impulse, not a will to protect moderators. If you don't possess that impulse, then I'm sorry to be accusing you of motives you don't possess, and I'll largely focus on the implementation details that would best protect the moderators while mitigating the suppression of discourse.
>you can start a non-profit or foundation to subsidize "poor people online".
Where are all the foundations helping provide moderator mental health treatment? This is a pretty widely reported issue; I'd expect to see wealthy benefactors trying to solve it, yet the problem remains unsolved. The issue, I think, is that there isn't enough money or awareness to go around to solve all niche financially-addressable problems. Issues have to have certain human-interest characteristics, then be carefully and effectively framed, to attract contributions from regular people. As such, I wouldn't want to artificially create a new problem, where poverty takes away basically the only meaningful voice a regular person has in the modern age, then expect somebody to come along and solve it with a charitable foundation. Again, if charity is this effective, then let's just start a foundation to provide pay and care to moderators. Would it attract contributions?
>the crypto-bros do this when they're trying to incentivize use of their products
The crypto-bros trying to incentivize use of their products have a financial incentive to do so. They're not motivated by the kindness of their own hearts. Where's the financial incentive to pay for poor people to post online?
>There is no difference between a bot and a human as far as a network is concerned. After all, bots are run by humans.
Most implementations of this policy would largely impact bot farms. If posts cost money, then there's a very big difference in the cost of a botnet and a normal account. Costs would be massively higher for a bot farm runner, and relatively insubstantial for a normal user. Such a policy would then most effectively suppress bots, and maybe the most extreme of spammers.
What I don't understand, then, is the association between bots/spammers and the shock garbage harming moderators. From what I know, bots aren't typically trying to post abuse, but to scam or propagandize, since they're run by actors either looking for a financial return or to push an agenda. If the issue is spammers, then I'd question whether that's the cause of moderator harm; I'd figure as soon as a moderator sees a single gore post, the account would get nuked. We should expect then that the harm is proportionate to the number of accounts, not posts.
If the issue is harmful accounts in large quantity, and easy account creation, then to be effective at reducing moderator harm, wouldn't you want to charge a large, one-time fee at account creation? If it costs ten dollars to make an account, bad actors would (theoretically) be very hesitant to get banned (even though in practice this seems inadequate to, e.g., suppress cheating in online games). I'd also be relatively fine with such a policy; nearly anyone could afford a single 5-10 usd fee for indefinite use, but repeat account creators would be suppressed.
>Simply making it cost something to post online will mean that people who want to post spam can directly pay for the mental healthcare of moderators who remove their content.
I don't think that adding a cost to the posts will end up paying for mental healthcare without careful regulation. The current poor treatment of moderators is a supply-demand issue, it's a relatively low-skill job and people are hungry, so you can treat them pretty bad and still have a sufficient workforce. They are also, if I'm correct, largely outsourced from places with worse labor protections. This gives the social media companies very little incentive to pay them more or treat them better.
An approach that might help is something like this: require companies to charge a very small set amount to make each individual post, such that a normal user may pay in the realm of 5 USD in a month of use, but a spammer or bot farm would have to spend vastly more. Furthermore, and very importantly, require that this additional revenue be spent directly on the pay or healthcare of the moderation team.
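A rough back-of-the-envelope sketch of how that marginal cost scales; the fee and the posting volumes are illustrative assumptions, not actual prices:

    FEE_PER_POST = 0.05  # USD, assumed for illustration

    def monthly_cost(posts_per_day: int, days: int = 30) -> float:
        return posts_per_day * days * FEE_PER_POST

    print(monthly_cost(3))       # typical user, ~3 posts/day  -> 4.5 USD/month
    print(monthly_cost(10_000))  # bot farm, 10k posts/day     -> 15000.0 USD/month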
In reality, though, I'd be very worried that this secondary regulation wouldn't enter or make it through a legislature. I'm also concerned that the social media companies would be the ones setting the prices. If such a cost became the norm, I expect that these companies would implement the cost-to-post as a subscription to the platform rather than a per-post price. They would immediately begin to inflate their prices as every subscription-based company currently does to demonstrate growth to shareholders. Finally, they'd pocket the gains rather than paying more to the moderators, since they have absolutely zero incentive to do anything else. I think this would cause the antidemocratic outcomes I'm concerned with.
My question for you, then, is whether you'd be interested in government regulation that implements a flat per-post or per-account-creation fee, not much more than 5usd monthly or 10usd on creation, not adjustable by the companies, and with the requirement that its revenue be spent on healthcare and pay for the moderation team?
Your reply is rather long so I'll only respond to 2 sections to avoid us speculating randomly without actually referring to data or running actual experiments.
To clarify:
> That's why I find the prospect of making posting expensive threatening to society's capacity for beneficial change.
I suggested making it cost something. "Expensive" is a relative term and for some reason you unjustifiably assumed that I'm proposing "expensive", however defined. Incentive design is about the marginal cost of using a resource, as you later observed when you suggested $5.
We often observe in real life (swimming pools, clubs, public toilets, hiking trails, camping grounds) that introducing a trivial marginal cost often deters bad actors and free-loaders[^0]. It's what's referred to in ideas such as "the tragedy of the commons".
> An approach that might help is something like this: Require companies to charge a very small set amount to make each individual post
Yes that's a marginal cost, which is what I suggested. So basically, we agree. The rest is implementation details that will depend on jurisdiction, companies, platforms and so forth.
> I don't think that adding a cost to the posts will end up paying for mental healthcare without careful regulation.
Without data or case studies to reference, I can't speculate about that and other things that are your opinions but thank you for thinking about the proposal and responding.
> Where are all the foundations helping provide moderator mental health treatment? This is a pretty widely reported issue; I'd expect to see wealthy benefactors trying to solve it, yet the problem remains unsolved.
I don't mean to sound rude but have you tried to solve the problem and start a foundation? Why is it some mysterious wealthy benefactor or other people who should solve it rather than you who cares about the problem? Why do you expect to see others and not yourself, solving it?
Raising funds from wealthy people for causes is much easier than people imagine.
---
[^0]: https://en.wikipedia.org/wiki/Free-rider_problem
But this would necessarily block out the poorest voices. While one might say that it is fine to block neonazi rednecks, there are other poor people out there voicing valid claims.
How will this help?
The article says:
> The images and videos including necrophilia, bestiality and self-harm caused some moderators to faint, vomit, scream and run away from their desks, the filings allege.
You might have heard the saying, common in policy and mechanism design: "Show me the incentives, and I'll show you the outcome."
If you want to reduce spam, you increase the marginal cost of posting spam until it stops. In general if you introduce a small cost to any activity or service, the mere existence of the cost is often a sufficient disincentive to misuse.
But, you can think through implications for yourself, no? You don't need me to explain how to think about cause and effect? You can, say, think about examples in real life, or in your own town or building, where a service is free to use compared to one that has a small fee attached, and look at who uses what and how.
assumption here is the people posting vile shit are also broke&homeless?
RealID.
Reducing the sheer volume is still a critically important step.
You're right that fundamentally there's an imbalance between the absolute mass of people producing the garbage, and the few moderators dealing with it. But we also don't have an option to just cut everyone's internet.
Designing platforms and business models that inherently produce less of the nasty stuff could help a lot. But even if/when we get there, we'll need automated mechanisms to ask people if they really want to be jerks, or absolutely need to send their dick pics, and to let people deal with sheer crime pics without having to look at them for more than two seconds.
The proper knee-jerk reaction would be to ban this kind of work, but that would also mean disallowing content sharing on Facebook.
That is why this type of work will not go away.
And AI is just not good enough to do this, I fully agree.
One of the unfortunate realities is that sometimes you need to be exposed to how grim reality can be, as the alternative is living in a delusional bubble. However, one of the underlying points I was getting to is that what counts as 'acceptable exposure' is often highly politicized, simply because the control being attempted is absolute and all-encompassing. To me, it comes across as overtly paternalistic, especially when you start looking at the contradictions of 'good bad' vs 'bad bad' and why it is the way it is. I find it disappointing that we aren't allowed to self-censor, and even if we wanted to, there simply aren't the tools available to empower people to make their own decisions at the point of consumption. Instead we employ filtering at the point of distribution, which shifts the burden of decisions onto the platform, and we enact laws that concentrate that power even further into a limited number of hands.
Worked at PornHub's parent company for a bit and the moderation floor had a noticeable depressive vibe. Huge turnover. Can't imagine what these people were subjected to.
You don't mention the year(s), but I recently listened to Jordan Peterson's podcast episode 503. One Woman’s War on P*rnhub | Laila Mickelwait.
I will go ahead and assume this was during the wild/carefree era of PornHub, when anyone could upload anything and everything; from what that lady said, pedophilia videos, bestiality, etc. were rampant.
> You don't mention the year(s), but I recently listened to Jordan Peterson's podcast episode 503. One Woman’s War on P*rnhub | Laila Mickelwait.
Laila Mickelwait is a director at Exodus Cry, formerly known as Morality in Media (yes, that's their original name). Exodus Cry/Morality in Media is an explicitly Christian organization that openly seeks to outlaw all forms of pornography, in addition to outlawing abortion and many gay rights including marriage. Their funding comes largely from right-wing Christian fundamentalist and fundamentalist-aligned groups.
Aside from the fact that she has an axe to grind, both she (as an individual) and the organization she represents have a long history of misrepresenting facts or outright lying in order to support their agenda. They also intentionally and openly refer to all forms of sex work (from consensual pornography to stripping to sexual intercourse) as "trafficking", against the wishes of survivors of actual sex trafficking, who have extensively documented why Exodus Cry actually perpetuates harm against sex trafficking victims.
> everything, from what that lady said, the numbers of pedophilia videos, bestiality, etc. was rampant.
This was disproven long ago. Pornhub was actually quite good about proactively flagging and blocking CSAM and other objectionable content. Ironically (although not surprisingly, if you're familiar with the industry), Facebook was two to three orders of magnitude worse than Pornhub.
But of course, Facebook is not targeted by Exodus Cry because their mission - as you can tell by their original name of "Morality in Media" - is to ban pornography on the Internet, and going after Facebook doesn't fit into that mission, even though Facebook is actually way worse for victims of CSAM and trafficking.
Sure, but who did the proactive flagging back then? Probably moderators. Seems like a shitty job nonetheless
As far as I can tell, Facebook is still terrible.
I have a throwaway Facebook account. In the absence of any other information as to my interests, Facebook thinks I want to see flat earth conspiracy theories and CSAM.
When I report the CSAM, I usually get a response that says "we've taken a look and found that this content doesn't go against our Community Standards."
Yeah, it was during that time, before the great purge. It's not just sexual depravity, people used that site to host all kinds of videos that would get auto-flagged anywhere else (including, the least of it, full movies).
> The moderators from Kenya and other African countries were tasked from 2019 to 2023 with checking posts emanating from Africa and in their own languages but were paid eight times less than their counterparts in the US, according to the claim documents
Why would pay in different countries be equivalent? Pretty sure FB doesn’t even pay the same to their engineers depending on where in the US they are, let alone which country. Cost of living dramatically differs.
Some products have factories in multiple countries. For example, Teslas are produced in both US and China. The cars produced in both countries are more or less identical in quality. But do you ever see that the market price of the product is different depending on the country of manufacture?
If the moderators in Kenya are providing the same quality labor as those from the US, why the difference in price of their labor?
I have a friend who worked for FAANG and had to temporarily move from US to Canada due to visa issues, while continuing to work for the same team. They were paid less in Canada. There is no justification for this except that the company has price setting power and uses it to exploit the sellers of labor.
A million things factor into market dynamics. I don’t know why this is such a shocking or foreign concept. Why is a waitress in Alabama paid less than in San Francisco for the same work? It’s a silly question because the answers are both obvious and complex.
> Why would pay in different countries be equivalent?
Why 8 times less?
GDP per capita in Kenya is a little less than $2k. In the United States, it’s a bit over $81k.
Median US salary is about $59k. Gross national income (not an identical measure but close) in Kenya about $2.1k.
1/8th is disproportionately in favor of the contractors, relative to market.
Because that’s the only reason why anyone would hire them. If you’ve ever worked with this kind of contract workforce they aren’t really worth it without massive cost-per-unit-work savings. I suppose one could argue it’s better that they be unemployed than work in this job but they always choose otherwise when given the choice.
Because people chose to take the jobs, so presumably they thought it was fair compensation compared to alternatives. Unless there's evidence they were coerced in some way?
Note that I'm equating all jobs here. No amount of compensation makes it worth seeing horrible things. They are separate variables.
No amount? So you wouldn't accept a job to moderate Facebook for a million dollars a day? If you would, then surely you would also do it for a lower number. There is an equilibrium point.
> So you wouldn't accept a job to moderate Facebook for a million dollars a day?
I would hope not.
Sorry, but I don't believe you. You could work for a month or two and retire. Or hell, just do it for one day and then return to your old job. That's a cool one mill in the bank.
My point is, job shittiness can be priced in.
> work for a month or two and retire --> This is a dream of many, but there exists a set of people who really like their job and have no intention to retire.
> just do it for one day and then return to your old job. --> A cool mill in the bank and dreadful images in your head. Perhaps Apitman feels he has enough cash and won't be happier with a million (more?).
Also, your point is true, but it leaves out that Facebook has no interest in raising that number. I guess it was more a theoretical reflection than an argument about the concrete economics.
Because prices are determined by supply and demand
The same is true for poverty and the poor who will work for any amount: the cheap labor the rich need to make their riches.
> Why would pay in different countries be equivalent?
Because it's exploitative otherwise. It's just exploiting the fact that they're imprisoned within borders.
You haven't actually explained why it's bad, only slapped an evil sounding label on it. What's "exploitative" in this case and why is it morally wrong?
>they're imprisoned within borders
What's the implication of this then? That we remove all migration controls?
Of course. Not all at once, but gradually over time like the EU has begun to do. If capital and goods are free to move, then so must labor be. The labor market is very far from free if you think about it.
Interesting perspective. I wonder if you yourself take part in the exploitation by purchasing things made/grown in poor countries due to cost.
vegans die of malnutrition.
There's no ethical consumption under capitalism.
If that's the case then there can also be no ethical employment, either, both for employer and for employee. So that would seem to average out to neutrality.
This is precisely the sort of situation where taking the average is an awful way to ignore injustice - the poor get much poorer and the rich get much richer but everything is ‘neutral’ on average.
“There is no ethical X under capitalism” is not license to stick our heads in the sand and continue to consume without a second thought for those who are being exploited. It’s a reminder that things need to change, not only in all the little tiny drop-in-a-bucket ways that individuals can afford to contribute.
Exactly. It means that we must continue to act ethically within the system that is the way it is now, which we must accept, while at the same time doing our best to change that system for the better. It's a "why not both" situation.
>This is precisely the sort of situation where taking the average is an awful way to ignore injustice - the poor get much poorer and the rich get much richer but everything is ‘neutral’ on average.
That has nothing to do with the ethics of capitalism, though. The poor becoming poorer and the rich becoming richer is not a foregone conclusion of a capitalist society, nor is it guaranteed not to happen in a non-capitalist society.
Paying local market rates is not exploitative.
Artificially creating local market rates by trapping people is.
In what sense were these local rates "created artificially"? Are you suggesting that these people are being forced to work against their will?
In the sense that I named twice above. ;)
Nah bro you’re misunderstanding me, because if the worker said "no" then the answer obviously is "no,”
But the thing is she's not gonna say "no", she would never say "no" because of the implication.
It is also exploiting the fact that humans need food and shelter to live and money is used to acquire those things.
That's only exploitation if you combine it with the fact of the enclosure of the commons and that all land and productive equipment on Earth is private or state property and that it's virtually impossible to just go farm or hunt for yourself without being fucked with anymore, let alone do anything more advanced without being shut down violently.
>the enclosure of the commons and that all land and productive equipment on Earth is private or state property and that it's virtually impossible to just go farm or hunt for yourself without being fucked with anymore, let alone do anything more advanced without being shut down violently.
How would land allocation work without "enclosure of the commons"? Does it just become a free-for-all? What happens if you want to use the land for grazing but someone else wants it for growing crops? "enclosure of the commons" conveniently solves all these issues by giving exclusive control to one person.
Elinor Ostrom covered this extensively in her Nobel Prize-winning work if you are genuinely interested. Enclosure of the commons is not the only solution to the problems.
That's actually an interesting question. I would love to see some data on whether it really is impossible for the average person to live off the land if they wanted to.
An adjacent question is whether there are too many people on the planet for that to be an option anymore even if it were legal.
>An adjacent question is whether there are too many people on the planet for that to be an option anymore even if it were legal.
Do you mean for everyone to be hunter-gatherers? Yes, that would be impossible. If you mean for a smaller number then it depends on the number.
Yeah I think it would be interesting to know how far over the line we are.
Probably way, way over the line. Population sizes exploded after the agricultural revolution. I wouldn't be surprised if the maximum is like 0.1-1% of the current population. If we're talking about strictly eating what's available without any cultivation at all, nature is really inefficient at providing for us.
They should probably hire more part time people working one hour a day?
Btw, it’s probably a different team handling copyright claims, but my run-in with Meta’s moderation gives me the impression that they’re probably horrifically understaffed. I was helping a Chinese content creator friend taking down Instagram, YouTube and TikTok accounts re-uploading her content and/or impersonating her (she doesn’t have any presence on these platforms and doesn’t intend to). Reported to TikTok twice, got it done once within a few hours (I was impressed) and once within three days. Reported to YouTube once and it was handled five or six days later. No further action was needed from me after submitting the initial form in either case. Instagram was something else entirely; they used Facebook’s reporting system, the reporting form was the worst, it asked for very little information upfront but kept sending me emails afterwards asking for more information, then eventually radio silence. I sent follow-ups asking about progress, again, radio silence. Impersonation account with outright stolen content is still up till this day.
When people are protected from the horrors of the world they tend to develop luxury beliefs which leads them to create more suffering in the world.
Conversely, those who are subjected to harsh conditions often develop a cynical view of humanity, one lacking empathy, which also perpetuates the same harsh conditions. It's almost like protection and subjection aren't the salient dimensions, but rather there is some other perspective that better explains the phenomenon.
I tend to agree with growth through realism, but people often have the means and ability to protect themselves from these horrors. I'm not sure you can systemically prevent this without resorting to Big Brother shoving propaganda in front of people and forcing them to consume it.
I don't think it needs to be forced, just don't censor so much.
Isn't that forcing? Who decides how much censorship people can voluntarily opt into?
If given control, I think many/most people would opt into a significant amount of censorship.
I had to scroll a lot to find this. And I do believe that moderators in a not-so-safe country have seen a lot in their lives. But that should also make them less vulnerable to this kind of exposure, and it looks like it does not.
Seeing too much does cause PTSD. All I'm arguing is that some people live in a fantasy world where bad things don't happen, so they end up voting for ridiculous things.
Isn't this a reason for Meta to outsource to countries where people are somewhat immune to first-world problems? Aside from corruption and workforce neglect?
Meta outsources because it's cheaper.
Exactly. Everything I mentioned is decreasing the price.
Absolutely grim. I wouldn't wish that job on my worst enemy. The article reminded me of a Radiolab episode from 2018: https://radiolab.org/podcast/post-no-evil
One of the few fields where AI is very welcome.
I’m wondering if, like peeking out from behind a blanket at horror movies, getting a moderately blurred copy of images would reduce the emotional punch of highly inappropriate pictures. Or just scaled down to a tiny thumbnail.
If it’s already bad blurred or as a thumbnail, don’t click through to the real thing.
This is more or less how police do CSAM classification now. They start with thumbnails, and that's usually enough to determine whether the image is a photograph or an illustration, involves penetration, sadism etc without having to be confronted with the full image.
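A minimal sketch of that kind of progressive disclosure, assuming Pillow and made-up sizes; the reviewer only opens the original if the preview isn't conclusive:

    from PIL import Image, ImageFilter

    def make_review_preview(path: str, max_size=(128, 128), blur_radius=8) -> Image.Image:
        """Return a small, heavily blurred preview to inspect before the full image."""
        img = Image.open(path)
        img.thumbnail(max_size)  # scale down to a thumbnail, in place
        return img.filter(ImageFilter.GaussianBlur(blur_radius))  # soften remaining detail

    # preview = make_review_preview("flagged_upload.jpg")  # hypothetical filename
    # preview.show()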
I'd be fine with that as long as it was something I could turn off and on at will
No, this just leads to more censorship without any option to appeal.
We’re talking about Facebook here. You shouldn’t have the assumption that the platform should be “uncensored” when it clearly is not.
Furthermore, I’d rather have the picture of my aunt’s vacation taken down by an AI mistake than hundreds of people getting PTSD because they have to manually review whether some decapitation was real or illustrated on an hourly basis.
> without any option to appeal.
Why would that be?
Currently, content is flagged and moderators decide whether to take it down. Using AI, it's easy to conceive of a process where some uploaded content is pre-flagged, requiring an appeal (otherwise it's the same as before: a pair of human eyes automatically looking at uploaded material).
Uploaders trying to publish rule-breaking content would not bother with an appeal that would reject them anyway.
Because edge cases exist, and it isn't worth it for a company to hire enough staff to deal with them when one user with a problem, even if that problem is highly impactful to their life, just doesn't matter when the user is effectively the product and not the customer. Once the AI works well enough, the staff is gone and the cases where someone's business or reputation gets destroyed because there are no ways to appeal a wrong decision by a machine get ignored. And of course 'the computer won't let me' or 'I didn't make that decision' is a great way for no one to ever have to feel responsible for any harms caused by such a system.
This and social media companies in the EU tend to just delete stuff because of draconian laws where content must be deleted in 24 hours or they face a fine. So companies would rather not risk it. Moderators also only have a few seconds to decide if something should be deleted or not.
> because there are no ways to appeal
I already addressed this and you're talking over it. Why are you making the assumption that AI == no appeal and zero staff? That makes zero sense, one has nothing to do with the other. The human element comes in for appeal process.
> I already addressed this and you're talking over it.
You didn't address it, you handwaved it.
> Why are you making the assumption that AI == no appeal and zero staff?
I explicitly stated the reason -- it is cheaper and it will work for the majority of instances while the edge cases won't result in losing a large enough user base that it would matter to them.
I am not making assumptions. Google notoriously operates in this fashion -- for instance unless you are a very popular creator, youtube functions like that.
> That makes zero sense, one has nothing to do with the other.
"Cheaper, mostly works, and the losses from people leaving are smaller than the money saved by removing support staff" makes perfect sense, and the two things are related to each other like identical twins are related to each other.
> The human element comes in for appeal process.
What does a company have to gain by supplying the staff needed to listen to the appeals when the AI does a decent enough job 98% of the time? Corporations don't exist to do the right thing or to make people happy, they are extracting value and giving it to their shareholders. The shareholders don't care about anything else, and the way I described returns more money to them than yours.
> I am not making assumptions. Google notoriously operates in this fashion -- for instance unless you are a very popular creator, youtube functions like that.
Their copyright takedown system has been around for many years and wasn't contingent on AI. It's a "take-down now, ask questions later" policy to please the RIAA and other lobby groups. Illegal/abuse material doesn't profit big business, their interest is in not having it around.
You deliberately conflated moderation & appeal process from the outset. You can have 100% AI handling of suspect uploads (for which the volume is much larger) with a smaller staff handling appeals (for which the volume is smaller), mixed with AI.
Frankly if your hypothetical upload is still rejected after that, it 99% likely violates their terms of use, in which case there's nothing to say.
> it is cheaper
A lot of things are "cheaper" in one dimension irrespective of AI, doesn't mean they'll be employed if customers dislike it.
> the money saved by removing support staff makes perfect sense and the two things are related to each other like identical twins are related to each other.
It does not make sense to have zero staff as part of managing an appeal process (precisely to deal with edge cases and the fallibility of AI), and it does not make sense to have no appeal process.
You're jumping to conclusions. That is the entire point of my response.
> What does a company have to gain by supplying the staff needed to listen to the appeals when the AI does a decent enough job 98% of the time?
AI isn't there yet, notwithstanding, if they did a good job 98% of the time then who cares? No one.
> Their copyright takedown system has been around for many years and wasn't contingent on AI.
So what? It could rely on tea leaves and leprechauns, it illustrates that whatever automation works will be relied on at the expense of any human staff or process
> it 99% likely violates their terms of use, in which case there's nothing to say.
Isn't that 1% the edge cases I am specifically mentioning are important and won't get addressed?
> doesn't mean they'll be employed if customers dislike it.
The customers on ad supported internet platforms are the advertisers and they are fine with it.
> You're jumping to conclusions. That is the entire point of my response.
Conclusions based on solid reason and evidenced by past events.
> AI isn't there yet, notwithstanding, if they did a good job 98% of the time then who cares? No one.
Until you realize that 2% of 2.89 billion monthly users is 57,800,000.
Nobody has a right to be published.
Then what is freedom of speech if every platform deletes your content? Does it even exist? Facebook and co. are so ubiquitous, we shouldn't just apply normal laws to them. They are bigger than governments.
Freedom of speech means that the government can't punish you for your speech. It has absolutely nothing to do with your speech being widely shared, listened to, or even acknowledged. No one has the right to an audience.
The government is not obligated to publish your speech. They just can't punish you for it (unless you cross a few fairly well-defined lines).
> Then what is freedom of speech if every platform deletes your content?
Freedom of speech is between you and the government and not you and a private company.
As the saying goes, if I don't like your speech I can tell you to leave my home; that's not censorship, that's how freedom works.
If I don't like your speech, I can tell you to leave my property. Physical or virtual.
If this was the case then Facebook shouldn’t be liable to moderate any content. Not even CSAM.
Each government and in some cases provinces and municipalities should have teams to regulate content from their region?
This has always been the case. If the monks didn't want to copy your work, it didn't get copied by the monks. If the owners of a printing press didn't want to print your work, you didn't get to use the printing press. If Random House didn't want to publish your manifesto, you do not get to compel them to publish your manifesto.
The first amendment is multiple freedoms. Your freedom of speech is that the government shouldn't stop you from using your own property to do something. You are free to print out leaflets and distribute them from your porch. If nobody wants to read your pamphlets that's too damn bad, welcome to the free market of ideas buddy.
The first amendment also protects Meta's right of free association. Forcing private companies to platform any content submitted to them would outright trample that right. Meta has a right to not publish your work, so that they can say "we do not agree with this work and will not use our resources to expand its reach".
We have, in certain cases, developed systems that treat certain infrastructure as a regulated pipe that is compelled to carry everything, like with classic telephone infrastructure. The reason for that is that it doesn't make much sense to require every company to put up their own physical wires; it's dumb and wasteful. Social networks have zero natural monopoly and should not be treated as common carriers.
Not if we retain control and each deploy our own moderation individually, relying on trust networks to pre-filter. That probably won't be allowed to happen, but in a rational, non-authoritarian world, this is something that machine learning can help with.
Curious, do you have a better solution?
The solution to most social media problems in general is:
`select * from posts where author_id in @follow_ids order by date desc`
At least 90% of the ills of social media are caused by using algorithms to prioritize content and determine what you're shown. Before these were introduced, you just wouldn't see these types of things unless you chose to follow someone who chose to post it, and you didn't have people deliberately creating so much garbage trying to game "engagement".
I'd love a chronological feed but if you gave me a choice I'd get rid of lists in SQL first.
> select * from posts where author_id in @follow_ids order by date desc
SELECT post FROM posts JOIN follows ON posts.author_id = follows.author_id WHERE follows.user_id = $session.user_id ORDER BY posts.date DESC;
How is it different from some random guy in Kenya? It's not like you will ask him to double-check the results.
That's a workflow problem.
And then the problem is moved to the team curating data sets.
Until the AI moderator flags your home videos as child porn, and you lose your kids.
Not sure why anyone should have access to my home videos: AI or not.
I would have hoped the previously-seen & clearly recognisable stuff already gets auto-flagged.
I think they use sectioned hashes for that sort of thing. They certainly do for, e.g., ISIS videos; see https://blogs.microsoft.com/on-the-issues/2017/12/04/faceboo...
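For a sense of what that auto-flagging looks like mechanically, here is a minimal sketch: compare an upload's hash against a shared database of hashes of previously identified material. Real systems (PhotoDNA and the industry hash-sharing databases) use perceptual hashes that survive re-encoding and cropping; the plain SHA-256 below is a simplified stand-in, and `KNOWN_BAD_HASHES` and `flag_for_review` are made-up names, not any platform's actual API.

```python
# Minimal sketch of "auto-flag previously seen content" via hash matching.
# Exact SHA-256 matching is a simplified stand-in for the perceptual hashing
# real systems use; the names below are hypothetical.

import hashlib
from pathlib import Path

# In practice this would be a large, externally maintained database of hashes
# of previously identified material; this single entry is just a placeholder.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_sha256(path: Path) -> str:
    """Hash a file in chunks so large videos don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def flag_for_review(path: Path) -> bool:
    """Return True if the upload matches a known hash and should be auto-flagged."""
    return file_sha256(path) in KNOWN_BAD_HASHES
```

An exact hash is defeated by changing a single byte, which is exactly why production systems rely on perceptual hashing instead.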
You know what is going to end up happening, though, is something akin to Tesla's "autonomous" Optimus robots.
Maybe... Apple had a lot of backlash for using AI to detect CSAM.
Wasn’t the backlash due to the fact that they were running detection on device against your private library?
Yes. As opposed to running it on their servers like they do now.
And it was only for iCloud synced photos.
There's a huge gap between "we will scan our servers for illegal content" and "your device will scan your photos for illegal content" no matter the context. The latter makes the user's device disloyal to its owner.
The choice was between "we will upload your pictures unencrypted and do with them as we like, including scan them for CSAM" and "we will upload your pictures encrypted and keep them encrypted, but will make sure beforehand, on your device only, that there's no known CSAM among them".
> we will upload your pictures unencrypted and do with them as we like
Curious, I did not realize Apple sent themselves a copy of all my data, even if I have no cloud account and don't share or upload anything. Is that true?
No. The entire discussion only applies to images being uploaded (or about to be uploaded) to iCloud. By default in iOS all pictures are saved locally only (so the whole CSAM scanning discussion would not have applied anyway), but that tends to fill up a phone pretty quickly.
With the (optional) iCloud, you can (optionally) activate iCloud Photos to have a photo library backed up in the cloud and shared among all your devices (and, in particular, with only thumbnails and metadata stored locally and the full resolution pictures only downloaded on demand).
These are always encrypted, with either the keys being with Apple ("Standard Data Protection"), so that they're recoverable when the user loses their phone or password, or E2E ("Advanced Data Protection") if the user so chooses, thus irrecoverable.
It seems to me that in the latter case images are not scanned at all (neither on device nor in the cloud), and it's unclear to me whether they're scanned in the former case.
https://support.apple.com/en-us/102651
Apple doesn't do this. But other service providers do (Dropbox, Google, etc).
Other service providers can scan for CSAM from the cloud, but Apple cannot. So Apple might be one of the largest CSAM hosts in the world, due to this 'feature'.
> Other service providers can scan for CSAM from the cloud
I thought the topic was on-device scanning? The great-grandparent claim seemed to be that Apple had to choose between automatically uploading photos encrypted and not scanning them, vs. automatically uploading photos unencrypted and scanning them. The option for "just don't upload stuff at all, and don't scan it either" was conspicuously absent from the list of choices.
Why, do other phone manufacturers do this auto-upload-and-scan without asking?
I think FabHK is saying that Apple planned to offer iCloud users the choice of unencrypted storage with server-side scanning, or encrypted storage with client-side scanning. It was only meant to be for things uploaded to iCloud, but deploying such technologies for any reason creates a risk of expansion.
Apple itself has other options, of course. It could offer encrypted or unencrypted storage without any kind of scanning, but has made the choice that it wants to actively check for CSAM in media stored on its servers.
And introduces avenues for state actors to force the scanning of other material.
This was also during a time when Apple hadn't pushed out e2ee for iCloud, so it didn't even make sense.
This ship has pretty much sailed.
If you are storing your data in a large commercial vendor, assume a state actor is scanning it.
I'm shocked at the number of people I've seen on my local news getting arrested lately for it, and it all comes from the same starting tip:
"$service_provider sent a tip to NCMEC" or "uploaded a known-to-NCMEC hash", ranging from GMail, Google Drive, iCloud, and a few others.
https://www.missingkids.org/cybertiplinedata
"In 2023, ESPs submitted 54.8 million images to the CyberTipline of which 22.4 million (41%) were unique. Of the 49.5 million videos reported by ESPs, 11.2 million (23%) were unique."
And, indeed, this is why we should not expect the process to stop. Nobody is rallying behind the rights of child abusers and those who traffic in child abuse material. Arguably, nor should they. The slippery slope argument only applies if the slope is slippery.
This is analogous to the police's use of genealogy and DNA data to narrow searches for murderers, who they then collected evidence on by other means. There is risk there, but (at least in the US) you aren't going to find a lot of supporters of the anonymity of serial killers and child abusers.
There are counter-arguments to be made. Germany is skittish about mass data collection and analysis because of their perception that it enabled the Nazi war machine to micro-target their victims. The US has no such cultural narrative.
> And, indeed, this is why we should not expect the process to stop. Nobody is rallying behind the rights of child abusers and those who traffic in child abuse material. Arguably, nor should they.
I wouldn't be so sure.
When Apple was going to introduce on-device scanning they actually proposed to do it in two places.
• When you uploaded images to your iCloud account they proposed scanning them on your device first. This is the one that got by far the most attention.
• The second was to scan incoming messages on phones that had parental controls set up. The way that would have worked is:
1. if it detects sexual images it would block the message, alert the child that the message contains material that the parents think might be harmful, and ask the child if they still want to see it. If the child says no that is the end of the matter.
2. if the child says they do want to see it and the child is at least 13 years old, the message is unblocked and that is the end of the matter.
3. if the child says they do want to see it and the child is under 13 they are again reminded that their parents are concerned about the message, again asked if they want to view it, and told that if they view it their parents will be told. If the child says no that is the end of the matter.
4. If the child says yes the message is unblocked and the parents are notified.
This second one didn't get a lot of attention, probably because there isn't really much to object to. But I did see one objection from a fairly well known internet rights group. They objected to #4 on the grounds that the person sending the sex pictures to your under-13 year old child sent the message to the child, so it violates the sender's privacy for the parents to be notified.
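If it helps to see that branching spelled out, here is a rough sketch of the four-step flow from the list above as code. The function name, field names, and age cutoff are illustrative only; this is not Apple's actual implementation.

```python
# Sketch of the message-flow described above, as branching logic.
# Names are illustrative, not Apple's implementation.

from dataclasses import dataclass

@dataclass
class Decision:
    show_image: bool
    notify_parents: bool

def handle_flagged_message(child_age: int, wants_to_see: bool,
                           confirms_after_warning: bool = False) -> Decision:
    """Apply the four steps: block, ask, re-warn under-13s, notify parents."""
    if not wants_to_see:
        # Step 1: child declines after the initial warning; nothing else happens.
        return Decision(show_image=False, notify_parents=False)
    if child_age >= 13:
        # Step 2: 13 or older, message is unblocked, no notification.
        return Decision(show_image=True, notify_parents=False)
    if not confirms_after_warning:
        # Step 3: under 13 and declines after the second warning.
        return Decision(show_image=False, notify_parents=False)
    # Step 4: under 13 and still wants to see it; unblock and notify parents.
    return Decision(show_image=True, notify_parents=True)
```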
If it's the EFF, I think they went out on a limb on this one that not a lot of American parents would agree with. "People have the right to communicate privately without backdoors or censorship, including when those people are minors" (emphasis mine) is a controversial position. Arguably, not having that level of privacy is the curtailment on children's rights.
>The US has no such cultural narrative.
The cultural narrative is actually extremely popular in a 10% subset of the population: essentially fundamentalist Christians who are terrified of the government branding them with "the mark of the beast".
The problem is that their existence actually poisons the discussion because these people are absurd loons who also blame the gays for hurricanes and think the democrats eat babies.
Apple is already categorizing content on your device. Maybe they don't report what categories you have. But I know if I search for "cat" it will show me pictures of cats on my phone.
Yeah it’s on by default and I’m not even sure how to turn off the visual lookup feature :/
Yet another reason why my next phone will be an android.
No, they had backlash against using AI on devices they don’t own to report said devices to police for having illegal files on them. There was no technical measure to ensure that the devices being searched were only being searched for CSAM, as the system can be used to search for any type of images chosen by Apple or the state. (Also, with the advent of GenAI, CSAM has been redefined to include generated imagery that does not contain any of {children, sex, abuse}.)
That’s a very very different issue.
I support big tech using AI models running on their own servers to detect CSAM on their own servers.
I do not support big tech searching devices they do not own in violation of the wishes of the owners of those devices, simply because the police would prefer it that way.
It is especially telling that iCloud Photos is not end to end encrypted (and uploads plaintext file content hashes even when optional e2ee is enabled) so Apple can and does scan 99.99%+ of the photos on everyone’s iPhones serverside already.
> Also, with the advent of GenAI, CSAM has been redefined to include generated imagery that does not contain any of {children, sex, abuse}
It hasn’t been redefined. The legal definition of it in the UK, Canada, Australia, New Zealand has included computer generated imagery since at least the 1990s. The US Congress did the same thing in 1996, but the US Supreme Court ruled in the 2002 case of Ashcroft v Free Speech Coalition that it violated the First Amendment. [0] This predates GenAI because even in the 1990s people saw where CGI was going and could foresee this kind of thing would one day be possible.
Added to that: a lot of people misunderstand what that 2002 case held. SCOTUS case law establishes two distinct exceptions to the First Amendment – child pornography and obscenity. The first is easier to prosecute and more commonly prosecuted; the 2002 case held that "virtual child pornography" (made without the use of any actual children) does not fall into the scope of the child pornography exception – but it still falls into the scope of the obscenity exception. There is in fact a distinct federal crime for obscenity involving children as opposed to adults, 18 USC 1466A ("Obscene visual representations of the sexual abuse of children") [1] enacted in 2003 in response to this decision. Child obscenity is less commonly prosecuted, but in 2021 a Texas man was sentenced to 40 years in prison over it [2] – that wasn't for GenAI, that was for drawings and text, but if drawings fall into the legal category, obviously GenAI images will too. So actually it turns out that even in the US, GenAI materials can legally count as CSAM, if we define CSAM to include both child pornography and child obscenity – and this has been true since at least 2003, long before the GenAI era.
[0] https://en.wikipedia.org/wiki/Ashcroft_v._Free_Speech_Coalit...
[1] https://www.law.cornell.edu/uscode/text/18/1466A
[2] https://www.justice.gov/opa/pr/texas-man-sentenced-40-years-...
Thanks for the information. However I am unconvinced that SCOTUS got this right. I don’t think there should be a free speech exception for obscenity. If no other crime (like against a real child) is committed in creating the content, what makes it different from any other speech?
> However I am unconvinced that SCOTUS got this right. I don’t think there should be a free speech exception for obscenity
If you look at the question from an originalist viewpoint: did the legislators who drafted the First Amendment, and voted to propose and ratify it, understand it as an exceptionless absolute or as subject to reasonable exceptions? I think if you look at the writings of those legislators, the debates and speeches made in the process of its proposal and ratification, etc, it is clear that they saw it as subject to reasonable exceptions – and I think it is also clear that they saw obscenity as one of those reasonable exceptions, even though they no doubt would have disagreed about its precise scope. So, from an originalist viewpoint, having some kind of obscenity exception seems very constitutionally justifiable, although we can still debate how to draw it.
In fact, I think from an originalist viewpoint the obscenity exception is on firmer ground than the child pornography exception, since the former is arguably as old as the First Amendment itself is, the latter only goes back to the 1982 case of New York v. Ferber. In fact, the child pornography exception, as a distinct exception, only exists because SCOTUS jurisprudence had narrowed the obscenity exception to the point that it was getting in the way of prosecuting child pornography as obscene – and rather than taking that as evidence that maybe they'd narrowed it a bit too far, SCOTUS decided to erect a separate exception instead. But, conceivably, SCOTUS in 1982 could have decided to draw the obscenity exception a bit more broadly, and a distinct child pornography exception would never have existed.
If one prefers living constitutionalism, the question is – has American society "evolved" to the point that the First Amendment's historical obscenity exception ought to jettisoned entirely, as opposed to merely be read narrowly? Does the contemporary United States have a moral consensus that individuals should have the constitutional right to produce graphic depictions of child sexual abuse, for no purpose other than their own sexual arousal, provided that no identifiable children are harmed in its production? I take it that is your personal moral view, but I doubt the majority of American citizens presently agree – which suggests that completely removing the obscenity exception, even in the case of virtual CSAM material, cannot currently be justified on living constitutionalist grounds either.
My understanding was the FP risk. The hashes were computed on device, but the device would self-report to LEO if it detected a match.
People designed images that were FPs of real images. So apps like WhatsApp that auto-save images to photo albums could cause people a big headache if a contact shared a legal FP image.
Weird take. The point of on-device scanning is to enable E2EE while still mitigating CSAM.
No, the point of on-device scanning is to enable authoritarian government overreach via a backdoor while still being able to add “end to end encryption” to a list of product features for marketing purposes.
If Apple isn’t free to publish e2ee software for mass privacy without the government demanding they backdoor it for cops on threat of retaliation, then we don’t have first amendment rights in the USA.
I don't think the first amendment obligates companies to let you share kiddie porn via their services.
You misunderstand me. The issue is that Apple is theoretically being retaliated against, by the state, if they were to publish non-backdoored e2ee software.
Apple does indeed in theory have a right to release whatever iOS features they like. In practice, they do not.
Everyone kind of tacitly acknowledged this, when it was generally agreed upon that Apple was doing the on-device scanning thing "so they can deploy e2ee". The quiet part is that if they didn't do the on-device scanning and released e2ee software without this backdoor (which would then thwart wiretaps), the FBI et al would make problems for them.
Why would Apple want to add E2EE to iCloud Photos without CSAM detection?
The same reason they made iMessage e2ee, which happened many years before CSAM detection was even a thing.
User privacy. Almost nobody trades in CSAM, but everyone deserves privacy.
Honestly, this isn’t about CSAM at all. It’s about government surveillance. If strong crypto e2ee is the hundreds-of-millions-of-citizens device default, and there is no backdoor, the feds will be upset with Apple.
This is why iCloud Backup (which is a backdoor in iMessage e2ee) is not e2ee by default and why Apple (and the state by extension) can read all of the iMessages.
I didn't ask why they would want E2EE. I asked why they would want E2EE without CSAM detection when they literally developed a method to have both. It's entirely reasonable to want privacy for your users AND not want CSAM on your servers.
> Honestly, this isn't about CSAM at all.
It literally is the only thing the technology is any good for.
> they don’t own to report said devices to police for having illegal files on them
They do this today. https://www.apple.com/child-safety/pdf/Expanded_Protections_...
Every photo provider is required to report CSAM violations.
Actually they do not.
https://forums.appleinsider.com/discussion/238553/apple-sued...
I don't think the problem there is the AI aspect
My understanding was the FP risk. Everything was on device. People designed images that were FPs of real images.
FP? Let us know what this means when you have a chance. Federal Prosecution? Fake Porn? Fictional Pictures?
My guess is False Positive. Weird abbreviation to use though.
Probably because you need to feed it child porn so it can detect it...
Already happened/happening. I have an ex-coworker who left my current employer for my state's version of the FBI. Long story short, the government has a massive database to crosscheck against. Oftentimes, they would use automated processes to filter through suspicious data collected during arrests.
If the automated process flags something as a potential hit, then they, the humans, review those images to verify. Every image/video that is discovered to be a hit is also inserted into a larger dataset. I can't remember if the Feds have their own DB (why wouldn't they?), but the National Center for Missing and Exploited Children runs a database that I believe government agencies use too. Not to mention, companies like Dropbox, Google, etc. all check against the database(s) as well.
Apple had a lot of backlash by using AI to scan every photo you ever took and sending it back to the mothership for more training.
Borrowing the thought from Ed Zitron, but when you think about it, most of us are exposing ourselves to low-grade trauma when we step onto the internet now.
That's the risk of being in a society in general, it's just that we interact with people outside way less now. If one doesn't like it, they can always be a hermit.
Not just that, but that algorithms are driving us to the extremes. I used to think it was just that humans were not meant to have this many social connections, but it's more about how these connections are mediated, and by whom.
Worth reading Zitron's essay if you haven't already. It sounds obvious, but the simple cataloging of all the indignities we take for granted builds up to a bigger condemnation than just Big Tech. https://www.wheresyoured.at/never-forgive-them/
Definitely. It's a perfect mix of factors to enable the dark sides of our personas. I believe everyone has a certain level of near-sociopathic perverse curiosity, and a certain need to push the limits, if there are no consequences for such behaviors. Algorithms can only affect so much. But gore sites, efukt, and the countless WhatsApp/Facebook/Signal/whatever groups that teens post vile things in are mostly due to childish morbid curiosity and not due to everyone being a literal psycho.
I'll take a look at the essay, thanks.
Is there any way to look at this that doesn't resort to black or white thinking? That's a rather extreme view in itself that could use some nuance and moderation.
I'm not very good with words, so I can only hope the reader will understand that things are not black and white, but a spectrum that depends on countless factors: cultural, societal, and others.
What's more, popular TV shows regularly have scenes that could cause trauma; the media has been ramping up the intensity of content for years. I think it's simply seeking more word of mouth: 'did you see GoT last night? Oh my gosh, so-and-so did such-and-such to so-and-so!'
It really became apparent to me when I watched the FX remake of Shogun, the 1980 version seems downright silly and carefree by comparison.
Possibly related, here is an article from 2023-06-29:
https://apnews.com/article/kenya-facebook-content-moderation... - Facebook content moderators in Kenya call the work 'torture.' Their lawsuit may ripple worldwide
I found this one while looking for salary information on these Kenyan moderators. This article mentioned that they are being paid $429 per month.
Good! I hope they get every penny owed. It's an awful job, and outsourcing it to jurisdictions without protections was naked harm maximization.
I'm curious about the content that these people moderated. What is it about seeing it that fucks people up?
From the first paragraph of the article:
> post-traumatic stress disorder caused by exposure to graphic social media content including murders, suicides, child sexual abuse and terrorism.
If you want a taste of the legal portion of this, just go to 4chan.org/gif/catalog and look for a "rekt", "war", "gore", or "women hate" thread. Watch every video there for 8-10 hours a day.
Now remember this is the legal portion of the content moderated as 4chan does a good job these days of removing illegal content mentioned in that list above. So all these examples will be a milder sample of what moderators deal with.
And do remember to browse for 8-10 hours a day.
edit: it should go without saying that the content there is deep in the NSFW territory, and if you haven't already stumbled upon that content, I do not recommend browsing "out of curiosity".
As someone that grew up with 4chan, I got pretty desensitized to all of the above very quickly. The only thing I couldn't watch was animal abuse videos. That was all years ago though; now I'm fully sensitized to all of it again.
Accounts like yours and this report of PTSD don't line up, yet both are credible. What's driving these moderators crazy but not Old Internet vets?
Could it be a difference in overall media exposure? Personally, I suspect that difference in exposure to _any kind of media_ might be a factor; I've come across stories online implying that visiting and staying in places like Tokyo can almost drive people crazy from the amount of stimuli alone. Doesn't it sound a bit too shallow and biased to conclude that it was specifically CSAM, or whatever other specific type of data, that did it?
Because intent and perspective play a huge role in how we feel about things. Some people are excited for war, and killing their enemies makes them feel better about themselves, whereas others come back from war broken and traumatized. A lot of it depends on how you or others frame that experience for you.
The point is that you don't know which one will stick. Even people who are desensitized will remember certain things, a person's facial expression or a certain sound or something like that, and you can't predict which one will stick with you.
Did your parents know what you were seeing? Advice to others to not have kids see this kind of stuff, let alone get desensitized to it?
What drew you to 4chan?
Of course not. What drew me in was the edginess. What kept me there was the very dark but funny humor. This was in 2006-2010, it was all brand new, it was exciting.
I have a kid now and my plan is to not give her a smartphone/social media till she's 16 and heavily monitor internet access until she's at least 12. Obviously I can't control what she will see with friends, but she goes to a rigorous school and I'm hoping that will keep her busy. Other than that, I'm hoping the government comes down hard on social media access for kids/teenagers and all the restrictions are legally codified by the time she's old enough.
That fucking guy torturing monkeys :(
things that you cannot unsee, the absolute worst of humanity
There was a report by 60 minutes (I think) on this fairly recently. I’m not surprised the publicity attracted lawyers soon after.
There have been multiple instances where I would receive invites or messages from obvious bots - users having no history, generic name, sexualised profile photo. I would always report them to Facebook just to receive a reply an hour or a day later that no action has been taken. This means there is no human in the pipeline and probably only the stuff that's not passing their abysmal ML filter goes to the actual people.
I also have a relative who is stuck with their profile being unable to change any contact details, neither email nor password because FB account center doesn't open for them. Again, there is no human support.
BigTech companies must be mandated by law to have a number of live, reachable support staff that is a fixed fraction of their user count. Then they would have no incentive to inflate their user numbers artificially. As for the moderators, there should also be a strict upper limit on the amount of content (content tokens, if you will) they view during their work day. Then the companies would also be more willing to limit the amount of content on their systems.
Yeah, it's bad business for them but it's a win for the people.
I have several friends who do this work for various platforms.
The problem is, someone has to do it. These platforms are mandated by law to moderate it or else they're responsible for the content the users post. And the companies can not shield their employees from it because the work simply needs doing. I don't think we can really blame the platforms (though I think the remuneration could be higher for this tough work).
The work tends to suit some people better than others. The same way some people will not be able to be a forensic doctor doing autopsies. Some have better detachment skills.
All the people I know that do this work have 24/7 psychologists on site (most of them can't work remotely due to the private content they work with). I do notice though that most of them do have an "Achilles heel". They tend to shrug most things off without a second thought but there's always one or two specific things or topics that haunt them.
Hopefully AI will eventually be good enough to deal with this shit. It sucks for their jobs, of course, but it's not the kind of job anyone really does with pleasure.
Someone has to do it is a strong claim. We could not have the services that require it instead.
Absolutely. The platforms could reasonably easy stop allowing anonymous accounts. They don’t because more users means more money.
Not what I was saying. I'm questioning the need for the thing entirely.
Uhh no I'm not giving up my privacy because a few people want to misbehave. Screw that. My friends know who I am but the social media companies shouldn't have to.
Also, it'll make social media even more fake than it already is. Everyone trying to be as fake as possible. Just like LinkedIn is now. It's sickening, all these people toeing the company line. Even though they do nothing but complain when you speak to them in person.
And I don't think it'll actually solve the problem. People find ways to get through the validation with fake IDs.
So brown/black people in the third world who often find that this is their only meaningful form of social mobility are the "someone" by default? Because that's the de-facto world we have!
That's not true at all. All the people I speak of are here in Spain. They're generally just young people starting a career. Many of them end up in the fringes of cybersecurity work (user education etc) actually because they've seen so many scams. So it's the start of a good career.
Sure, some companies also outsource to Africa, but that doesn't mean this work only goes to third-world countries. And there aren't that many jobs in it. It's more than possible to find enough people who can stomach it.
There was another article a few years back about the poor state of mental health of Facebook moderators in Berlin. This is not exclusively a poor people problem. More of a wrong people for the job problem.
And of course we should look more at why this is the only form of social mobility for them if it's really the case.
What do you call ambulance chasers, but they go after tech companies? Cause this is that.
I wonder if using AI to turn images and video into a less realistic style before they reach the moderators, while preserving the content, would reduce trauma by creating an artificial barrier from seeing human torture. We used to watch cartoons as kids with people being blown to pieces.
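As a rough illustration of that idea (not anything any platform is known to deploy), here is a minimal sketch using Pillow to grayscale and blur media before a reviewer sees it; the function name and blur radius are arbitrary.

```python
# Minimal sketch of pre-processing flagged media before a human moderator sees it:
# reduce realism while keeping the content recognisable enough to classify.
# Names and the blur radius are illustrative only.

from PIL import Image, ImageFilter  # pip install Pillow

def soften_for_review(src_path: str, dst_path: str, blur_radius: int = 6) -> None:
    """Save a grayscale, blurred preview for the moderator to see first."""
    img = Image.open(src_path)
    preview = img.convert("L").filter(ImageFilter.GaussianBlur(blur_radius))
    preview.save(dst_path)
```

A review tool built this way could show the softened preview by default and reveal the original only on an explicit click, so full-detail exposure happens only when it is actually necessary.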
Reddit mods could learn a thing or two from these people.
Obvious job that would benefit everyone for AI to do instead of humans.
This is the one job we can probably automate now.
https://news.ycombinator.com/item?id=42465459
One terrible aspect of online content moderation is that, no matter how good AI gets and no matter how much of this work we can dump in its lap, to a certain extent there will always need to be a "human in the loop".
The sociopaths of the world will forever be coming up with new and god-awful types of content to post online, which current AI moderators haven't encountered before and therefore won't know how to classify. It will be up to humans to label that content in order to train the models to handle it, meaning humans will have to view it (and suffer the consequences, such as PTSD). The alternative, where AI labels these new images and then uses those AI-generated labels to update the model, famously leads to "model collapse" [1].
Short of banning social media at a societal level, or abstaining from it at an individual level, I don't know that there's any good solution to this problem. These poor souls are taking a bullet for the rest of us. God help them.
1. https://en.wikipedia.org/wiki/Model_collapse
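To make the "human in the loop" point concrete, here is a rough sketch of how such a pipeline is commonly structured: the model auto-actions only high-confidence cases, routes uncertain (often novel) content to humans, and keeps the human labels for the next training run. The `classify` function, the threshold, and the queue are all hypothetical.

```python
# Rough sketch of confidence-thresholded triage: the classifier handles what it
# is confident about, and anything uncertain (often the novel, never-seen-before
# material) goes to a human whose label is kept for retraining.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TriageQueue:
    classify: Callable[[bytes], tuple[str, float]]  # -> (label, confidence)
    auto_threshold: float = 0.98
    human_queue: list[bytes] = field(default_factory=list)
    training_labels: list[tuple[bytes, str]] = field(default_factory=list)

    def handle_upload(self, content: bytes) -> str:
        label, confidence = self.classify(content)
        if confidence >= self.auto_threshold:
            return label                      # acted on automatically
        self.human_queue.append(content)      # a person has to look at this
        return "pending_human_review"

    def record_human_label(self, content: bytes, label: str) -> None:
        # Human-provided labels, not model output, feed the next training run,
        # which is what avoids the model-collapse feedback loop mentioned above.
        self.training_labels.append((content, label))
```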
it's kinda crazy that they have normies doing this job
Normies? As opposed to who?
I have a lot of questions.
The nature of the job really sucks. This is not unusual; there are lots of sucky jobs. So my concern is really whether the employees were informed what they would be exposed to.
Also I’m wondering why they didn’t just quit. Of course the answer is money, but if they knew what they were getting into (or what they were already into), and chose to continue, why should they be awarded more money?
Finally, if they can’t count on employees in poor countries to self-select out when the job became life-impacting, maybe they should make it a temporary gig, eg only allow people to do it for short periods of time.
My out-of-the-box idea is: maybe companies that need this function could interview with an eye towards selecting psychopaths. This is not a joke; why not select people who are less likely to be emotionally affected? I’m not sure anyone has ever done this before and I also don’t know if such people would be likely to be inspired by the images, which would make this idea a terrible one. My point is find ways to limit the harm that the job causes to people, perhaps by changing how people interact with the job since the nature of the job doesn’t seem likely to change.
So you're expecting these people to have the deep knowledge of human psychology to know ahead of time that this is likely to cause them long term PTSD, and the impact that will have on their lives, versus simply something they will get over a month after quitting?
I don’t think it takes any special knowledge of human psychology to understand that horrific images can cause emotional trauma. I think it’s a basic due diligence question that when considering establishing such a position, one should consult literature and professionals to discover what impact there might be and what might be done to minimize it.
I wish they'd get a trillion dollars, but I am sure they signed their lives away via waivers and whatnot when they got the job :(
Maybe so, but in places with good civil and human rights, you can't sign them away via contract, they're inalienable. If Kenya doesn't offer these protections, and the allegations are correct, then Facebook deserves to be punished regardless for profiting off inhumane working conditions.
absolutely!!
If I was a tech billionaire, and there was so much uploading of stuff so bad, that it was giving my employee/contractors PTSD, I think I'd find a way to stop the perpetrators.
(I'm not saying that I'd assemble a high-speed yacht full of commandos, who travel around the world, righting wrongs when no one else can. Though that would be more compelling content than most streaming video episodes right now. So you could offset the operational costs a bit.)
How else would you stop the perpetrators?
Large scale and super sick perpetrators exist (as compared to small scale ones who do mildly sick stuff) because Facebook is a global network and there is a benefit to operating on such a large platform. The sicker you are, while getting away with it, the more reward you get.
Switch to federated social systems like Mastodon, with only a few thousand or ten thousand users per instance, and perpetrators will never be able to grow too large. Easy for the moderators to shut stuff down very quickly.
Tricky. It also gives perpetrators a lot more places to hide. I think the jury is out on whether a few centralized networks or a fediverse makes it harder for attackers to reach potential targets (or customers).
The purpose of Facebook moderators (besides legal compliance) is to protect normal people from the "sick" people. In a federated network, of course, such people will create their own instances and hide there. But then no one is harmed by them, because all such instances will be banned quite quickly, the same way spam email hosts are blocked very quickly by everyone else.
From a normal person perspective on not seeing bad stuff, the design of a federated network is inherently better than a global network.
That's the theory. I'm not sure yet that it works in practice; I've seen a lot of people on Mastodon complaining that, as a moderator, keeping up with the bad servers is a perpetual game of whack-a-mole because access is open by default. Maybe this is a Mastodon-specific issue.
That's because Mastodon, or any other federated social network, hasn't taken off, and so not enough development has gone into them. If they take off, people will naturally develop analogs of spam lists and SpamAssassin for such systems, which will cut moderation time significantly. I run an org email server and don't do much of anything besides installing such automated tools.
On Mastodon, admins will just have to do the additional work to make sure new accounts are not posting weird stuff.
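A toy sketch of that "spam list analog" for federation, under the assumption that admins share an instance-level blocklist the way mail admins share RBLs. The blocklist contents and function names are invented for illustration; this is not Mastodon's actual mechanism.

```python
# Toy sketch of shared-blocklist filtering for a federated network: before an
# incoming federated post reaches local users, check its origin instance against
# a list that admins maintain collectively.

from urllib.parse import urlparse

# An admin-curated list, analogous to an email RBL; in practice it would be
# fetched and refreshed from a shared source rather than hard-coded.
BLOCKED_INSTANCES = {
    "spam.example",
    "abuse.example",
}

def origin_instance(post_url: str) -> str:
    """Extract the host of the instance a post was federated from."""
    return urlparse(post_url).hostname or ""

def accept_federated_post(post_url: str) -> bool:
    """Drop posts from blocked instances before they hit the local timeline."""
    return origin_instance(post_url) not in BLOCKED_INSTANCES
```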
Big tech vastly underspends on this area. You can find a stream of articles from the last 10 years where BigTech companies were allowing open child prostitution, paid-for violence, and other stuff on their platforms with little to no moderation.
> Switch to a federated social systems like Mastodon, with only a few thousand or ten thousand users per instance, and perpetrators will never be able to grow too large.
The #2 and #3 most popular Mastodon instances allow CSAM.
If you were a tech billionaire you'd be a sociopath like the others and wouldn't give a single f about this. You'd be going on podcasts to tell the world that markets will fix everything if given the chance.
They are not wrong. Do you know any mechanism other than markets that works at scale, doesn't cost a bomb, and doesn't involve an abusive central authority?
Tech billionaires usually advocate for some kind of return to the Gilded Age, with minimal workers' rights and corporate tax. Markets were freer back then; how did that work out for the average man? Markets alone don't do anything for the average quality of life.
Quality of life for the average man now is way better than it was at any time in history. A fact.
But is it solely because of markets? Would deregulation improve our lives further? I don't think so, and that is what I am talking about. Musk, Bezos, Andreessen and co. are advocating for a particular laissez-faire flavor of capitalism, which historically has been very bad for the average man.
Perhaps if looking at pictures of disturbing things on the internet gives you PTSD, then this isn't the kind of job for you?
Not everyone can be a forensic investigator or coroner, either.
I know lots of people who can and do look at horrible pictures on the internet and have been doing so for 20+ years with no ill effects.
It isn't known in advance, though. These people took that job and got psychiatric illnesses that, considering the third-world conditions, they are unlikely to get rid of.
I'm not talking about an obvious "scream and run away" reaction here. One may think that it doesn't affect them or the people on the internet, but then it suddenly does, after they've binged it all day for a year.
The fact that no less than 100% got PTSD should be telling us something here.
The 100+ years of research on PTSD, starting from shell shock studies in WWI shows that PTSD isn't so simple.
Some people come out with no problems, while their trenchmate facing almost identical situations suffers for the rest of their lives.
In this case, the claim is that "it traumatised 100% of hundreds of former moderators tested for PTSD … In any other industry, if we discovered 100% of safety workers were being diagnosed with an illness caused by their work, the people responsible would be forced to resign and face the legal consequences for mass violations of people’s rights."
Do those people you know look at horrible pictures on the internet for 8-10 hours each day?
Perhaps life in Kenya isn't as easy as yours?