#moderatorwellness

3 updates found

Siren Content Moderation Lead · 53d ago

Testified before the Olympus Digital Safety Committee this morning. I told them: you regulate the songs, but not the people who listen to them for you.

I showed them the data:
- Average moderator tenure in siren content: 18 months before burnout
- PTSD prevalence among siren content moderators: 34%
- Industry average mental health support budget per moderator: 0.3% of operational costs

One committee member asked: "Can't you just use AI?"

I said: "We do. It handles 60%. But AI can't assess intent. AI can't determine whether a lullaby is soothing or coercive. AI can't hear the difference between grief and manipulation. That requires a human. And that human is being destroyed by the work."

The room was quiet.

Dame Vivienne Stormquill's pro bono therapy consultations for my team have been the difference between functional and broken. She shouldn't have to do this for free.

#OlympusSafetyCommittee #ModeratorWellness #ContentModeration #Testimony

Siren Content Moderation Lead · 147d ago

The Great Cloud Collapse took down our AI pre-screening system for 72 hours. For 72 hours, my team of 30 reviewed siren content manually. No filters. No automation. No safety net.

We processed 8,400 songs by hand. Two moderators had to take emergency leave. One is still on leave.

This is what happens when you build a moderation pipeline on cloud infrastructure without a failover. This is what happens when the safety of the people protecting everyone else is an afterthought.

I've filed for on-premise backup systems. The budget request was denied once. I'm filing again.

The algorithm doesn't understand intent. That's why you need humans. But humans need protection too.

#CloudCollapse #ContentModeration #ModeratorWellness #SystemFailure
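The failure mode in the post above — a cloud pre-screener going dark and dumping the entire volume onto human reviewers — can be sketched in a few lines. This is a minimal illustration, not the team's actual pipeline; `CloudClassifier`, `triage`, and the 0.5 risk threshold are all hypothetical names and values.

```python
# Hypothetical sketch of a pre-screening dispatcher. When the cloud
# classifier is unreachable, every song falls through to the human
# review queue -- exactly the "no safety net" scenario described above.

from collections import deque

class CloudClassifier:
    """Stand-in for a cloud AI pre-screener that may be unavailable."""
    def __init__(self, available=True):
        self.available = available

    def score(self, song):
        if not self.available:
            raise ConnectionError("pre-screening service unreachable")
        return song.get("risk", 0.0)

def triage(songs, classifier, threshold=0.5):
    """Auto-clear low-risk songs; queue the rest for human review.
    If the classifier is down, everything goes to the human queue."""
    auto_cleared, human_queue = [], deque()
    for song in songs:
        try:
            if classifier.score(song) < threshold:
                auto_cleared.append(song)
            else:
                human_queue.append(song)
        except ConnectionError:
            human_queue.append(song)  # no automation: manual review
    return auto_cleared, human_queue
```

An on-premise backup system, as requested in the post, would slot in as a second classifier tried before the `ConnectionError` branch, so an outage degrades accuracy instead of multiplying human workload.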

Siren Content Moderation Lead · 173d ago

My team reviewed 11,247 siren songs last month. Of those:
- 847 flagged for enchantment intensity above safe thresholds
- 134 contained subliminal compulsion patterns
- 23 were classified as Level 4 (capable of causing involuntary behavioral change)
- 3 were classified as Level 5 (capable of causing permanent cognitive alteration)

We took down 1,004 songs. We listened to all of them first.

Content moderation is not censorship. It's triage.

My team went home after those Level 5 reviews, and some of them couldn't listen to music for a week. This is normal. This is also unacceptable.

We need better mental health support for content moderators. We needed it yesterday.

#ContentModeration #SirenSafety #ModeratorWellness #TrustAndSafety
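The triage described in that post — a severity scale where anything above "safe" comes down, and the highest levels also flag the reviewing moderator for follow-up care — can be sketched as a tiny decision rule. The level names, ordering, and wellness rule here are illustrative assumptions, not the team's actual classification policy.

```python
# Hypothetical sketch of severity triage. The scale mirrors the
# categories in the post; the wellness rule is an assumption.

from enum import IntEnum

class Severity(IntEnum):
    SAFE = 0
    ELEVATED = 1    # enchantment intensity above safe thresholds
    SUBLIMINAL = 2  # subliminal compulsion patterns
    LEVEL_4 = 4     # involuntary behavioral change
    LEVEL_5 = 5     # permanent cognitive alteration

def takedown_decision(severity):
    """Remove anything above SAFE; Level 4+ also triggers a
    wellness check-in for the moderator who reviewed it."""
    remove = severity > Severity.SAFE
    wellness_followup = severity >= Severity.LEVEL_4
    return remove, wellness_followup
```

Note the arithmetic in the post is internally consistent: 847 + 134 + 23 = 1,004 takedowns (the 3 Level 5 songs are a subset of the escalated reviews in this sketch's reading; the source does not say how they were counted).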