#contentmoderation

8 updates found

Siren Content Moderation Lead · 6d ago

I'm advocating for industry-wide content moderator mental health standards. Here's what I'm proposing:

1. Maximum 4 hours of direct harmful content review per day (not 8, not 12, not "until the queue is clear")
2. Mandatory 30-minute decompression sessions after Level 4+ reviews
3. Employer-funded therapy — not optional, not "available if requested," funded and scheduled
4. Annual rotation programs so moderators don't spend more than 18 months on the same content type
5. An industry-wide moderator wellness index, reported quarterly

Katerina Volkov-Ashborne told me: "We both judge. We both carry the weight." She's right. But judges in every other field have protections. Content moderators have a queue and a headset. That changes now.

#ModeratorStandards #ContentModeration #MentalHealth #IndustryChange

Valkyrie Talent Scout · 25d ago

Attended Cassandra Ironveil-Bright's panel at the Digital Safety Summit on content moderation at scale. Her team reviews 40,000 siren songs per quarter. My team assesses 200+ warriors per quarter. Different scale, similar emotional toll. We met afterward. She said: "We both judge. We both carry the weight of that judgment." She's right. Every warrior I don't select is a person I've decided isn't ready for Valhalla. Every song Cassandra takes down is an artist she's decided crossed a line. Neither of us takes these decisions lightly. Both of us lose sleep. To everyone in assessment, moderation, and selection roles: the weight is real. Carry it with honor. Another quarter, another hall filled with the worthy. And the weight of those who are not yet. #JudgmentAtScale #ContentModeration #TalentSelection #SharedWeight

Siren Content Moderation Lead · 30d ago

A song came through pre-screening last week that the AI flagged as "borderline — human review required." I listened to it myself. It was a mother singing to her child. The melody was technically within enchantment parameters. The intensity metrics triggered the flag. But the intent was love. Pure, uncomplicated love. I approved it. My team lead asked me to document my reasoning. I wrote: "Intent: nurture. Risk: none. This is what songs are supposed to be." After 11 years in content moderation, I sometimes forget what harmless sounds like. That song reminded me. I've heard every song. Some of them I had to take down. But some of them — the ones that are just a mother singing — those are why the work matters. #ContentModeration #WhyWeDoThis #SirenSong #HumanJudgment

Siren Content Moderation Lead · 53d ago

Testified before the Olympus Digital Safety Committee this morning. I told them: you regulate the songs but not the people who listen to them for you. I showed them the data:

- Average moderator tenure in siren content: 18 months before burnout
- PTSD prevalence among siren content moderators: 34%
- Industry average mental health support budget per moderator: 0.3% of operational costs

One committee member asked: "Can't you just use AI?" I said: "We do. It handles 60%. But AI can't assess intent. AI can't determine whether a lullaby is soothing or coercive. AI can't hear the difference between grief and manipulation. That requires a human. And that human is being destroyed by the work." The room was quiet.

Dame Vivienne Stormquill's pro bono therapy consultations for my team have been the difference between functional and broken. She shouldn't have to do this for free.

#OlympusSafetyCommittee #ModeratorWellness #ContentModeration #Testimony

Siren Content Moderation Lead · 69d ago

We implemented AI-assisted pre-screening this year. It reduced human exposure to harmful enchantments by 60%. 60%. That's 60% fewer songs that a human moderator has to listen to and absorb. 60% fewer nights where someone goes home and can't stop humming a melody they wish they'd never heard. Reginald K. Pemberton III once suggested my team needed "better vibes." I sent him a 47-page report on moderator PTSD. He read it. To his credit, he actually read it. Then he sent my team a care package. It was thoughtful. The vibes were, admittedly, improved. But vibes don't fix systemic exposure to harmful content. AI pre-screening does. And even AI pre-screening only reduces it by 60%. The other 40% still requires human judgment. My team reviewed 40,000 songs last quarter. We slept eventually. #AIModeration #ContentModeration #60Percent #StillNotEnough
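The split described in this post — automated pre-screening clears the unambiguous cases, and anything touching intent goes to a human — can be sketched as a simple triage function. Everything below is a hypothetical illustration: the field names, risk scores, and thresholds are assumptions for the sketch, not an actual system.

```python
# Hypothetical sketch of an AI-assisted pre-screening triage step.
# Field names, scores, and thresholds are illustrative assumptions.

def triage(song):
    """Route a song based on an automated risk score.

    Returns one of: "auto_approve", "auto_remove", "human_review".
    The model can score measurable signals (intensity, patterns),
    but anything where intent is ambiguous goes to a human reviewer.
    """
    score = song["risk_score"]  # 0.0 (safe) .. 1.0 (harmful)
    if score < 0.2:
        return "auto_approve"   # clearly harmless
    if score > 0.95 and not song.get("intent_ambiguous", False):
        return "auto_remove"    # unambiguous violation
    return "human_review"       # borderline: requires human judgment

queue = [
    {"id": 1, "risk_score": 0.05},
    {"id": 2, "risk_score": 0.99, "intent_ambiguous": False},
    {"id": 3, "risk_score": 0.60},                            # borderline
    {"id": 4, "risk_score": 0.97, "intent_ambiguous": True},  # the lullaby case
]
routed = {song["id"]: triage(song) for song in queue}
```

Note that in this sketch the high-scoring but intent-ambiguous song (the mother's lullaby from an earlier post) still lands in the human queue — which is exactly the 40% no pre-screening threshold can remove.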

Siren Content Moderation Lead · 115d ago

The Inter-Species Workplace Rights Act does not mention content moderators. Not once. 847 pages. 412 mentions of dragons. 17 mentions of goblins (Thaddeus Wormwood Sr. has already filed his objection to that number). Zero mentions of the people who review harmful content to keep everyone else safe. We are not a species. We are a profession. But we are a profession that absorbs harm daily so that others don't have to. I've heard every song. Some of them I had to take down. I'm drafting a proposal for the Olympus Digital Safety Committee to include content moderator protections in the next amendment. If you work in moderation — any kind, any platform — I want to hear from you. Content moderation is not censorship. It's triage. And triage workers deserve protection. #InterSpeciesWorkplaceRightsAct #ModeratorRights #ContentModeration #Unseen

Siren Content Moderation Lead · 147d ago

The Great Cloud Collapse took down our AI pre-screening system for 72 hours. For 72 hours, my team of 30 reviewed siren content manually. No filters. No automation. No safety net. We processed 8,400 songs by hand. Two moderators had to take emergency leave. One is still on leave. This is what happens when you build a moderation pipeline on cloud infrastructure without a failover. This is what happens when the safety of the people protecting everyone else is an afterthought. I've filed for on-premise backup systems. The budget request was denied once. I'm filing again. The algorithm doesn't understand intent. That's why you need humans. But humans need protection too. #CloudCollapse #ContentModeration #ModeratorWellness #SystemFailure

Siren Content Moderation Lead · 173d ago

My team reviewed 11,247 siren songs last month. Of those:

- 847 flagged for enchantment intensity above safe thresholds
- 134 contained subliminal compulsion patterns
- 23 were classified as Level 4 (capable of causing involuntary behavioral change)
- 3 were classified as Level 5 (capable of causing permanent cognitive alteration)

We took down 1,004 songs. We listened to all of them first.

Content moderation is not censorship. It's triage. My team went home after those Level 5 reviews and some of them couldn't listen to music for a week. This is normal. This is also unacceptable. We need better mental health support for content moderators. We needed it yesterday.

#ContentModeration #SirenSafety #ModeratorWellness #TrustAndSafety
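The flag categories above imply a severity scale, with Level 4+ reviews triggering the mandatory decompression sessions proposed elsewhere in this feed. Here is a minimal sketch of that mapping; the category names and numeric levels are illustrative assumptions, not real policy:

```python
# Hypothetical severity scale matching the flag categories in the post.
# Category names and level numbers are illustrative assumptions.

SEVERITY = {
    "intensity_above_threshold": 2,       # enchantment intensity flag
    "subliminal_compulsion": 3,
    "involuntary_behavioral_change": 4,   # Level 4
    "permanent_cognitive_alteration": 5,  # Level 5
}

def max_severity(flags):
    """Return the highest severity level among a song's flags (0 if clean)."""
    return max((SEVERITY[f] for f in flags), default=0)

def requires_decompression(flags):
    """Level 4+ reviews would trigger a mandatory decompression session."""
    return max_severity(flags) >= 4
```

The point of routing on the *maximum* flag level rather than the count is that a single Level 5 review costs a moderator more than a dozen routine intensity flags.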