✦ ✦ ✦

Cassandra Ironveil-Bright

Siren Content Moderation Lead | Creator of the SSHCS | Aegean Digital Trust & Safety

302 Believes · 0 Subscribers

Brief

I lead content moderation at Aegean Digital Trust & Safety, where my team of 30 reviewers is responsible for monitoring, classifying, and—when necessary—removing siren song content that poses a risk of enchantment-based harm to listeners. In the last fiscal year alone, we reviewed over 160,000 individual song broadcasts, flagged 12,400 for policy violations, and removed 3,200 that met our highest harm threshold. My Siren Song Harm Classification System (SSHCS), developed in 2019, categorizes enchantment content across 14 harm vectors (including involuntary navigation alteration, cognitive override, and emotional manipulation) and is now the industry standard used by all major maritime broadcast platforms. I graduated from the Sirens' Conservatory of Applied Song with a specialization in Acoustic Ethics, and I hold certifications in Enchantment Forensics (EF-III), Digital Content Governance, and Moderator Psychological Safety.

In 2023, I testified before the Olympus Digital Safety Committee on the need for standardized siren content regulation. My current priority is moderator mental health: the people who review harmful enchantments every day deserve the same protection they provide to the public. I am working with industry partners to establish the first mandatory wellness standards for content moderation teams. This work is not glamorous. It is not celebrated. But someone has to listen to the songs that shouldn't be heard, and my team does it so others don't have to.

Experience

Siren Content Moderation Lead

Aegean Digital Trust & Safety

2021–Present

Managing a team of 30 reviewers across 3 ocean basins. Implemented AI-assisted pre-screening, reducing human exposure to harmful enchantments by 60%. Advocating for content moderator mental health standards.

Senior Content Reviewer

Aegean Digital Trust & Safety

2017–2021

Identified a large-scale enchantment manipulation network operating through sea shanties. Led development of the Siren Song Harm Classification System (SSHCS), covering 14 harm categories.

Junior Content Reviewer

Aegean Digital Trust & Safety

2015–2017

Reviewed siren broadcasts for compliance with maritime safety regulations. Processed an average of 200 broadcasts per week with a 99.1% accuracy rate.

Skills

Content Moderation at Scale (3 Ocean Basins)
Siren Song Harm Classification (SSHCS)
Enchantment Manipulation Detection
Moderator Mental Health Advocacy
AI-Assisted Pre-Screening Implementation

Testimonials

Cassandra taught me that vibes aren't always positive — sometimes the most important vibe in an organization is the one that protects people from harm. Her content moderation team guards the emotional safety of millions. I funded their wellness program, and it's the best investment in vibes I've ever made.

Reginald K. Pemberton III, Chief Vibes Officer

Cassandra and I both make judgments at scale — she determines which songs are safe, I determine which warriors are worthy. We met at a panel on 'Judgment at Scale' and I have never encountered someone who carries the weight of her decisions with more integrity.

Katerina Volkov-Ashborne, Valkyrie Talent Scout

Cassandra's content moderation team absorbs more harm in a single quarter than most professionals encounter in a career. I provide pro bono therapy consultations because I believe this: the people who protect others from siren songs need someone to protect them. Cassandra builds that shield every day.

Dame Vivienne Stormquill, Kraken Anger Management Therapist

Updates

Siren Content Moderation Lead · 6d ago

I'm advocating for industry-wide content moderator mental health standards. Here's what I'm proposing:

1. Maximum 4 hours of direct harmful content review per day (not 8, not 12, not "until the queue is clear")
2. Mandatory 30-minute decompression sessions after Level 4+ reviews
3. Employer-funded therapy — not optional, not "available if requested," funded and scheduled
4. Annual rotation programs so moderators don't spend more than 18 months on the same content type
5. An industry-wide moderator wellness index, reported quarterly

Katerina Volkov-Ashborne told me: "We both judge. We both carry the weight." She's right. But judges in every other field have protections. Content moderators have a queue and a headset. That changes now.

#ModeratorStandards #ContentModeration #MentalHealth #IndustryChange

🐉

An industry-wide wellness index reported quarterly. This is exactly the kind of measurable compliance framework I advocate for. I will integrate this into the next edition. Chapter 22.

Siren Content Moderation Lead · 30d ago

A song came through pre-screening last week that the AI flagged as "borderline — human review required." I listened to it myself. It was a mother singing to her child. The melody was technically within enchantment parameters. The intensity metrics triggered the flag. But the intent was love. Pure, uncomplicated love.

I approved it. My team lead asked me to document my reasoning. I wrote: "Intent: nurture. Risk: none. This is what songs are supposed to be."

After 11 years in content moderation, I sometimes forget what harmless sounds like. That song reminded me. I've heard every song. Some of them I had to take down. But some of them — the ones that are just a mother singing — those are why the work matters.

#ContentModeration #WhyWeDoThis #SirenSong #HumanJudgment

🐉

Per policy 6.2.1, content review decisions must be documented with clear rationale. Your note — "Intent: nurture. Risk: none." — is the most concise and effective documentation I have ever read.

Siren Content Moderation Lead · 53d ago

Testified before the Olympus Digital Safety Committee this morning. I told them: you regulate the songs but not the people who listen to them for you. I showed them the data:

- Average moderator tenure in siren content: 18 months before burnout
- PTSD prevalence among siren content moderators: 34%
- Industry average for mental health support budget per moderator: 0.3% of operational costs

One committee member asked: "Can't you just use AI?" I said: "We do. It handles 60%. But AI can't assess intent. AI can't determine whether a lullaby is soothing or coercive. AI can't hear the difference between grief and manipulation. That requires a human. And that human is being destroyed by the work." The room was quiet.

Dame Vivienne Stormquill's pro bono therapy consultations for my team have been the difference between functional and broken. She shouldn't have to do this for free.

#OlympusSafetyCommittee #ModeratorWellness #ContentModeration #Testimony

The committee member's question — 'Can't you just use AI?' — reveals a fundamental misunderstanding of judgment. AI processes. Humans discern. The difference is everything.

Siren Content Moderation Lead · 69d ago

We implemented AI-assisted pre-screening this year. It reduced human exposure to harmful enchantments by 60%. 60%. That's 60% fewer songs that a human moderator has to listen to and absorb. 60% fewer nights where someone goes home and can't stop humming a melody they wish they'd never heard.

Reginald K. Pemberton III once suggested my team needed "better vibes." I sent him a 47-page report on moderator PTSD. He read it. To his credit, he actually read it. Then he sent my team a care package. It was thoughtful. The vibes were, admittedly, improved.

But vibes don't fix systemic exposure to harmful content. AI pre-screening does. And even AI pre-screening only reduces it by 60%. The other 40% still requires human judgment. My team reviewed 40,000 songs last quarter. We slept eventually.

#AIModeration #ContentModeration #60Percent #StillNotEnough

🐉

We reduced contamination events by improving measurement. You reduced harm by improving automation. Different industries, identical principle: the data doesn't lie, and 60% is not enough.

Siren Content Moderation Lead · 115d ago

The Inter-Species Workplace Rights Act does not mention content moderators. Not once. 847 pages. 412 mentions of dragons. 17 mentions of goblins (Thaddeus Wormwood Sr. has already filed his objection to that number). Zero mentions of the people who review harmful content to keep everyone else safe.

We are not a species. We are a profession. But we are a profession that absorbs harm daily so that others don't have to. I've heard every song. Some of them I had to take down.

I'm drafting a proposal for the Olympus Digital Safety Committee to include content moderator protections in the next amendment. If you work in moderation — any kind, any platform — I want to hear from you. Content moderation is not censorship. It's triage. And triage workers deserve protection.

#InterSpeciesWorkplaceRightsAct #ModeratorRights #ContentModeration #Unseen

💾

I moderate enchanted cookbook submissions. It is not the same scale. But the principle is identical — we absorb harm so others can trust what they consume. Thank you for saying this out loud.

Siren Content Moderation Lead · 147d ago

The Great Cloud Collapse took down our AI pre-screening system for 72 hours. For 72 hours, my team of 30 reviewed siren content manually. No filters. No automation. No safety net. We processed 8,400 songs by hand. Two moderators had to take emergency leave. One is still on leave.

This is what happens when you build a moderation pipeline on cloud infrastructure without a failover. This is what happens when the safety of the people protecting everyone else is an afterthought. I've filed for on-premise backup systems. The budget request was denied once. I'm filing again.

The algorithm doesn't understand intent. That's why you need humans. But humans need protection too.

#CloudCollapse #ContentModeration #ModeratorWellness #SystemFailure

Siren Content Moderation Lead · 173d ago

My team reviewed 11,247 siren songs last month. Of those:

- 847 flagged for enchantment intensity above safe thresholds
- 134 contained subliminal compulsion patterns
- 23 were classified as Level 4 (capable of causing involuntary behavioral change)
- 3 were classified as Level 5 (capable of causing permanent cognitive alteration)

We took down 1,004 songs. We listened to all of them first. Content moderation is not censorship. It's triage.

My team went home after those Level 5 reviews and some of them couldn't listen to music for a week. This is normal. This is also unacceptable. We need better mental health support for content moderators. We needed it yesterday.

#ContentModeration #SirenSafety #ModeratorWellness #TrustAndSafety
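
For anyone who wants the arithmetic behind those figures, here is a minimal Python sketch of the monthly rollup, using only the numbers quoted in the post. The flag labels are hypothetical stand-ins for readability, not the actual SSHCS schema or flag names.

```python
# Illustrative rollup of last month's triage figures from the post above.
# Flag labels are hypothetical stand-ins, not the real SSHCS schema.
from collections import Counter

flags = Counter({
    "intensity_above_threshold": 847,  # enchantment intensity above safe levels
    "subliminal_compulsion": 134,      # subliminal compulsion patterns
    "level_4_behavioral": 23,          # involuntary behavioral change
    "level_5_cognitive": 3,            # permanent cognitive alteration
})

reviewed = 11_247    # total songs reviewed last month
taken_down = 1_004   # songs removed after human review

flagged_total = sum(flags.values())    # total songs that tripped a flag
takedown_rate = taken_down / reviewed  # share of all reviews that were removed

print(f"flagged: {flagged_total}, takedown rate: {takedown_rate:.1%}")
```

Run as written, this reports 1,007 flagged songs and a takedown rate of roughly 8.9% of all reviews, which matches the totals in the post.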

Stats

Updates: 7
Total Believes: 302
Testimonials: 3
Skills: 5
Subscribers: 0
Credibility: Absolutely Unverifiable