Paid To Be Traumatised: The Grim Reality of Social Media Moderation

Barish Raman

In India’s booming AI and social media ecosystem, rural women are quietly shouldering the hidden burden of training algorithms by reviewing endless streams of violent, abusive, and traumatic content. Paid to endure psychological harm with little support, these “ghost workers” reveal the exploitative human cost behind safer online platforms.

In India’s rapidly expanding AI and social media landscape, rural women are quietly bearing a heavy, hidden burden: reviewing streams of violent, abusive, and traumatic content to train the algorithms that keep online platforms safer. These under-recognized, over-exploited, and underpaid workers, often from marginalized communities, endure lasting psychological trauma with a severe lack of support, exposing the exploitative underbelly of technological progress.

As artificial intelligence powers content detection on platforms like Instagram, YouTube, and Facebook, the demand for human-labelled data has surged. In India, this work increasingly falls to young women in rural and semi-rural areas, who log in from homes or basic setups to classify graphic material, including sexual violence, child exploitation, extreme accidents, hate speech, and pornography, often handling hundreds of items per shift. Their detailed annotations teach AI to spot and filter harm automatically, fueling safer feeds amid exploding user content and generative tools that can amplify abuse, including deepfakes.

Many enter these roles lured by promises of stable, home-based digital jobs that outpay traditional rural options like agriculture. India’s data annotation sector, valued at around $250 million in 2021 with roughly 70,000 workers (per Nasscom), has grown substantially with the AI boom; much of its revenue comes from US clients, and estimates suggest the workforce could reach 1 million by 2030. Women, per industry estimates, make up at least half of it, drawn largely from marginalized rural, Dalit, and Adivasi communities. Improved connectivity has made this possible, integrating remote villages into global AI supply chains.

Yet the reality proves far harsher. Workers describe initial shock—sleepless nights, vivid nightmares, and intrusive thoughts as disturbing visuals replay endlessly. Over time, many shift to emotional numbness, feeling “blank” inside, though anxiety, dissociation, hypervigilance, flashbacks, and strained relationships persist. Strict non-disclosure agreements isolate them further, barring open discussion with family or friends and heightening vulnerability to stigma or job loss.

Unlike standard moderation, which simply removes violating posts, this work focuses on precise labelling for AI training datasets. These moderators, primarily rural women, may review up to 800 disturbing images and videos each day, earning between ₹21,000 and ₹30,000 per month, a modest wage for work that exacts a profound psychological toll. Often funneled towards the most graphic queues due to limited bargaining power, these women absorb the brunt of this trauma without adequate safeguards.

Experts reinforce the gravity of the situation. Sociologist Milagros Miceli likens content moderation to “dangerous work, comparable to any lethal industry,” asserting that no workers fully recover from the harm. A December 2025 study in Frontiers in Psychiatry, incorporating Indian moderators, pinpoints traumatic stress as the foremost risk, with enduring secondary trauma despite attempted interventions. Participants reported intrusive thoughts, anxiety, sleep disruptions, and behavioral shifts like social withdrawal.

For these women, working alone in their village homes, meaningful support is scarce when the work takes its toll, and the isolation only deepens. Companies push for faster turnaround and higher output, routing rural women, whom they see as cheap and easy to hire, straight into the most graphic queues. There is no structured rotation away from the grimmest material, no trauma-informed training, and no accessible counseling. What was promised as an empowering “online job” slowly gives way to sleepless nights, emotional numbness, and strained family relationships. Unless something changes soon, this quiet suffering risks becoming the accepted price of the rush to build better AI.

The irony deepens with AI’s growth: platforms deploy generative tools for creation and recommendations, yet rely on human trauma to curb the resulting harms. This burden disproportionately hits vulnerable women in the Global South, reinforcing marginalization while corporations profit from “safer” systems.

Ethical questions loom large: Is truly safe AI possible when built on unaddressed human suffering? Reforms are overdue, including mandatory mental health resources, trauma-informed practices, fairer wages, task limits, transparency from platforms, and accountability for outsourcing chains. Without them, these ghost workers, essential yet unseen, will continue paying the steepest price for digital safety, their quiet endurance a stark reminder of technology’s uneven costs.

Barish Raman

The author is a young writer with a keen interest and strong base of expertise in society and psychology. She aims to shed light upon any topic in a manner that can empower a reader. She is fascinated with the ever-changing dynamics of society and how they affect us. Barish is currently pursuing Psychology at Università Cattolica del Sacro Cuore in Milan, Italy.
