Facebook to Give $52M Cash Payout to Moderators Who Developed ‘PTSD’ From Reviewing Posts


While Facebook’s users often feel neglected and unwanted on the platform as company execs continue to implement new censorship protocols, there is a group that’s about to receive a massive reward from Facebook – content moderators.

Thousands of past and current content moderators, whose primary responsibility is deciding what content gets deleted for violating the platform's rules, will be eligible for the funds, which are set to be paid out following the settlement of a class-action lawsuit brought by the moderators.

Via NPR:

Facebook will pay $52 million to thousands of current and former contract workers who viewed and removed graphic and disturbing posts on the social media platform for a living, and consequently suffered from post-traumatic stress disorder, according to a settlement agreement announced on Tuesday between the tech giant and lawyers for the moderators.

Under the terms of the deal, more than 10,000 content moderators who worked for Facebook from sites in four states will each be eligible for $1,000 in cash. In addition, those diagnosed with psychological conditions related to their work as Facebook moderators can have medical treatment covered, as well as additional damages of up to $50,000 per person.

As Reclaim the Net has pointed out, one of the primary concerns for Facebook executives is the fact that site moderators have often changed their perspectives on “conspiracy theories” and unorthodox political opinions after viewing memes and informational videos.

Facebook moderators have been increasingly supplemented with artificial intelligence, and the company recently announced the launch of a “Hateful Memes Challenge,” which offers a $100,000 prize pool to researchers who develop meme censorship AIs based on a data set provided by Facebook:

Tech giant Facebook announced the launch of a bizarre competition called the “Hateful Memes Challenge” this week, in which researchers will compete for a $100,000 prize pool by developing artificial intelligence that can identify “hate speech” in memes.

Facebook declared that it had created over 10,000 “hateful memes,” which will be used as a data set to train the AIs created during the Hateful Memes Challenge.

The social media titan describes the urgent need for AI that can identify “hateful memes” as follows: “In order for AI to become a more effective tool for detecting hate speech, it must be able to understand content the way people do: holistically. When viewing a meme, for example, we don’t think about the words and photo independently of each other; we understand the combined meaning together. This is extremely challenging for machines, however, because it means they can’t just analyze the text and the image separately. They must combine these different modalities and understand how the meaning changes when they are presented together.”
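The distinction Facebook draws between analyzing modalities separately and combining them can be illustrated with a minimal sketch. The sketch below is not Facebook's system; it is a generic toy example (all variable names and random vectors are invented) showing the difference between scoring text and image features independently versus fusing their embeddings so a classifier can model interactions between the words and the photo:

```python
import numpy as np

# Toy illustration only -- not Facebook's actual model. The embeddings here
# are random stand-ins for the outputs of a text encoder and an image encoder.
rng = np.random.default_rng(0)

text_embedding = rng.normal(size=8)   # stand-in for text-encoder output
image_embedding = rng.normal(size=8)  # stand-in for image-encoder output

# Unimodal approach: score each modality independently, then average.
# This cannot capture cases where the words and the image are individually
# benign but offensive in combination.
w_text = rng.normal(size=8)
w_image = rng.normal(size=8)
unimodal_score = (text_embedding @ w_text + image_embedding @ w_image) / 2

# Multimodal (early fusion): concatenate the embeddings so a single
# classifier sees both modalities at once and can learn their interactions.
fused = np.concatenate([text_embedding, image_embedding])
w_fused = rng.normal(size=16)
multimodal_score = fused @ w_fused

print(fused.shape)  # (16,)
```

Real multimodal classifiers use learned fusion layers rather than random weights, but the structural point is the same: the fused representation is what lets the model judge the combined meaning.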

The “hateful memes” data set will be available only to researchers and journalists, and Facebook says there will be “strict restrictions on sharing the data” to prevent “misuse.”

Facebook has also appointed left-wing political activists to its “Supreme Court” oversight board.