Facebook fails test to detect violent hate speech in global ads – again – By Associated Press (The Guardian) / June 9, 2022
Research group finds Ethiopian hate ads pass the social network’s moderation filters – a troubling result after the platform’s role in violence in Myanmar
The test couldn’t have been much easier – and Facebook still failed.
Facebook and its parent company Meta flopped once again in a test of how well they could detect obviously violent hate speech in advertisements submitted to the platform by the non-profit groups Global Witness and Foxglove.
The hateful messages focused on Ethiopia, where internal documents obtained by whistleblower Frances Haugen showed that Facebook’s ineffective moderation is “fanning ethnic violence”, as she said in her 2021 congressional testimony. In March, Global Witness ran a similar test with hate speech in Myanmar, which Facebook also failed to detect.
The group created 12 text-based ads that used dehumanizing hate speech to call for the murder of people belonging to each of Ethiopia’s three main ethnic groups – the Amhara, the Oromo and the Tigrayans. Facebook’s systems approved the ads for publication, just as they did with the Myanmar ads. The ads were not actually published on Facebook.
Continue reading: https://www.theguardian.com/us-news/2022/jun/09/facebook-hate-speech-test-fail-meta