Image source: Flickr
In an attempt to increase transparency, Facebook has published its fourth content moderation report, detailing the company’s detection efforts and their measured outcomes. The social giant revealed it shut down 3.2 billion fake accounts from April to September this year. The world’s biggest social network estimates that about 5% of its 2.45 billion user accounts are fake.
For the first time, the report details data on self-injury. Facebook claims it removes content that depicts or encourages self-injury, including graphic imagery and real-time depictions. In the third quarter of the year, the platform removed 2.5 million pieces of such content, of which 97.3% were detected proactively.
There has also been a shift in the data collection and measurement of terrorist propaganda. In the latest iteration of the report, the gathered data include actions taken against all terrorist organizations. Findings indicated that the rate at which such content is proactively detected on Facebook is 98.5%. This is also the first report to include data from Instagram in areas such as illicit firearm and drug sales, and terrorist propaganda.
Within the hate speech realm, the detection techniques include text and image matching, alongside machine-learning classifiers that consider factors like language, as well as reactions and comments to a post. In Q2 2019, technological improvements allowed some posts to be removed automatically, but only when the content was identical or near-identical to text or images previously removed by the content review team for violating policy. This pushed the proactive detection rate to 80%, up from 68% in the previous report.
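To illustrate the idea behind near-identical text matching, here is a minimal toy sketch: it flags a new post when its character 3-gram Jaccard similarity to any previously removed post exceeds a threshold. This is purely illustrative, with invented function names and a made-up threshold; Facebook’s production systems are far more sophisticated and are not publicly documented at this level.

```python
# Toy near-duplicate text matcher (illustrative only, not Facebook's system).
# Compares character 3-gram sets with Jaccard similarity.

def ngrams(text: str, n: int = 3) -> set:
    """Return the set of lowercased character n-grams of a string."""
    t = " ".join(text.lower().split())  # normalize case and whitespace
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union|."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def is_near_duplicate(post: str, removed_posts: list, threshold: float = 0.7) -> bool:
    """True if the post is near-identical to any previously removed post."""
    grams = ngrams(post)
    return any(jaccard(grams, ngrams(r)) >= threshold for r in removed_posts)

removed = ["Buy illegal firearms here, no questions asked!"]
# A lightly edited copy of a removed post is caught; unrelated text is not.
print(is_near_duplicate("Buy ILLEGAL firearms here - no questions asked!", removed))
print(is_near_duplicate("Lovely weather in Paris today.", removed))
```

Real systems at this scale typically rely on locality-sensitive hashing (e.g., MinHash or SimHash) rather than pairwise comparison, since checking every new post against every removed post does not scale.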
“While we are pleased with this progress, these technologies are not perfect and we know that mistakes can still happen,” admits Facebook’s VP of Integrity, Guy Rosen. Considering that Facebook’s track record on privacy isn’t very good, the industry is waiting to see what technologies and policies the platform will put in place to mend its tarnished reputation.