Facebook deletes 3.2bn fake accounts and millions of child abuse posts

Facebook Inc took down 3.2bn fake accounts between April and September 2019 (Q2 and Q3), plus millions of posts showing child abuse and suicide, according to its most recent content moderation report, released yesterday. That's more than double the 1.55bn fake accounts removed during the same period in 2018. So either Facebook is getting better at detecting them, or the number of fake accounts being created is rising, or both.

More and more, disinformation researchers are seeing Instagram as a hotbed for fake news. Proactive detection of violating content was lower on Instagram than on Facebook: the company proactively detected content linked to terrorist organizations 98.5% of the time on Facebook but only 92.2% of the time on Instagram. In Q3/2019 it removed over 11.6m posts depicting child nudity and the sexual exploitation of children on Facebook, but only 754,000 such posts on Instagram.

On the one hand, it’s good to see Facebook deleting these fake accounts and harmful posts. On the other hand, it still raises a number of questions: if Facebook knew the accounts were fake and the posts were inappropriate, did it remove them immediately, or did it wait? If it waited, was that because it made money from them? And once it knew about them, what did Facebook do about them, other than deleting them? Find out more at https://www.reuters.com/article/us-facebook-enforcement/facebook-removes-32-billion-fake-accounts-millions-of-child-abuse-posts-idUSKBN1XN2B2
