No, it’s not just your relatives and old high school friends sharing fake coronavirus cures and anti-mask propaganda on Facebook.
The social media giant admitted the mind-boggling scale of COVID-19 misinformation flooding its platforms Tuesday in a statement to reporters. And, in what perhaps won't come as a surprise to the people who rely on Facebook and Instagram for their daily news, the sites are drowning in it.
For starters, the company confirmed it removed more than 7 million “pieces of harmful COVID-19 misinformation” from Facebook and Instagram combined in the months spanning April to June. Examples include posts pushing “exaggerated cures” and “fake preventative measures.”
That might include, although Facebook didn’t specify, incorrect and possibly dangerous claims that hydroxychloroquine is a cure for COVID-19. Or, perhaps, suggestions to inject oneself with bleach? Oh yeah, and then there’s the false claim — made by Donald Trump — that children are “almost immune” to COVID-19.
Facebook didn’t stop there. The company put “warning labels” on 98 million “pieces of COVID-19 misinformation on Facebook” in the same three months.
There are, of course, other forms of misinformation on Facebook and Instagram as well. For example, in the U.S. alone, from March to July, Facebook removed 110,000 pieces of content “that could mislead people about voting or try to intimidate them so they don’t vote.”
But back to the coronavirus.
“Today’s report shows the impact of COVID-19 on our content moderation and demonstrates that, while our technology for identifying and removing violating content is improving, there will continue to be areas where we rely on people to both review content and train our technology,” reads a statement from Emily Cain, a policy communications manager at Facebook, emailed to reporters.
Putting firm numbers to what many assumed — that Facebook has a coronavirus misinformation problem — was just one of the many announcements made by the company Tuesday. Facebook also published three posts to its Newsroom page, all addressing varying aspects of its community standards and content review policies.
The first post announced Facebook’s sixth Community Standards Enforcement Report. This report attempts to clearly communicate how Facebook enforced its community standards from April through June of this year. The second post tries to explain how Facebook reviews content (as opposed to what content has been reviewed). The third post, meanwhile, announces Facebook’s intention to launch an ostensibly independent, third-party audit of its annual community standards enforcement reports.
As Tuesday’s news makes clear, Facebook has, whether by circumstance or by design, become the arbiter of public discourse for much of the internet. At least when it comes to COVID-19 misinformation, it claims to be trying to do the right thing.
UPDATE: Aug. 11, 2020, 11:50 a.m. PDT: After we followed up, a Facebook spokesperson clarified that its email to reporters “was missing a bit of context.” The “110 thousand pieces of content in the US” Facebook wrote it removed for violating misinformation policies is specifically in reference to “content that could mislead people about voting or try to intimidate them so they don’t vote.”
The story has been updated to reflect that fact.