Facebook is assigning you a private trustworthiness score
Facebook hosts a lot of misinformation. And we mean a lot. After all, the Internet's largest social networking website was even accused of swaying the result of the 2016 US presidential election because of the swathes of misinformation circulating through its digital hallways every single day.
As such, Facebook relies heavily on its massive user base to report suspect content, so its third-party fact-checkers can verify the authenticity of that reported information and demote anything found to be misleading or fake.
But therein lies a problem inevitably encountered when relying on the general public: Facebook has become inundated with reports that are themselves false. In other words, Facebook users were reporting content as fake that wasn't fake, and these bogus reports have been clogging Facebook's misinformation reporting systems, leading to lengthy delays.
As it turned out, users were reporting content as fake simply because they disagreed with it, or because it came from a particular source, rather than evaluating whether the content itself was objectively inaccurate. That, or they reported content as fake simply because they could.
It’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” Product Manager Tessa Lyons told The Washington Post.
So what’s a social networking website to do?
As it turns out, Facebook’s response has been to assign its users a trustworthiness score between 0 and 1, so Facebook knows whether to trust someone when they report content as false. So – presumably – a report from a user with a high trustworthiness score reaches those third-party fact-checkers faster than a report coming from a user with a lower score.
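If that prioritization works the way it sounds, it amounts to ordering the report queue by the reporter's score. Here's a minimal sketch of what such a queue could look like; the names, data structures, and heap-based approach are all assumptions for illustration, not anything Facebook has disclosed.

```python
# Hypothetical sketch: prioritizing user reports by reporter trust score.
# Nothing here reflects Facebook's actual implementation.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    # heapq pops the smallest item first, so we store the negated trust
    # score to surface reports from high-trust users before low-trust ones.
    priority: float
    content_id: str = field(compare=False)
    reporter_id: str = field(compare=False)

queue: list[Report] = []

def enqueue_report(content_id: str, reporter_id: str, trust_score: float) -> None:
    """Queue a misinformation report, ordered by the reporter's score (0-1)."""
    heapq.heappush(queue, Report(-trust_score, content_id, reporter_id))

def next_report_for_fact_checkers() -> Report:
    """Pop the pending report whose reporter has the highest trust score."""
    return heapq.heappop(queue)
```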
The trustworthiness score, as originally reported by The Washington Post, likely reflects how accurate a user's previous reports have proved to be. So if a user reports content as fake and the fact-checkers subsequently confirm that report, the user's score improves. However, if a user reports content as false that turns out to be authentic, the user's score is lowered.
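As a rough illustration of that update rule, here's one way a score between 0 and 1 could be derived from a user's report history: a smoothed ratio of confirmed reports to total reports. The formula is purely an assumption on our part; as the next paragraph notes, Facebook hasn't disclosed its actual math.

```python
# Hypothetical scoring rule: confirmed reports push the score toward 1,
# rejected reports push it toward 0. This smoothed-ratio formula is an
# assumption, not Facebook's disclosed method.

def trust_score(confirmed: int, rejected: int) -> float:
    """Return a score in [0, 1]: the Laplace-smoothed share of a user's
    reports that fact-checkers confirmed as genuinely false content."""
    return (confirmed + 1) / (confirmed + rejected + 2)

# A user with 8 confirmed and 2 rejected reports scores high...
print(trust_score(confirmed=8, rejected=2))   # 0.75
# ...while one with 2 confirmed and 8 rejected reports scores low.
print(trust_score(confirmed=2, rejected=8))   # 0.25
```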
What other signals or variables Facebook takes into account is not known, since users are not able to check their own trustworthiness score. It's private, and only Facebook knows it.
So Facebook is fact-checking the fact-checkers in its latest step to remove or demote misinformation from its site. Let us know your thoughts.