Can the wisdom of the crowd help solve the problem of trust in social media?


The study found that with only eight laymen, there was no statistically significant difference between the crowd’s performance and that of any given fact checker. Once the group grew to 22 people, it actually began to perform significantly better than the fact checkers. (These figures describe the results when the laymen were told the source of the article; when they did not know the source, the crowd performed slightly worse.) Perhaps most importantly, stories classified as “political” are the ones on which fact checkers are most likely to disagree with one another. Political fact-checking is hard.

It might seem impossible for a random group of people to outperform a well-trained fact checker, especially when they see only a story’s headline, first sentence, and publication. But that is the whole idea behind the wisdom of the crowd: gather enough people who act independently, and their aggregate judgment will beat the experts.

“Our sense of what is happening is that people read the article and ask themselves, ‘Does this match everything else I know?’” Rand said. “This is where the wisdom of the crowd comes in. You don’t need everyone to know what’s going on. By averaging the ratings, the noise cancels out, and you get a much higher-resolution signal than you would from any single person.”
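To make that noise-cancellation intuition concrete, here is a minimal simulation sketch. It is not from the study; the 1-to-7 accuracy scale, the noise level, and the crowd sizes are illustrative assumptions. Each rater reports the story’s underlying accuracy plus independent random error, and the spread of the crowd’s average estimate shrinks as the crowd grows.

```python
# Illustrative sketch (not from the study): averaging independent noisy ratings.
# Assumptions: a 1-to-7 accuracy scale and Gaussian rater noise.
import random
import statistics

def crowd_average(true_accuracy: float, crowd_size: int, noise: float = 1.5) -> float:
    """Average of `crowd_size` independent noisy ratings of one story."""
    ratings = [random.gauss(true_accuracy, noise) for _ in range(crowd_size)]
    return statistics.mean(ratings)

random.seed(0)
true_accuracy = 2.0  # a largely inaccurate story
for n in (1, 8, 22, 100):
    estimates = [crowd_average(true_accuracy, n) for _ in range(2000)]
    print(f"crowd of {n:3d}: mean estimate {statistics.mean(estimates):.2f}, "
          f"spread ±{statistics.pstdev(estimates):.2f}")
```

A single rater’s estimate swings widely; a group of 22 clusters tightly around the underlying value, which is the effect Rand describes.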

This is different from a Reddit-style upvote/downvote system, and it isn’t the Wikipedia model of citizen editors. In both of those cases, a small, non-representative subset of users self-selects to curate material, and everyone can see what everyone else is doing. The wisdom of the crowd only emerges when a group is diverse and its members make independent judgments. Relying on randomly assembled, politically balanced groups, rather than a band of volunteers, also makes the researchers’ approach harder to game. (It likewise explains how the approach differs from Twitter’s Birdwatch, a pilot project that invites users to write notes explaining why a given tweet is misleading.)

The main conclusion of the paper is simple: social media platforms such as Facebook and Twitter could use crowd-based systems to expand their fact-checking operations dramatically and cheaply, without sacrificing accuracy. (The laymen in the study were paid $9 per hour, which works out to a cost of about $0.90 per article.) The researchers argue that crowdsourcing would also help build trust in the process, because it is easy to assemble a politically balanced group of laymen, which makes accusations of partisanship harder to sustain. (According to a 2019 Pew survey, a large majority of Republicans believe that fact checkers “tend to favor one side.”) Facebook has already debuted something similar, paying groups of users to act “as researchers looking for information that can contradict the most obvious online hoaxes or confirm other claims.” But that effort is designed to inform the work of its official fact-checking partners, rather than to add to it.
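As a back-of-the-envelope check on the cost figure quoted above, the sketch below reproduces it; only the $9 hourly wage comes from the article, while the seconds-per-rating and raters-per-article values are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope sketch of the ~$0.90-per-article figure.
# Only the $9/hour wage is from the article; the other numbers are assumptions.
HOURLY_WAGE = 9.00        # dollars per hour, from the study
SECONDS_PER_RATING = 16   # assumption: raters see only headline, first sentence, source
RATERS_PER_ARTICLE = 22   # assumption: the crowd size that beat the fact checkers

cost_per_rating = HOURLY_WAGE * SECONDS_PER_RATING / 3600
cost_per_article = cost_per_rating * RATERS_PER_ARTICLE
print(f"cost per rating:  ${cost_per_rating:.3f}")   # $0.040
print(f"cost per article: ${cost_per_article:.2f}")  # roughly $0.90
```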

Expanding fact-checking is one thing. The more interesting question is how the platform should use it. Should stories flagged as false be banned? What about stories that may not have any objective false information but are still misleading or manipulative?

The researchers believe that platforms should steer clear of binary judgments altogether, whether true/false or leave-it-up/flag-it. Instead, they suggest that platforms incorporate “continuous crowdsourced accuracy ratings” into their ranking algorithms. Rather than setting a single true/false cutoff and handling everything above it one way and everything below it another, a platform would factor the crowd’s score proportionally into how prominently a given link appears in users’ feeds. In other words, the more inaccurate the crowd judges a story to be, the lower the algorithm ranks it.
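A rough sketch of that difference, under stated assumptions: the engagement scores, the normalized crowd ratings, and the multiplicative blend below are hypothetical, since the paper does not prescribe a specific formula.

```python
# Hypothetical sketch: binary cutoff vs. continuous crowd-accuracy ranking.
# The engagement scores, ratings, and blending rule are illustrative only.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    engagement: float       # whatever signal the platform already ranks by
    crowd_accuracy: float   # average crowd rating, normalized to 0..1

def rank_binary(stories, cutoff=0.5):
    """Single cutoff: drop stories rated below it, rank the rest by engagement alone."""
    kept = [s for s in stories if s.crowd_accuracy >= cutoff]
    return sorted(kept, key=lambda s: s.engagement, reverse=True)

def rank_continuous(stories):
    """Proposed style: scale each story's prominence by its crowd accuracy rating."""
    return sorted(stories, key=lambda s: s.engagement * s.crowd_accuracy, reverse=True)

feed = [
    Story("Viral but dubious claim", engagement=9.0, crowd_accuracy=0.30),
    Story("Misleading framing",      engagement=7.0, crowd_accuracy=0.55),
    Story("Solid local reporting",   engagement=5.0, crowd_accuracy=0.90),
]
print([s.title for s in rank_binary(feed)])      # cutoff keeps two, ranks by engagement
print([s.title for s in rank_continuous(feed)])  # low-accuracy story sinks instead of vanishing
```

The continuous version never has to make a hard keep-or-remove call; a story the crowd judges inaccurate simply loses prominence in proportion to that judgment.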


