Three Ways Facebook Could Reduce Fake News Without Resorting To Censorship


The public gets a lot of its news and information from Facebook. Some of it is fake. That presents a problem for the site's users, and for the company itself.

Facebook co-founder and chairman Mark Zuckerberg said the company will find ways to address the problem, though he didn't acknowledge its seriousness. And without apparent irony, he made this announcement in a Facebook post surrounded, at least for some viewers, by fake news items.

Other technology-first companies with similar power over how the public informs itself, such as Google, have worked hard over the years to demote low-quality information in their search results. But Facebook has not made similar moves to help users.

What could Facebook do to meet its social obligation to sort fact from fiction for the 70 percent of internet users who access Facebook? If the site is increasingly where people are getting their news, what could the company do without taking up the mantle of being a final arbiter of truth? My work as a professor of information studies suggests there are at least three options.

Facebook's role

Facebook says it is a technology company, not a media company. The company's primary motive is profit, rather than a loftier objective like providing high-quality information to help the public act knowledgeably in the world.

Nevertheless, posts on the site, and the surrounding conversations both online and off, are increasingly involved with our public discourse and the nation's political agenda. As a result, the company has a social obligation to use its technology to advance the common good.

Discerning truth from deception, however, can be daunting. Facebook is not alone in raising concerns about its ability, and that of other tech companies, to judge the quality of news. The director of FactCheck.org, a nonprofit fact-checking group based at the University of Pennsylvania, told Bloomberg News that many claims and narratives aren't entirely false. Many have kernels of truth, even if they are misleadingly phrased. So what can Facebook really do?

Option 1: Nudging

One option Facebook could adopt involves using existing lists identifying prescreened reliable and fake-news sites. The site could then alert those who wish to share a troublesome article that its source is questionable.

One developer, for example, has created an extension for the Chrome browser that indicates when a website you're looking at might be fake. (He calls it the B.S. Detector.) In a 36-hour hackathon, a group of college students created a similar Chrome browser extension that indicates whether the website the article comes from is on a list of verified reliable sites, or is instead unverified.

These extensions present their alerts while people are scrolling through their newsfeeds. At present, neither of these works directly as part of Facebook. Incorporating them would provide a more seamless experience, and would make the service available to all Facebook users, beyond only those who installed one of the extensions on their own computer.

The company could also use the information the extensions generate, or their source material, to alert users before they share unreliable information. In the world of software design, this is known as a nudge. The warning system monitors user behavior and advises people or gives them feedback to help them change their actions when using the software.
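To make the idea concrete, here is a minimal sketch, in Python, of what such a source-checking nudge might look like. The domain lists, messages and function names are hypothetical placeholders, not the data or logic of any real extension or of Facebook itself.

```python
# A minimal sketch of a "nudge" that checks an article's source before sharing.
# The domain lists below are illustrative placeholders; a real system would rely
# on curated lists like those the browser extensions mentioned above use.
from urllib.parse import urlparse

UNVERIFIED_DOMAINS = {"example-fake-news.com", "totally-real-stories.net"}  # placeholder
VERIFIED_DOMAINS = {"apnews.com", "reuters.com"}  # placeholder

def check_source(url: str) -> str:
    """Return a gentle nudge message based on the article's domain."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in UNVERIFIED_DOMAINS:
        return "Heads up: this site is on a list of questionable sources. Share anyway?"
    if domain in VERIFIED_DOMAINS:
        return "This source is on a list of verified outlets."
    return "This source hasn't been reviewed yet."

print(check_source("https://www.example-fake-news.com/story/123"))
```

The key design choice is that the message is advisory: the user still decides whether to share, which is what distinguishes a nudge from a block.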

This has been done before, for other purposes. For instance, colleagues of mine here at Syracuse University built a nudging application that monitors what Facebook users are writing in a new post. It pops up a notification if the content they are writing is something they might regret, such as an angry message with swear words.
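Here is a minimal sketch of that kind of pre-posting nudge; the flagged-word list and the warning text are illustrative assumptions, not the actual Syracuse application's logic.

```python
# A minimal sketch of a regret-prevention nudge: scan a draft post for words the
# user might later regret. The word list and threshold are assumptions for the
# sake of illustration.
from typing import Optional

REGRET_WORDS = {"idiot", "hate", "stupid"}  # hypothetical list of flagged words

def nudge_if_regrettable(draft: str) -> Optional[str]:
    """Return a gentle warning if the draft contains flagged words, else None."""
    words = {w.strip(".,!?").lower() for w in draft.split()}
    if words & REGRET_WORDS:
        return "This post sounds angry. Do you still want to publish it?"
    return None

print(nudge_if_regrettable("You are such an idiot!!"))
```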

The beauty of nudges is the gentle but effective way they remind people of a behavior, helping them change it. Studies that have tested the use of nudges to encourage healthy behavior, for example, find that people are more likely to change their diet and exercise based on gentle reminders and suggestions. Nudges can be effective because they give people control while also giving them useful information. Ultimately the recipient of the nudge still decides whether to use the feedback provided. Nudges don't feel coercive; instead, they're potentially empowering.

Option 2: Crowdsourcing

Facebook could also use the power of crowdsourcing to help assess news sources and indicate when news that is being shared has been evaluated and rated. One important challenge with fake news is that it plays to how our brains are wired. We have mental shortcuts, called cognitive biases, that help us make decisions when we don't have quite enough information (we never do), or quite enough time (we never do). Generally these shortcuts work well for us as we make decisions on everything from which route to drive to work to what car to buy. But, occasionally, they fail us. Falling for fake news is one of those instances.

This can happen to anyone, even me. In the primary season, I was following a Twitter hashtag on which then-primary candidate Donald Trump tweeted. A message appeared that I found somewhat shocking. I retweeted it with a comment mocking its offensiveness. A day later, I realized that the tweet was from a parody account that looked identical to Trump's Twitter handle, but had one letter changed.

I missed it because I had fallen for confirmation bias: the tendency to overlook some information because it runs counter to my expectations, predictions or hunches. In this case, I had disregarded that little voice that told me this particular tweet was a little too over the top for Trump, because I believed he was capable of producing messages even more inappropriate. Fake news preys on us the same way.

Another problem with fake news is that it can travel much farther than any correction that might come afterwards. This is similar to the challenges that have always faced newsrooms when they have reported erroneous information. Although they publish corrections, often the people originally exposed to the misinformation never see the update, and therefore don't know what they read earlier is incorrect. Furthermore, people tend to hold on to the first information they encounter; corrections can even backfire by repeating incorrect information and reinforcing the error in readers' minds.

If people evaluated information as they read it and shared those ratings, the truth scores, like the nudges, could be part of the Facebook application. That could help users decide for themselves whether to read, share or simply dismiss. One challenge with crowdsourcing is that people can game these systems to try to drive biased outcomes. But the beauty of crowdsourcing is that the crowd can also rate the raters, just as happens on Reddit or with Amazon's reviews, to reduce the effects and weight of troublemakers.
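As a rough illustration, a reputation-weighted truth score could work something like the sketch below. The rating scale, reputation values and function name are assumptions made for the example, not a description of any existing Facebook system.

```python
# A minimal sketch of a crowdsourced "truth score" in which the crowd also rates
# the raters, so low-reputation accounts carry less weight in the final score.
def truth_score(ratings, reputations, default_reputation=0.5):
    """Weighted average of 0-1 truthfulness ratings, weighted by rater reputation."""
    weighted_sum = 0.0
    total_weight = 0.0
    for rater, rating in ratings.items():
        weight = reputations.get(rater, default_reputation)
        weighted_sum += weight * rating
        total_weight += weight
    return weighted_sum / total_weight if total_weight else None

ratings = {"alice": 0.9, "bob": 0.8, "troll42": 0.0}       # 1.0 = fully accurate
reputations = {"alice": 0.9, "bob": 0.7, "troll42": 0.05}  # earned from peer ratings
print(round(truth_score(ratings, reputations), 2))  # troll42 barely moves the score
```

Because the troublemaker's reputation is low, their attempt to drag the score down has little effect, which is the point of letting the crowd rate the raters.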

Option 3: Algorithmic social distance

The third way Facebook could help would be to reduce the algorithmic bias that currently exists in Facebook. The site mainly shows posts from those with whom you have engaged on Facebook. In other words, the Facebook algorithm creates what some have called a filter bubble, an online news phenomenon that has concerned scholars for decades now. If you are exposed only to people with ideas that are like your own, it leads to political polarization: Liberals get even more extreme in their liberalism, and conservatives get more conservative.

The filter bubble creates an echo chamber, where similar ideas bounce around endlessly, but new information has a hard time finding its way in. This is a problem when the echo chamber blocks out corrective or fact-checking information.

If Facebook were to open up its newsfeed to bring in posts from a random set of people in a person's social network, it would increase the chances that new information, alternative information and contradictory information would flow within that network. The median number of friends in a Facebook user's network is 338. Although many of us have friends and family who share our values and beliefs, we also have acquaintances and strangers who are part of our Facebook network who have diametrically opposed views. If Facebook's algorithms brought more of those views into our networks, the filter bubble would be more porous.
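A minimal sketch of that kind of feed mixing might look like the following, where the feed size, the share of randomly drawn posts and the data structures are all illustrative assumptions rather than Facebook's actual ranking logic.

```python
# A minimal sketch of loosening the filter bubble: fill most of the feed with
# engagement-ranked posts, then reserve a slice for posts drawn at random from
# the user's wider friend network.
import random

def build_feed(ranked_posts, all_friend_posts, feed_size=20, random_share=0.25):
    """Combine top-ranked posts with a random sample from the wider friend network."""
    n_random = int(feed_size * random_share)
    n_ranked = feed_size - n_random
    feed = list(ranked_posts[:n_ranked])
    pool = [p for p in all_friend_posts if p not in feed]
    feed += random.sample(pool, min(n_random, len(pool)))
    random.shuffle(feed)
    return feed

# Example usage with placeholder post identifiers.
print(build_feed([f"ranked_{i}" for i in range(30)],
                 [f"friend_post_{i}" for i in range(100)]))
```

Even a modest random share would regularly surface posts from acquaintances the user rarely engages with, making the bubble more porous without discarding the ranked feed entirely.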

All of these options are well within the capabilities of the engineers and researchers at Facebook. They would empower users to make better decisions about the information they choose to read and to share with their social networks. As a leading platform for information dissemination and a generator of social and political culture through talk and information sharing, Facebook need not be the ultimate arbiter of truth. But it can use the power of its social networks to help users gauge the value of items amid the stream of content they face.

Jennifer Stromer-Galley, Professor of Information Studies, Syracuse University

This article was originally published on The Conversation. Read the original article.
