
On Your News Feed

A few weeks ago the New York Times published an article about how we read the news on Facebook. Facebook tries to learn your preferences – as you click on different links, what's displayed on your news feed changes. The whole article is worth a read, as it sheds light on just how easy it is – how, perhaps, inevitable it is – for a Facebook user to get drawn into reading non-traditional and sub-standard news sources for information.

The thing that interested me the most about the article, however, was the possibility that a user, by clicking certain links, could end up in an echo chamber. She could be shown articles and videos from increasingly biased (and possibly less reliable) sources, which in turn would further perpetuate any biases in her beliefs. In fact, one could see the entire process of news consumption on Facebook as a new setting for two economic phenomena – Confirmation Bias (the tendency to seek out information that agrees with what one already believes) and Divergence (persistent disagreement among otherwise identical individuals about subjects that have objective truths).

WHAT IS CONFIRMATION BIAS?

Peter Wason coined the term “Confirmation Bias” in the 1960s to describe the fact that people tend to seek, collect, interpret, and remember pieces of information in a way that favors beliefs they already hold. For example, if I'm a Republican, I might watch FOX, which filters the news in a way that aligns with my beliefs and that I find more enjoyable.

WHAT IS DIVERGENCE?

Divergence describes the idea that two people who previously had identical, or very similar, beliefs can look at an identical stream of data and come to different conclusions about an objective truth.

Divergence is a very difficult phenomenon to explain in economics. As shown by Blackwell and Dubins, Bayesian agents who observe an ever-growing stream of informative signals must eventually agree on what the truth is. If you and I both see a series of articles that each give a tiny bit of accurate information on the question of whether alien life exists, we should eventually agree on the answer to that question, even if the articles are complicated.
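To see the logic in miniature, here is a small Python sketch (the 0.6 signal accuracy and the two starting beliefs are illustrative assumptions of mine, not numbers from any paper): two Bayesian readers with very different priors watch the same stream of weakly informative articles and update by Bayes' rule.

    import random

    random.seed(0)

    TRUTH = True        # the objective state, e.g. "alien life exists"
    ACCURACY = 0.6      # each article is only slightly informative

    def update(prior, supportive):
        # Bayes' rule for a binary claim with a symmetric binary signal
        like_true = ACCURACY if supportive else 1 - ACCURACY
        like_false = 1 - ACCURACY if supportive else ACCURACY
        return prior * like_true / (prior * like_true + (1 - prior) * like_false)

    skeptic, believer = 0.05, 0.95      # very different starting beliefs
    for n in range(1, 2001):
        supportive = random.random() < (ACCURACY if TRUTH else 1 - ACCURACY)
        skeptic = update(skeptic, supportive)
        believer = update(believer, supportive)
        if n in (10, 100, 2000):
            print(f"after {n:4d} articles: skeptic={skeptic:.3f}, believer={believer:.3f}")

Run it and both beliefs end up pinned near the truth; that is the sense in which a long enough stream of informative signals forces agreement.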

Divergence often arises as a direct result of confirmation bias. Suppose you and I start with similar beliefs on the existence of alien life, but interpret one piece of information differently. We will each be affected by confirmation bias, as we start to look for information that agrees with our now slightly different beliefs, and our beliefs will be driven further apart, leading to divergence.
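Here is the same toy model with one change (again, every number is an illustrative assumption, not the mechanism of any particular paper): each of us is now far more likely to read an article that agrees with our current lean than one that contradicts it, and we update as though our reading diet were unbiased.

    import random

    random.seed(1)

    TRUTH = True
    ACCURACY = 0.55     # articles are only barely informative

    def update(prior, supportive):
        like_true = ACCURACY if supportive else 1 - ACCURACY
        like_false = 1 - ACCURACY if supportive else ACCURACY
        return prior * like_true / (prior * like_true + (1 - prior) * like_false)

    def reads(belief, supportive):
        # confirmation bias: almost always read agreeing articles, rarely the rest
        agrees = supportive == (belief >= 0.5)
        return random.random() < (0.95 if agrees else 0.05)

    beliefs = {"you": 0.55, "me": 0.45}     # nearly identical starting beliefs
    for _ in range(5000):
        supportive = random.random() < (ACCURACY if TRUTH else 1 - ACCURACY)
        for name in beliefs:
            if reads(beliefs[name], supportive):
                beliefs[name] = update(beliefs[name], supportive)

    print(beliefs)      # the two beliefs typically end up near opposite extremes

The particular numbers don't matter; what matters is the feedback: once the two leans separate, each reader's diet of articles separates with them.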

WHAT ARE COMMON EXPLANATIONS FOR DIVERGENCE?

The only common fundamental assumption the rational (as opposed to behavioral) divergence literature makes is that there is an objective truth, but that it is difficult to assess or identify. Such a truth might be “Man-made global warming exists.” Evaluating that statement is complicated, and the disagreement, though shrinking, is still strong. Using that common assumption, there are a few different approaches to explain divergence:

1. Agents can choose how much attention to pay to different potential states of the world. I have a paper in this area. The basic idea is that if you and I have different opinions about the truth of global warming (for example, you believe it, I don't), that will change how much we pay attention to articles that support or debunk it. These attentional choices cause us to look for information that agrees with our priors. If we do find (by accident) information that disagrees with our prior beliefs, we may respond strongly to it, but the odds of that happening are low.

2. Another possibility is that signals are noisy for non-attentional, exogenous reasons: even if we pay attention to everything, we may not know how to interpret the signals we get, and so we can still diverge from one another. Daron Acemoglu of MIT has a couple of papers that contemplate this mechanism.

3. Roland Fryer and coauthors show that an inability to correctly remember all of the signals you've been shown can cause you to recall them as stronger than they were, pushing you to be more confident in your beliefs than you should be (a rough sketch of this mechanism follows the list).
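To get a feel for that third mechanism, here is a rough sketch of my own (not the authors' actual model, and the signal frequencies are made up): some signals are ambiguous, memory files the ambiguous ones away as if they clearly supported the reader's existing lean, and the recalled evidence ends up looking stronger than it really was.

    import random

    random.seed(2)

    def draw_signal():
        # 40% of signals clearly support the claim, 20% clearly undercut it,
        # and 40% are ambiguous (all made-up illustrative frequencies)
        r = random.random()
        return "support" if r < 0.4 else ("undercut" if r < 0.6 else "ambiguous")

    def recalled_support(leans_toward_claim, n=10_000):
        remembered = 0
        for _ in range(n):
            s = draw_signal()
            if s == "ambiguous":
                # memory fills the gap with whatever the reader already believes
                s = "support" if leans_toward_claim else "undercut"
            remembered += (s == "support")
        return remembered / n

    print("believer recalls support in", round(recalled_support(True), 2), "of signals")   # roughly 0.8
    print("skeptic recalls support in", round(recalled_support(False), 2), "of signals")   # roughly 0.4
    # among the unambiguous signals the true share of support is 0.4 / 0.6, about 0.67

Two readers shown the same kind of evidence walk away remembering very different records, and the believer ends up more confident than the unambiguous signals alone would justify.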

WHAT IS DIFFERENT ABOUT FACEBOOK?

The Facebook dilemma is quite different from the explanations of divergence that currently exist. In the explanations above, people either choose what to pay attention to, knowing that it will affect what they perceive in the future, or they pay attention to everything, and other cognitive issues drive them away from each other.

But because Facebook chooses what you see by adapting to your preferences, it also chooses what you can pay attention to. We likely do not fully internalize how clicking on an article today impacts the types of articles we will see in the future. For example, if I click on an article criticizing GMO foods, Facebook will choose to show me more articles that do the same, which means that even if GMO foods are not problematic, I will be presented with fewer signals that tell me that. My choice set and my attention are diverted towards supporting the topic I initially clicked on. I am now much less likely to ever be presented with evidence that contradicts my beliefs, so I will probably never change my mind, even if my initial beliefs were not different from yours.
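To make that feedback loop concrete, here is a toy simulation (purely illustrative: the numbers, the click rule, and the boosting rule are all my assumptions, and certainly not Facebook's actual algorithm). Suppose 70% of all available articles say GMO foods are fine, but the user has just clicked one critical article and leans slightly the other way; the feed simply boosts whatever gets clicked.

    import random

    random.seed(3)

    # suppose GMO foods are in fact fine, so 70% of all available articles say so
    weights = {"gmo_fine": 0.7, "gmo_bad": 0.3}     # the feed starts representative
    belief_bad = 0.55                 # the user just clicked one critical article
    fine_shown = []

    def pick(weights):
        r = random.uniform(0, sum(weights.values()))
        for topic, w in weights.items():
            if r <= w:
                return topic
            r -= w
        return topic

    for _ in range(400):
        topic = pick(weights)
        fine_shown.append(topic == "gmo_fine")
        clicked = (topic == "gmo_bad") == (belief_bad > 0.5)    # click what agrees
        if clicked:
            weights[topic] *= 1.5                   # the feed learns from the click
            target = 1.0 if topic == "gmo_bad" else 0.0
            belief_bad += 0.05 * (target - belief_bad)    # reading nudges the belief

    share_fine = sum(fine_shown) / len(fine_shown)
    print(f"articles saying GMOs are fine: {share_fine:.0%} shown (vs. 70% available)")
    print(f"final belief that GMOs are bad: {belief_bad:.2f}")

Nothing about the user changed except one click; the menu changed around her. And this sketch still assumes every article carries some real information, which, as the next paragraphs argue, is optimistic.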

Compounding this problem is that the quality of the news sources (and therefore of the signals) deteriorates as your beliefs become more extreme. This second externality is more insidious than the first, because it undermines a central assumption of the divergence and confirmation bias literatures: that signals are at least slightly informative. The Times points out that some sub-par news sources post articles that they know have no factual basis.

These problems are compounded a third time by the fact that on Facebook you usually only see articles and posts from your friends, a group of people with whom you probably already share some world-views. You may never be presented with differing points of view in the first place.

HOW CAN YOU ESCAPE THE EXTERNAL BIAS?

So what can you do about the fact that Facebook twice distorts your already biased news feed, once with confirmatory signals, and once again with less reliable signals? For some, it may not matter. Or at least, it may not matter all the time. Sometimes it's fun to read a polemic you agree with, rather than a more balanced piece that will be informative.

But for the other times – don’t use Facebook for news. Use actual news sources. Doing so reduces the risk of being shown nonsense articles and eliminates the adaptive learning that would otherwise skew how the news is presented to you. The sources themselves might be biased, but a fixed level of bias is easier to understand and correct for.

If you must use Facebook:

1. Pay attention to the sources of the articles you click on. If you find a piece you like from a source that you think is biased, try to confirm it with other, less biased sources.

2. Seek out links that espouse views contrary to your own. This makes it less likely that a machine-learning mechanism over which you have no control will land you in an echo chamber of confirmatory views.

WHAT COULD BE DONE INSTITUTIONALLY?

Facebook has no incentive to change its behavior. It generates a lot of clicks, a lot of information, and a lot of advertising revenue by indulging your preferences. But is there some larger civic goal of maintaining (or producing) an informed populace? I would argue that there is, and given that premise, we need to think about what could be done to prevent learning algorithms from sending all of us to our favorite echo chambers. I firmly believe that the New York Times would do what Facebook does (present different articles on the homepage to different people based on their historical interests) if it were able. So here are some possibilities:

1. A ratings agency, like PolitiFact, for individual news pieces. That might be too difficult, so a similar concept could apply to news outlets instead of individual articles. This might increase awareness of how our biases evolve, but only if Facebook made people aware of the ratings. It is unclear to me whether the ratings should reflect accuracy, bias, or both.

2. Instead of a centralized agency handing out mandated ratings, a decentralized solution would be an agency that offers its services (à la Moody's or Fitch) to individual sources. Relatively unbiased or accurate sources would have an incentive to purchase ratings and mention them prominently, while worse sources would not.

3. Facebook could potentially learn about your preferences more quickly if it simply asked for them. For example, it could display, on one side of your homepage, your current preferences for news quality and bias, and then show you how they compare to those of your friends, your neighborhood, your country, and so on. It could also ask you to set preferences for the types of articles you want to be shown, and then try to deliver that mix to you.

I think the last solution is ultimately the most feasible and the most plausible, especially because it would provide Facebook with valuable data on what you want your news feed to look like.

FOOTNOTE ON CONFIRMATION BIAS: One of the first, and simplest, experiments to show confirmation bias considered four cards lying on a table: one showing a zero, one showing a non-zero number, one showing a red face, and one showing a green face.

The participant is told that each card has a number on one side and a color on the other side. They are allowed to turn two cards over to answer the question: do all cards that have zeros on one face have red on the other?

In logical terms, the participant is asked to test the statement: IF one side is zero, THEN the other side is red. A logical statement of this form (if A, then B) can only be invalidated when A is true but B is false. So the participant should be looking for a card that could have a zero on one side and green on the other. Therefore, the two cards she should flip over are the card with the zero showing and the card with the green face. However, the results of the experiment show that participants generally flipped the cards with the zero and the red face.
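A quick brute-force check of that logic (the non-zero number, 7 below, is an arbitrary stand-in, since the footnote doesn't specify it): a card is worth flipping only if its hidden side could reveal a zero paired with green.

    visible_faces = ["0", "7", "red", "green"]      # the four cards on the table
    numbers, colors = ["0", "7"], ["red", "green"]

    def worth_flipping(face):
        # could this card's hidden side reveal a zero paired with a green face?
        hidden_options = colors if face in numbers else numbers
        return any({face, hidden} == {"0", "green"} for hidden in hidden_options)

    for face in visible_faces:
        print(f"card showing {face}: worth flipping = {worth_flipping(face)}")
    # only the '0' card and the 'green' card can possibly falsify the rule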

The putative explanation is that participants wanted information that would confirm the hypothesis rather than disconfirm it. Another explanation is that people simply don't understand how logic works. This experiment, along with many others like it, has demonstrated the strength of confirmation bias in a variety of experimental and non-experimental settings.

