An updated version of this article was published on September 20, 2021.
Facebook is quietly experimenting with reducing the amount of political content in users’ news feeds. The move is a tacit acknowledgment that the way the company’s algorithms work could be a problem.
The heart of the matter is the difference between provoking a response and providing content people actually want. Social media algorithms – the rules their computers follow in deciding the content you see – rely heavily on people’s behavior to make these decisions. In particular, they watch for content that people respond to by liking, commenting on and sharing it.
As a computer scientist who studies ways to interact with large numbers of people using technology, I understand the logic of using crowd knowledge in these algorithms. I also see how social media companies do this in practice.
From Lions on the Savannah to Likes on Facebook
The concept of crowd wisdom holds that using cues from the actions, thoughts, and preferences of others as a guide will lead to good decisions. For example, collective predictions are generally more accurate than individual predictions. Collective intelligence is used to predict financial markets, sports, elections and even disease outbreaks.
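The statistical intuition behind collective prediction can be sketched in a few lines of Python. This is a toy illustration, not drawn from any of the studies mentioned: when many independent, unbiased guesses are averaged, their individual errors largely cancel out.

```python
import random

TRUE_VALUE = 100  # the quantity everyone is trying to estimate
random.seed(42)

# Each individual guess is noisy: unbiased on average, but often far off.
guesses = [TRUE_VALUE + random.gauss(0, 20) for _ in range(1000)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)
avg_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)

# The crowd's average lands far closer to the truth than a typical individual.
print(f"crowd error: {crowd_error:.2f}, typical individual error: {avg_individual_error:.2f}")
```

The catch, as the rest of this article explains, is that this only works when the guesses really are independent and unbiased.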
Over millions of years of evolution, these rules of thumb have been coded into the human brain as cognitive biases with names like familiarity, mere exposure and the bandwagon effect. If everyone around you starts running, you should start running too; maybe someone saw a lion coming, and running could save your life. You may not know why, but it’s wiser to ask questions later.
Your brain picks up cues from the environment – including your peers – and uses simple rules to quickly translate those cues into decisions: go with the winner, follow the majority, copy your neighbor. These rules work remarkably well in typical situations because they rest on sound assumptions. For example, they assume that people often act rationally, that many people are unlikely to all be wrong, that the past predicts the future, and so on.
Technology allows people to access signals from a vast number of other people, most of whom they do not know. Artificial intelligence applications make heavy use of these popularity or “engagement” signals, from selecting search engine results to recommending music and videos, and from suggesting friends to ranking posts on News Feed.
Not everything that goes viral deserves to be
Our research shows that virtually all web technology platforms, such as social media and news recommendation systems, have a strong popularity bias. When apps are driven by cues like engagement rather than explicit search engine queries, popularity bias can lead to harmful unintended consequences.
Social media such as Facebook, Instagram, Twitter, YouTube and TikTok rely heavily on AI algorithms to rank and recommend content. These algorithms take as input what you “like”, comment on and share – in other words, the content you engage with. The algorithm aims to find what people like and maximize engagement by ranking it at the top of your feed.
On the surface it seems reasonable. If people like credible news, expert opinions and funny videos, these algorithms should identify such high quality content. But the wisdom of the crowd makes an important assumption here: recommending what’s popular will help “bubble up” high-quality content.
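As a rough illustration of this kind of ranking – with made-up post data and illustrative weights, not any platform’s actual formula – an engagement-driven feed might look like this:

```python
# Hypothetical engagement-based feed ranking: each post is scored purely by
# how much interaction it has received, and the feed shows the highest-scoring
# posts first. Note that quality never enters the formula.

def engagement_score(post):
    # The weights are illustrative assumptions, not a real platform's values.
    return post["likes"] + 2 * post["comments"] + 3 * post["shares"]

def rank_feed(posts):
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "expert-analysis", "likes": 40, "comments": 5, "shares": 2},
    {"id": "outrage-bait",    "likes": 90, "comments": 30, "shares": 25},
    {"id": "funny-video",     "likes": 70, "comments": 10, "shares": 12},
]

print([p["id"] for p in rank_feed(posts)])
# → ['outrage-bait', 'funny-video', 'expert-analysis']
```

Whatever attracts the most clicks rises to the top, whether or not it is accurate or worthwhile.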
We tested this assumption by studying an algorithm that ranks items using a mix of quality and popularity. We found that, in general, popularity bias is more likely to lower the overall quality of content. The reason is that engagement is not a reliable indicator of quality when few people have been exposed to an item. In these cases, engagement generates a noisy signal, and the algorithm is likely to amplify this initial noise. Once the popularity of a low-quality item is large enough, it will keep getting amplified.
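A toy simulation can make this lock-in effect concrete. This sketch uses assumed quality values and a simple rich-get-richer rule, not the actual model from the study: each simulated user is shown an item with probability proportional to its current engagement count, and engages with probability equal to the item’s intrinsic quality.

```python
import random

# Two items: one higher quality, one lower. The quality values are assumptions.
QUALITY = {"high": 0.55, "low": 0.45}

def run_once(seed, steps=1000):
    random.seed(seed)
    counts = {"high": 1, "low": 1}  # tiny initial counts -> a noisy signal
    for _ in range(steps):
        # Popularity-weighted choice of which item the next user is shown.
        total = counts["high"] + counts["low"]
        shown = "high" if random.random() < counts["high"] / total else "low"
        if random.random() < QUALITY[shown]:
            counts[shown] += 1  # each engagement makes the item more visible
    return counts

# In a meaningful fraction of runs, early random engagement with the worse
# item snowballs until it dominates the feed despite its lower quality.
low_wins = sum(1 for s in range(200) if (c := run_once(s))["low"] > c["high"])
print(f"low-quality item ended up ahead in {low_wins} of 200 runs")
```

The higher-quality item tends to win in the long run, but the early noise means the worse item can get locked in often enough to matter.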
Algorithms aren’t the only thing affected by engagement bias – it can affect people too. Evidence shows that information spreads via “complex contagion,” meaning the more times a person is exposed to an idea online, the more likely they are to adopt and reshare it. When social media tells people an item is going viral, their cognitive biases kick in and translate into an irresistible urge to pay attention to it and share it.
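The idea of complex contagion can be captured in a very small model. The constants here are purely illustrative, not fitted to any data: each additional exposure raises the chance of adopting and resharing, up to a ceiling.

```python
# Toy model of "complex contagion": adoption probability rises with each
# exposure to an idea, capped at a ceiling. All constants are assumptions.

def adoption_probability(exposures, base=0.05, per_exposure=0.15, cap=0.9):
    return min(cap, base + per_exposure * exposures)

for n in (0, 1, 3, 6):
    print(n, adoption_probability(n))
```

The point is the shape of the curve, not the exact numbers: repeated exposure compounds, which is exactly what viral-popularity signals deliver.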
We recently ran an experiment using a news literacy app called Fakey. This is a game developed by our lab, which simulates a news feed like Facebook and Twitter. Players see a mix of fake news, junk science, ultra-partisan and conspiratorial sources as well as current articles from mainstream sources. They get points for sharing or liking news from credible sources and for flagging low-credibility articles for fact-checking.
We found that players are more likely to like or share, and less likely to flag, articles from low-credibility sources when they can see that many other users have engaged with those articles. Exposure to the engagement metrics thus creates a vulnerability.
The wisdom of crowds fails because it rests on the false assumption that the crowd is made up of diverse, independent sources. There are several reasons this may not be the case.
First, because of people’s tendency to connect with like-minded people, their online neighborhoods are not very diverse. The ease with which a social media user can unfriend those with whom they disagree pushes people into homogeneous communities, often referred to as echo chambers.
Second, because many people’s friends are friends of one another, they influence each other. A famous experiment demonstrated that knowing what music your friends like affects your own stated preferences. Your social desire to conform distorts your independent judgment.
Third, signals of popularity can be gamed. Over the years, search engines have developed sophisticated techniques to counter so-called “link farms” and other schemes to manipulate search algorithms. Social media platforms, by contrast, are just beginning to learn about their own vulnerabilities.
People aiming to manipulate the information market have created fake accounts, like trolls and social bots, and organized fake networks. They have flooded the network to create the appearance that a conspiracy theory or a political candidate is popular, tricking both platform algorithms and people’s cognitive biases at once. They have also altered the structure of social networks to create illusions about majority opinions.
Dialing down engagement
What can be done? Technology platforms are currently on the defensive. They are being more aggressive during elections in taking down fake accounts and harmful misinformation. But these efforts can be akin to a game of whack-a-mole.
A different, preventive approach would be to add friction – in other words, to slow down the process of spreading information. High-frequency behaviors such as automated liking and sharing could be inhibited by CAPTCHA tests or fees. Not only would this reduce opportunities for manipulation, but with less information coming at them, people would be able to pay more attention to what they see. It would leave less room for engagement bias to affect people’s decisions.
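One standard way to add this kind of friction is rate limiting. The sketch below uses a simple token-bucket limiter with made-up limits – an illustration of the general technique, not any platform’s actual policy:

```python
import time

# A token-bucket rate limiter: rapid-fire actions (likes, shares) drain the
# bucket, which refills slowly over time. The limits here are illustrative.

class ShareLimiter:
    def __init__(self, max_actions=5, per_seconds=60.0):
        self.capacity = max_actions
        self.tokens = float(max_actions)
        self.rate = max_actions / per_seconds  # tokens replenished per second
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill the bucket in proportion to the time elapsed, up to capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # action blocked: too many in too short a window

limiter = ShareLimiter(max_actions=3, per_seconds=60.0)
print([limiter.allow() for _ in range(5)])
# → [True, True, True, False, False]: burst of 5 shares, only 3 get through
```

Automated accounts that like and share hundreds of times a minute hit the limit immediately, while a human browsing at normal speed never notices it.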
It would also help if social media companies adjusted their algorithms to rely less on engagement signals to determine the content they serve you.