Lawmakers have spent years studying how hate speech, misinformation, and bullying on social media can do real harm. Increasingly, they are pointing the finger at the algorithms that power sites like Facebook and Twitter, the software that decides what content users will see and when they will see it.
Lawmakers in both parties argue that when social media sites boost the performance of hateful or violent content, the sites become accomplices. And they have proposed bills that would strip the companies of a legal shield that lets them deflect lawsuits over most content posted by their users, in cases where the platform amplified a harmful post's reach.
The House Energy and Commerce Committee discussed several of the proposals at a hearing on Wednesday. The hearing also included testimony from Frances Haugen, a former Facebook employee who recently leaked internal company documents.
Removing the legal shield, known as Section 230, would mean a sea change for the internet, because it has long enabled the vast scale of today's social media websites. Ms. Haugen said she supported changing Section 230, which is part of the Communications Decency Act, so that it no longer covers certain decisions made by algorithms at tech platforms.
But what, exactly, counts as algorithmic amplification? And what, exactly, is harmful? The bills offer very different answers to these crucial questions. And how they answer them may determine whether the courts find the bills constitutional.
Here’s how the draft laws address these pressing issues:
What is algorithmic amplification?
Algorithms are everywhere. At its most basic, an algorithm is a set of instructions that tells a computer how to do something. If a platform could be sued anytime an algorithm did anything to a post, products that lawmakers aren't trying to regulate could be ensnared.
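To see how broad that definition is, a "set of instructions" can be just a few lines of code. This hypothetical sketch (the names `Post` and `rank_feed` are invented for illustration, not taken from any platform) is already an algorithm that changes which posts a user sees first:

```python
# A minimal, hypothetical example of a feed-ranking "algorithm": a recipe
# the computer follows. All names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int

def rank_feed(posts):
    """Order posts by a simple engagement score: likes + 2 * shares."""
    return sorted(posts, key=lambda p: p.likes + 2 * p.shares, reverse=True)

feed = rank_feed([
    Post("cat photo", likes=10, shares=1),
    Post("breaking news", likes=5, shares=20),
])
print(feed[0].text)  # prints "breaking news": the most-shared post rises to the top
```

Even this toy recipe "does something" to every post it touches, which is why a law covering any algorithmic action would sweep far beyond the recommendation systems lawmakers have in mind.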
Some of the proposed laws define the behavior they want to regulate in general terms. A bill sponsored by Senator Amy Klobuchar, Democrat of Minnesota, would expose a platform to lawsuits if it "promotes" the spread of public health misinformation.
Ms. Klobuchar's health misinformation bill would give platforms a pass if their algorithms promote content in a "neutral" way. That could mean, for example, that a platform that ranked posts in chronological order wouldn't have to worry about the law.
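The distinction the bill draws can be sketched in code. In this hypothetical example (field names are invented), the "neutral" feed sorts only by timestamp, while the second feed boosts whatever gets the most engagement, regardless of what it says:

```python
# Hypothetical contrast between a "neutral" chronological feed and an
# engagement-ranked one. Field names are invented for illustration.
posts = [
    {"text": "a", "timestamp": 100, "engagement": 5},
    {"text": "b", "timestamp": 200, "engagement": 1},
    {"text": "c", "timestamp": 150, "engagement": 9},
]

# "Neutral" ordering: newest first, ignoring what a post says or how it performs.
chronological = sorted(posts, key=lambda p: p["timestamp"], reverse=True)

# Amplifying ordering: the most engaging post, whatever its content, is boosted.
ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)

print([p["text"] for p in chronological])  # prints ['b', 'c', 'a']
print([p["text"] for p in ranked])         # prints ['c', 'a', 'b']
```

Under a "neutral manner" carve-out, only the second ordering would plausibly count as promotion, since it privileges some content over other content.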
Other legislation is more specific. A bill from Representatives Anna Eshoo of California and Tom Malinowski of New Jersey, both Democrats, defines dangerous amplification as doing anything to "rank, order, promote, recommend, amplify or similarly alter the delivery or display of information."
Another bill, written by House Democrats, specifies that platforms could be sued only when the amplification in question was driven by a user's personal data.
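That personal-data trigger can also be illustrated. In this hypothetical sketch (all names are invented), the first ranking is the same for every user, while the second boosts posts that match a specific user's recorded interests, which is the kind of data-driven amplification the bill singles out:

```python
# Hypothetical illustration of the line the House Democrats' bill draws:
# ranking that uses a specific user's personal data versus ranking that
# does not. All names are invented for illustration.

def rank_generic(posts):
    """Same ordering for everyone; no personal data involved."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

def rank_personalized(posts, user_interests):
    """Boosts posts matching this user's recorded interests."""
    def score(p):
        boost = 10 if p["topic"] in user_interests else 0
        return p["engagement"] + boost
    return sorted(posts, key=score, reverse=True)

posts = [
    {"text": "sports recap", "topic": "sports", "engagement": 3},
    {"text": "vaccine rumor", "topic": "health", "engagement": 7},
]
print(rank_generic(posts)[0]["text"])                  # prints "vaccine rumor"
print(rank_personalized(posts, {"sports"})[0]["text"]) # prints "sports recap"
```

Only the second function consults the individual user's data, so under this bill's approach it, and not the generic ranking, could expose a platform to liability.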
"These platforms are not passive bystanders – they are knowingly choosing profits over people, and our country is paying the price," Representative Frank Pallone Jr., the chairman of the Energy and Commerce Committee, said in a statement when he announced the legislation.
Mr. Pallone's bill includes an exemption for any business with five million or fewer monthly users. It also excludes posts that show up when a user searches for something, even if an algorithm ranks them, as well as web hosting and other companies that make up the backbone of the internet.
While Ms. Haugen has previously told lawmakers that she supports limiting Section 230, she warned the committee on Wednesday to beware of unintended consequences.
She appeared to be referring to a 2018 law that removed the legal shield's protections when platforms knowingly facilitated sex trafficking. Sex workers say the change put them at risk by making it harder for them to use the internet to vet clients. In June, the Government Accountability Office reported that federal prosecutors had used the new leeway only once since Congress passed it.
"As you consider reforms to Section 230, I encourage you to move forward with your eyes open to the consequences of reform," Ms. Haugen said. "I encourage you to talk to human rights advocates who can help provide context on how the last reform of 230 had a dramatic impact on the safety of some of the most vulnerable people in our society but has been rarely used for its original purpose."
What content is harmful?
Lawmakers and others have pointed to a wide range of content they say is connected to real-world harm. There are conspiracy theories that can lead some adherents to violence. Posts from terrorist groups can prompt someone to carry out an attack, as one man's relatives argued when they sued Facebook after a Hamas member fatally stabbed him. Other politicians have raised concerns about targeted ads that lead to housing discrimination.
Most of the bills now in Congress address specific kinds of content. Ms. Klobuchar's bill covers "health misinformation." But it leaves it to the Department of Health and Human Services to determine what, exactly, that means.
"The coronavirus pandemic has shown us how deadly misinformation can be, and we have a responsibility to take action," Ms. Klobuchar said when she announced the proposal, which she wrote with Senator Ben Ray Luján, Democrat of New Mexico.
The legislation from Ms. Eshoo and Mr. Malinowski takes a different tack. It applies only to amplification of posts that violate one of three laws: two that bar civil rights violations and a third that prosecutes international terrorism.
Mr. Pallone's bill, the newest of the group, applies to any post that "materially contributed to a physical or severe emotional injury to any person." That is a high legal standard: emotional distress would have to be accompanied by physical symptoms. But it could cover, for example, a teenager who views posts on Instagram that diminish her self-worth so much that she tries to hurt herself.
Some Republicans raised concerns about the proposal on Wednesday, arguing that it would encourage platforms to take down content that should stay up. Representative Cathy McMorris Rodgers of Washington, the top Republican on the committee, called it "a thinly veiled attempt to pressure companies to censor more speech."
What do the courts think?
Judges have been skeptical of the idea that platforms should lose their legal immunity when their algorithms amplify content.
In the case over the attack for which Hamas claimed responsibility, most of the judges who heard it agreed with Facebook that its algorithms did not cost it the legal shield's protection for user-generated content.
If Congress creates an exemption to the legal shield, and it withstands legal scrutiny, the courts would have to follow its lead.
But if the bills become law, they are likely to face significant questions about whether they violate the First Amendment's protections of free speech.
Courts have ruled that the government cannot condition benefits for people or companies on their giving up speech rights the Constitution would otherwise protect. So the tech industry or its allies could challenge the laws by arguing that Congress was finding a backdoor way to limit free speech.
"It raises the question: Can the government directly ban algorithmic amplification?" said Jeff Kosseff, an associate professor of cybersecurity law at the United States Naval Academy. "It's going to be tricky, especially if you're trying to say you can't amplify certain types of speech."