YouTube’s tougher policies against election misinformation were followed by sharp drops in the spread of false and misleading videos on Facebook and Twitter, according to new research released Thursday, underscoring the video service’s power across social media.
Researchers from the Center for Social Media and Politics at New York University found a significant increase in YouTube videos about election fraud shared on Twitter shortly after the November 3 election. Throughout November, those videos consistently accounted for nearly one-third of all election-related video shares on Twitter. The top election-fraud YouTube channels shared on Twitter that month came from sources that had promoted election misinformation in the past, such as Project Veritas, Right Side Broadcasting Network and One America News Network.
But after December 8, the proportion of election fraud videos from YouTube shared on Twitter dropped sharply. That day, YouTube said it would remove videos that promoted the unfounded theory that widespread errors and fraud changed the outcome of the presidential election. By December 21, the proportion of election fraud content from YouTube shared on Twitter had fallen below 20 percent for the first time since the election.
The proportion fell further after January 7, when YouTube announced that any channel violating its election misinformation policy would receive a “strike,” and that channels receiving three strikes in a 90-day period would be permanently removed. By Inauguration Day, the proportion was around 5 percent.
The trend was replicated on Facebook. A post-election surge in shares of videos containing fraud theories peaked at nearly 18 percent of all video shares on Facebook just before December 8. After YouTube introduced its stricter policies, the proportion declined sharply, before rising slightly in the days before the January 6 riot at the Capitol. After the new policies took effect on January 7, the proportion fell again, to 4 percent by Inauguration Day.
To reach their conclusions, the researchers collected a random sample of 10 percent of all tweets each day. They then singled out the tweets that linked to YouTube videos. They did the same for YouTube links on Facebook, using CrowdTangle, a Facebook-owned social media analytics tool.
From this large data set, the researchers filtered for YouTube videos broadly about the election, as well as about election fraud specifically, using a set of keywords such as “Stop the Steal” and “Sharpiegate.” This gave the researchers a sense of the volume of YouTube videos about election fraud over time, and of how that volume shifted in late 2020 and early 2021.
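The filtering step described above can be sketched in a few lines of Python. This is an illustrative reconstruction only: the link pattern, keyword list, and function names are assumptions for the sake of the example, not the NYU team’s actual code or full keyword set.

```python
import re

# Matches common YouTube video URL forms (an assumed, simplified pattern).
YOUTUBE_LINK = re.compile(r"(youtube\.com/watch|youtu\.be/)")

# Two sample terms named in the article; the real study used a larger set.
FRAUD_KEYWORDS = {"stop the steal", "sharpiegate"}

def links_to_youtube(tweet_text: str) -> bool:
    """Return True if the tweet text contains a YouTube video link."""
    return bool(YOUTUBE_LINK.search(tweet_text))

def mentions_fraud(video_title: str) -> bool:
    """Return True if the video title matches any election-fraud keyword."""
    title = video_title.lower()
    return any(kw in title for kw in FRAUD_KEYWORDS)

def fraud_share(titles: list[str], is_election_related) -> float:
    """Fraud-video shares as a fraction of all election-related video shares."""
    election = [t for t in titles if is_election_related(t)]
    fraud = [t for t in election if mentions_fraud(t)]
    return len(fraud) / len(election) if election else 0.0
```

Computing `fraud_share` per day over the sampled tweets would yield the time series the researchers tracked, with the proportion falling after each YouTube policy change.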
In recent years, misinformation has proliferated on the major social networks. YouTube in particular has lagged behind other platforms in cracking down on various types of misinformation, often announcing stricter policies weeks or months after Facebook and Twitter. In recent weeks, however, YouTube has tightened its policies, such as banning all anti-vaccine misinformation and suspending the accounts of prominent anti-vaccine activists, including Joseph Mercola and Robert F. Kennedy Jr.
Megan Brown, a research scientist at the NYU Center for Social Media and Politics, said it is possible that, after YouTube banned the content, people simply could no longer share videos promoting election fraud. It is also possible that interest in election fraud theories dropped considerably after states certified their election results.
But the bottom line, Ms. Brown said, is that “we know that these platforms are deeply intertwined.” She pointed out that YouTube has been identified as one of the most-shared domains across other platforms, in both Facebook’s recently released content report and NYU’s own research.
“It’s a big part of the information ecosystem,” said Ms. Brown, “so when YouTube’s platform gets healthier, so do others.”