Research reveals verified social media users are fueling COVID-19 fake news

The pandemic has given rise to social media accounts operated by malicious actors who aim to sow misinformation about COVID-19. As vaccination campaigns begin, these accounts threaten to undermine the push for herd immunity around the world. Misinformation about masks and vaccines could – and already has – contributed to low adoption rates and increased disease transmission, making it harder to prevent future outbreaks.

While several studies have been published on the role of disinformation campaigns in shaping narratives during the pandemic, new research released this month by staff at Indiana University and the Politecnico di Milano in Milan, Italy, as well as by a German team from the University of Duisburg-Essen and the University of Bremen, specifically investigates the extent of automated bots’ influence. The studies identified dozens of bots on Twitter and Facebook, especially in communities where “low credibility” sources and “suspicious” videos proliferate. But counterintuitively, neither study found evidence that bots were a stronger driver of misinformation on social media than manual, human-led efforts.

In the first study, entitled “The COVID-19 Infodemic: Twitter versus Facebook,” the Indiana University and Politecnico co-authors analyzed the prevalence and spread of links to conspiracy theories, falsehoods, and general disinformation. To do this, they extracted links from social media posts containing COVID-19-related keywords such as “coronavirus,” “Covid,” and “Sars,” identified low-credibility content by matching the links against the Media Bias/Fact Check database of low-credibility websites, and flagged YouTube videos as suspicious when they had been blocked by the platform. Media Bias/Fact Check, founded in 2015, is a crowdsourced effort to rank sources by accuracy and perceived bias.
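To make the matching step concrete, here is a minimal Python sketch of the approach described above: filter posts by COVID-19 keywords, extract their links, and flag those whose domain appears on a low-credibility list such as one derived from Media Bias/Fact Check. The domain list, regexes, and function names are hypothetical illustrations, not the study’s actual pipeline.

```python
import re
from urllib.parse import urlparse

# Keywords come from the article; the domains below are placeholders, not
# real entries from Media Bias/Fact Check.
COVID_KEYWORDS = re.compile(r"\b(coronavirus|covid|sars)\b", re.IGNORECASE)
LOW_CREDIBILITY_DOMAINS = {"example-lowcred-news.com", "dubious-site.net"}
URL_PATTERN = re.compile(r"https?://\S+")

def flag_low_credibility_links(posts):
    """Return (url, post) pairs whose domain is on the low-credibility list."""
    flagged = []
    for post in posts:
        if not COVID_KEYWORDS.search(post):
            continue  # only consider posts mentioning COVID-19 keywords
        for url in URL_PATTERN.findall(post):
            domain = urlparse(url).netloc.lower().removeprefix("www.")
            if domain in LOW_CREDIBILITY_DOMAINS:
                flagged.append((url, post))
    return flagged

posts = ["New covid cure! https://example-lowcred-news.com/miracle",
         "Mask guidance updated: https://www.who.int/news"]
print(flag_low_credibility_links(posts))  # flags only the first post's link
```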

In their analysis, the researchers from IU and the Politecnico collected over 53 million tweets and more than 37 million Facebook posts from 140,000 pages and groups between January 1 and October 31. They identified nearly a million low-credibility links shared on both Facebook and Twitter, but bots alone weren’t responsible for spreading the misinformation. Aside from the first few months of the pandemic, the main sources of low-credibility information, according to the co-authors, tended to be high-profile, official, verified accounts. Verified accounts accounted for nearly 40% of retweets on Twitter and nearly 70% of reshares on Facebook.
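As a rough illustration of how such percentages could be computed from share data, here is a minimal sketch that assumes each retweet or reshare record carries a verified flag; the field names and sample records are hypothetical, not either study’s actual schema.

```python
# Hypothetical share records; a real dataset would come from the Twitter or
# Facebook APIs and carry each sharer's verification status.
def verified_share(shares):
    """Fraction of shares/retweets that originate from verified accounts."""
    if not shares:
        return 0.0
    return sum(1 for s in shares if s.get("verified")) / len(shares)

retweets = [{"user": "newsorg", "verified": True},
            {"user": "anon123", "verified": False},
            {"user": "official", "verified": True}]
print(f"{verified_share(retweets):.0%} of retweets came from verified accounts")
```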

“We… find coordination among the accounts that amplify [misinformation] content on both platforms, including many controlled by influential organizations,” the researchers wrote. “Since automated accounts don’t seem to play a large role in content amplification, these results suggest that the COVID-19 infodemic is an overt rather than a covert phenomenon.”

In the second publication, entitled “Conspiracy Machines – The Role of Social Bots During the COVID-19 Infodemic,” researchers from the University of Duisburg-Essen and the University of Bremen set out to determine the extent to which bots interfere in pandemic discussions on Twitter. In a sample of over 3 million tweets from more than 500,000 users, collected using hashtags and terms such as “coronavirus,” “Wuhan virus,” and “coronapocalypse,” the co-authors identified 78 likely bot accounts that generated 19,117 tweets over the 12-week study period. While many of the tweets contained misinformation or conspiracy content, they also included retweets of facts and updates about the virus.
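The identification step common to such bot studies can be sketched as scoring each account with a bot-detection classifier and applying a likelihood threshold. The threshold, function names, and toy data below are assumptions for illustration; the paper’s exact tooling and cutoff aren’t specified here.

```python
LIKELY_BOT_THRESHOLD = 0.8  # assumed cutoff; real studies tune this value

def summarize_bot_activity(accounts, tweets_by_user, bot_score):
    """Count likely-bot accounts and the tweets they produced.

    `bot_score` is any callable mapping a username to a 0-1 bot likelihood,
    e.g. a wrapper around a service such as Botometer.
    """
    likely_bots = {u for u in accounts if bot_score(u) >= LIKELY_BOT_THRESHOLD}
    bot_tweets = sum(len(tweets_by_user.get(u, [])) for u in likely_bots)
    return len(likely_bots), bot_tweets

# Toy example with a hard-coded scoring function:
toy_scores = {"bot_like_acct": 0.93, "human_acct": 0.12}
n_bots, n_tweets = summarize_bot_activity(
    accounts=["bot_like_acct", "human_acct"],
    tweets_by_user={"bot_like_acct": ["rt1", "rt2"], "human_acct": ["hello"]},
    bot_score=lambda u: toy_scores.get(u, 0.0),
)
print(n_bots, n_tweets)  # -> 1 2
```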

The results of the two studies appear to contradict findings the Indiana University Social Media Observatory published in July, which showed that 20% to 30% of links to low-credibility domains on Twitter were shared by bots. The co-authors of that work claimed that some of the accounts were sharing information from the same websites, suggesting that coordination was taking place behind the scenes.

Carnegie Mellon University researchers have also published evidence of misinformation bots on social media, corroborating the Social Media Observatory’s preliminary report. In May, the CMU team announced that of over 200 million tweets discussing the virus since January, 45% came from likely bot accounts, many of which tweeted conspiracy theories about hospitals filled with mannequins and about links between 5G radio towers and infections.

It is possible that the steps Twitter and Facebook have taken to contain COVID-19 misinformation contributed to the decline in bot-driven spreading between early this year and the fall. Twitter now applies warning labels to misleading, disputed, or unverified tweets about the coronavirus, and the company recently stated that users may be required to remove tweets that “advance harmful false or misleading narratives about COVID-19 vaccinations.” For its part, Facebook is attaching similar labels to COVID-19 falsehoods and has committed to removing misinformation about vaccines that could cause “imminent physical harm.”

Twitter also recently announced that it will relaunch its verified accounts program in 2021. The program was paused in 2017, and the company says the relaunch is meant to bring more transparency and clarity to verification. The network is also planning a new account type to label accounts that are likely bots.

Between March and October, Facebook removed 12 million pieces of content from Facebook and Instagram and added fact-checking labels to another 167 million posts. In July alone, Twitter said it had removed 14,900 tweets for COVID-19 misinformation.

There is evidence that social media platforms continue to struggle to tackle COVID-19 misinformation and disinformation. But the research to date paints a mixed picture of bots’ role in spreading it on Twitter and Facebook. Indeed, the main drivers appear to be high-profile conspiracy theorists, conservative groups, and fringe media outlets – at least according to the co-authors affiliated with Indiana University and the Politecnico.

“Our study raises a number of questions about how social media platforms deal with information flows and enable the spread of potentially dangerous content,” they write. “Unfortunately, this is likely to prove difficult to address, as we find that high-status accounts play an important role.”
