
Why Is It So Hard To Stop COVID-19 Misinformation Spreading On Social Media?



Tobias R. Keller, Queensland University of Technology and Rosalie Gillett, Queensland University of Technology

Even before the coronavirus arrived to turn life upside down and trigger a global infodemic, social media platforms were under growing pressure to curb the spread of misinformation.

Last year, Facebook cofounder and chief executive Mark Zuckerberg called for new rules to address “harmful content, election integrity, privacy and data portability”.

Now, amid a rapidly evolving pandemic, with more people than ever using social media for news and information, it is crucial that this content can be trusted.

Read more: Social media companies are taking steps to tamp down coronavirus misinformation – but they can do more
Digital platforms are now taking more steps to tackle misinformation about COVID-19 on their services. In a joint statement, Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube have pledged to work together to combat misinformation.

Facebook has traditionally taken a less proactive approach to countering misinformation. A commitment to protecting free expression has led the platform to allow misinformation in political advertising.

More recently, however, Facebook’s spam filter inadvertently marked legitimate news information about COVID-19 as spam. While Facebook has since fixed the mistake, this incident demonstrated the limitations of automated moderation tools.

In a step in the right direction, Facebook is allowing national ministries of health and reliable organisations to advertise accurate information on COVID-19 free of charge. Twitter, which prohibits political advertising, is allowing links to the Australian Department of Health and World Health Organization websites.

[Image: Twitter is directing users to trustworthy information. Source: Twitter.com]

Twitter has also announced a suite of changes to its rules, including an updated definition of harm that covers content contradicting authoritative public health information, and expanded use of machine learning and automation to detect and remove potentially abusive and manipulative content.

Previous attempts unsuccessful

Unfortunately, Twitter has been unsuccessful in its recent attempts to tackle misinformation (or, more accurately, disinformation – incorrect information posted deliberately with an intent to obfuscate).

The platform has begun to label doctored videos and photos as “manipulated media”. The crucial first test of this initiative was a widely circulated altered video of Democratic presidential candidate Joe Biden, in which part of a sentence was edited out to make it sound as if he was forecasting President Donald Trump’s re-election.

[Image: A screenshot of the tweet featuring the altered video of Joe Biden, with Twitter’s label. Source: Twitter]

It took Twitter 18 hours to label the video, by which time it had already received 5 million views and 21,000 retweets.

The label appeared below the video (rather than in a more prominent place), and was only visible to the roughly 757,000 accounts that followed the video’s original poster, White House social media director Dan Scavino. Users who saw the content via retweets from the White House (21 million followers) or President Donald Trump (76 million followers) did not see the label.

Labelling misinformation doesn’t work

There are four key reasons why Twitter’s (and other platforms’) attempts to label misinformation have been ineffective.

First, social media platforms tend to use automated algorithms for these tasks, because they scale well. But labelling manipulated tweets requires human labour; algorithms cannot decipher complex human interactions. Will social media platforms invest in human labour to solve this issue? The odds are long.

Second, tweets can be shared millions of times before being labelled. Even if removed, they can easily be edited and then reposted to avoid algorithmic detection.

Third, and more fundamentally, labels may even be counterproductive, serving only to pique the audience’s interest. Rather than curtailing misinformation, labels may therefore end up amplifying it.

Finally, the creators of deceptive content can deny their content was an attempt to obfuscate, and claim unfair censorship, knowing that they will find a sympathetic audience within the hyper-partisan arena of social media.

So how can we beat misinformation?

The situation might seem impossible, but there are some practical strategies that the media, social media platforms, and the public can use.

First, unless the misinformation has already reached a wide audience, avoid drawing extra attention to it. Why give it more oxygen than it deserves?

Second, if misinformation has reached the point at which it requires debunking, be sure to stress the facts rather than simply fanning the flames. Refer to experts and trusted sources, and use the “truth sandwich”, in which you state the truth, and then the misinformation, and finally restate the truth again.

Third, social media platforms should be more willing to remove or restrict unreliable content. This might include disabling likes, shares and retweets for particular posts, and banning users who repeatedly misinform others.

For example, Twitter recently removed coronavirus misinformation posted by Rudy Giuliani and Charlie Kirk; the Infowars app was removed from Google’s app store; and, perhaps most significantly, Facebook, Twitter, and Google’s YouTube removed coronavirus misinformation posted by Brazil’s president, Jair Bolsonaro.

Read more: Meet ‘Sara’, ‘Sharon’ and ‘Mel’: why people spreading coronavirus anxiety on Twitter might actually be bots
Finally, all of us, as social media users, have a crucial role to play in combating misinformation. Before sharing something, think carefully about where it came from. Verify the source and its evidence, double-check with other independent sources, and report suspicious content to the platform directly. Now, more than ever, we need information we can trust.

Tobias R. Keller, Visiting Postdoc, Queensland University of Technology and Rosalie Gillett, Research Associate in Digital Platform Regulation, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.


© Scoop Media

 
 
 