Legislative Principles for Tackling Disinformation

Posted: May 2019
Europeans were flooded with disinformation ahead of the EU elections. Avaaz uncovered disinformation networks in France, the UK, Germany, Spain, Italy and Poland whose content was viewed over 760 million times over the past three months, and which were removed by Facebook only after the Avaaz team flagged the pages to them.

Avaaz’s analysis covered just 6 countries over the last 3 months and looked only at Facebook, yet it found the networks' content had been viewed three quarters of a billion times. These extraordinary numbers are just the tip of the iceberg -- the spread of disinformation across the continent is likely much wider.

It is clear that the Code of Practice coordinated by the Commission has failed to achieve its purpose: protecting Europeans from disinformation.

Current Commissioners must urgently prepare the groundwork for strong EU regulations to defeat disinformation and protect democracy. It is up to them to ensure that the next Commission is empowered to tackle this threat quickly.

Avaaz has developed 5 legislative principles for the EU - principles that should be at the foundation of any legislation to fight disinformation. You can find them below, along with our report:

5 Legislative Principles

Avaaz has developed a comprehensive regulatory proposal based on the principles of transparency, responsibility, and freedom. We’ve consulted deeply with academics, lawmakers, civil society, and social media executives to arrive at 5 legislative principles that must inform any democratic effort at legislation:
  • Correct the Record
    Correct the Record exposes disinformation and other manipulations, educates the public, and reduces belief in lies. The policy requires platforms to inform users and push effective corrections to every person who saw information that independent fact-checkers have determined to be disinformation. This solution would tackle disinformation while preserving freedom of expression, as Correct the Record adds the truth but leaves the lies alone. Newspapers publish corrections right on their own pages, and television stations on their own airwaves; platforms should provide the same service to their users.

    Research commissioned by Avaaz and conducted by leading experts shows that providing corrections to social media users who have seen false or misleading information can decrease belief in disinformation by half. Multiple other peer-reviewed studies have demonstrated that effective corrections can reduce and even eliminate the effects of disinformation. Studies attempting to replicate the often-discussed “backfire effect” - where corrections supposedly entrench false beliefs - have instead found the opposite to be true. Meanwhile, researchers are converging on best practices for effective corrections.

    In our view, correcting the record would be a five-step process (an illustrative sketch of the trigger-and-correct logic appears after the list of principles below):
    1. Define: The obligation to correct the record would be triggered where:
      • Independent fact-checkers verify that content is false or misleading;
      • A significant number of people -- e.g. 10,000 -- viewed the content.
    2. Detect: Platforms must:
      • Proactively use technology such as AI to detect potential disinformation with significant reach that could be flagged for fact-checkers;
      • Deploy an accessible and prominent mechanism for users to report disinformation;
      • Provide independent fact-checkers with access to content that has reached a significant number of people -- e.g. 10,000 or more.
    3. Verify: Platforms must work with independent, verified third-party fact-checkers to determine whether reported content is disinformation, as defined by the EU.
    4. Alert: Each user exposed to verified disinformation should be notified using the platform’s most visible and effective notification standard.
    5. Correct: Each user exposed to disinformation should receive a correction that is of at least equal prominence to the original content, and that follows best practices.
  • Detox the Algorithm
    Many platforms use an algorithm to determine when and in what order users see content that the algorithm predicts may be of interest to them and that will keep them viewing the service (“display algorithms”). Algorithmic content curation like this has important consequences for how individuals find news: instead of human editors selecting important sources of news and information for public consumption, complex algorithmic code determines what information to deliver or exclude -- often based on criteria that also promote harmful content. Popularity and the degree to which information provokes outrage, confirmation bias, and engagement are increasingly important in driving algorithms’ choices of which content to promote. The speed and scale at which content “goes viral” grow exponentially, regardless of whether or not the information it contains is true. Although the Internet has provided more opportunities to access information, algorithms have made it harder for individuals to find information from critical or diverse viewpoints. There is therefore a risk that users get trapped in an online bubble of disinformation.

    Without proper care and oversight, these recommendation algorithms can increase user engagement with disinformation and other harmful content. They can be gamed to ensure that the most divisive and sensational fake news quickly reaches viral status. Platforms should transparently adapt their algorithms to ensure that they are not themselves exponentially accelerating the spread of disinformation. And although social media platforms need virality for profitability, it is crucial for them to monitor exactly what is going viral - an MIT study found that falsehoods on Twitter spread six times faster than true news.

    Some platforms have initiated small reforms of their algorithms for selected sets of content; however, more must be done, given the speed with which content can go viral.

    That’s why comprehensively detoxifying the platforms’ algorithms is crucial. “Detoxing the Algorithm” means adjusting social media platforms’ content curation algorithms so that they downgrade disinformation, pages belonging to disinformation accelerators and malicious actors, and other harmful content, keeping it out of their recommendations to viewers. This would ensure that recommendation, search, and newsfeed algorithms are not abused by malicious actors, and that disinformation is sidelined rather than boosted by these platforms.
  • Ban Fake Accounts and Unlabelled Bots
    Fake accounts and unlabelled bots act as conduits for disinformation and harm voters in precisely the same way that misleading advertising and unfair business practices harm consumers. They must therefore be banned on all platforms. Many platforms’ guidelines and policies already include this ban, but platforms are underperforming when it comes to actively searching for fake accounts, closing the loopholes that allow them to multiply, and reducing the incentives built into their own services that favor the existence of bots.

    Bots must be prominently and effectively labelled, and all content distributed by bots must prominently include the label and retain it when the content or message is shared, forwarded, or passed along in any other manner. All such labels must be presented and formatted so that it is immediately clear to any user that they are interacting with a non-human.

    In summary, platforms must ban fake accounts and unlabelled bots that act as conduits for disinformation and take action to track down the networks that create and run them, closing the loopholes that allow them to multiply, and reducing the incentives provided by their own services.
  • Label Paid Content and Disclose Targeting
    Transparency regarding financial compensation should apply to all paid communications, political and non-political. Citizens ought to be able to know who paid for any advertisement and on what basis the viewer was targeted. In order to protect citizens from disinformation warfare, these standards of transparency should apply to all paid content -- not just political advertising. Additionally, platforms must label state-sponsored content (or propaganda) and increase transparency by disclosing where and how that content was paid for and who created it.
  • Transparency
    In the evolving defence of our democracies against disinformation, it is essential that governments, civil society, and the general public be informed about the nature and scale of the threat, and about the measures being taken to guard against it. Online platforms must therefore provide comprehensive periodic reports listing - aggregated by country and/or language - the disinformation found on their services, the number of bots and inauthentic accounts that were detected, the actions that were taken against those accounts and bots, and the number of times users reported disinformation. The reports must also detail platforms’ efforts to deal with disinformation, making the nature and scale of the threat public.
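
The sketch below is a minimal, purely illustrative Python rendering of the Correct the Record trigger-and-correct flow described in the first principle above. All names (Content, FactCheckVerdict, notify_user, show_correction) and the fixed 10,000-view threshold are assumptions made for illustration only; they do not correspond to any real platform API or to a specific implementation proposed in the report.

# Illustrative sketch only: how the "Correct the Record" obligation could be
# triggered and acted upon. Names and threshold are hypothetical.
from dataclasses import dataclass, field
from typing import List

VIEW_THRESHOLD = 10_000  # "a significant number of people", per the e.g. figure above


@dataclass
class Content:
    content_id: str
    view_count: int
    exposed_user_ids: List[str] = field(default_factory=list)


@dataclass
class FactCheckVerdict:
    content_id: str
    is_disinformation: bool  # determined by independent fact-checkers
    correction_text: str     # correction following best practices


def correction_required(content: Content, verdict: FactCheckVerdict) -> bool:
    # Step 1 (Define): the obligation is triggered only when independent
    # fact-checkers rate the content false or misleading AND it has reached
    # a significant audience.
    return verdict.is_disinformation and content.view_count >= VIEW_THRESHOLD


def correct_the_record(content: Content, verdict: FactCheckVerdict) -> None:
    # Steps 4-5 (Alert and Correct): notify every exposed user and deliver a
    # correction at least as prominent as the original content.
    if not correction_required(content, verdict):
        return
    for user_id in content.exposed_user_ids:
        notify_user(user_id, content.content_id)
        show_correction(user_id, verdict.correction_text)


def notify_user(user_id: str, content_id: str) -> None:
    # Placeholder: a real platform would use its most visible notification channel.
    print(f"notify {user_id}: content {content_id} was rated as disinformation")


def show_correction(user_id: str, correction_text: str) -> None:
    # Placeholder: deliver the fact-checkers' correction with equal prominence.
    print(f"correction for {user_id}: {correction_text}")


# Example: a post seen by 12,000 users and rated false triggers corrections.
post = Content("post-123", view_count=12_000, exposed_user_ids=["user-a", "user-b"])
verdict = FactCheckVerdict("post-123", is_disinformation=True,
                           correction_text="Independent fact-checkers rated this post false.")
correct_the_record(post, verdict)

The point of the sketch is simply that both conditions in the Define step (a fact-checker verdict and a reach threshold) must hold before any user-facing correction is pushed, so ordinary low-reach or truthful content is never touched.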

Read the full report

REPORT: FAR RIGHT NETWORKS OF DECEPTION

Ahead of the EU elections, Avaaz conducted a Europe-wide investigation into networks of disinformation on Facebook. This was the first investigation of its kind, and it revealed that far-right and anti-EU groups were weaponizing social media at scale to spread false and hateful content. Our findings were shared with Facebook and resulted in an unprecedented shutdown of Facebook pages just before voters headed to the polls.