Dear friends,
This might just be the most impactful thing Avaaz has ever done.
For years, mega tech platforms like Facebook, YouTube and TikTok have been making billions while flooding the world with disinformation, hate speech and other harmful content. But after years of incredible campaigning, the European Union has just agreed on a historic law that will force Big Tech to change – and this could be the start of a global revolution to protect us all!
We can honestly say Europe’s new Digital Services Act wouldn’t look like it does without Avaaz.
We ran massive investigations into the harms caused by social media and shared our findings everywhere. Then we drafted groundbreaking legislative proposals on how to protect our societies as well as freedom of speech, and ran a huge push to get key lawmakers on board! And it worked!
EU Commission Vice-President Margrethe Vestager endorsed our call for a “Paris Agreement of the Internet”.
And EU Commissioner Thierry Breton was so convinced by our research that he went on TV to talk about it the day after we met him!
EU leaders publicly thanking Avaaz and its members
For over four years, our movement, together with an inspiring civil society coalition, has been at the forefront of this battle to protect citizens and democracy. Read on for the full story of how a few dozen activists, researchers, and hundreds of thousands of Avaaz members across the world took on some of the most powerful corporations that have ever existed – and won!
2018: A Hundred Zuckerbergs (yikes!)
It all started almost exactly 4 years ago, in 2018: disinformation was creating havoc in democracies, and hate speech was being weaponized around the world. In April that year, we launched our first global call to platforms and regulators to "Fix Fakebook" and rein in big tech.
Over 1 million people joined that call, and we flooded Washington and Brussels with over a hundred cardboard cutouts of Facebook CEO Mark Zuckerberg. The image landed in media all over the world, including on the front page of the New York Times.
We travelled to Silicon Valley to meet with top executives from Facebook, Twitter and Google, trying to convince them to act. But we were banned from some of their offices, and had to hold meetings in the car park!
It was clear that the companies weren’t serious about tackling the toxicity on their platforms. We needed to change the laws that governed them.
Next stop: the European Union.
2019: Diagnosing the Problem, and Defining the Fix
In 2019, many EU lawmakers didn’t really understand the problem. The idea that lies and conspiracy theories going viral online were having a serious impact on our democracies was contested – and without proof, the regulators wouldn’t act.
So, inspired by a Lithuanian project, we hired researchers we called “elves” to investigate internet “trolls” and reveal the scale of the disinformation problem, especially the impact it was having in Europe.
Working from a war room in Brussels, our team of 30 “elves” uncovered what 30,000 Facebook monitors and their team of experts seemed to have missed: huge networks, using fake accounts and inauthentic pages, spreading toxic lies and hatred across Europe ahead of crucial elections.
Following our investigation, Facebook took down networks that could reach an estimated 3 BILLION views in a single year!
A glimpse into our anti-disinformation war room in Brussels
As election day approached, top EU politicians, journalists and security experts were coming to our war room almost every day for information and briefings. Our work made headline news all over the world, warning millions of Europeans of the disinformation threat just before the elections. Even Facebook publicly thanked us!
Exposing these networks helped Europe dodge a bullet in the elections. But top EU officials were shocked by what we had found and asked us: what could be done?
So, working with social media insiders, academics and lawmakers, we developed research-backed proposals to clean up social media while protecting freedom of speech:
- Detox the algorithm. Stop platforms from making dangerous disinformation and harmful content go viral just to keep us hooked to our screens.
- Correct the record. Show every single person who sees disinformation a correction from an independent fact-checker right in their news feed. TIME Magazine called it a ‘radical new proposal that could curb fake news on social media’.
But with a new virus emerging in Wuhan and the US elections looming, we were about to see some of the most terrifying impacts of disinformation yet.
2020: Social media vs. Democracy and Public Health
As Covid-19 spread across the world, lies and conspiracy theories about it went viral with it. Our researchers released a bombshell investigation showing how Facebook was an epicentre of Covid misinformation. On the same day, Facebook announced that they would direct anyone who engaged with Covid misinformation to fact-checks on the World Health Organisation’s website. This was the first time Facebook EVER did that!
Politico wrote: “Thing is, it wasn’t the globe’s most powerful tech regulator that forced Facebook to acknowledge flaws in its policy — it was campaign group Avaaz...”
We didn’t stop there: in another hard-hitting report, we showed how Facebook’s algorithm itself had become a global threat to public health, spreading our findings all over the media and presenting them directly to key EU and US officials.
2020 was also the year of the US presidential election. A year before the vote, we’d found there was already more disinformation on Facebook than in the three months ahead of the 2016 elections! We hired a dedicated team of US researchers, and in the run-up to the election, we produced over 40 investigations on the rampant disinformation, hate, violence and extremism spreading online, pushing Facebook to act on many harmful networks that had spread dangerous content to millions.
And just around the elections, Facebook launched emergency measures, throttling the spread of many of the pages we had identified as repeat misinformers and making it harder for them to flood social media with lies and hatred before the vote.
But shortly after election day, Facebook withdrew some of the measures they’d put in place! It was a disaster. A tsunami of lies claiming that the election had been stolen flooded Americans’ social media feeds.
We investigated many of the networks making these lies go viral, and one of the biggest we found, tied to Trump's former chief strategist Steve Bannon, was banned from Facebook for spreading false claims about the vote.
But the damage was already done.
On January 6th, protestors convinced that the election had been stolen violently stormed the US Congress.
Our researchers jumped into action and within days were able to show how Facebook had been used to stoke the violence. Our report was covered in a slew of outlets, from AP to Time, the Washington Post, and more, and our research was mentioned repeatedly in a Congressional hearing at which Zuckerberg and other tech executives testified.
2021 and Beyond: Towards a "Paris Agreement for the Internet"
There was no longer any doubt about the threat that disinformation posed. But we still didn’t have the laws we needed. Then came a chance – the EU was developing the Digital Services Act, a new law to govern digital technology. As we met with decision-maker after decision-maker to argue that the legislation should focus on holding platforms accountable for harms created by their algorithms, slowly the idea started to gain traction.
And we didn’t let go – showing up at every single meeting, event or video call with our findings, and publishing ever more evidence exposing how platforms were failing.
We even organised a conference on disinformation, bringing together some of the most influential EU politicians and executives from Facebook and Twitter to make our case!
Despite frantic lobbying by the social platforms, things began to move in our direction. But it wasn’t a done deal – so to show public support, we commissioned a huge poll that found 83% of people in Germany, France, Italy and Spain wanted platforms to change their algorithms if they were found to be amplifying harmful content. We even delivered messages to politicians from Avaaz members across Europe in beautifully made books!