
The Digital Services Act and how it can make our internet better

The internet is an astounding box of tools, through which we connect, learn, inform ourselves and communicate even when we are miles apart. But it has a darker side – Avaaz has reported on the damage that platforms like Facebook, YouTube, Twitter and WhatsApp can cause to vulnerable minorities, our health, elections, and young people's mental health. In many ways, hate speech, disinformation and harmful content have become the CO2 of our digital spaces – the pollution of our information environment.

But now, the EU is introducing the Digital Services Act (DSA), a law that could become the digital equivalent of the Paris Agreement on climate change: holding platforms to account for the toxicity in our information ecosystem and forcing them to reduce it.

How can the Digital Services Act make social media platforms better?

1) It will help make social media algorithms less toxic
Platforms will be held responsible for the damage they cause to our societies and can even be made to change how they work to prevent these harms, e.g. to minimise the spread of harmful content, such as disinformation or hate speech.

This is a revolution in the way Silicon Valley works. Social media platforms like Facebook and YouTube are not neutral conduits – they use algorithms to select content they think will interest us and keep us on their services. They do this mostly because the more time we spend there, the more ads we see, and the more money they make. The problem is that attention-grabbing content can be harmful: it often contains disinformation, hateful speech, or content that encourages eating disorders.

Under the DSA, big tech platforms will have to scrutinise their services to identify how harm can be caused. And the EU can make them fix it, whether that means redesigning the algorithm, improving content moderation quality or labelling known false information.

Protecting freedom of speech has been a key priority in developing the DSA, so the text contains multiple checks and balances to prevent abuse and to catch mistakes. One important example: users can appeal if they think a platform has made the wrong decision – for instance, by unfairly demoting or deleting their profile or post. And crucially, platforms must explain their decisions (see point 2).

2) It will force platforms to be MUCH more transparent
The DSA will kickstart a huge change in the culture of transparency required of tech platforms: from explaining to users and regulators how their algorithms work, to publishing reports on the problems they identify and what they are doing to fix them, to allowing trustworthy researchers and organisations to study how the platforms work and uncover other possible harms.

Until now, private companies have largely set the rules that govern our online public debate, including what is and isn't allowed. And they have done so without any meaningful transparency. For example, they could decide overnight to ban (or not to ban) politicians, presidents, or a specific post, without having to explain why. This is a huge risk to freedom of expression – but the DSA can change that.

The DSA’s new rules will force big platforms to explain to the public why they recommend specific pieces of content, what measures and policies they're putting in place to fight the harms their platforms cause, and even what data they collect about us, and how they use it.

And for the first time, both academics and civil society will have a right to see these platforms' internal data – hundreds of expert eyes scrutinising their actions and uncovering what Big Tech is doing.

3) It will start protecting both our most sensitive data and children’s data
There will be new limits on the exploitation of our most private data, we will get more information on why we see certain ads, and manipulative design practices will be banned.

Your first click starts the cycle of data collection about you. Online services use this data to build a profile of you and what you like, so they can target you with advertising. The DSA should mean that platforms can no longer use your most personal data, such as your medical conditions or religious beliefs, for ad targeting. And it prohibits platforms from showing advertising to children based on data collected about their online behaviour.

The DSA also gives you the chance to see where an ad comes from, how your data is monetised, and whether the influencer you are watching has been paid to say what they're saying.

Finally, the DSA will get under the hood of so-called "dark patterns" – manipulative design practices, such as features that trick us into giving consent to something without realising it. The DSA will not, for example, allow platforms to give visual prominence to their preferred consent option, or to make cancelling a service more difficult than signing up was in the first place.

4) There are going to be real sanctions for Big Tech
The European Commission will monitor the biggest platforms to ensure they comply with the DSA. If these platforms fail to assess and mitigate the harms they cause, they can be fined up to 6% of their global annual turnover. And in extreme cases – repeated failures to comply – they could even be banned from operating in the EU!


Towards a Paris Agreement for the Internet?

Social media platforms will not change overnight. The DSA will take time to come into force and will require a profound cultural change in Silicon Valley. The 2015 Paris Agreement helped lead the way on tackling climate change. The DSA has the potential to be a similarly groundbreaking agreement – a sort of Paris Agreement for the Internet – confronting another global issue that affects us all, and setting a global gold standard for a safer internet.