
Big Tech companies are threatening the AI Act

As the EU finalises the Artificial Intelligence Act, Big Tech companies are doing everything they can to strip out key safety features in order to shield their profits.

Companies are lobbying hard against:

  • Fundamental Rights Impact Assessments by private companies.

  • Regulatory oversight of general-purpose AI systems, such as ChatGPT, pushing instead for soft regulation like voluntary Codes of Practice.

The moment is NOW

This is a crucial moment: politicians must choose whether to side with corporate greed or to stand up for the people they were elected to protect.

A Call for Strong Protections of Fundamental Rights in the European AI Act

No concessions to industry lobbying

To the Participants of the Trilogue Discussions on the Artificial Intelligence Act:

We, the undersigned, need you to listen to the alarm we are sounding: the EU must not make a historic mistake - one that will leave people exposed to a hurricane of harm from AI.

We have a chance to build a bright future for AI, but the dangers of a dark future are already emerging. We have already seen unchecked AI harm our most basic rights to employment, privacy, freedom, and even life.

Tech companies are pushing hard for a weak AI Act. They want to strip key safety mechanisms from the law, a move that could leave millions vulnerable. In two weeks, EU leaders will decide whether to side with Big Tech by throwing out fundamental human rights protections and regulatory oversight of general-purpose AI models (GPAI), sweeping the problems of AI under the carpet with a voluntary Code of Practice.

The lobbying from some companies has been all-pervasive. They've tried to claim that our rights will be protected by other EU laws and regulations. They're wrong. They've tried to claim that these measures are unnecessary because AI harms are not here yet. The mass of evidence that researchers and so many of this letter's signatories have exposed proves them wrong again.

We know lobbyists are trying to claim that protecting our rights will come at the cost of losing jobs in the race for innovation. But they're wrong here too. An AI Act that includes protections like Fundamental Rights Impact Assessments (FRIAs) on AI use would be a boon, not a burden, to businesses. The Investor Alliance for Human Rights, representing over US$1.66 trillion (€1.55 trillion) in assets under management and advisement, fully supports FRIAs - as do 89% of the Spanish SMEs Avaaz polled that are developing or using AI, showing industry readiness.

Finally, in a last throw of the dice, we are seeing furious lobbying by the developers of general-purpose AI models, including the suggestion that it is enough to let them self-regulate through voluntary codes of conduct. Let us be clear: voluntary measures such as self-administered Codes of Conduct would be the death knell for effective enforcement of the AI Act. We saw the utter failure of previous EU voluntary codes of conduct to rein in social media platforms' promotion of disinformation and hate speech. The significant changes that social media platforms have made to their services in recent months happened only because they now have a clear legal obligation under the Digital Services Act to assess and then mitigate the harms of their services.

The challenges relating to general-purpose AI models and generative AI actually strengthen the case for fundamental rights impact assessments. FRIAs would require any deployer (called a 'user' in the Act) to analyse in advance the adverse human rights impacts of how the AI is used, and to consult both the people the AI would affect and their own development teams or the sellers of AI products on whether it has been tested and audited against the specific harms identified. FRIAs thus provide a safeguard across the entire life cycle of AI and focus on the actual context of use, which makes harms more concrete to test for.

So we are calling on you not to give in to lobbying and to commit on behalf of the citizens you collectively represent to:

  1. Retain the compulsory Fundamental Rights Impact Assessments as described in Article 29a of the EP proposal for all state and private uses of AI defined as high-risk in Annex III;

  2. Resist any suggestion that self-administered voluntary codes of conduct are sufficient to regulate general-purpose AI models.

The final discussions are here. When you go into those meetings, know that in addition to all the names on this letter, nearly 240,000 Europeans are standing behind you.

So we urge you to stand strong and fight for human rights in the AI Act.

Signatories:

Avaaz Foundation

Civil Liberties Union for Europe

Defend Democracy

digiQ

Ekō

European Public Service Union (EPSU)

Fair Vote UK

Future of Life Institute

Institute for Strategic Dialogue (ISD)

#jesuislà

Corporate Europe Observatory (CEO)

AI Forensics

Over 250,000 Europeans are calling for action

Our call is backed by the biggest petition on artificial intelligence to date, and it is loud and clear:

We need an AI Act that serves people, not the greed of some tech companies.


AI can help us flourish - if we act now

AI can drive positive change for all of humanity, from better healthcare to new, enjoyable jobs, but only if politicians choose to stand up and truly protect people.

These are the actionable solutions we want them to commit to:

  • Fundamental Rights Impact Assessments by all deployers of high-risk AI, including private companies and public institutions. This would ensure private companies don't get a free pass when it comes to assessing the impact of their AI systems on the people whose lives will be affected.

  • No self-regulation of general-purpose AI. When building a house, we have to know that the concrete and bricks we use are safe, right? General-purpose AI systems are the concrete and bricks for thousands of AI products: we must ensure they are safe, with enforceable regulatory oversight that holds providers accountable for the AI they release.

We have been here before: look where 20 years of self-regulation got us with social media, leaving us scrambling to put the genie back into the bottle. Repeating that mistake would shut the door on a future where AI could be a safe, reliable tool for humanity, benefiting people equally.

Sign the petition!

The human cost of artificial intelligence

How our human rights are already affected by artificial intelligence


The rights of children and to family life

"Losing a child is the worst punishment a parent can receive." - Marje, Netherlands

The ProKid 12-SI AI system profiled children in the Netherlands as being at risk of committing crime, using data on their living environment, their involvement in crime as witnesses, and criminal behaviour by family, friends, or other associates. The system's automated risk assessments misassigned the risk level of one third of the children it evaluated.

As a result, over 800 children were wrongly registered on police systems, referred to youth care and child protection services, and were even removed from their families. Consequently, these children may not only feel unjustifiably punished, ‘unforgiven’, and hampered in their future life choices, but may also develop low self-worth and negative judgmental attitudes towards others.


The rights to life and to health

"He would still be here." - Widow of suicide victim

The tragic words of a Belgian widow, who blames her husband’s suicide on his relationship with a generative AI chatbot. The apparent humanity of generative AI – which can present itself as an emotive friend, establishing an intimate bond and dependency – can lead to tragic results when interacting with vulnerable adults.

But the danger to life from poorly deployed AI extends far beyond generative systems. A predictive algorithm used by Spanish police forces to identify women at risk of domestic violence massively underestimated that risk, leaving women who had asked for help exposed to violence and even death.


The right to be free from discrimination

“We often assume machines are neutral, but they aren’t.” - Joy Buolamwini, MIT

AI can replicate existing discrimination based, for example, on gender, disability, or race. In Croatia, an AI system reduced employability rankings for women with young children, while for men the same parameter was not even displayed or taken into account.

AI has frequently been found not to recognise people of colour in facial recognition applications. For example, one AI program developed in France showed error rates of 1 in 10,000 for white women, but 1 in 1,000 for black women.

Bias also appears in healthcare: AI-based skin cancer detection shows lower detection rates for people of colour because its training data contained overwhelmingly white examples, and gender bias has been revealed in AI tools screening for liver disease, with algorithms twice as likely to miss disease in women as in men.


The right to liberty, personal freedom, and justice

“Jesse has never been convicted by a judge of a street robbery, but because he is on the list, he is arrested whenever someone is wanted for a street robbery.” - Eline, criminal lawyer

The use of AI in policing in Europe is already affecting our vital right to liberty. In the Netherlands, the TOP 600 automated risk-profiling system wrongly labelled people as potential criminals.

Just take a moment to imagine this: you can be repeatedly arrested, have your home searched, and be constantly followed by police as part of what the police themselves call “Very Irritating Policing”, and you have no idea why. Imagine the injustice you would feel. This could have been prevented if the police had been required to carry out a Fundamental Rights Impact Assessment (FRIA) before implementing their AI system. The young people targeted would never have had to suffer this injustice.
