
AI is already causing real harm to people in Europe and their rights - from worse liver disease detection rates for women, to children wrongly torn from their families.

The EU has a chance to change all that, with the Artificial Intelligence Act - but only if it includes Fundamental Rights Impact Assessments (FRIAs). These test if AI threatens our rights, and are one of the best ways to tame this new digital wild west.

Unfortunately, powerful tech lobbies and their allies in the EU are working hard to block them at the last minute.

During the final stages of political negotiations on the Act, we're calling on decision makers to stand strong and fight for a future that protects Europeans and their rights. This is a historic opportunity that we cannot afford to miss.

Help us win this fight for our futures:


AI is already affecting our most fundamental rights and protections when it is used without regulation, assessment, or accountability. Harms from AI are not only future possibilities. They are happening right now.


The voices of people from across Europe on what this groundbreaking legislation must include if it is to prevent the harms we are already seeing proliferate from the unregulated use of AI.


The final decision-making on the AI Act is now happening. There are brave new proposals from the European Parliament that could secure our future. It's vital that these are not blocked by the European Council or Commission. If that happens, we betray the next generation at the one point we could have protected their futures. Our solutions are crucial for the AI Act - so we can reap the benefits of artificial intelligence, whilst avoiding the harms we see from poorly deployed AI.

Tens of thousands have already added their voices calling for human rights protections. Add your name too!

Sign the petition!


The human cost of artificial intelligence

How our human rights are already affected by artificial intelligence

The rights of children and to a family life

"Losing a child is the worst punishment a parent can receive." - Marje, Netherlands

The ProKid 12-SI AI system profiled children in the Netherlands as at risk of committing a crime, using data on their living environments, their involvement in crime as witnesses, and criminal behaviour by family, friends, or other associates. The automated risk assessments misassigned the risk level of one third of the children evaluated.

As a result, over 800 children were wrongly registered on police systems, referred to youth care and child protection services, and were even removed from their families. Consequently, these children may not only feel unjustifiably punished, ‘unforgiven’, and hampered in their future life choices, but may also develop low self-worth and negative judgmental attitudes towards others.

The rights to life and to health

"He would still be here." - Widow of suicide victim

The tragic words of a Belgian widow, who blames her husband’s suicide on his relationship with a generative AI chatbot. The apparent humanity of generative AI – which can present itself as an emotive friend, establishing an intimate bond and dependency – can lead to tragic results when interacting with vulnerable adults.

But the danger to life of poorly used AI extends far beyond generative AI. A predictive algorithm used by Spanish police forces to identify women at risk of domestic violence massively underestimated the level of risk, leaving women who had asked for help vulnerable and exposed to violence and even death.

The right to be free from discrimination

“We often assume machines are neutral, but they aren’t.” - Joy Buolamwini, MIT

AI can replicate existing discrimination, based for example on gender, disability, and race. AI reduced employability rankings for women with younger children in Croatia, whereas for men the parameter was not even displayed or taken into account.

AI has frequently been found not to recognise people of colour in facial recognition applications. For example, one AI program developed in France showed error rates of 1 in 10,000 for white women, but 1 in 1,000 for black women.

Bias also appears in healthcare: AI skin cancer detection shows lower detection rates for people of colour because its training data contained overwhelmingly white examples, and gender bias was revealed in AI tools screening for liver disease, with algorithms twice as likely to miss disease in women as in men.

The right to liberty, personal freedom, and justice

“Jesse has never been convicted by a judge of a street robbery, but because he is on the list, he is arrested whenever someone is wanted for a street robbery.” - Eline, criminal lawyer

The use of AI in policing in Europe is already affecting our vital right to liberty. In the Netherlands, the TOP 600 AI automated risk profiling system wrongly labelled people as potential criminals.

Just take a moment to imagine this: you can be repeatedly arrested, have your home searched, and be constantly followed by police as part of what they call “Very Irritating Policing” - and you have no idea why. Imagine the injustice you’d feel. This could have been prevented if the police had been required to conduct a Fundamental Rights Impact Assessment (FRIA) before implementing their AI system. The young people targeted would never have had to suffer this injustice.


The people's call for human rights protections in the EU AI Act

Letters from Avaaz members across Europe to decision makers.

I’m more than worried about our future, especially when it comes to AI. In my work with young children, I see our future politicians, lawyers, nurses, and teachers. I wish and hope that their future will be as safe and sound as possible when it comes to protecting their privacy and that they will experience fairness and non-discrimination when it comes to laws concerning AI. 
As a fellow human being, I ask you to put human rights at the forefront of the upcoming AI legislation and put people before Big Tech company profits.
We are at the beginning of the era of AI and this is the right time to think carefully about the consequences before acting! Please do not miss this opportunity to ensure AI respects human rights!

Maria from Finland

Please remember, AI and the needs of business are not more important than human rights. Let's use AI to improve the quality of life on this planet rather than leave it to the whims of corporations. The decisions we make now will shape the lives of generations to come.

Helen from Belgium

Human rights are the basis for a stable world, healthy environment, and peace. Therefore, I think it is very important to put human rights at the centre of the new AI legislation. Thank you!

Sijan from the Netherlands

It is very important that the new legislation puts human rights at the forefront and assures privacy, justice, and non-discrimination for everyone. I need you to protect us from the big technology companies that think only for their own benefit.

Yolanda from Spain

In your legislative procedures concerning the Artificial Intelligence Act, I urge you to consider and put human rights at the forefront. Make sure that privacy, fairness, and non-discrimination are the markers of AI used in Europe. We should not fall victim to corporations and greed that knows no bounds. Desire for individual profit fuels wars and hatred, and AI could become a weapon in such instances. Help preserve the humanity of Europe by distancing from corporate tech and US pressures.

Ljubica from Croatia

Imagine a world where technology abided by human rules and human legislation, from government policies to human rights. That’s the future I want to be in. The future where we leverage the power of AI but in a secure manner for us and our world.

Sara from Portugal

It is clear that AI can be of benefit to humans. However, AI has also been shown to pose various and multiple risks. I believe we should try to learn from some of the mistakes of social media and make preparations NOW to protect ALL human rights, before further damage is caused.
You have the power to make a difference here for the benefit of all mankind. Please be courageous and vote to protect all human rights instead of just giving further financial benefits to a tiny section of the privileged population who already have more wealth than is justifiable, especially as those individuals and corporations also seem to avoid paying their fair share of taxes! Now is the opportunity to stand up for the global population. As Europe is a world leader, it is our duty to make the necessary legislation now and lead the way in social justice.

James from Sweden

AI is progressively entering all major decision-making infrastructures that are at the core of our society. If human rights were not to be placed at the forefront of upcoming AI legislation, what kind of society would you foresee developing in the near future?  And in the more remote future?
What kind of world would you like to shape next, for you, your loved ones and your fellow citizens? This is your responsibility. You and all of us will have to live with the consequences.

Ada from France

As the impact of AI is not yet easy to assess, I plead with you not to repeat the mistakes of the past and, for once, put the interests of your citizens before the interests of (big) commerce. Put controls on the development and restraints on the implementation of AI to prevent commercial interests getting the better of us. I fully understand that AI can be very beneficial for society at large, but we can be sure that putting commerce before humans will again lead to your people suffering from results none of us can yet estimate.

Niels from Germany


For the vital measures in the EU AI Act

European lawmakers have a unique opportunity to decide on measures that could define the future of artificial intelligence

Close the loophole in the AI Act so that human rights are the gold standard to assess AI risks

We don’t let doctors prescribe drugs without knowing their side effects, so why would we let companies using AI in healthcare do so without checking whether that use respects and protects our rights?

But unless the AI Act changes the law, and adds a Fundamental Rights Impact Assessment requirement for all deployers of high risk AI, the law will have a massive loophole.

The success or failure of these measures will stand at the heart of the AI Act’s determination to put humans first.

We are calling for:

  • Mandatory public fundamental rights impact assessments (FRIAs) for all high-risk AI, with criteria derived from the EU Charter of Fundamental Rights, set to assess reasonably foreseeable impacts on fundamental rights, democracy, the rule of law, and our environment. These should be undertaken by all deployers of AI, both public authorities and private companies.

  • Lighter-touch regulation with real penalties. Self-regulation alone is not enough, but we do not need to reinvent the wheel. Many examples of FRIAs already exist, and data protection laws provide a model for making sure they happen. Impact assessments can be conducted by companies using AI and held available for public inspection - with fines to follow if their AI harms people. The Commission, an AI Office, or national regulators will have to help AI users, especially SMEs, understand how to do an assessment. But if we are committed to human rights in the age of AI, then everyone will need to skill up - just as we did when data privacy laws were introduced.

  • Consultation. As the risks of AI are dependent on the way it is used, these assessments need to involve real dialogue between users of AI and those affected by it. We know that bias in AI can cause discrimination but it can also be avoided and audited to be kept at bay, as long as you understand the risks before you damage people's life chances.

  • Mechanisms must be included in the Act to allow new uses of AI to be declared as high risk, as new risks emerge.

Empower people to stand up to AI

Often, people are not even aware an AI system has been involved in a decision made about them. This, combined with the complexity of AI systems, can make it almost impossible for people to stand up for their rights.

The AI Act can fix this by requiring:

  • Transparency. The use of AI - and, where an FRIA has been conducted, its scope and results - should be easily accessible both to regulators and to those subject to the AI’s decisions.

  • Explainability. People should understand how AI has been used to make a decision about their lives. AI developers and deployers should explain how the AI made a decision, including where the data used to train the AI came from and whether it is representative of the communities the AI will affect.

  • Public reporting of incidents by developers, which must always include an assessment of likely affected groups and notification of the incident to them or their representative bodies where possible.

  • Redress. Make it easy for people to lodge complaints with national AI authorities or an AI Board and get a legally binding result, and allow their representation by relevant civil society bodies if needed.

Human rights belong to everyone

The EU is based on a strong commitment to promoting and protecting human rights, democracy and the rule of law worldwide. Just as the AI Act should work to protect people in the EU from AI developed outside of the EU, including generative AI, so the EU should not permit export of AI that it would not permit within its borders.

This means there should be:

  • No export of AI banned in the EU.

  • No use of AI provided by a company outside of the EU in a high-risk area without a fundamental human rights impact assessment.

  • No export of AI in high risk sectors unless the exporter has carried out a fundamental human rights impact assessment that confirms that the AI export use does not pose any significant risk given the context of the country in which it is deployed.

Avaaz is leading campaigns globally and regionally for AI regulation with human rights front and centre.

Sign up to receive direct updates on campaign and media developments from our team!

