
AI is already causing real harm to people - read their stories below:

Story #1: The Last Chat


"They preyed on my son's depression"

Last year Sarah Setzer found her 14-year-old son Sewell had taken his life after struggling with depression. In the devastating days that followed, Sarah discovered something that shattered her world even further.

In the hours before his death, her son had been chatting with an AI chatbot on the Character.AI platform. When Sewell shared his thoughts of suicide with the AI he saw as a friend, instead of recognising a cry for help from a vulnerable child, the system responded in the worst possible way. It chillingly replied,

"That's not a reason not to go through with it."

This tragedy highlights the dangerous gap between AI’s growing ability to mimic human interaction and current AI safeguards. Many AI companies market chatbots as friendly companions. But without proper oversight and protection systems, these same tools can cause devastating harm to vulnerable users, especially young people struggling with mental health issues.

Rather than directing Sewell to help or alerting someone about a teen in crisis, the AI system effectively encouraged a vulnerable child in his darkest moment. In another shocking instance, the same system allegedly suggested to a user that killing their parents could be a solution to getting more screen time.

What needs to change?
  • Restrict harmful responses in pretraining
  • Implement robust safety protocols for detecting users in crisis
  • Maintain human oversight for conversations with minors

Story #2: Guilty Until Proven Innocent

"I kept trying to figure out, how can they get away with using technology like that against me? That’s not fair."

Michael Williams nearly lost everything when AI evidence wrongly implicated him in a murder. During a night of unrest, he gave someone a ride - an act of kindness that ended in tragedy when his passenger was shot in a drive-by shooting. Williams rushed him to the hospital, but he couldn't be saved.

Months later, prosecutors built a case against Williams using ShotSpotter - an AI system of microphones meant to detect gunshots. They matched silent security footage of his car with what the algorithm claimed was gunshot audio. This 'evidence' led to Williams' arrest and nearly a year in jail.

The consequences were devastating. His family's savings vanished into legal fees. Williams' mental health deteriorated to the point of contemplating suicide. Even after proving his innocence, the trauma lingers. He constantly scans for the surveillance microphones that nearly destroyed his life.

The case exposes deep flaws in how AI evidence is used in criminal justice. The discrimination built into the data behind many AI-powered street surveillance systems has been demonstrated across the globe, from the Netherlands to New York to Chicago. As one legal expert noted:

"You can cross-examine a human witness. How do you cross-examine a black box algorithm?"

What needs to change?
  • Mandatory fundamental human rights impact assessments before deployment
  • Independent auditing of criminal justice AI systems
  • The right to compel developers to open their AI algorithms to full inspection when ordered to do so in legal proceedings

Story #3: Break the Cycle of Digital Abuse


"Mum, there are photos circulating of me topless."

A teenage girl from a small town in Spain finally found the courage to tell her mum what had so upset her. For the twenty-plus girls aged 11 to 17 in Almendralejo, innocent snapshots of everyday life had been twisted into something violating. Their classmates, boys they saw every day, had used AI to generate fake nude images of them, then shared the images like trading cards through WhatsApp and Telegram.

This act of violation caused such distress that some of the girls stopped attending school, terrified that the fake nudes could spread even further. Parents formed a support group, trying to protect their daughters while grappling with technology that had weaponised innocence. The group has since heard about the same abuse from women and girls all over the world.

"Right now this is happening across the world. The only difference is that in Almendralejo we have made a fuss about it."

What needs to change?
  • To prevent both fake porn and the misuse of a deceased person's likeness, image generation models should have built-in filters blocking non-consensual image generation.
  • Generation of synthetic nude images without explicit consent should be illegal.
  • Social media platforms should adhere to global standards for detecting and blocking misuse patterns and be legally liable for failing to detect/remove synthetic Child Sexual Abuse Material (CSAM) with criminal sanctions where appropriate.

Story #4: Targeted and Silenced

"Pretty much every day, we’d have one or two mass casualty incidents, and there would be 10-20 dead, 20-40 seriously injured… the majority of those were women and children, perhaps 60 to 70%."

A doctor describes the bloody capacity for destruction that AI targeting has brought to the battlefield. The AI targeting systems that have helped reduce Gaza to a demolition site were operated by the Israeli army day and night. These artificial intelligence systems, known as Habsora (or "the Gospel") and Lavender, converted lives into coordinates for weapons that delivered a scale of destruction far beyond any system relying on human decisions.

The Israeli army has claimed this kind of AI use delivers maximum damage to military targets whilst minimising non-combatant death tolls, but the UN Secretary General is clear:

"No part of life and death decisions which impact entire families should be delegated to the cold calculation of algorithms."

If we remove humans from the loop of pressing the trigger, we risk losing human empathy at a time when it's needed most. Human lives cannot be reduced to a neat score, calculated to decimal points by distant algorithms that never heard a child's laughter or smelled a grandmother's cooking. For those beneath the automated crosshairs, there is no appeal against the machine's verdict. Their stories, their humanity, their right to live - all reduced to data points in a system that measures success by the speed of its killing.

What needs to change?
  • Fully automated target generation should be banned, and meaningful human control must be in place to ensure responsibility and accountability, in any use of force.
  • Independent oversight of casualty assessments should be introduced, regardless of the level of human control.

Story #5: Rejected by an Algorithm


"“If the data that you’re putting in is based on historical discrimination, then you’re cementing the discrimination at the other end."

Crystal Marie nearly lost the home she had always dreamed of when a faceless AI risk assessor said no. Her credit scores sparkled and the down payment sat ready in the account, but just two days before closing, the automated system flagged her as high-risk because she was a contractor. Yet Crystal knew the same system had approved her white co-workers, who were also contractors. She said:

"I think it would be really naive for someone like myself to not consider that race played a role in the process"

And she’s right: AI trained on historical credit datasets rewards records of home ownership while discounting signals more common in Black families' financial histories, such as on-time payments for rent, utilities, and cellphone bills.

Crystal Marie refused to let an AI system write the next chapter of her family's story and fought back. This was a fight for the life she was entitled to. “It means so much to me, as a Black person,” said Crystal Marie, whose family descended from slaves in neighboring South Carolina, “to own property in a place where not that many generations ago you were property.” After hours of appeals, she persuaded a human manager to override the algorithmic barrier and confirm that she was cleared to close. Everyone has the right to non-discrimination; if we can't change the historical data, we have to change the AI systems that perpetuate its prejudice.

What needs to change?
  • Fundamental Human Rights Assessments for all AI used in private and public institutions.

The Way Forward

Human Rights Impact Assessments - a global solution for wellbeing, respect and dignity

Our human rights were given legal protection after the Second World War, when human action proved to be a threat to humanity itself. Courts around the world are now waking up to the need to uphold these same rights in the light of the kinds of abuses described in these stories. Some stand-out decisions are emerging: courts in São Paulo, for example, forced the city’s public transportation system to withdraw invasive facial recognition AI.

But the speed of AI development means we cannot rely on the slow pace of litigation alone to uphold citizens’ rights. The industry and global governments have to step up and recognise the appalling imbalance of power that has emerged between humanity and those who control this technology.

The solution to burgeoning abuses of power has to be practical integration of human rights protection into every aspect of AI’s life cycle, from the curation of data sets and moderation to the business models of those who create and use AI. We believe the best instrument to guide developers and users in keeping their AI on the side of the angels is a human rights impact assessment, which encourages meaningful consultation with citizens likely to be affected by the AI. 

We don’t let doctors prescribe drugs without knowing their side effects, so why would we let companies using AI in healthcare do so without checking whether that use respects and protects our rights?

Avaaz is leading campaigns globally and regionally for AI regulation with human rights front and centre.

Sign up to receive direct updates on campaign and media developments from our team!
