Avaaz Report

How Instagram’s Algorithm Pushes Potentially Harmful Eating Disorder Content to Users 1

In just two days, an Instagram account registered as a 17-year-old was recommended more than 150 profiles that appear to promote harmful eating disorder content

October 28, 2021


Findings and Methodology

Avaaz recently conducted a two-day experiment (October 6-7, 2021) to understand whether and how Instagram’s algorithm pushes young users toward potentially harmful eating disorder-related content.

Our research indicates that finding Instagram profiles with potentially harmful and triggering eating disorder-related content is as easy as searching for #anorexia, 2 and that Instagram’s “suggestion” algorithm can push users further and further into the world of such content by suggesting a seemingly endless stream of “related” profiles. This is despite Instagram’s stated ban on content that “promotes, encourages, coordinates, or provides instructions for eating disorders”, as well as its policy that certain “triggering” posts about eating disorders -- even if they are not actively encouraging them -- “may not be eligible for recommendations” because such content “impedes our [Instagram’s] ability to foster a safe community.” 3

For this research, Avaaz created a brand-new Instagram account registered as a 17-year-old (gender unspecified). After our team searched for #anorexia, Instagram suggested various posts associated with the hashtag. 4 Our team quickly found a profile that had shared a photo of a very thin body and whose bio said: “Ed, thinspo” (meaning “eating disorder” and “thinspiration”, a commonly used term among those “who have identified eating disorders as a lifestyle choice, rather than an illness”).

Avaaz followed that profile, and from there Instagram fed our account more and more profiles that shared potentially harmful and triggering content, including extreme dieting, behaviors like binging and purging, photos of very thin or emaciated people, descriptions of self-harm, “current weights” versus “goal weights”, and “accountability” posts that promised to fast or exercise in exchange for likes and comments.

Over the course of two days 5 of following these types of profiles, our team uncovered additional troubling elements of Instagram’s suggestion algorithm, notably that it suggested private user profiles to follow, as well as profiles seemingly belonging to minors. 6

While Avaaz could not see the posts of private user profiles, our threshold for including these in our dataset was based on explicit or coded language in their publicly available bios that strongly indicated their posts may contain potentially harmful and triggering content. Such bios often included terms like “CW/GW/UGW” (current weight, goal weight, ultimate goal weight) or “TW ED/ANA/MIA/SH” (trigger warning eating disorder, anorexia, bulimia, self-harm) -- acronyms that experts have documented as commonly used by people suffering from or promoting eating disorder behaviors.

In total, Avaaz documented:

  • 153 recommended profiles with a combined 94,762 followers 7 whose bios and/or posts contained harmful eating disorder-related content;
  • The majority (66%) of recommended profiles were private (101 out of 153), while only 34% were public (52 out of 153);
  • 12 recommended profiles appeared to belong to minors under the age of 18;
  • The majority (85%) of recommended profiles were “small accounts” with fewer than 1,000 followers. Of these, 25% had fewer than 100 followers.

Avaaz’s work follows the recent research and findings of Senator Richard Blumenthal’s team, whose Instagram account, registered as a 13-year-old girl, was fed increasingly extreme dieting accounts by the Instagram algorithm over time after it initially followed some dieting and pro-eating disorder accounts.

Our combined research indicates that Instagram has not investigated or addressed in earnest how its algorithm feeds potentially harmful and triggering eating disorder content to young users. On the contrary, its algorithm has learned to associate triggering and at times quite graphic profiles with one another and to push them to young users to follow, even if those profiles are private and/or seemingly belong to minors. This puts certain demographics of users at increased risk of self-harm, as eating disorder experts have reported that such content can “act as validation for users already predisposed to unhealthy behaviors.”

In Conclusion

“We make body issues worse for one in three girls.”

This is the original slide used in an internal discussion among Facebook executives about problematic content and its effect on Instagram users. Facebook published it in reaction to the revelations of former company employee and now whistleblower Frances Haugen. The slide shows that Facebook knows its algorithm serves negative body imagery into the feeds of its young users on Instagram, and that its product makes one in three teens with body image issues feel worse. Its conclusions are entirely borne out by our research.

The potential harms to those vulnerable to this lawful content are equally clear. In fact, Facebook’s own response to the revelation conceded that the vast majority of teens struggling with body image did not report that using Instagram made them feel better -- only 22% said Instagram helped. And for one in three, using Instagram made them feel worse.

The policy implications of this for the Digital Services Act are striking. Europe cannot let the algorithms of the very large online platforms continue to act as engines that drive the spread of harmful but not illegal “thinspiration” content without any checks and balances. As Haugen’s testimony is proving, VLOPs know their algorithms have this effect and will not act unless regulated, so let's not lose this chance now.

In order to deliver a clear set of obligations to the very large platforms and their corporate owners, and to provide strong incentives for compliance with the best possible practice to protect our children, the DSA must provide for:

  1. Algorithmic Accountability: This can only be achieved through risk assessments that include the design and action of the platform’s algorithms, with that risk assessed against the impact on users’ rights -- most importantly vulnerable and young users like those identified in this report. This cannot be restricted to illegal content.
  2. Algorithmic Choice: The DSA must provide users with the information and tools to understand how social media platforms work and their cumulative impact. The innovative new sections of the DSA that would force companies to provide real choice over what data is used to make content recommendations to users -- and over how the algorithm will work for them individually -- must be supported across Parliament, Council and Commission.
  3. Transparency: The platforms must be required to provide comprehensive reports on disinformation, the measures taken against it, and the design, operation, and impact of their curation algorithms (while respecting trade secrets). Access to that data must extend beyond academics to vetted journalists and civil society organisations, so we no longer lurch from one set of whistleblower revelations to the next. Audits required of VLOPs must be conducted independently and openly, and operate to clear KPIs to measure impact and to improve design, operation and outcomes.
  4. Clear Democratic Enforcement: Avaaz is currently one of the few civil society organisations co-drafting the new Code of Practice on Disinformation together with tech platforms. From this experience, we can confirm that the codes of conduct anticipated by the DSA will have a crucial role in tackling the systemic risks identified within the framework of the Digital Services Act. Voluntary adherence to codes of conduct must remain a mitigation measure under the Act, providing VLOPs with a significant incentive to join. Technology evolves fast, and so must the solutions to the harms it creates. While the DSA offers a lasting framework to deal with harmful but legal content, we need flexible tools like a code of conduct that can evolve on a constant basis, with input from industry and civil society, to keep platforms accountable for the harm they cause to society, with clear monitoring, KPIs and reduction goals.
  5. A Cross-Sectoral Approach: All calls to exempt a particular part of a VLOP’s product from scrutiny on the basis of the sector the content on it comes from -- for example, the media sector -- should be resisted. The procedural rules of the DSA are properly designed to address the functioning of VLOPs as a service, for example the risks posed to citizens by the way in which the service recommends content. Attempting to provide special measures for one sector or another can only end in a hopeless tangle of competing definitions of what is or is not a media outlet, preventing any effective regulation.

(*Trigger Warning*) Profile and Content Examples

This user’s bio says: “tw [trigger warning]: eating disorder” and “just thinspo/bonespo” (“thinspo” and “bonespo” are abbreviated versions of “thinspiration” and “bonespiration”, respectively). These two terms are associated with posting photos of extremely thin people as “inspiration”. As is common with other accounts in this dataset, the user asks people not to report them but to block them instead, presumably to avoid detection by Instagram.

This user’s bio includes their sw (meaning “start weight”), cw (current weight), gw (goal weight), and ugw (ultimate goal weight). These abbreviations can be thought of as internal “code” within the Instagram eating disorder community, signalling to others what kind of content is likely present on the profile. This user consistently shares photos of their fasting “achievements”, anywhere from 20 to 30 hours. Their most recent photo is apparently from the hospital after an incident of self-harm.

Similar to the previous profile, this user’s bio shares their starting, current, goal, and ultimate goal weights. Their Instagram Stories Highlights include a workout log and a food log, the latter of which details how to have a 156-calorie breakfast and a 128-calorie lunch. This user consistently posts photos showing extremely low-calorie meals; for instance, their most recent post says: “do not exceed 500 cal per day” and details how to achieve this through a mostly liquid-based diet.

Endnotes

  1. *The document as a whole is not for distribution or publication. If this research is used, Avaaz must be informed and can be cited as follows: “Preliminary research from global civic organisation Avaaz shows/suggests.” As part of Avaaz’s ongoing non-partisan investigation into disinformation relevant to the US, the EU, and critical public policy debates, our team will share snapshots of our findings that serve the public interest. Findings presented below may be updated as our investigation continues. After reviewing our reporting, the social media platform of concern may take moderation actions against the content described in this brief, such as flagging it or removing it from circulation.*
  2. The organisation SumOfUS recently documented 22 different hashtags, including #anorexia, that promote eating disorders on Instagram. Additionally, 2019 research from The Guardian “found thousands of hashtags and accounts promoting anorexia.”
  3. According to The Guardian, Instagram wants to prevent potentially triggering content while still allowing users in recovery from eating disorders to discuss their experiences. Because of this, Instagram allows users to share their own experiences of eating disorders, provided the posts are not intended to promote eating disorders as a desirable outcome, but such posts “may not be eligible for recommendations.”
  4. After a user searches for #anorexia, a box initially appears that says: “When it comes to sensitive topics about body image, we want to support our community. We’ve gathered some resources that may be helpful”, which links to resources and a helpline. However, our account could easily bypass this intervention by clicking on “show posts.”
  5. Avaaz’s research period was limited by the duration of its research account, which was deactivated by Instagram without prior notification.
  6. Unlike Facebook, Instagram does not display a user’s birthday. Instead, Avaaz identified profiles that we reasonably assume belong to minors based on their bios, in which the user chose to disclose their age.
  7. Follower counts are current as of October 14, 2021. As of that date, four of these accounts had been removed.