Many social media platforms such as Instagram and LinkedIn use content moderation systems to suppress images that are sexually explicit or deemed inappropriate for viewers.
But what happens when these systems block images that are not at all sexual in nature?
A recent investigation from The Guardian, produced in partnership with the Pulitzer Center’s AI Accountability Network, revealed that the artificial intelligence algorithms used in content moderation systems carry an implicit gender bias that often leads to photos of women being sexualized.
For example, The Guardian ran a photo of a pregnant woman’s belly through Microsoft’s algorithm, which rates the “raciness” of an image. The algorithm was 90% confident that the photo was “sexually suggestive in nature.”
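For readers curious about the mechanics, scores like this come from cloud image-classification APIs that return a numeric “racy” rating for a given image. Below is a minimal sketch of such a call, assuming Microsoft’s Azure Computer Vision “Adult” analysis feature; the endpoint, key, and image URL are placeholders, not values from the investigation.

```python
import requests

# Placeholders: a real call requires your own Azure Computer Vision resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<subscription-key>"

def racy_score(image_url: str) -> float:
    """Request the 'Adult' visual feature and return the racy score (0.0-1.0)."""
    resp = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Adult"},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
        timeout=30,
    )
    resp.raise_for_status()
    adult = resp.json()["adult"]
    # The response also includes adultScore and boolean flags such as
    # isRacyContent; here we return only the 0-1 racy confidence.
    return adult["racyScore"]

# A returned score near 0.9 corresponds to the "90% confident" figure quoted above.
```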
The investigation showed that photos of women performing everyday activities, such as exercising or receiving medical exams, are also misclassified in this way.
The Guardian report is one of the latest to show how gender bias remains rampant in AI. The technology draws from data collected and labeled by humans, often men, who may bring their own assumptions, perspectives and conservative biases to the work.
This kind of bias is harmful because, among other things, it leads to more photos being “shadowbanned,” meaning a platform limits the reach or visibility of a post. Shadowbanning differs from blocking: when a post is blocked, the user is notified, whereas when a post is shadowbanned, the user is left unaware.
When the criteria for shadowbanning are set by a biased AI algorithm, photos of women doing everyday activities end up suppressed without their knowledge because of false flags for indecency.
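Platforms do not publish their moderation logic, but the basic failure mode is easy to see: a single thresholded classifier score can silently gate a post’s reach. The sketch below is purely hypothetical; the threshold value and function names are invented for illustration.

```python
RACY_THRESHOLD = 0.5  # invented cutoff; real platform thresholds are not public

def distribution_decision(racy_score: float) -> str:
    """Illustrative only: route a post based on a single classifier score."""
    if racy_score >= RACY_THRESHOLD:
        # The post stays visible to its author but is excluded from feeds,
        # search, and hashtag pages. The user is never notified.
        return "shadowban"
    return "distribute"

# If the classifier systematically over-scores photos of women, that biased
# number propagates directly into reach decisions like this one.
```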
The Guardian conducted a test on LinkedIn using two photos, one showing two women in underwear and one showing two men in underwear. Microsoft’s tool classified the picture of the women as racy with a 96% score; the picture of the men was classified as non-racy with a score of 14%.
Consequently, the photo of the women received eight views within one hour and the picture of the men received 655 views.
Shadowbanning is especially harmful to social media content creators who rely on viewership. The Guardian spoke to a photographer named Bec Wood who documents breastfeeding, pregnancy and other moments that capture motherhood. She has been forced to censor some of her photos to ensure that they do not get deemed sexually explicit and restricted on Instagram.
The Guardian also spoke to Carolina Are, a pole dance instructor who discovered that her photos do not show up on Instagram’s Explore page or under the hashtag #FemaleFitness.
“But then if you looked at hashtag #MaleFitness, it was all oily dudes and they were fine,” she said. “They weren’t shadowbanned.”
Social media users have long pushed back against gender bias in content moderation through movements such as Free the Nipple. However, the investigation into AI content moderation algorithms offered by Microsoft, Amazon and Google foreshadows a broader set of problems for women on social media.
“They get a disadvantage forced upon them,” said Leon Derczynski, a professor of computer science at the IT University of Copenhagen. “They have no agency in this happening and they’re not informed that it’s happening either.”