Ugly Sexist AI

Trust us, you don’t need to be a techno-nerd to understand this podcast. We look at how women are faring (or not faring) in the exploding field of artificial intelligence. And if you don’t think you use AI, think again. “You use AI in lots of invisible ways,” says expert Meredith Broussard: every time you use a search engine, unlock your phone with facial recognition, or tell your home assistant to play the latest Taylor Swift album. The problem — and this is disturbing — is that decades and even centuries of bias are embedded in that AI technology, because of the limitations of the humans who built it. We look at how sexism and racism have wormed their ugly way into the AI we’re using… and what we can do about it. Guests include Broussard, author of “Artificial Unintelligence,” and Davar Ardalan, founder of AI startup IVOW.

More in this series

‘The Women in AI Are Talking’ Podcast
…and it’s up to us to listen. We talk with innovative women leading AI startups about the obstacles they’re facing. (Part 2)

Full Transcript

MUSIC: (opening notes to “Also Sprach Zarathustra” by Richard Strauss)

COLLEEN: Amazing music by Richard Strauss.

SUE: Music that will forever be associated with one of the most influential films ever made: “2001: A Space Odyssey.”

COLLEEN: A film set in the future — a future with super-smart talking computers — what we now might call AI — a future that has arrived.

SUE: It was also a film with almost no women in the entire story.

COLLEEN: Hm, yeah. Which leads us to our subject today, here at The Story Exchange. In this 2-part series, we wanted to explore how women are faring — or not faring — in the exploding field of artificial intelligence. I'm Colleen DeBaise.

SUE: And I'm Sue Williams. We are concerned that if AI is going to be running huge parts of our lives —

COLLEEN: — already is, in fact —

SUE: — what are the implications if most of the developers —

COLLEEN: — and startup founders —

SUE: — are men?

COLLEEN (FROM ZOOM): Okay, we should be recording, god willing. Yeah, I think we're recording.

COLLEEN: And just a note here: yes, this is a podcast on the highly technical topic of AI.

SUE: But trust us, you don't need to be a techno-nerd to understand what's going on.

COLLEEN (FROM ZOOM): All right, so now it says it's recording. So we finally — yay!

MEREDITH BROUSSARD: Yay, awesome!

COLLEEN (FROM ZOOM): It's working!

COLLEEN: First we wanted to understand: what exactly is AI?

BROUSSARD: AI is just math. It's a very, very complicated and beautiful and interesting math, but it's just math.

SUE: That is Meredith Broussard — she's an associate professor at New York University.

BROUSSARD: And I'm the author of a book called “Artificial Unintelligence: How Computers Misunderstand the World.”

SUE: Colleen, you recently talked with Professor Broussard — and 4 or 5 others — to explore the disturbing consequences of not having more women in AI.

COLLEEN: Yeah, that's right. And I'm usually a glass-half-full type — but what I found is just...really dark.

SUE: Tell me.

COLLEEN: It might help to start with an example.

DAVAR ARDALAN: Yeah, that sounds great.

COLLEEN: That's Davar Ardalan — she's the founder of an AI startup.

ARDALAN: We started IVOW in January of 2019.

COLLEEN: She's still fine-tuning her elevator pitch, but it goes something like...

ARDALAN: We're building AIs that help journalists, students, teachers and museums, writers, formulate articles, reports and curated experiences.

COLLEEN: She's built a digital research assistant — like a Siri or Alexa — that aims to be more gender-inclusive. Here's what it sounds like:

SEENA: Hello, my name is Seena. I am IVOW's digital storyteller.

COLLEEN: Seena is designed to counteract...

ARDALAN: ...the lack of culturally reliable, machine-ready data on women.

SEENA: It would be a real honor for me to tell you about some of the amazing women who played important roles in the history of humanity.

COLLEEN: Davar is actually a former journalist.

ARDALAN: Yeah. Three of our co-founders, we came from NPR News.

COLLEEN: Last year, as part of her research...

ARDALAN: We spent the summer with TopCoder, sponsored by Microsoft, looking at what kind of publicly available data sources are out there on stories on women.

COLLEEN: Keep in mind that anyone, or any business, inventing new AI products or services — say, for women customers — might tap into these public data sources to develop their technology.

SUE: I don't like where this is headed...

ARDALAN: What we found was shocking. I’ll just give you two examples.

COLLEEN: The first involves Wikipedia.

ARDALAN: Unlike our male counterparts, many of the contributions of women who have paved the way in science and computing have been watered down, overshadowed or simply undocumented. In terms of Wikipedia, the current classification of gender, ethnicity and race is flawed and lacking.

COLLEEN: Keep in mind, when you look something up on Wikipedia, there is also a Wikidata page with all the underlying data, which a developer might access.

ARDALAN: The word embeddings trained on Wikipedia disproportionately show that male terms are associated with science terms and female terms are associated with arts.
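A quick aside for the technically curious: one way researchers quantify this kind of association is by comparing cosine similarities between word vectors. The sketch below uses made-up 3-dimensional vectors purely to illustrate the arithmetic; real audits use embeddings actually trained on Wikipedia text.

```python
# Toy illustration of measuring association bias in word embeddings.
# The vectors are invented for this sketch; real studies use
# word2vec/GloVe embeddings trained on a corpus such as Wikipedia.
import numpy as np

def cosine(a, b):
    # Cosine similarity: 1.0 means "points the same way", 0 means unrelated.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings (not real trained vectors).
vectors = {
    "he":      np.array([0.9, 0.1, 0.0]),
    "she":     np.array([0.1, 0.9, 0.0]),
    "science": np.array([0.8, 0.2, 0.1]),
    "arts":    np.array([0.2, 0.8, 0.1]),
}

for domain in ("science", "arts"):
    gap = cosine(vectors["he"], vectors[domain]) - cosine(vectors["she"], vectors[domain])
    print(f"{domain}: male association minus female association = {gap:+.2f}")
```

With these toy numbers, "science" comes out male-associated and "arts" female-associated, which is exactly the pattern Ardalan describes finding in embeddings trained on Wikipedia.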

COLLEEN: The situation is even worse on another public dataset, called Pantheon.

SUE: I don't know that one.

ARDALAN: Pantheon is a list of 70,000 biographies of famous people throughout history.

COLLEEN: As part of her research...

ARDALAN: We went to Pantheon and we filtered for females in public figure occupations going back to 3,501 BCE, like 5,000 years ago. The visualization on Pantheon illustrates that the major occupations for females in the past 5,000 years has been modeling and pornographic acting. I'm not making this up. It's because the guys are tagging — to them, famous women are pornographic actors and models.

COLLEEN: Coincidentally, as Davar was doing this research last summer, some news broke at MIT regarding its public database, called 80 Million Tiny Images.

SUE: This was the project started by researchers at MIT — that's the Massachusetts Institute of Technology, as in, one of the most prestigious tech institutions on the planet.

COLLEEN: So back in 2006, the researchers started scraping photos from Internet search engines. The idea was to create a massive dataset so that AI systems could automatically identify images of objects and people in a snap, with whatever labels they had been assigned.

ARDALAN: To give you an example, some women were tagged as whores and some women were tagged with the C-word for their private parts. It was absolutely appalling. Because these are 80 million tiny images that were very crucial in many datasets, MIT decided to take the whole thing down because they said that this is just too much to go back and retag these.

COLLEEN (FROM ZOOM): Oh my God. I know we turned off our video, but if the camera was on you would see my jaw just dropped when you said some of this. This is something that existed at MIT?

ARDALAN: Yeah, so just —

COLLEEN (FROM ZOOM): I mean not just in a server in some guy's basement?

ARDALAN: No. Yeah, exactly.

COLLEEN (FROM ZOOM): Oh my God.

SUE: It's really...beyond shocking.

COLLEEN: Yeah. And MIT, for its part, not only took down the image dataset but apologized. To be clear, the MIT researchers weren't the ones who had labeled the images with these shocking terms. These were photos collected from all over the Internet — where things broke down is that the researchers just hadn't filtered or screened them enough.

SUE: And yet the 80 Million Tiny Images situation barely made headlines.

COLLEEN: Yeah. Possibly because there's a familiar pattern here: this had already happened before, with a much more influential database called ImageNet.

SUE: If I worked in AI, I'd probably be familiar with ImageNet.

COLLEEN: Yeah, I think you would — ImageNet is considered the thing that sparked the AI revolution. It's a visual dataset of millions and millions of images. And it was created by a woman, the computer scientist Fei Fei Li.

FEI FEI LI: My role is to be the thought leader of AI and machine learning.

COLLEEN: That's her, speaking in a Makers video. But her story shows how incredibly difficult it is for even well-intentioned people to make headway in AI without the worst of human bias creeping in. About 15 years ago, Fei Fei got the idea to build this giant visual resource...

LI: Most people were skeptical.

COLLEEN: Her thought was that as humans, we rely on our visual intelligence to inform our decisions.

SUE: If we cross the street, we'll stop if we see a car coming; when we walk down the sidewalk, we'll walk around a trash can or a bus stop or a dog...

LI: Humans recognize 30,000 categories by age 6.

COLLEEN: So her idea was, if we want to make machines do tasks for us...

SUE: ...those machines need to recognize objects.

COLLEEN: Fei Fei and a small team at Princeton University began collecting and hand-labeling images of basically every object on earth.

SUE: Wow. That sounds...time-consuming — and expensive.

COLLEEN: Yeah, impossibly so. A couple years into it, they figured out they could outsource much of the project to Mechanical Turk.

SUE: That’s the Amazon site where you can basically hire remote workers to do grunt work for just pennies.

COLLEEN: Right. Here's Fei Fei again, in a talk at the Photographers' Gallery in London.

LI: I still have goosebumps thinking about...that was the moment I knew ImageNet would happen. For the next two and a half years we employed about 50,000 global workers from 167 countries who helped us label billions of images.

SUE: She got it done.

COLLEEN: Indeed. It was an enormous success. And ImageNet has been widely credited with propelling the AI used in many areas, from facial recognition to medical imaging to self-driving cars. But there was a problem. When you have 50,000 workers from all over the world labeling images...labeling what they see...

SUE: ...they bring their own biases or politics or cultural thinking to the job.

COLLEEN: And that might not matter so much for an image of, say, an apple or a cat.

SUE: But if it's a woman in a bikini on a beach — you might get something very different.

COLLEEN: That brings us to...I guess you could call it a high-profile stunt. Here's BBC News.

BBC NEWS: This exhibition at London's Barbican provides an insight into AI data training. Huge numbers of pictures like these are needed to create artificially intelligent algorithms.

COLLEEN: In 2019, the artist Trevor Paglen and researcher Kate Crawford staged a major art project to shed light on everything that's problematic with artificial intelligence.

SUE: Not just problematic, but offensive and alarming.

COLLEEN: Yes. And as part of this art project, they created this tool — I guess you could call it a game — that went viral on social media.

YOUTUBER: So let's jump right into it.

COLLEEN: It was called #ImageNetRoulette. And just so you understand how it worked, I'll share some audio of a YouTuber playing with this “tool.” You'd upload a selfie, or an image of a famous person. And then the AI, using the ImageNet database, would analyze what it saw.

YOUTUBER: Now who should our first test be on?

COLLEEN: So this YouTuber that we're listening to — he runs a channel called Unwanted Commentary — he starts uploading images of other YouTubers and cracking up at the results that the AI spits back, which are sort of bizarre.

YOUTUBER: Hypocrite, dissembler, dissimulator, phony, phoney, pretender, face very good...

COLLEEN: And some of them are funny — there's an image of a popular adventure vlogger, who's on a beach, and the AI has for some reason labeled him “horse wrangler.”

YOUTUBER: Not all white people wrangle horses, okay.

COLLEEN: But then some images start getting...questionable results. There's an image of Kanye West, labeled "Black African." Still others, usually with dark skin, are labeled "rape suspect" or "wrongdoer." Here's an image of a YouTuber and comedian named Liza Koshy, who's of Indian/German descent — she's labeled with an offensive term for someone who is multiracial.

YOUTUBER: Are you even allowed to say that in 2019? I mean, shame on you.

SUE: Women were labeled everything from "cheerleader" and "platinum blondes" to far more loaded terms...

COLLEEN: ...sluts, slovenly women, trollop, slattern.

SUE: And I guess what the artists wanted to point out here is that this AI technology that is increasingly part of our lives...

COLLEEN: ...is built on easily accessible, public data that has sexist or racist flaws — because of the sexist or racist humans behind it. Not long after this "ImageNet Roulette" experiment, ImageNet announced that it would remove over half a million images to "identify and remedy fairness issues." It’s a process called "debiasing."

SUE: I find it really interesting there’s even a term for it now, “debiasing.”

COLLEEN: Mhm.

SUE: We reached out to Fei Fei Li for an interview but we didn’t get a response.

COLLEEN: We'll be right back.

COMMERCIAL: The Story Exchange is a nonprofit media company that provides inspiration and information for entrepreneurial women...including women like Holly Herndon, who experiments with AI and computer programming to create the music you're listening to right now. We'll be featuring more music created by women in AI throughout this 2-part series.

COLLEEN: Welcome back. I'm Colleen DeBaise.

SUE: I'm Sue Williams.

COLLEEN (FROM ZOOM): For a woman who's listening to this who might be starting and running her own business, and maybe doesn't think she is actually someone who uses AI or needs AI, can you give a few examples of something she might be using that incorporates AI?

BROUSSARD: So we use AI in lots of invisible ways.

COLLEEN: That again is Meredith Broussard, author of “Artificial Unintelligence.”

BROUSSARD: There is AI at work in choosing the search results that you get when you use a search engine. There's AI involved in the facial recognition you might use to unlock your phone. There's AI in the voice assistants that you use when you call customer service or when you tell Alexa to play the new Taylor Swift album.

COLLEEN: AI is embedded in all kinds of things...

BROUSSARD: ...and it's actually kind of mundane. So if I told you, "Oh yeah, there's AI in this fitness tracker," you'd be like, "Oh, that's so fancy." But honestly, when you look at the code, you'd be like, "Oh, it's just that couple of lines?"
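She's not exaggerating. A "smart" feature can boil down to a few lines, like this invented step-counter sketch that just looks for threshold crossings in accelerometer readings. The data and the threshold are made up for illustration.

```python
# A made-up illustration of how mundane "AI in a fitness tracker" can be:
# count a step whenever accelerometer magnitude crosses a threshold.
readings = [0.2, 1.4, 0.3, 1.6, 0.2, 0.1, 1.5, 0.4]  # invented g-force samples

THRESHOLD = 1.0
steps = sum(
    1
    for prev, cur in zip(readings, readings[1:])
    if prev < THRESHOLD <= cur  # a rising edge counts as one step
)
print(f"steps detected: {steps}")  # -> 3
```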

COLLEEN: The challenge is that AI is weirdly limited; it's not human enough to handle social issues like fairness. In fact...

BROUSSARD: The problem arises when we start to try and use math to solve social problems.

COLLEEN: Let's explore this idea by looking at how we use AI in business.

BROUSSARD: You can use machine learning to increase revenue in all kinds of transactions, which is great for making money.

COLLEEN: Some simple math and you can predict, say, what a visitor on your website is going to click on next.

BROUSSARD: Or you can pull items out of a store database and say, "Okay, these are the items that people who have bought a particular kind of dress are also likely to click on."

SUE: That’s used on almost every shopping site today.
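For the technically curious, that "people who bought this also clicked" logic can be surprisingly simple. Here's a minimal sketch; the item names and purchase log are invented for illustration, and real shopping sites mine millions of sessions with far more sophisticated models.

```python
# A minimal sketch of co-occurrence recommendations. The sessions
# below are invented purely to show the mechanics.
from collections import Counter
from itertools import combinations

sessions = [
    ["red dress", "heels", "clutch"],
    ["red dress", "heels", "earrings"],
    ["red dress", "scarf"],
    ["jeans", "sneakers"],
]

# Count how often each pair of items appears in the same session.
co_counts = Counter()
for session in sessions:
    for a, b in combinations(sorted(set(session)), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, k=2):
    # Rank other items by how often they co-occur with `item`.
    scored = [(other, n) for (i, other), n in co_counts.items() if i == item]
    return [other for other, n in sorted(scored, key=lambda x: -x[1])[:k]]

print(recommend("red dress"))  # 'heels' ranks first: it co-occurred twice
```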

COLLEEN: Yeah. But the problem with these machine learning models is that...

BROUSSARD: ...they're not alive. They don't have human values.

COLLEEN: Take furniture shopping, for example.

BROUSSARD: It's actually cheaper to buy a sofa at full price than to rent a sofa from a furniture rental company over time. But if you're poor, you don't have the money upfront or the credit upfront to buy the sofa, so you end up spending more. So the machine learning models will look at the data about who has been charged what in the past. And they'll say, “Okay, well, we should just charge the poor people more and charge the rich people less,” which is not actually what we want for social justice.
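To make the sofa example concrete, here's a toy sketch of a pricing model fit on invented historical data. The numbers are ours, not Broussard's; the point is only that an ordinary least-squares fit will faithfully reproduce whatever pattern sits in the past prices.

```python
# Toy illustration of Broussard's sofa example: a model fit on
# historical prices simply learns the old pattern. Data is invented.
import numpy as np

# Feature: 1 = customer rented furniture (a proxy for being cash-poor),
#          0 = customer bought outright. Target: total dollars paid.
X = np.array([[1.0, 1], [1.0, 1], [1.0, 0], [1.0, 0]])  # first column = intercept
y = np.array([2400.0, 2600.0, 900.0, 1100.0])           # renters paid more over time

# Ordinary least squares: find the weights minimizing squared error.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, renter_premium = weights

print(f"predicted price, buyer:  ${intercept:,.0f}")
print(f"predicted price, renter: ${intercept + renter_premium:,.0f}")
# The model dutifully "predicts" that poorer customers should pay ~2.5x more.
```

Nothing in the math flags this as unjust; the model has no concept of fairness, only of fitting the data it was given.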

*Musical Interlude*

COLLEEN: Meredith says she was prompted to write "Artificial Unintelligence" because of misconceptions about AI.

BROUSSARD: There's a lot of confusion about the Hollywood kind of AI versus the real kind of AI, which is totally understandable because the Hollywood kind of AI is very cool.

SUE: This brings us back, finally, to “2001: A Space Odyssey.”

DAVE: Open the pod bay doors, Hal.
HAL: I'm sorry, Dave. I'm afraid I can't do that.
DAVE: What are you talking about, Hal?
HAL: This conversation can serve no purpose anymore. Good-bye.

COLLEEN: I can't believe that film was released way back in 1968!

SUE: I’ll never forget the first time I saw it. I was terrified!

COLLEEN: It's the ultimate showdown between man and machine.

SUE: But Hollywood being Hollywood, there's also romantic AI...

THEODORE: I wish I could put your arms around me. I wish I could touch you.
SAMANTHA: How would you touch me?

COLLEEN: Oh my!

SUE: That's from the movie "Her," where the character played by Joaquin Phoenix falls in love with an AI virtual assistant played by Scarlett Johansson.

COLLEEN: Yes. They're quite different but the common thread of both movies is machines adopting qualities or emotions or abilities that aren't just on par with humans — they're superior. And according to Meredith, that’s the part that’s still kind of fiction.

BROUSSARD: I wanted people to understand what AI is and isn't so that we could make better decisions about when we can and should use technology.

COLLEEN: It's a mistake, she says, to assume that computers will always get it right.

BROUSSARD: They're not making the world better. They're just reproducing all of the existing inequality and bias, and they're making it harder to see and impossible to eradicate.

COLLEEN (FROM ZOOM): Which sounds so devious. Now, am I giving too much of a human personality to something?

BROUSSARD: I think that the problem is a kind of pro-technology bias that I call technochauvinism. Technochauvinism is the idea that computers are superior to people or that computational solutions are superior to all others. And so when you have this pro-technology bias and you think, “Oh yeah, it's going to be better to do something with the computer because it's going to be more objective or more unbiased,” then you're falling into the trap of pro-technology bias.

COLLEEN: And what happens then is you fail to see...

BROUSSARD: ...that your technology that you're building, or the technology that you're implementing, is hurting people or cutting people off from opportunity.

SUE: All of this is really common sense.

COLLEEN: It is, it is. Yet we see mistakes being made, over and over again, by all the tech giants.

BROUSSARD: The Amazon resume thing is the iconic example.

COLLEEN: Here's a clip from Reuters, in 2018.

REUTERS: Amazon software engineers recently uncovered a big problem: their new online recruiting tool did not like women.

COLLEEN: So a few years ago, Amazon decided to automate the hiring process.

SUE: Amazon automates everything, at this point.

COLLEEN: Yeah. So, Amazon develops this AI resume tool that gives job candidates scores, from one to five stars.

SUE: Sort of like a product review.

COLLEEN: Yeah, I think that was the idea. The problem was that Amazon's computer models were trained to rank job candidates based on trends in resumes submitted over the past 10 years — and most resumes came from men...

REUTERS: ...in effect, teaching themselves that male candidates were preferable.

SUE: It's no surprise, as tech, of course, is a very male-dominated industry.

COLLEEN: Exactly. So this tool ended up giving you a low score if your resume basically divulged your gender — say, you went to an all-women's college.

BROUSSARD: Yeah, or if you played women's field hockey.

SUE: Proving once again, that AI is only as good as the data that's put into it.

COLLEEN: Amazon eventually dropped the AI recruiting tool. It said publicly that the technology was never actually used to evaluate candidates.
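For readers who want to see the mechanism, here's a toy sketch of how a naive resume scorer can "teach itself" this bias. The resumes, tokens and hiring labels are invented, and Amazon's real system was far more sophisticated; but even this crude approach penalizes the word "women's" once the historical decisions skew male.

```python
# Toy sketch of a resume scorer trained on biased past decisions.
# The resumes and labels below are invented for illustration.
from collections import Counter

# Historical data: (resume tokens, was the candidate hired?)
history = [
    (["captain", "chess", "club"],            True),
    (["captain", "rugby", "team"],            True),
    (["women's", "chess", "club", "captain"], False),  # biased past decisions
    (["women's", "field", "hockey"],          False),
]

hired, rejected = Counter(), Counter()
for tokens, was_hired in history:
    (hired if was_hired else rejected).update(tokens)

def score(tokens):
    # Naive scoring: +1 for each time a token appeared on a hired
    # resume, -1 for each time it appeared on a rejected one.
    return sum(hired[t] - rejected[t] for t in tokens)

print(score(["captain", "chess", "club"]))             # positive score
print(score(["women's", "chess", "club", "captain"]))  # dragged negative by "women's"
```

The model never sees gender directly; it just learns that a token correlated with being a woman correlates with rejection in the training data.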

*Musical Interlude*

COLLEEN: And there's one more example we need to mention of AI gone wrong.

CRAIG FEDERIGHI: Now, something that we all care a lot about...health.

COLLEEN: That's Apple software executive Craig Federighi in 2014. He's on a big stage, typical Apple event, announcing an innovative new app...

FEDERIGHI: With Health, you can monitor all of your metrics that you’re most interested in.

COLLEEN: Everything from...

FEDERIGHI: ...your activity level to your heart rate to your weight and chronic medical conditions like high blood pressure and diabetes.

COLLEEN: It sounds great! What could go wrong?

SUE: Well, I'd like to read something that reporter Arielle Duhaime-Ross of the tech site The Verge wrote, as it just about sums everything up nicely: “Of all the crazy stuff you can do with the Health app, Apple somehow managed to omit a woman’s menstrual cycle. In short, if you’re a human who menstruates and owns an iPhone, you’re shit out of luck...Is it really too much to ask that Apple treat women, and their health, with as much care as they’ve treated humanity’s sodium intake?”

COLLEEN: Apple took a lot of heat for this omission, and in 2015 added period-tracking features to Health.

COLLEEN (FROM ZOOM): When was this first understood, that the sexism that already exists in the world is now being incorporated into the system or the tool?

BROUSSARD: You can look back to the founding of artificial intelligence, which happened in 1956 at a meeting at the Dartmouth Math Department. And you can look at the attendees at the founding meeting, and they are all white male mathematicians, mostly educated at Ivy League and Oxbridge institutions. And there's nothing wrong with being a white male mathematician —

COLLEEN (FROM ZOOM): — not really representative of the rest of the world, I suppose.

BROUSSARD: It's not. And the problem is that, when you have a small and homogeneous group of people giving us what we have collectively adopted as our vision of technology and society and the future, you are left with the limitations of the founding fathers.

COLLEEN (FROM ZOOM): And right now, working in AI, how much of that world is...I guess the expression is pale and male? How much of that world is pale and male?

BROUSSARD: Most of it. The majority of it, the vast majority of it.

SUE: Studies suggest that just 12% of machine-learning researchers are female. Only 22% of jobs in the AI industry are held by women; even fewer hold senior roles. And there's still a giant diversity gap at tech companies, like Facebook and Google.

BROUSSARD: So even if you have programmers at Facebook who are woke, who are doing their best to make equitable technology, they still have unconscious bias. And so you need more diverse groups of people creating the technology in order to start to address a lot of these problems.

COLLEEN: I asked Meredith how we do that, exactly.

BROUSSARD: Well, we can start by hiring more women and people of color and keeping them in the technology workforce. This is a really complex social problem. It doesn't have an easy answer. We're kind of addicted now to the idea that there's an easy solution to every problem. There isn't.

*Musical Interlude*

COLLEEN: In next week's episode, we'll sit down with 3 women who are working in AI —

SUE: — they're trying to invent new products and services that will change our future.

COLLEEN: And we'll hear about their startup experiences, which at times can be incredibly difficult to hear, especially if you're interested in a fair and just world.

SUE: We have a really long way to go before we can call AI truly ethical.

COLLEEN: That's right. We should note that the tech giants — like Google and Microsoft — are researching how to prevent bias from seeping into datasets and algorithms.

SUE: But when the real world that AI is being trained on is itself biased — that's the fundamental problem. And we'll explore that next week.

OUTRO: This has been the Story Exchange. If you liked this podcast, please share on social media or post a review wherever you listen. It helps other people find the show. And visit our website at TheStoryExchange.org, where you’ll find information and inspiration for entrepreneurial women. And we’d love to hear from you! Drop us a line at [email protected] — or find us on Facebook. I'm Colleen DeBaise. Sound editing provided by Nusha Balyan. Our research assistant is Noël Flego. Our mixer is Pat Donohue at String & Can. Executive producers are Sue Williams and Victoria Wang. Recorded at Cutting Room Studios in New York City. Thanks to Taryn Southern for letting us use her song “Break Free,” composed with Amper artificial intelligence with lyrics and vocals by Taryn. The piece “Home” is performed by Holly Herndon, licensed courtesy of 4AD and Holly Herndon (www.4ad.com / www.hollyherndon.com), by arrangement of Beggars Group Media Limited and Holly Herndon. The piece “Also Sprach Zarathustra” composed by Richard Strauss is licensed courtesy of C.F. Peters Corporation.