
Story in the Public Square 7/30/2023
Season 14 Episode 4 | 26m 45s | Video has Closed Captions
Jim Ludes and G. Wayne Miller discuss discrimination in algorithms w/ Meredith Broussard.
Jim Ludes and G. Wayne Miller speak with author and NYU associate professor Meredith Broussard, who exposes the bias in technology, from algorithms to artificial intelligence.
Story in the Public Square is a local public television program presented by Rhode Island PBS

Many believe technology is unbiased, but today's guest says the truth is more complex and explains how bias and discrimination creep into the algorithms that shape the modern world.
She's Meredith Broussard this week on "Story in the Public Square".
(bright upbeat music) Hello, and welcome to "Story in the Public Square," where storytelling meets public affairs.
I'm Jim Ludes from The Pell Center at Salve Regina University.
- And I'm G. Wayne Miller, also with Salve Pell Center.
- Our guest this week is Meredith Broussard, an associate professor at New York University.
Meredith is also the author of an important new book, "More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech".
She joins us today from New York.
Meredith, it's so great to have you with us.
- Thanks so much for having me.
- Well, the book is really terrific and I learned a lot reading it, we're gonna talk about that, but I thought as sort of a primer, we should start talking about artificial intelligence as a technology itself.
So for those in our audience who might not know what AI is, could you give us that sort of 30,000-foot crash course?
- Absolutely.
We all talk a lot about artificial intelligence nowadays but when it comes right down to it, there's a little bit of confusion about what it actually is.
So the easiest way to think about it is that artificial intelligence is just math.
It's very complicated, beautiful math.
It's computational statistics on steroids, right?
So one of the problems is we kinda tend to think about Hollywood images of AI as the first thing that springs to mind. We all think about the "Terminator" or "Star Wars" or "Star Trek" or any of the other really fun Hollywood things, and those are so great to talk about, but the reality of AI is that it's just math.
It is not beyond anybody's understanding and it is also not going to take over anytime soon.
- So I think that's probably the popular conception, right?
Is that, you know, the Terminator is gonna come back and time travel because the machines have taken over.
But when we think about the way AI is actually being used today, how is it being used?
- Well, people talk about AI as if it's this new magnificent thing that is going to replace people.
And I would really love for our conversation collectively to shift onto the practical realities of artificial intelligence and the actual harms that are being suffered by people at the hands of AI nowadays.
So, take for example, a recent investigation by The Markup which I wrote about in the book and The Markup looked into mortgage approval algorithms.
And what they found was that automated mortgage approval algorithms were 40 to 80% more likely to deny borrowers of color as opposed to their white counterparts.
And in some metro areas, that disparity was more than 250%.
And you might wonder why the mortgage approval algorithm is biased in this way.
Well, we can think about how we make these algorithmic systems.
How do we make machine learning systems?
And actually, we make them the same way every time.
What we do is we take data, as much data as we can possibly find, we send it into the computer, and we say, "Computer, make a model."
The computer says, "Okay, here's your model."
And the model shows the mathematical patterns in the data.
So then you can use that model to make decisions, to make predictions, to generate new text, to generate new images. It's a very flexible and powerful model, but the mathematical patterns in the data are the historical patterns, and often they're patterns of bias.
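The pipeline described here, take historical data, feed it to the computer, get a model back, and use that model to decide, can be sketched with a toy example. Everything below is invented for illustration: the data is made up, and the "model" is just memorized group-level approval rates, not any real lending system.

```python
from collections import defaultdict

# Toy historical lending data: (neighborhood, approved?).
# The bias is baked into the history itself.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train(rows):
    """'Learn' a model: the approval rate observed per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in rows:
        counts[group][0] += approved
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def predict(model, group):
    """Approve whenever the historical approval rate exceeds 50%."""
    return model[group] > 0.5

model = train(history)
print(model)                # {'A': 0.75, 'B': 0.25}
print(predict(model, "A"))  # True  -- history favored A
print(predict(model, "B"))  # False -- history disfavored B
```

The model does exactly what it was asked to do: it faithfully reproduces the pattern in the data, which is precisely how historical discrimination becomes automated discrimination.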
So in the case of mortgages, we know from studying sociology and history, in the U.S., there's a history of residential segregation.
There's a history of redlining, of financial discrimination.
So what the model is picking up on is it's also picking up on those very human patterns of historical bias.
- You know, Meredith, when I worked earlier in my career in the national security community and we'd be talking about war games or simulations, one of the adages that we often would repeat was that garbage in, garbage out.
That the data that we put into the simulation or the war game, that's what we were gonna get out as well.
Is that what we're talking about with AI as well?
- That is absolutely the case with AI systems.
So an AI system is only going to know about the data that you feed it with.
And one of the problems that happens is that people have a little bit too much faith in data.
They think that the data is more objective or more neutral or more unbiased.
And that is itself a kind of bias that I call technochauvinism.
It's the belief that technological solutions are superior to others.
Instead, what I would argue is that we should think about using the right tool for the task. Sometimes the right tool for the task is undoubtedly a computer; sometimes it's something simple, like a book in the hands of a child sitting on a parent's lap. One is not better or worse than the other; it's about, again, the right tool for the task.
- So Meredith, you gave the example of bias in mortgages but that's only one of many, many biases that we find.
Can you get into some of the others as you do in the book?
- In the book I write about the ways that AI bias manifests in financial services, in policing, in medicine, in education because we are using AI systems in every realm nowadays.
And with the launch of generative AI like ChatGPT, the adoption of AI is only accelerating.
The problem is that algorithmic systems discriminate by default, right?
This idea of discrimination by default is one that we get from Ruha Benjamin's really amazing book "Race After Technology".
And this is a different way of thinking about algorithmic systems, it's not about technochauvinism, it's about looking for the problems, the very human problems inside algorithmic systems so that we can start to have these hard conversations about how exactly are AI systems discriminating and can we do something about it?
Because actually there are mathematical methods that we can use in order to put a thumb on the scale and make systems more just. It's not possible to do in every case, but it is possible sometimes, and you have to admit first that there's a problem.
- So one of the pieces that I found so compelling in the book is the cases that you explain about facial recognition systems.
So maybe for our audience who hasn't had the benefit of reading the book yet, you could explain why those systems are so flawed and whether or not they're even redeemable.
- Well, yeah, facial recognition is a very big deal.
And one of the things that I write about in the book is the case of a man in Detroit who was arrested because of a faulty match in a facial recognition system.
I also write about the work of Joy Buolamwini and Timnit Gebru in the Gender Shades paper that kinda revealed to the world that facial recognition systems are biased, that they're better at recognizing light skin than dark skin.
They're better at recognizing men than women.
They generally don't have trans and non-binary folks in their databases at all.
And when you look at the intersectional accuracy of these systems, they're best of all recognizing men with light skin.
They're worst of all at recognizing women with dark skin.
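The kind of intersectional audit Buolamwini and Gebru performed, breaking accuracy down by subgroup rather than reporting one overall number, can be sketched simply. The records below are invented toy data; the Gender Shades study used real benchmark results from commercial systems.

```python
from collections import defaultdict

# Toy audit records: (skin_tone, gender, correctly_recognized?).
results = [
    ("light", "man", True), ("light", "man", True), ("light", "man", True),
    ("light", "woman", True), ("light", "woman", True), ("light", "woman", False),
    ("dark", "man", True), ("dark", "man", False), ("dark", "man", True),
    ("dark", "woman", False), ("dark", "woman", False), ("dark", "woman", True),
]

def intersectional_accuracy(rows):
    """Accuracy broken down by (skin tone, gender) subgroup."""
    tally = defaultdict(lambda: [0, 0])  # subgroup -> [correct, total]
    for tone, gender, correct in rows:
        tally[(tone, gender)][0] += correct
        tally[(tone, gender)][1] += 1
    return {k: c / t for k, (c, t) in tally.items()}

acc = intersectional_accuracy(results)
for group, value in sorted(acc.items()):
    print(group, round(value, 2))
```

An aggregate accuracy for this toy data would look respectable, while the subgroup breakdown shows the system performing best on light-skinned men and worst on dark-skinned women, which is the pattern the audit exposed.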
So let's think about using this AI, this kind of AI in a particular context, right?
Because it's not really about is AI good or bad?
It's about how effective is AI in a particular context?
And in the context of policing, what happens is that facial recognition is disproportionately weaponized against communities of color against poor communities, against communities who are already over-policed, over-surveilled.
So really the solution is not to make the facial recognition better for say, people with darker skin, the correct solution is not to use facial recognition in policing at all.
- You know, as I read that particular case which was infuriating, part of what struck me though was that law enforcement in that particular case relied simply on what the machine had said the match was.
It didn't seem like they had actually done the investigation but they still detained this gentleman for several hours even after they knew that he was not the person in the photograph.
So I understand what you're saying about the bias in the system, but isn't part of the message also: don't take what the machine says as the word of God?
- Yeah, I mean, it's a technochauvinist belief that what comes out of the computer is always true, right?
So what happens inside organizations is that people will invest a lot of money in a computer system and then there are all these people whose jobs depend on using the computer system because they've been told you have to use the computer system.
And so, if they don't use the computer system then it takes away from the validity of their job.
They feel like they have to use it and they feel like they have to have faith in what the computer says.
I would much rather that people understand more about algorithms and feel empowered to say, "Hey, this computational decision is unfair or unjust or just plain wrong," right?
But think about what happens when, you know, something goes wrong on your phone or something goes wrong on your computer. Often, people blame themselves; they think, "Oh, I must be doing something wrong."
I blame the person who made that computer program, right?
It's probably some kind of mistake that they made that I am then having to deal with.
And I would just like to shift the frame a little bit or shift the blame maybe.
- So Meredith, one of those surprises for me in your book was the use of AI in medicine and healthcare.
And as a journalist, I've covered medicine and healthcare for a very long time.
I guess I was sort of dimly aware that AI had entered that realm but expand on that, tell us how it's used and what the biases are there.
Again, to me it was like, "Whoa, seriously?"
- There were so many surprises for me personally reporting those couple of chapters where I talk about bias in medicine and AI.
I think that the case that has stuck with me is the case of the eGFR calculation, which is used to figure out when somebody is eligible for the kidney transplant list.
So not just the moment when they get a new kidney but when they're eligible to wait on the list for weeks or months or years to get a donor kidney.
So you're eligible for the list when your eGFR score is 20, meaning your kidney function has declined to about 20%.
And the way that your eGFR is calculated is based on a number of different lab tests.
Well, for many, many years there was a racist assumption embedded in the calculation used to determine eGFR, and the racist idea there was that Black people have greater muscle mass than other people.
And so, because Black people were thought to have greater muscle mass, they were given an additional multiplier, meaning that Black patients had to be sicker in order to qualify for the kidney transplant list.
I am delighted to say that as a result of activism by patients, by medical professionals, working with industry associations, that calculation has changed.
It actually changed while I was writing the book, thank goodness.
And now the calculation does not include race as a factor.
So there's been progress, but it's a really good example of why we need to examine the underlying medical systems before we start implementing them in AI systems, in algorithmic systems. Because the way a data scientist would come into this situation and try to build something that predicts when somebody is going to need to be on the kidney donor list, or where there's gonna be need for donor kidneys, is they would come in and ask, "Okay, what is the algorithm that you use? What is the calculation you use for figuring out when somebody is eligible?"
And then they would just plug that in, right?
We really need to examine the underlying systems that we're using before we start implementing them inside AI because once a racist calculation like this gets embedded in an AI system, once it gets embedded in code, it becomes very difficult to see and almost impossible to eradicate.
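How a race coefficient disappears into code can be shown with a small sketch. To be clear, this is illustrative only, not the clinical CKD-EPI formula: the function name, the base-score input, and the coefficient value here are stand-ins that mimic the shape of the since-retired 2009-era equation, which applied a multiplier for Black patients that inflated their apparent kidney function.

```python
def egfr_2009_style(base_score: float, is_black: bool) -> float:
    """Illustrative only; NOT the clinical CKD-EPI formula.

    Mimics the structure of the retired race-adjusted equation:
    one multiplier, one line of code, and the assumption vanishes
    into the system.
    """
    RACE_COEFFICIENT = 1.16  # illustrative value, not the clinical one
    return base_score * RACE_COEFFICIENT if is_black else base_score

# Two patients with identical lab-derived base scores near the
# eligibility threshold of 20:
white_patient = egfr_2009_style(19.0, is_black=False)  # stays below 20
black_patient = egfr_2009_style(19.0, is_black=True)   # pushed above 20
print(white_patient, black_patient)
```

Identical labs, different outcomes: the multiplier pushes the Black patient's score back over the eligibility threshold, so they must get sicker before qualifying, exactly the harm described above, and nothing in the function's output reveals why.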
- So you gave the great example of kidney transplantation and of course, that's hardly the only place, the only part of medicine where bias exists.
You also mentioned that activists changed that.
And so, sort of a broad general question, how can we change, how can people, how can medicine change these other biases as well?
Does it require activism?
Does it require better education at hospitals?
How do we get to a better place, is really what I'm asking.
- Well, I think it starts with pushing back against technochauvinism.
It starts with pushing back against the idea that a data-driven system is going to be superior.
And then we need more computational literacy overall.
And then we also need to audit our systems, evaluate them for quality assessment of what's going into these systems and we need to audit the outputs of the systems and see if there's discrimination, bias, differential impact.
So a good example of this is a skin app that Google came out with a few years ago.
The idea was that you would take a picture of a skin condition and submit it to the app and then Google would give you information about this skin condition.
They were very careful not to say diagnosis because if they had said diagnosis, it would've been a medical device and they would've had to get it registered with the FDA.
And you know, they were very, very careful about the language that they used.
But it was an AI system, it was image recognition, you know.
And facial recognition, I mentioned earlier, has a problem with representation.
Part of that problem comes from the data sets that are used to train these systems.
And guess what?
The Google skin app had the same problem.
It was really bad at recognizing skin conditions on darker skin because it had been mostly fed with pictures of lighter skin.
And skin issues look different on different colors of skin.
Dermatologists really need to be trained with pictures of conditions on a range of skin tones.
However, if you look at all medical education, you know the industry is not doing a fantastic job of having representation in the textbooks.
You know, doctors, nurses have told me that during their training, they were not presented with images of anything other than light skin which makes it difficult to learn how to diagnose different conditions on people with different skin tones.
So it's really not just about the AI, it's about the human system as well.
So when something like the Google skin app happens and it doesn't work well on darker skin, we need to look at that not as a glitch, a momentary blip, but as a signifier of a larger social problem, some bigger problem that needs solving.
- Yeah, Meredith, what I keep thinking as we're having this conversation is that if you have a biased analog anything and you digitize it, the bias will still be in the digitized version of that thing even if it's fancier and newer and shinier.
- [Meredith] 100%.
- When you talk about representation in textbooks, though, I'm also wondering about just representation in the tech industry, and is part of the challenge a lack of representation from diverse communities, generally speaking, in STEM fields overall?
- Absolutely, yes.
If there were a greater range of people in the rooms when decisions were being made in Silicon Valley, product design decisions were being made and if those diverse voices were empowered to speak up, then we would see very different products coming out of Silicon Valley.
One example I like to use is the example of the racist soap dispenser.
You've probably seen this viral video; two men go into a bathroom, one has light skin, one has dark skin.
The man with light skin puts his hand under the soap dispenser, soap comes out.
The man with dark skin puts his hand under the soap dispenser and there's no soap.
You might think, "Hey, maybe it just broke, you know, that happens."
But then the man with dark skin goes and gets a white paper towel, puts it under the soap dispenser, and the soap comes out.
So the soap dispenser is racist.
I don't think that the creators intended to make a racist soap dispenser.
Like that doesn't compute for me.
I think probably what happened is they were a group of people with light skin, they tested it on themselves and on their friends and family and said, "Oh, it works for us, must work for everybody."
So it's a kind of unconscious bias.
We all have unconscious bias, we're all working every day to become better people but we're not there yet.
We are not perfect.
And so what happens is we embed our own unconscious biases in the technology that we create.
So if we have more people in the room who have different kinds of backgrounds, we can check our unconscious biases and we can do a better job of evaluating our technology before we roll it out.
- So Meredith, AI really bursts into the headlines in the last several months but there's a long history of AI and you get into that, you recap some of that, you tell some of that in your book.
Can you give us an overview of the history?
It's a long history; this didn't just happen with chatbots and, you know, some people who came up with a new version of AI.
Give us the history, if you can, please.
- Yeah, I've been studying and building AI for a very long time and one of the terrific things for me about the cultural conversation right now is that, you know now I can say, "Oh, I do AI," at a cocktail party and people actually wanna talk to me and stuff.
(Jim and Wayne laugh) - That's funny.
- So artificial intelligence actually started in 1956 at a meeting at the Dartmouth Math Department.
It was a meeting of about 10 men trained in mathematics and you know, mostly at Ivy League institutions.
And they got together and decided they were going to have a new field of study and they were going to call it artificial intelligence.
We had earlier fields of study called cybernetics which was about how people and machines work together but the folks at the Dartmouth Math Department meeting had beef with the cybernetics folks so they said, "All right, we want a different name."
Which I find kind of charming that you know there was this entire scientific field that started as beef.
So as is pretty common, they decided what are the central questions of this new field?
And they decided that a computer would be intelligent if it could beat a human being at chess, right?
This was an unsolved problem.
Why did they think this?
Because they all really liked chess and they were smart and they thought, "Well, smart people like chess."
And so, if we can get the computer to be good at chess, the computer will be artificially intelligent.
Yeah, Meredith, we've got a little bit less than a minute to go here. I'm curious, though: there have been some reports in the news about some insiders in tech companies leaving because they were concerned about the direction that AI was moving in.
There was discussion of whether or not the AIs were becoming, I think the term was spooky, like they were actually alive and had actual intelligence not just big data crunching capability.
Can you speak to that?
Is there anything in the development of this field that you've been concerned about and are worried about from that perspective, beyond the bias that you're talking about in the book?
- So I look at kind of the coverage that's out there with a kind of a different perspective.
There was a story in the Times where reporter Kevin Roose had an interaction with an AI that encouraged him to leave his wife for the AI and talked about wanting to become sentient and wanting to burst out of the computer.
And yeah, a lot of people read that story and said, "Oh, it seems really spooky."
I read that story differently.
I read the story and I thought, "Oh, this is about the training data, right?"
So ChatGPT, Bard, all the other generative AIs, they're trained on large data sets, and one of them is a data set called the Common Crawl.
And the Common Crawl data set is made by some people who made what are called web crawlers or spiders; they crawl around the web, they collect webpages.
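The core loop of a crawler, parse a fetched page for the links to visit next, can be sketched with Python's standard library. This is a minimal sketch of the link-extraction step only; the network-fetching side is omitted, and the sample page is invented.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href targets a crawler would visit next."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Anchor tags carry the outbound links.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A crawler would fetch this page over the network; here it's a
# hard-coded sample for illustration.
page = '<html><body><a href="/about">About</a> <a href="https://example.com">Ext</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/about', 'https://example.com']
```

Repeat that over billions of pages and you get a corpus like Common Crawl, which is why whatever is abundant on the open web, fan fiction included, ends up in the training data.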
Well, you know what?
There's a lot of fan fiction on the web.
There is an awful lot of online fiction about computers that wanna become sentient.
And you know, generative AI is also trained on chat logs.
Guess what?
There's a lot of people who cheat or who text each other like, "Oh, I want you to leave your partner for me."
So like really, it's a manifestation of what's in the training data.
- We've gotta leave it there but Meredith Broussard, the book is "More Than a Glitch" and it's really important.
We're out of time.
He's Wayne, I'm Jim, we hope we'll see you again next week for more "Story in the Public Square".
(bright upbeat music)