Bad Math: The Risks of Artificial Intelligence


When we think of Artificial Intelligence we often think of intelligent robots that act and think like humans: the walking, thinking, feeling machines we see in the movies. The advent of that kind of intelligent robot is so far off in the future that we often don't recognize the kind of AI already all around us, or the effects it's having on our lives. Courts, search engines, stores and advertisers all use Artificial Intelligence to make decisions about our behavior: to sell us products, but also to send us to prison or set bail. We look at one kind of decision made by AI, called a risk assessment, and why it's had such an impact on the poor and people of color. We also hear how community organizers on Skid Row fought back against the use of artificial intelligence by the Los Angeles Police Department.

Featuring

  • Joshua Kroll – Computer Scientist, UC Berkeley School of Information
  • Jamie Garcia – Stop LAPD Spying Coalition

Making Contact Staff:

  • Executive Director: Lisa Rudman
  • Staff Producers: Anita Johnson, Monica Lopez, Salima Hamirani
  • Host: Salima Hamirani
  • Outreach & Audience Engagement Coordinator: Kathryn Styer
  • Associate Producer: Aysha Choudhary

Music

Episode Transcription

Narr: I'm Salima Hamirani, and today on Making Contact:

 

Clip from 2001: A Space Odyssey:

“Do you read me, HAL?

Affirmative, Dave. I read you.

Open the pod bay doors, HAL.

I'm sorry, Dave. I'm afraid I can't do that.

What's the problem?

I think you know what the problem is just as well as I do.”

 

 

Narr: That's HAL 9000 from 2001: A Space Odyssey. HAL is a computer. He's an example of artificial intelligence: smart machines that evolve, and, if we believe what we see in science fiction movies, will eventually take over the world. And... well, kill us all.

 

Scene from I, Robot:

 

"To protect humanity, some humans must be sacrificed. To ensure your future, some freedoms must be surrendered. We want to ensure mankind's continued existence. You are so like children. We must save you from yourselves. Don't you understand?"

 

Joshua Kroll: We all sort of worry about the robot army cresting the hill, you know, and the Terminator coming to get us and take us back to the future, or whatever the Terminator does, Skynet becoming self-aware.

 

Narr: That's Joshua Kroll. He's a computer scientist at the University of California Berkeley School of Information. He doesn't create apps or new technology. Instead, he studies the way new technologies impact the world.

 

Joshua Kroll: But I worry much more about the way these technologies are changing the structure of life today already.

 

Narr: Kroll doesn't envision artificial intelligence as a robot army cresting the hill, but as a very powerful decision-making tool. These decisions, made by software, are having enormous impacts on poor people and communities of color. Together we'll look at those impacts, and we'll also talk about how we should control AI, through policy and activism, if we don't want AI to control us.

 

Narr: The world of artificial intelligence has a lot of technical jargon, but you don't need to understand all of it to understand the effects AI is having. To start, Joshua Kroll helps us with the basics:

 

Joshua Kroll: I use a working definition of artificial intelligence as anything a machine does that we would consider intelligent. That's quite a broad definition, and it leaves open lots of space for various kinds of technologies. People often use the phrase artificial intelligence to refer to new systems based on a technology called machine learning, which uses the automatic discovery of patterns in data to make inferences about what patterns or rules should be applied in future cases, as long as you have enough data. And this creates an incentive whereby people want to gather up a lot of data and then throw it at a tool that gets a pretty good solution. A related technique, deep learning, has also allowed us to solve problems we previously didn't have good solutions to: recognizing people's faces in images, recognizing objects in images, or processing large volumes of text.

So that’s opened up many new applications for these technologies. And that’s been very exciting.

 

Narr: Artificial intelligence is actually everywhere. Google Maps, music recommendations on Spotify, even your spam filter is a kind of AI. In fact, some of the music you'll hear in our show today is by a computer composer named Emily Howell. We don't have enough time in this show to cover everything, so we're going to focus on one type of decision made by AI, called a risk assessment. Here's how it works.

 

Joshua Kroll: Sure. When you have many data points, there's a natural tendency as humans to see patterns in them. We see patterns everywhere, partly because we're wired to find certain kinds of patterns. You mentioned faces; we're very much wired to see faces in things, so we see faces in the front of cars, or in our breakfast toast, or in all sorts of things. There's actually a phenomenon psychologists refer to as pareidolia: humans infer faces in lots of things. That's what's happening when people see Jesus in their toast.

And so we want the computers to also go through the data, extract patterns, and find repeating information in the data. That doesn't mean they have any understanding of what it is; it just means that they recognize there is some kind of pattern there, and that the pattern can be extracted and made available for use.

Once you've extracted a pattern from data about the world, you can apply that pattern to new cases and assume that it will basically continue to hold true. That's an analogy to the way we learn as children: we observe the world, we identify patterns, we find some representation for those patterns in our minds, and then we apply those ideas going forward, and that helps us understand the world. In these machine learning systems you have the same sort of thing, used to create a score, like a credit score or some other kind of risk score.
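To make that concrete, here is a minimal sketch in Python of "extract a pattern from past data, then apply it to a new case." Everything in it, the data, the feature names, the model choice, is invented for illustration; it is not from the show or from any real scoring product.

```python
# Minimal illustration with made-up data: a model learns a pattern from past
# cases, then applies that pattern to score a new, unseen case.
from sklearn.linear_model import LogisticRegression

# Each row is a past case: [prior_arrests, age, employed (1 = yes, 0 = no)].
# The label records whether that person was re-arrested within two years.
X_past = [[0, 34, 1], [3, 22, 0], [1, 45, 1], [5, 19, 0], [0, 51, 1], [2, 27, 0]]
y_past = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X_past, y_past)

# The "risk score" for a new person is the model's predicted probability of
# re-arrest -- which assumes their future will look like the training data's past.
new_person = [[1, 30, 0]]
risk = model.predict_proba(new_person)[0][1]
print(f"Predicted risk: {risk:.2f}")
```

The important line is the last one: the score is only as good as the assumption that the new person's future resembles the recorded past of the people in the training data.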

 

Narr: These risk scores are used to try to predict how you might act in the future, especially when it comes to how you might handle money.

 

Joshua Kroll: There are a lot of prediction systems that make use of these scores, and then, depending on how you're scored, the system predicts that you are likely, say in the case of a credit score, to pay back your loan or not pay back your loan.

 

Narr: That's not the only place they're used, however.

 

Joshua Kroll: There are scores that are used in the process of administering the criminal justice system, and there's even been a lot of discussion recently in California, where we are now, about the use of these risk assessments in criminal justice. The goal is to predict whether someone is likely to be arrested again, or is likely to fail to appear for their trial date. And this informs human decisions about how much bail to set for the person and whether or not to detain them pre-trial.

You might say the risk scores being used are a good replacement for money bail, because they allow judges to have an objective touchpoint for whether someone is actually likely not to appear for their trial, or is likely to commit another crime if released on their own recognizance. And the reason for that is that money bail is often seen as an unfair institution: it causes poor people to end up in jail and to mount a weaker defense against the charges, or none at all, in a way that encourages mass incarceration and other problematic interactions with a criminal justice system that has systemically pushed down people of color and poor communities, who can't as effectively defend themselves.

But it is the case that when these things have been studied, it has been found that people of color get higher scores as a group than white people, which causes judges to see them as higher risk, which in turn reinforces that structural bias in the criminal justice system and its administration.

 

Narr: But wait a minute. Aren't computers supposed to be objective? That's one of the big advantages of computers, right? At least that's what we're told. Humans are fallible; computers are not. So how did an algorithm become racist?

 

Joshua Kroll: I think that's a difficult question to answer, because it's natural to say, well, the score is just math. The score doesn't know if you're Black or white; it doesn't make that judgment. It doesn't even get that input. So it can't possibly be discriminating based on your Blackness or whiteness. But nonetheless we find that these scores have a disparate impact on different communities, and you might ask why that is. Partly it seems to be because of the way the scores are created, which, as I mentioned, is by taking a bunch of data about people who've previously been involved in the criminal justice system. Then you maybe interview them and ask a bunch of questions, about their drug use, their access to jobs, their family criminal history, say, and then you use those answers to try to predict whether they are going to be arrested again in, say, the next two years. Well, it turns out that a Black person walking down the street has a higher risk of coming into contact with police than a white person, and a higher risk of being arrested out of that contact. That is something we know to be true. And it's something the machine learning algorithm picks up on, because if you're asking it to find a pattern that predicts arrest, and Black people are being arrested more, then of course it will give higher scores to Black people, because they really do have a higher risk of arrest.

There's a big gap in the amount of surveillance between the rich and the poor, and you occasionally see people describing privacy as a luxury, a product that only the rich will have access to. There's an interesting event every year at the Georgetown University Law Center called The Color of Surveillance, in which people present research on the ways that surveillance technologies are deployed against people of color and poor people. And that comes out of a long history of using social structures, surveillance, and the criminal justice system to oppress people of color, especially Black people, in the United States.
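A toy calculation, entirely hypothetical, shows the mechanism Kroll is describing: a score that never receives race as an input can still track how heavily someone is surveilled, because one of its inputs (recorded police contacts) is itself a product of policing intensity. The weights and numbers below are made up.

```python
# Hypothetical risk score: the model never sees race, but it does see features
# shaped by how heavily a person's neighborhood is policed.
def risk_score(prior_police_contacts, prior_arrests):
    # Toy weights, invented purely for illustration.
    return 0.1 * prior_police_contacts + 0.3 * prior_arrests

# Two people with the same underlying behavior (one prior arrest each), but one
# lives in a heavily patrolled neighborhood and has been stopped far more often,
# so more of their life shows up in the data.
lightly_policed = risk_score(prior_police_contacts=2, prior_arrests=1)    # 0.5
heavily_policed = risk_score(prior_police_contacts=12, prior_arrests=1)   # 1.5

print(lightly_policed, heavily_policed)  # the second person looks "riskier"
```

The gap between the two scores reflects surveillance, not behavior, which is how a "neutral" formula ends up reproducing the disparities in the data it was trained on.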

 

Narr: So that's one problem: the data is already skewed against the poor and people of color. Partially that's because the poor and people of color already have so much data collected on them, from applying for Section 8 housing or welfare, and of course because people of color are policed more. But there are other problems. For example, the basis of pattern recognition is the idea that the future will look exactly like the past. But what if it doesn't?

 

Joshua Kroll: Actually, in the statistics community people refer to this as the ecological fallacy: if you do an analysis of patterns that emerge out of a population, those patterns maybe aren't good for predicting the future behavior of an individual, even an individual within that population. And that's a problem you might imagine comes up in the context of these risk assessments.

But there's also a fundamental rights aspect here. In the U.S. we have a right to what's called due process. I'm not a lawyer or a constitutional scholar, but the way it's been explained to me, due process means there's an individualized decision about you as an individual, based on your circumstances and your case. And that's something that's guaranteed to everyone. Yet the risk assessment system is considering people based on population-level data.

 

 

SIRI INTERLUDE:

Aysha: Hey Siri, what's up?

Siri: I'm at work. My shift ends in six hundred fourteen thousand nine hundred seventy-seven years.

Aysha: Hey Siri, are you my friend?

Siri: I'm your assistant... and your friend, too.

Aysha: Hey Siri, what's your favorite color?

Siri: My favorite color is... well, it's sort of greenish, but with more dimensions.

Aysha: Hey Siri, do you have feelings?

Siri: I feel like doing a cartwheel sometimes.

 

Narr: That was our intern Aysha Choudhary talking to Siri, which is of course a quite sophisticated form of artificial intelligence. At the start of this episode we talked about how AI is already everywhere in our lives, so we wanted to give you some examples. And in fact the risk assessment, the type of AI decision we're talking about today, is also incredibly widespread. Here's Joshua Kroll again, from the University of California Berkeley School of Information.

Joshua Kroll: I mentioned that the use of risk assessments in criminal justice was recently mandated across California, in every jurisdiction. Many other states do it.

But they're used in other applications too. They're used to score people for credit products, for insurance products, for access to apartments, or to buy products. Often when you go into a store, or you go to an online store, you're being scored as a customer.

There have been some news stories recently about people who were denied the ability to buy things on Amazon.com because Amazon had predicted that they were likely to return those items.

 

Narr: An AI-created risk assessment is even used by Child Protective Services.

 

Joshua Kroll: So this is something that many CIS many.

Child Protective Services have done around the country and probably around the world.

Scoring the risk of a call on the theory that there aren’t enough agents to go out and investigate every tip. So in the past there was a screener a human screener who made a judgment about using their experience and professional knowledge of the situation to say this tip is likely to lead to a situation where we need to intervene and this other tip is likely to be a false alarm and using that so often what happens is a score is presented as a decision aid to the human screener who’s still there but then you have to ask how much is the poor human able to understand when the score is wrong when the score should be overridden when the score should be ignored.

And that’s difficult.

There's a phenomenon we're aware of called automation bias, where humans are susceptible to thinking that whatever a machine does, it was designed to do, so we should just believe it: the score must know better than I do, because I don't really know everything that's been taken into account. And so you kind of naturally defer to it, even sometimes when it's very obviously doing the wrong thing.

 

Narr: So okay, you could argue, what's the big deal? People's past behavior has always been used to determine their access to a loan. And maybe if you repeatedly don't return books to the library, eventually you don't get to take out more. But here's the real problem with these risk assessments: not only have we had very little time to think about their effects on our daily lives, we also haven't been very successful at fighting the decisions they make. Take the case of Eric Loomis. In 2013 he was sentenced to six years in prison based in part on a risk assessment produced by COMPAS, a tool used in courtrooms to decide how "high risk" a defendant might be, based on the data fed to it. Loomis challenged his sentencing. He wanted to know how the AI had reached its assessment.

 

Joshua Kroll: They had an expert come in and talk about the score, but it's hard to think about how the score is created, or how it is used, or how it should be used in this case, because in the Wisconsin case the county had purchased a score from a company that created it for the purpose of selling it to jurisdictions around the country. That product is called the Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS.

They insisted that the details of the score were their proprietary information and those details could not be given even to the county that had purchased the score.

And as a result it was very hard for Mr. Loomis or his attorneys, or for that matter the judge, to get access to the information. The result of this case was that the Wisconsin Supreme Court now requires every presentencing investigation to come with a disclaimer that says the score might be wrong, lists ways in which the score might be wrong, and notes that the score might be higher for people of color than for white people. Those things are all true, although we all see disclaimers on websites all the time and just click through them, and of course you could imagine that judges who see the same disclaimer at the top of every report get very used to just skipping over it.

Narr: One of the reasons we put this show together is that the kind of risk assessments used in courtrooms are becoming extremely popular. Partially that's because a computer can make a lot of decisions, about a lot of people, very quickly.

 

Joshua Kroll: So one of the things that pushes people toward the use of these systems is scope, and also speed. If decisions about who gets a credit card had to be mailed off to a center somewhere, and lots of people had to work on them for, you know, even an hour each, it would take much longer to get a decision. These days you can apply for a credit card and get a decision in 60 seconds, which is probably itself a lie, right? The 60 seconds is probably just there to make you feel like the system is thinking hard about your application, when in fact computing the score might take, I don't know, fractions of a second.

I think it will remain a tool for people for a long time. To the extent that we are at an inflection point, it's an inflection point in a broad set of questions about how society is organized, what the relationship is between, say, labor and capital, and how we believe the returns from this new technology should be distributed.

 

Narr: And here's the thing: Joshua Kroll isn't against risk assessments.

 

Joshua Kroll: No, I think there are many benefits. Take these risk scores in the criminal justice system, for example. You could turn that application around; many of these applications do in fact refer to themselves as risk-needs assessments. So you could imagine assessing people's needs, not just their risk: saying, based on the data that we have, these inmates would be more likely to do better if given access to a drug rehabilitation program, or to job training or job placement assistance, or housing assistance. And in that same setting there has been interesting work showing that, for example, a lot of failure to appear for trial is due to people's life circumstances. In cities, reimbursing people's public transit fare can massively improve their appearance rates, and giving access to some kind of childcare can improve appearance rates.

People are not skipping trial because they are deadbeats. They're not showing up because they have some legitimate problem they need to solve in order to be able to show up. And if we can use this technology to help people, then that seems like a better application of it to me.

 

Narr: And he's not against artificial intelligence as a tool.

 

Joshua Kroll: Right. I certainly appreciate being able to use Google to find access to what is functionally all human knowledge; any question you have in your head, you can get an answer in a few seconds by typing the right phrase into a little box on your phone or your computer. We do have better, cheaper, more reliable air travel than we've ever had, and travel of all kinds. But if the pilot came on and said, you know, we were afraid of this problem, so we've disabled this system that has been problematic, and we also disabled our GPS, and we disabled everything, and we're going to use a compass and a stopwatch to navigate, well, you're trying to go to New York and you end up in Cleveland. Sometimes that will happen. There are real benefits to technology. It's just a question of deciding where the benefits go, who reaps the benefits and who pays the costs, and whether you can avoid creating new technologies in a way that imposes major costs on individuals or on society thought of more broadly.

 

Narr: But the way they're being used in courtrooms does disturb Kroll. And that's something he wants us to think about: how a tool like artificial intelligence is being used, and what we're ignoring when we decide ease and speed are the important values.

 

Joshua Kroll: We wouldn't ever buy a full-fledged manual of court policy from some third-party corporate vendor. We go through a public process, with hearings and meetings where people come and express their opinions, and that's how we discursively come up with the right policy, through a process that's open. To a computer scientist, transparency often means: hey, let's show the formula, let's show the code, let's show the data, and then people can play with it, do interesting investigations, and discover what's good and what's bad. Transparency in government often means that you have the right to show up at a meeting and express your opinion, and that, because you could have objected, you are part of determining what the rules should be. That helps you determine that you've actually captured the right set of rules, as opposed to some other set.

I think it's a problem that by outsourcing these policy decisions away from local governments, or even state or national governments, we are putting them in a place where we can't see, where we have no visibility into what those decisions are, and those decisions are just getting made on the desk of some data scientist somewhere who is not accountable to anybody.

I do worry that when we bake decision-making power into technology, it lasts a long time, because of automation bias, and because by putting decision-making power into the structure of a problem or a situation, we take it away from humans. Humans, you can change their minds, you can change their experiences, you can teach them why they were wrong.

I was just thinking of A Christmas Carol and Ebenezer Scrooge, right? He goes through this life-changing experience and comes to the realization that everything he's done for many years is wrong-headed and he should behave very differently.

When you put that decision-making power into the structure of the problem, it lasts a lot longer and it creates a much stronger effect. If you were to change the technology, it would take much longer for the rest of society to restructure itself around that. And when you do that, you run the risk of ossifying the unfairness in society today into the technology that we will be using tomorrow. I started out by saying that when you use machine learning to find patterns, those patterns are useful insofar as the future looks a lot like the past.

Well, if we don't want the future to look like the past, then maybe we should do something about that.

 

Narr: That was Joshua Kroll from the UC Berkeley School of Information. He's not the only one with concerns about the future; so does a group of people in Los Angeles, who we'll hear about next. The problem with big data systems is that people often feel powerless. There's so much information being gathered all the time, and these corporations are large and powerful, so what chance do people have of stopping AI?

 

Jamie Garcia: So my name is Jamie. I'm with the Stop LAPD Spying Coalition.

 

Narr: We wanted to end the show with our talk with Jamie Garcia, because the Stop LAPD Spying Coalition recently had a big win in Los Angeles.

 

Jamie Garcia: We were able to see the Office of Inspector General basically audit two of LAPD's data-driven policing programs, also known as predictive policing programs. They were Operation LASER, which stands for Los Angeles Strategic Extraction and Restoration, and PredPol. The Los Angeles Police Department actually, on its own, terminated the LASER program.

 

Narr: Okay, let's back up a little bit here. We haven't talked much about predictive policing, but it's an AI tool becoming popular across the nation. Here's how it works:

 

Jamie Garcia: So for Operation LASER, they used two different programs to determine what places they were going to criminalize and who they were going to criminalize. For the place-based component, they used a kind of density-mapping program, ArcGIS, to create hotspots where they claimed that gun and gang violence was going to occur. And they also used a risk assessment to determine who they were going to criminalize. This risk assessment included five different factors that basically give people a certain number of points, and the person with the most points is effectively the person who gets targeted. And real quick, I want to list them, because it's really important for us to understand what starts to get used in these risk assessments. Five points for being on parole or probation, so being formerly incarcerated gets you five points. Five points for being identified as a gang member, and we all know, from the CalGang database audit, how problematic gang identification actually is. Five points for having a violent crime on your rap sheet. Five points for an arrest with a handgun. And one point for every interaction you've had with law enforcement. So effectively they add up all those points, and that's the risk assessment. The person with the most points is the one who's supposed to be targeted by LAPD. They create this most-wanted poster of you, give it to line officers, and say, go out and find this person.
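Based on the factors Garcia lists, the chronic-offender score is simple arithmetic. Here is a sketch of that point system in Python; the function and field names are our own shorthand, not LAPD's.

```python
# Point system as described above: four five-point factors plus one point per
# recorded interaction with law enforcement.
def chronic_offender_score(on_parole_or_probation, identified_as_gang_member,
                           violent_crime_on_record, arrest_with_handgun,
                           police_interactions):
    score = 0
    score += 5 if on_parole_or_probation else 0
    score += 5 if identified_as_gang_member else 0
    score += 5 if violent_crime_on_record else 0
    score += 5 if arrest_with_handgun else 0
    score += police_interactions  # one point for every stop or contact
    return score

# No convictions, no gang label -- just frequently stopped by police:
print(chronic_offender_score(False, False, False, False, police_interactions=15))  # 15
# One violent conviction on record, rarely stopped:
print(chronic_offender_score(False, False, True, False, police_interactions=2))    # 7
```

Note that the one-point-per-contact term means a person can top the list purely because of how often they are stopped, with no conviction at all.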

 

Narr: So the police go after someone before they’ve actually committed a crime?

 

Jamie Garcia: Yes. And they are instructed to find them, to find out what they're doing, to see if they can be violated, and to violate them.

 

Narr: The Stop LAPD Spying Coalition knew that the LAPD had a predictive policing program, and they knew it was part of a larger system of surveillance they wanted to fight. So they began organizing against it in 2013. They talked to community members:

 

Jamie Garcia: We had a group of about 10 folks, a majority of them from Skid Row, all in a room in our office, and we sat around and started talking about the little bit that we knew about predictive policing. And from there the conversation just started to blossom, and words like pseudoscience, crystal-ball policing, the same old policing with a new tactic, these are the terms that generated themselves from these conversations, where it became very clear to the community in Skid Row that predictive policing was just a disguise for the same type of targeted, racist policing that has always existed.

 

Narr: They also got together with all kinds of community groups in affected areas:

 

Jamie Garcia: Specifically, groups in East L.A.: the autonomous group out in Boyle Heights known as the Ovas, a young feminist autonomous group. We coordinated not only teach-ins for the community, we actually did bike rides through the different zones that Operation LASER was targeting, and the zones they were targeting were like hotspots, right? So we were actually bike riding through these hotspots and making them known to community members.

 

Narr: All the while, they wrote reports and pushed the Office of the Inspector General to audit the AI programs.

Jamie Garcia: So all that activity started happening at the end of the year. And by the time we get to 2019, so much of the community is aware of this program, and so much power has been built, that when the Office of Inspector General released its audit, it became very apparent to LAPD that they had to do something, and they had to do it now. And that effectively meant they ended the LASER program, which is a sheer win for the community, a win across the board for the community.

 

Narr: That, she says, is how you fight these systems: through education that can help people understand the power they have.

 

Jamie Garcia: Well, you know, we have a saying in the coalition. We talk about building power, not paranoia.

And so, in building power not paranoia, we are holding digital security trainings now, talking with people especially about what it means to be on social media. What does it mean to share information? How much information are we sharing? How is that being used against us? How do we protect ourselves? That extends even to speaking with our community folks down in Skid Row and throughout the city who are on general relief.

You know, we've always been in this power dynamic where the state has this immense amount of power, but the state has never been able to crush people power. And it's exactly people power that made this happen.

 

Narr: That was Jamie Garcia from the Stop LAPD Spying Coalition, talking about the end of the LASER program in L.A., a kind of predictive policing program. Before that we heard from Joshua Kroll, from the University of California Berkeley School of Information. And that does it for today's show.
