Edspresso episode 3: the ethical implications of AI decision-making, with Lyria Bennett Moses

Image: Lyria Bennett Moses

Lyria Bennett Moses considers the risks of outsourcing decision making to smart machines and the role of education in mitigating these risks.

Should AI algorithms decide who gets bail or a bank loan? How is big data changing our world? And how can education prepare young people to design and challenge smart technology?

To find out, we talk to Lyria Bennett Moses, a Professor and Director of the Allens Hub for Technology, Law and Innovation at the University of NSW.

Lyria has written extensively about the dangers of AI bias in the legal system, and why it's crucial for everyone to understand how smart machines are impacting on our society.

Lyria is the author of our occasional paper - Helping future citizens navigate an automated, datafied world.

Credits: Recording and production by Jennifer Macey (Audiocraft), editing by Andy Maher, and voiceovers by Sally Kohlmayer. The views expressed in Edspressos are those of the interviewees and do not necessarily represent the views of the NSW Department of Education.


Transcript for Episode 3: The ethical implications of AI decision-making

Voice-over:

Welcome to the Edspresso series from the New South Wales Department of Education. These short podcasts are part of the Education for a Changing World initiative. Join us as we speak to a range of experts about how emerging technologies such as artificial intelligence are likely to change the world around us and what this might mean for education.

Should AI algorithms decide who gets bail, or a bank loan? How can education prepare young people to design and challenge smart technology? To find out we asked Lyria Bennett Moses, a professor and director of the Allens Hub for Technology, Law and Innovation at the University of New South Wales. Lyria has written extensively about the dangers of AI bias in the legal system - and why it's crucial for everyone to understand how smart machines are impacting on our society.

Lyria Bennett Moses:

I am Lyria Bennett Moses, I am director of the Allens Hub for Technology, Law and Innovation, here at UNSW Sydney. I'm also a professor in the Faculty of Law.

There are a lot of people asking how artificial intelligence technologies are currently impacting on the law and on our society. And in some ways, I think it's an overly simplistic question that's being asked. Artificial intelligence covers a whole range of very different kinds of technologies, and the two that are perhaps the biggest, the ones currently in use today, are what I would call pre-programmed logics and machine learning.

Pre-programmed logic is classic coding, where you say, do this, do that; if this, then that. Machine learning is essentially data-driven inferencing. You have a large amount of data, you feed that data through a process, that process is trying to optimise for something, and you essentially train a system to perform in a particular way, based on the evaluation of its performance on that historic data set.
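To make that distinction concrete, here is a minimal Python sketch. It is an illustration only - the loan scenario, names, figures and thresholds are all invented - but the first function shows hand-written, pre-programmed logic, while the second trains a simple model on historic data in the way described above.

```python
# Two approaches under the "AI" umbrella. Everything here is hypothetical:
# the loan scenario, figures and thresholds are invented for illustration.

# 1. Pre-programmed logic: a human writes the rules explicitly.
def rule_based_loan_decision(income: float, existing_debt: float) -> str:
    if income > 50_000 and existing_debt < 10_000:
        return "approve"
    return "decline"

# 2. Machine learning: the rules are inferred from historic data.
from sklearn.linear_model import LogisticRegression

# Historic records: [income, existing_debt] and whether the loan was repaid.
X_train = [[60_000, 5_000], [20_000, 15_000], [45_000, 2_000], [30_000, 20_000]]
y_train = [1, 0, 1, 0]  # 1 = repaid, 0 = defaulted

# Training optimises the model's fit to that historic data set.
model = LogisticRegression().fit(X_train, y_train)

print(rule_based_loan_decision(40_000, 8_000))  # rule says "decline"
print(model.predict([[40_000, 8_000]]))         # the model's data-driven answer
```

The key difference: in the first case a person can point to the exact rule behind a decision; in the second, the "rule" is a pattern extracted from whatever the historic data happened to contain.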

So, if you think about those two things as being under the umbrella of artificial intelligence, they clearly are having an important impact both on the law and on society more broadly. But the kinds of impacts they're having are quite different. If you want to think about some of the risks of outsourcing decision making to smart machines, it's good to think about a particular example. I'm going to use an example I know from law, and this is the COMPAS system that's being used as a risk assessment tool in decisions like how people serve their sentence, whether they get parole, and so forth.

This is the kind of tool that is being driven by data, rather than a pre-programmed logic. So, it is saying, here's all the data in the past about who has committed different kinds of crimes or reoffended and so forth - let's try to spot patterns in that data and let's use the correlations that we've identified to create a score for a particular defendant currently going through the system. So, what are the problems with doing something like that?

Well, the first problem, and this has been detected by an organisation called ProPublica, is that you can introduce historic racism into the inferencing. The tool, for example, has been found to have a higher false positive rate - that's a higher rate of flagging people as high risk when they do not go on to reoffend - if you are African American in the US compared to if you are white. So, we as the programmers, we as a society, can decide what we want these tools to optimise for.
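As a toy illustration of the disparity ProPublica measured, the sketch below uses entirely invented records to compute a false positive rate separately for two hypothetical groups - the share of people who did not reoffend but were nonetheless flagged as high risk.

```python
# Invented records, for illustration only: each has a group label, the tool's
# risk flag, and whether the person actually went on to reoffend.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": False, "reoffended": False},
]

def false_positive_rate(subset):
    # False positive: flagged high risk, but did NOT go on to reoffend.
    non_reoffenders = [r for r in subset if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(group, false_positive_rate(subset))
# A 0.5  (1 of 2 non-reoffenders wrongly flagged)
# B 0.0  (0 of 2 non-reoffenders wrongly flagged)
```

A tool can look accurate overall while making this kind of error far more often for one group than another, which is exactly why the choice of what to optimise for matters.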

Now, if you're thinking about questions like this, and you're thinking about how you go about outsourcing decisions to machines, the important thing is to make sure that what the machine is doing is appropriate for the context in which it is making decisions. Do we want to use data-based inferencing in deciding how an individual will be treated in the criminal justice system? My argument would be, no, we don't. It's simply the wrong tool for the task to which it is being assigned.

I think what education needs to do is equip young people to manage the fact that these kinds of tools are available. Young people will go into many different careers, and some of them will become the people who actually work on programming these tools. They will be the data scientists of the future, and those people will do data science degrees at university and, hopefully, learn how to do it properly. The concern, I suppose, is: what about everybody else?

Because even if you don't go on to university to become a data scientist, you're going to be working in many contexts - I've given the example of law, but I think this holds across society - where these tools are going to be used somewhere in the process. And what everybody needs, no matter what their job is, is to understand enough about how those tools work to be able to use them both effectively and appropriately in whatever context they go on to work in. That is true both of data-driven tools and of pre-programmed logics. But data-driven tools are, I suppose, a particular problem.

So, let me go back to an example from when I was in school. We all learned how to read a graph, and graphs were the way information was presented to us - in newspapers, in books, in articles we might read - irrespective of what field of work we went into. We would need to know when people were misleading us, and it was at school, because you need it for the whole citizenry, that everybody learned how to read a graph. Now, we're moving into much more complicated systems. If you take machine learning as an example, what is happening is that information is being presented to us as an inference drawn from large amounts of varied, complex data.

And those inferences are being presented to us, and people in a magic room called data science are coming out and saying, that means X. But what we need to do is teach students to go behind that X, in the same way as we've always taught students to go behind the information a graph is trying to present. So, we don't need every student to have a three-year data science degree, but we do need every student to have enough understanding of what is going on behind the curtain that they can ask: when do I want to use this kind of tool? When is it useful? And when might it not be the appropriate tool?

What might be called thinking skills - critical and creative thinking, computational thinking - are important in this increasingly automated world. Ultimately, what students are going to confront is always growing in complexity over time. We therefore need students to be able to think critically about the ways information is presented to them, because the machinery behind those inferences, behind where that information comes from, is a lot more complicated.

They need to understand computational thinking. I've already talked a little bit about machine learning and data-driven decision making, but a lot of these so-called AI tools are in fact based on a very different logic: the pre-programmed coding logic. So, do this, do that; if that, do X; if this, do Y - that is what would be called coding. Now, students can learn computational thinking through learning coding, and they can learn it through studying mathematics; it doesn't need to be in the context of coding.

But students do need to understand how that kind of process works, either because they're going to create it themselves or because, inevitably, they're going to work alongside it. They will need to understand that anything that is pre-coded will have particular kinds of limitations.
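A minimal sketch of that pre-programmed logic, with invented categories, shows the limitation being described: the rules cover only the cases their author anticipated, and everything else falls through to a default.

```python
# Hypothetical example: routing customer enquiries with hand-written rules.
def triage_enquiry(topic: str) -> str:
    if topic == "billing":
        return "route to accounts team"
    elif topic == "fault":
        return "route to technical support"
    else:
        # Any case the programmer never anticipated ends up here.
        return "route to human operator"

print(triage_enquiry("billing"))    # handled by an explicit rule
print(triage_enquiry("complaint"))  # outside the pre-coded logic
```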

I think there's a really important question about what skills young people should focus on, if you like, in an AI-augmented world, and whether people should focus more on STEM subjects. If we want our children to navigate the world of the future, they're going to need a whole range of really complex skills. They need mathematics, and I also think they need to understand philosophy, for example, as well as what's now called HSIE in the school system.

So, let me explain what I mean by all of that in a particular context. Let's look at something like understanding what happens when you do a search on Google. You need to understand something about data, and how historic data can be turned into inferences that drive what a search engine like Google delivers when it gives you results.

But I think it's important, as citizens, that we know that when I do a search and someone else does the same search - I'm female and white collar with particular interests; that person might be male, a child, with different interests - we will not get the same answer. If citizens aren't aware that information is being fed to them in a very tailored way, then we cease to have conversations as citizens. So, we need to understand data, and we need to understand something about the kinds of logic that Google is using.
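As a toy model of that tailoring - all profiles and scores invented, and far simpler than any real search engine - the sketch below mixes a general relevance score with a per-user boost inferred from historic data, so two users issuing the same query see different orderings.

```python
# General relevance of each result for the same query (invented scores).
results = {"article_on_tax_law": 0.6, "cartoon_video": 0.6}

# Per-user boosts, standing in for inferences from each user's history.
user_profiles = {
    "adult_professional": {"article_on_tax_law": 0.3, "cartoon_video": 0.0},
    "child":              {"article_on_tax_law": 0.0, "cartoon_video": 0.3},
}

def ranked_results(user: str):
    boost = user_profiles[user]
    return sorted(results,
                  key=lambda r: results[r] + boost.get(r, 0.0),
                  reverse=True)

print(ranked_results("adult_professional"))  # tax article ranked first
print(ranked_results("child"))               # cartoon ranked first
```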

I'm not suggesting that every school child needs to read Google's algorithm - I'm not sure every employee at Google has done that - but I do think students need to be aware of the kinds of logics that occur in a system like that. And that does require, if you like, a STEM understanding of how machine learning works.

I think you also need to understand something about HSIE - human society and its environment. We need to understand how, for example, politicians might leverage the fact that particular audiences would have read one thing and not another to try to influence an election.

Now, that is something you're not going to learn simply by looking at the mathematics and computational thinking that underlie the algorithm itself. That's something that comes from understanding history, from understanding our political system, from understanding the way that democracy works.

So, I think it's also important for citizens to be able to do their own thinking about what their responsibilities are as citizens, in terms of the information on which they base important decisions - like who to vote for in an election.

Working out how I can make myself an informed citizen might require an understanding of the logics of the system. But the fact that I need to think about doing so might come from philosophy, and a contextual understanding of how the results might be manipulated, and the implications of that, would come from HSIE.

So, what students need is all of these subjects, but also - and here's where I think something can be added - they need to be able to think about them at the same time.

If I could go back in time and give advice to myself as a school student, I think what I'd say to myself is, "Embrace the opportunity to learn, to explore, and to discover." Because I think that is the beauty of school education.

I'd probably tell myself not to sweat all of the assessments, that if you do embrace all of that learning, if you're interested and you explore what keeps you enthusiastic, then you will do well in the exams. But if you focus on the exams themselves you'll lose something.

Now, would I give different advice to today's students? I don't think so. What I would tell students - indeed, what I tell my own children - is that school is a wonderful opportunity: embrace what you're interested in, keep thinking, keep exploring, think critically, think outside the curriculum, whether that's through activities you do afterwards, the conversations you have at lunchtime, or the conversations you have with your teachers. Because the more you think about the world around you, the better off you will be - both as a citizen and, of course, as a member of the workforce.

Voice-over:

Thank you for listening to this episode of the Edspresso series. You can find out more about the Education for a Changing World initiative via the New South Wales Department of Education's website. There you can sign up to our mailing list or join our conversation on Twitter @Education2040.
