Algorithmic Injustice

As algorithms are increasingly used to automate decisions, it is not enough for them to produce correct results. They must also be fair.

RAF officer and a Red Cross female ambulance driver in a blindfold race. Rang du Fliers, 1918 | Imperial War Museum | Public domain

Artificial intelligence has made it possible to automate decisions that human beings had been responsible for until now. Although many of these decisions are in the fields of entertainment and social media, automated decisions are also used in finance, education, the labour market, insurance companies, medicine, and justice. This phenomenon has far-reaching social consequences and raises all sorts of questions, from “what will happen to the people whose job it was to make these decisions?” to “how can we guarantee that these algorithms make fair decisions?”

Mary Bollender is a financially struggling single mother from Las Vegas. One morning in 2014, when her ten-year-old daughter had a fever that would not go away, Mary decided to drive her to the emergency room. But when they got in the car, she found that it refused to start. The problem was not engine trouble or an empty tank: the bank had remotely activated a device that prevented her car from starting because her monthly car repayment was three days overdue. As soon as she paid, she would be able to get back on the road. Unfortunately, she did not have the money that morning.

As our society continues to become more complex, with data digitalisation expanding on a massive scale, cases like Mary Bollender’s will become increasingly common: automated decisions based on the systematic collection of personal data with potentially negative effects on our lives. Algorithms control which of our friends’ posts we see first on our Facebook news feed and recommend films we might want to watch on Netflix, but algorithms also decide whether a bank will give us a loan, whether someone awaiting trial can be released on bond, whether we deserve a postdoctoral grant, and whether we are worthy of being hired by a company.

These computer programs or algorithms are not produced as a result of human programmers analysing and breaking down a problem and then feeding precise instructions into a computer. They are the fruit of complex mathematical operations carried out automatically that look for patterns in an ocean of digitised data. They are not like a recipe that sets out a list of ingredients and then tells us what to do with them step by step. It is more a case of “open the fridge, see what’s in there, rummage around the kitchen to find some pots and pans you can use, and then make me dinner for six.” These types of algorithms that are not explicitly designed by a programmer come under the concept of “machine learning”.

Machine learning algorithms are currently used to process data and determine whether a person will be able to repay a loan. The person who programs the algorithm compiles a database of people who have asked for loans in the past, including all kinds of data: the applicant’s sex and age, whether or not they repaid the full amount, whether and how often they were late with repayments, their average wage, how much tax they paid, in what city and neighbourhood they live, and so on. The algorithm applies a series of statistical formulas to all this data and then generates patterns that estimate the probability that a new potential client will repay a loan. Reliability is usually the only criterion used to develop these algorithms. Is the program accurate enough to replace a human being? If the answer is yes, it gets a green light.
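To make the idea concrete, here is a minimal sketch in Python of the kind of loan-scoring model described above. The features, the figures and the choice of scikit-learn’s logistic regression are assumptions made purely for illustration, not a description of any real bank’s system.

```python
# Hypothetical loan-scoring sketch: historical applicants in, a probability out.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a past applicant: [age, monthly wage, late payments, years employed]
X_history = np.array([
    [34, 2100, 0, 8],
    [22,  950, 3, 1],
    [45, 3200, 0, 15],
    [29, 1200, 2, 2],
    [51, 2800, 1, 20],
    [38, 1000, 4, 3],
])
# 1 = repaid the loan in full, 0 = defaulted
y_history = np.array([1, 0, 1, 0, 1, 0])

# "Learning" here is nothing more than fitting statistical patterns to the records.
model = LogisticRegression(max_iter=1000)
model.fit(X_history, y_history)

# A new applicant is reduced to the same string of numbers...
new_applicant = np.array([[31, 1100, 1, 4]])

# ...and the model returns a probability of repayment, not a justification.
prob_repay = model.predict_proba(new_applicant)[0, 1]
print(f"Estimated probability of repayment: {prob_repay:.2f}")
```

If that probability clears whatever threshold the bank has chosen, the loan is approved; the only question asked of the model along the way is whether it is accurate enough.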

What's an algorithm? | David J. Malan | TED-Ed

It is not easy to determine reliability. Developing a truly reliable algorithm is partly a science and partly an art. When we think about software programs or artificial intelligence, we inevitably tend to anthropomorphise and imagine some kind of entity that follows a mental process similar to our own. But that is not how it works. An automated algorithm does not analyse the examples that we give it and then try to establish some kind of reasonable causal link between the data and the end result. The algorithm knows nothing about gender, age, financial conditions, unemployment, etc. It simply has a string of numbers and tries to find patterns that allow it to come up with the correct result as often as possible.

And this is where the problem comes in. A traditional program developed by a human being follows a certain logic, so it is possible to understand what it is doing. An automated algorithm is like a black box. We give it a certain input (the personal details of the person applying for a loan) and it gives us an output (the probability of that person repaying the loan). It is very difficult – or virtually impossible – to know why the program decided to reject or accept a particular loan application.
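One way to see what this black box character means in practice is to open the model up and look. In the hypothetical sketch below (synthetic data, and a scikit-learn random forest chosen only as an example), everything we can inspect is a bare list of numbers; nothing in it explains why a particular application was scored the way it was.

```python
# Hypothetical illustration of the black box: inspecting the model yields
# numbers, not reasons. Data and model choice are assumptions for this sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                                # 200 past applicants, 4 numeric features
y = (X[:, 1] + 0.5 * rng.normal(size=200) > 0).astype(int)   # synthetic "repaid or not" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

applicant = rng.normal(size=(1, 4))                          # a new application, reduced to four numbers
print("Output:", model.predict_proba(applicant)[0, 1])       # a single probability
print("What we can inspect:", model.feature_importances_)    # four global weights
# Neither of these tells us why this particular application got this score.
```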

In the 1980s, the United States army commissioned scientists to develop an automatic image recognition program that could detect camouflaged tanks in real time. The scientists asked the military officers to provide numerous photos of different sites organised in pairs: one showing the site without a tank and the other showing the same site with a tank camouflaged among the greenery. The idea was to develop an algorithm that would come up with a series of criteria for detecting tanks. The program worked remarkably well.

It was one hundred percent accurate. In fact, it detected tanks that were so well camouflaged that a human being would not have noticed them. The researchers were so surprised that they decided to analyse what criteria the algorithm was using. After an in-depth study of the photographs and the algorithm, they realised that the program did not actually recognise tanks or anything remotely similar. It turned out that the army had taken the photographs of the sites without a tank at midday, while the photographs with a camouflaged tank had been taken at six in the evening. So, in order to decide whether or not there was a tank in the image, all the algorithm had to do was look at the position of the sun.
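The tank story is easy to reproduce in miniature. In the toy sketch below (an assumption-laden reconstruction, with tiny synthetic arrays standing in for photographs and overall brightness standing in for the time of day), the classifier scores perfectly as long as lighting and tanks go together, and collapses as soon as they do not.

```python
# Toy reconstruction of the tank anecdote: the classifier learns the lighting,
# not the tank. All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def fake_photo(has_tank, bright):
    # An 8x8 "photograph": overall brightness stands in for the time of day;
    # the "camouflaged tank" is a faint darker patch, almost lost in the noise.
    img = rng.normal(loc=0.8 if bright else 0.2, scale=0.05, size=(8, 8))
    if has_tank:
        img[3:5, 3:5] -= 0.02
    return img.ravel()

# Training photos with the confound: every tank photo is dark (late in the day),
# every empty photo is bright (midday).
labels = np.array([0, 1] * 100)
X_train = np.array([fake_photo(t, bright=(t == 0)) for t in labels])
clf = LogisticRegression(max_iter=1000).fit(X_train, labels)
print("Accuracy when lighting tracks the tanks:", clf.score(X_train, labels))  # ~1.0

# New photos where the lighting is flipped: the "tank detector" falls apart,
# because all it ever learned was the position of the sun.
X_test = np.array([fake_photo(t, bright=(t == 1)) for t in labels])
print("Accuracy when lighting is flipped:", clf.score(X_test, labels))         # ~0.0
```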

A Waymo self-driving car | Grendelkhan, Wikimedia Commons | CC BY-SA 4.0

Another example is self-driving cars. We like to imagine that cars driven by algorithms have some idea of what a road, a traffic light, a zebra crossing, a cyclist or another vehicle means. But they are really just sophisticated versions of the tank-detecting algorithm. Their learning is highly contextualised and depends wholly on the response of the environment in which the algorithms are trained. And because they are black boxes, we will never know for sure how a self-driving car will react in a context that is significantly different to the one in which the algorithm was originally trained.

With enough training in very diverse contexts we can come up with algorithms that are truly robust and reliable. But there is a still more insidious problem: justice, or fairness. Given that we are talking about programs that recognise contextual patterns in a finite data set, and are not based on actual knowledge of the environment, no algorithm would ever consider reactivating Mary Bollender’s car so that she could drive her daughter to the hospital. It just knows who meets repayments and who doesn’t. Poor neighbourhoods have a much higher debt loss ratio. A large percentage of single mothers tend to make mortgage and loan repayments late. An automated algorithm would no doubt reject a loan application by a single mother in that poor neighbourhood. The decision would no doubt be statistically correct. But would it be fair? Do we want to live in a world in which crucial life decisions are based on context-dependent statistical patterns?
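One place to start making that question concrete is to measure how the decisions fall across groups, for example by comparing approval rates, a criterion sometimes called demographic parity. The figures below are entirely hypothetical; the point is only that a model can be statistically accurate and still treat groups very differently, and that no statistic by itself settles which differences are fair.

```python
# Hypothetical check of how automated loan decisions fall across two groups.
import numpy as np

# 1 = loan approved by the algorithm, 0 = rejected (invented figures)
decisions = np.array([1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0])
# Hypothetical sensitive attribute: 0 = wealthy neighbourhood, 1 = poor neighbourhood
group = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

rate_wealthy = decisions[group == 0].mean()
rate_poor = decisions[group == 1].mean()

print(f"Approval rate, wealthy neighbourhood: {rate_wealthy:.2f}")
print(f"Approval rate, poor neighbourhood:    {rate_poor:.2f}")
print(f"Demographic parity gap: {abs(rate_wealthy - rate_poor):.2f}")
# The gap is a number we can compute; whether it is acceptable is not a
# question the data can answer on its own.
```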

The machine learning algorithms that are being developed now need the humanities. We have to create infrastructures for cooperation between software engineers and humanists. We need to develop a common language that allows sociologists, anthropologists, philosophers and artists to understand the basic functioning of this new kind of software, and computer engineers to think about how to embed ethics, peaceful coexistence, justice, and solidarity into the new software. Perhaps one day we will manage to develop the artificial superintelligence that Elon Musk is so keen on. But for now we should be much more worried about how existing programs can amplify the racist, xenophobic and sexist biases that exist in our society.
