Bias in a Feedback Loop: Fuelling Algorithmic Injustice

To prevent machine learning algorithms from perpetuating social inequalities, a public debate is needed on which problems should be automated.

Loop the Loop, Coney Island, N.Y. Unknown author, 1905 | Library of Congress | Public Domain

The problem with algorithms based on machine learning is that if these automated systems are fed examples of biased justice, they end up perpetuating those same biases. For staunch defenders of technology, this could be solved with yet more algorithms that detect and eliminate biases automatically. But we must take into account, first, that technology is not neutral but a tool in human hands and, second, that using a biased system to calculate probabilities always yields a biased result, which is then applied in the world and creates further inequalities, generating a deeply problematic feedback loop. We therefore need to open a debate, grounded in fundamental human rights and freedoms, about which decisions can legitimately be extracted from data.

Technological solutionism holds that most of today’s social and political problems are the result of human inefficiency, and that only a solid injection of digital technologies can resolve them. High levels of poverty in neighbourhood X in Mumbai? Give its residents mobile phones, internet connections and a blockchain protocol and, as if by magic, entrepreneurs will spring up everywhere and prosperity will return to the city. People losing faith in the justice system because they see judges taking biased decisions every day, whether through political pressure or ideological conditioning? Have judges decide on the basis of machine learning algorithms and injustice will disappear from the face of the Earth.

Algorithmic injustice

In a previous post I discussed the problems involved in using machine learning algorithms. In short, if these automated systems are fed examples of biased justice, they end up reproducing and reinforcing those biases. If, in a country such as the United States, people of African ancestry are far more likely to end up in prison without bail, and we train a neural network on those data, the algorithm ends up replicating the same kinds of bias.
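
To make the claim concrete, here is a minimal, entirely synthetic sketch, with invented feature names and scikit-learn’s LogisticRegression standing in for the neural network mentioned above: a classifier trained on biased bail decisions simply learns to reproduce them.

```python
# Toy illustration: a model trained on biased bail decisions reproduces the bias.
# All data are synthetic and the variable names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)     # 0 = group A, 1 = group B
risk = rng.normal(0, 1, n)        # underlying "risk", same distribution for both groups

# Biased historical decisions: group B is denied bail far more often
# than group A at the same level of underlying risk.
denied_bail = (risk + 1.5 * group + rng.normal(0, 0.5, n)) > 1.0

X = np.column_stack([group, risk])
model = LogisticRegression().fit(X, denied_bail)
pred = model.predict(X)

print("Predicted denial rate, group A:", pred[group == 0].mean())
print("Predicted denial rate, group B:", pred[group == 1].mean())
# The trained model denies bail to group B at a much higher rate, even though
# the underlying risk distribution is identical for both groups.
```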

Defenders of technological solutionism counter such arguments as follows: the biases that lead to injustice, such as discriminating against certain races in a courtroom, were not produced by any machine; they are the result of human action. Algorithms are like knives: neither good nor bad, neither just nor unjust. Just or unjust is the person who applies them. In the worst case, algorithms will merely preserve the injustice that already exists as a result of human actions. And the solution to possibly unjust algorithms is even more algorithms, ones that detect and eliminate inequalities and biases automatically.
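
In practice, such automated “bias detectors” typically amount to a fairness metric computed over a system’s outputs. A minimal sketch, with illustrative names and numbers (the 0.8 “four-fifths” threshold is a common rule of thumb, not something proposed in this article), might look like this:

```python
# Sketch of a simple fairness check: the demographic-parity ratio between two groups.
import numpy as np

def demographic_parity_ratio(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of favourable-decision rates between group 1 and group 0.

    Values far below 1.0 (for instance below the informal 0.8 "four-fifths"
    threshold) indicate that group 1 is being disadvantaged.
    """
    rate_group_0 = decisions[group == 0].mean()
    rate_group_1 = decisions[group == 1].mean()
    return rate_group_1 / rate_group_0

# Example: 30% of group 0 receive a favourable decision, but only 12% of group 1 do.
decisions = np.array([1] * 30 + [0] * 70 + [1] * 12 + [0] * 88)
group = np.array([0] * 100 + [1] * 100)
print(demographic_parity_ratio(decisions, group))  # 0.4: a large disparity
```

Note that a metric of this kind can flag a disparity, but it cannot by itself decide which disparities are unjust or what should be done about them.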

Almost unanimously, defenders of technological solutionism end their statements with a request to be left to work in peace: the general public does not understand how artificial intelligence works, and people are led astray by the sensationalist press. Only experts should decide when a given algorithm should be applied and when it should not.

I will not go here into the implications of questions such as justice ending up exclusively in the hands of entrepreneurial engineers. I simply want to show that the response of the technological solutionists is basically wrong.

The Era of Blind Faith in Big Data Must End - Cathy O’Neil | TED Talk

Supposed technological neutrality

First of all, the supposed neutrality of technologies is a simplification. Any technology is designed; in other words, it has been produced for a purpose. And although some purposes may be neutral, most have an ethical dimension. Knives in the abstract do not exist. There are many types of knives, and each type is designed with a specific purpose in mind. A scalpel is designed to be used in an operating theatre. Somebody may of course use that scalpel to kill, but it was not designed for that. The guillotine of the French Revolution was designed with a very specific mission: chopping off human heads. It is possible to imagine a “positive” use for the guillotine, perhaps cutting watermelons in half, but it would clearly be no more than a rhetorical exercise to show the supposed neutrality of something that is anything but neutral.

Likewise, the person or people who programmed Volkswagen’s software so that certain diesel models appeared to pollute less than they actually did were designing an algorithm with the very clear purpose of deceiving and defrauding the public. Neutrality is conspicuous by its absence.

Algorithms in a biased context

But the most problematic part of the argument is the assumption that introducing machine learning algorithms into a biased context is an action without consequences. Such algorithms have no comprehension or conceptual model of the problem they analyse: they are limited to assigning probabilities to an outcome based on a statistical analysis of the current situation. Judges can be as biased as they like, but they are obliged to explain the reasons behind their decisions. Other jurists – and, yes, the general public too – have the right to analyse those decisions and say whether they consider them correct. The legal system of any democratic country offers ways to appeal judicial decisions when laws are considered to have been applied in a biased or improper way.

How I'm fighting bias in algorithms - Joy Buolamwini | TED Talk

In contrast, when an algorithm recommends which new television series we will find most interesting, or tells a bank whether it is a good idea to grant a loan to a certain person, or calculates whether someone is likely to commit further crimes and should therefore be held in detention until trial, it gives no reasons for the result it proposes. It simply relies on past regularities: a certain percentage of people who watched many of the series I have watched loved that new series, so I will probably enjoy it too; over 70% of people of a similar age, civil status and average wage, living in neighbourhoods like that of the loan applicant, ended up not paying their loans back, so it is best not to grant her one; and so on.
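
A hypothetical sketch of the kind of decision rule just described: the system offers no reasons, it merely looks up how often “similar” past applicants defaulted and compares that rate with a cut-off. All names, profiles and thresholds below are invented for illustration.

```python
# Reason-free loan decision based purely on past regularities among "similar" people.
from collections import defaultdict

# Hypothetical records: (age band, civil status, wage band, neighbourhood) -> outcome
history = [
    (("30-40", "single", "low", "district_7"), "default"),
    (("30-40", "single", "low", "district_7"), "default"),
    (("30-40", "single", "low", "district_7"), "default"),
    (("30-40", "single", "low", "district_7"), "repaid"),
    (("50-60", "married", "high", "district_2"), "repaid"),
    # ... a real system would hold thousands of such records
]

outcomes_by_profile = defaultdict(list)
for profile, outcome in history:
    outcomes_by_profile[profile].append(outcome == "default")

def decide_loan(profile, threshold=0.7):
    """Deny the loan if more than `threshold` of 'similar' past applicants defaulted."""
    outcomes = outcomes_by_profile.get(profile, [])
    if not outcomes:
        return "no data"
    default_rate = sum(outcomes) / len(outcomes)
    return "deny" if default_rate > threshold else "grant"

applicant = ("30-40", "single", "low", "district_7")
print(decide_loan(applicant))  # "deny": 75% of 'similar' people defaulted; no further reasons given
```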

This change of procedure introduces a new factor: if we use a biased system to calculate probabilities, the final decision will also be biased. That biased decision is applied to the real world and creates new inequalities; the statistical regularities of this slightly more unequal world are then fed back to the algorithm as input for new decisions, which are in turn applied to a world that is again a little more unequal than before. In this way we create a problematic feedback loop in which the system grows steadily more unequal, rather like an electric guitar left next to its amplifier, generating more and more noise until it bursts our eardrums.
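
The dynamics of that loop can be made concrete with a deliberately simple simulation, using invented numbers: each round the system treats a group more harshly in proportion to the risk it observes, the harsher treatment worsens that group’s real situation, and the next round’s data reflect the damage.

```python
# Feedback-loop simulation with entirely synthetic, illustrative numbers.
observed_risk = {"group_A": 0.30, "group_B": 0.35}   # nearly equal starting point

for round_number in range(1, 6):
    for group, risk in observed_risk.items():
        # The system detains (or denies credit) in proportion to the risk it currently observes...
        decision_rate = risk
        # ...and that harsher treatment degrades the group's real situation, which shows up
        # as a higher "observed risk" in the next round's data.
        observed_risk[group] = min(1.0, risk + 0.15 * decision_rate)
    print(f"round {round_number}: "
          f"A = {observed_risk['group_A']:.2f}, B = {observed_risk['group_B']:.2f}")

# The small initial gap between the two groups widens every round, like the guitar
# feeding back through the amplifier in the analogy above.
```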

The automation debate

Fortunately, solutions exist. We need to open a public debate to decide which processes can be automated and which cannot. That debate must certainly include experts in artificial intelligence, but it also needs experts in the humanities, the various social agents and the general public. The criterion for deciding whether a decision is a candidate for automation is simple: can it be extracted directly from data? What is the maximum weight a bridge can bear, for example? In that case we can leave the matter in the hands of experienced engineers, who will know how to optimise the algorithms.

In contrast, if it is a question in which there will ultimately be an appeal to reason, such as deciding whether a social network is designed in a way that guarantees respect for diversity, then at the end of the decision-making chain there must be a team of people who, despite their possible mistakes, emotions and ideological biases, understand that many decisions in the political and social sphere can only be made with a holistic understanding of what it means to be human and of what our basic rights and freedoms are. That is something that simply cannot be extracted from purely statistical data.

References

Mathbabe. The blog of Cathy O’Neil, author of Weapons of Math Destruction

Automating Inequality by Virginia Eubanks

Algorithms and Human Rights
