We tend to regard mathematics as a guarantee of objectivity. Yet as we learn more about the new statistical practices built on mathematical algorithms, we realise that, deep down, they are inextricably linked to the values and ideologies of their users. While participating in The Influencers, Cathy O’Neil and Carlos Delclós held a conversation to examine the role of algorithms in such crucial current issues as global warming and financial economics. The inevitable question that arises is: can algorithms be a useful tool for social transformation?
In early December 2017, a group of scientists from the US National Oceanic and Atmospheric Administration (NOAA) checked meteorological data from one of their climate monitoring stations in Utqiaġvik, Alaska. To their surprise, they found that the station had reported no data for the entire month of November. Apparently, an algorithm used to discard unreliable data had removed all the entries, assuming they were outliers. But closer inspection revealed that the recorded observations were correct: in just two decades, average temperatures at the site had increased by 4.3 degrees Celsius in October, 3.8 degrees in November and 2.6 degrees in December. The reality of global warming had simply outpaced the assumptions built into the algorithm.
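A quality-control check of the kind that silenced the Utqiaġvik data can be sketched as a simple threshold test against historical statistics. The sketch below is purely illustrative (hypothetical station data and thresholds, not NOAA’s actual algorithm): any reading that drifts far enough from the historical mean is discarded as unreliable, so a genuine warming trend gets thrown away wholesale.

```python
# Illustrative sigma-threshold quality-control filter
# (hypothetical data and thresholds, not NOAA's actual algorithm).

def qc_filter(readings, hist_mean, hist_std, k=3.0):
    """Keep only readings within k standard deviations of the historical mean."""
    return [r for r in readings if abs(r - hist_mean) <= k * hist_std]

# Invented historical November baseline at an Arctic station: -18 C, std 1.2 C.
hist_mean, hist_std = -18.0, 1.2

# A month of genuinely warmer observations after rapid regional warming.
november = [-13.5, -13.9, -14.2, -13.1, -13.8]

print(qc_filter(november, hist_mean, hist_std))  # [] -- every valid reading rejected
```

The filter is behaving exactly as designed; it is the design assumption, that the historical mean still describes the present climate, that has expired.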
It’s not the first time an algorithm has been overwhelmed by the pace of climate change. According to Deke Arndt, Chief of the NOAA’s Climate Monitoring Branch, the same thing has happened at sites in Canada and Scandinavia. For Arndt, it is one more piece of evidence that our planet has entered a new climate regime. Increasingly, scholars refer to this regime as “the Anthropocene”, a new epoch of geological time characterised by the significant impact of humans on Earth’s ecosystems and geology. While its starting point is the subject of an expanding academic debate, a commonly cited date is July 16, 1945: the day of the Trinity Test, the first detonation of a nuclear weapon.
From then on, the world has experienced exponential growth in a broad range of socioeconomic and earth system trends. Dramatic increases in population sizes, gross domestic products, urbanisation, telecommunications, fertiliser consumption, water use and energy use have been accompanied by rising carbon dioxide levels, surface temperatures, ocean acidification, biospheric degradation and tropical forest loss. The Geological Society of America refers to this unprecedented rate and scale of change as “The Great Acceleration”.
In this context, what is unsettling about the NOAA’s “broken” algorithm in Alaska is not just the alarming speed of global warming. More profoundly, it is the notion that the systems we use to observe and interpret the world are not adapted to the reality unfolding around us. They are not adapted to that reality because their assumptions are wrong. Yet even as these systems expire, societies are increasingly turning over some of their most important decisions to algorithms designed according to those obsolete, often “toxic”, assumptions. Indeed, some algorithms are even designed to reinforce them, a thought that twists the gut with vertigo and nausea.
It was a similar queasiness that led mathematician and writer Cathy O’Neil to leave her job in 2011. A Harvard- and MIT-trained mathematician, O’Neil worked in the financial sector during the peak years of the global financial crisis as a quantitative analyst for the multinational investment management fund DE Shaw. She also worked at RiskMetrics, a software company that assesses risk for the holdings of hedge funds and banks. Since leaving the financial sector, O’Neil has spearheaded the Lede Program in Data Journalism at Columbia University, founded an algorithmic auditing company (ORCAA) and published several books, including Weapons of Math Destruction: How big data increases inequality and threatens democracy (Penguin Random House, 2016), which was longlisted for the 2016 National Book Award for Non-fiction.
“It was a shocking moment,” she tells me during her visit to Barcelona, where she spoke at The Influencers 2018. Not long after joining DE Shaw, O’Neil attended a training session for new analysts led by a group of experts, managing directors and, in her words, “other fancy people” at the hedge fund. The key moment was when one of the participants began talking about mortgage-backed securities:
We didn’t invest in mortgage-backed securities at Shaw. Our hands were somewhat clean, which meant we were allowed to have an unfiltered conversation about them. And it was a relatively academic environment, so we weren’t lying to each other. So this person explained how they would tranche these financial products so that the least risky would get the highest possible rating (AAA). He defined risk in terms of their likeliness to default, and at the subprime level, things got pretty risky. So he explained how they would take all of those super risky tranches, put them together, retranche that, and give the top of the new pile of risky products another AAA rating.
O’Neil was astonished. She simply could not believe that this could be done while keeping the resulting top-rated financial products safe:
It seemed to have a lot of assumptions behind it. In particular, the assumption that things don’t all happen at once, that defaults happen randomly in the universe. It’s a very strong assumption, and this person admitted that. So I had to stop for a moment and ask, “How much of the economy is resting on this?”
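The independence assumption O’Neil describes can be made concrete with a toy calculation (invented figures, not an actual pricing model). If defaults strike randomly and independently, the probability that many loans fail together is vanishingly small, which is what made a retranched senior slice look AAA-safe; if a common shock raises every loan’s default risk at once, that safety evaporates:

```python
# Toy model of the independence assumption behind tranching
# (hypothetical figures, not a real pricing model).
from math import comb

def prob_at_least(k, n, p):
    """P(at least k defaults out of n loans), assuming independent defaults."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 100          # loans in the pool
p_normal = 0.05  # assumed per-loan default probability in "normal" times

# Suppose the senior tranche only takes losses after 20 of 100 loans default.
print(prob_at_least(20, n, p_normal))   # vanishingly small: looks "AAA safe"

# A housing crash is a common shock: every loan's risk jumps at once.
p_crash = 0.40
print(prob_at_least(20, n, p_crash))    # close to 1: the "safe" tranche fails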
As it turns out, the amount was enough to provoke the most severe financial crisis since the Great Depression. It was also enough to undermine people’s trust in the so-called “experts” who had promoted these products. O’Neil recalls another encounter during her time at DE Shaw, attended by three of the most influential figures in the US financial sector: former Treasury Secretaries Larry Summers and Robert Rubin, and former Federal Reserve Chairman Alan Greenspan, whose “easy-money” policies are now believed to have caused both the dot-com bubble and the subprime mortgage crisis.
Despite her misgivings, O’Neil arrived early. On an intellectual level, she still respected these figures and was excited to meet them. At one point in his presentation, Greenspan expressed his concerns about the same financial products she found so troubling:
I remember looking over at Robert Rubin, who was studiously looking away. And I thought, “Wow, that guy looks very uncomfortable.”
We later found out that he was sitting on an enormous pile of these products at Citibank, and they could never get rid of them. That’s what was bailed out.
The encounter confirmed her worst suspicions. First, it was clear that the financial products being sold were mathematically unfeasible. As hedge funds began to hire more and more quantitative analysts, more and more products became correlated, despite being based on previously uncorrelated historical information. Because this was profitable, investment behaviour began to mimic those correlations, further strengthening the links between previously uncorrelated markets. But at some point, reality would catch up to the mathematical smoke and mirrors.
The second suspicion the encounter confirmed had to do with the government bailout, which saved the financial sector from its dependence on these toxic assets. The obvious revolving door between public institutions and private financial firms suggested that, at a very basic level, these were friends bailing each other out at the expense of the rest of the population. “They’re thugs that use formulas,” O’Neil tells me. “They were using the authority of the inscrutable to make themselves rich and famous. But you look at this today, and you see that people like Larry Summers still go to Davos. And people still listen to them!”
These insights soured O’Neil’s take on algorithmic decision-making. “Algorithms are tools for people in power to decide what they should bet on,” she continues. “They’re only going to be used if those people are making profit. They’re not there to help someone.”
This point is discussed in more detail in Weapons of Math Destruction. Drawing on examples of their use in labour markets, banking and insurance, O’Neil’s book convincingly lays out how algorithms are tailored to the motives, values and ideologies of their creators through the curation of data and the operationalisation of highly specific definitions of success. Far from being tools used to guarantee objectivity, O’Neil succinctly describes algorithms as “opinions embedded in math”.
One powerful example of how algorithmic discrimination functions is found in the insurance industry. Let us consider the price of car insurance, which is usually determined by a company’s scoring system. As O’Neil describes in her book, these scores are often calculated using factors that have no direct relationship with a person’s driving record, in ways that disproportionately penalise the poor. A particularly egregious case is in the state of Florida, where a 2015 Consumer Reports study found that adults with clean driving records and poor credit scores paid an average of $1,552 more per year than drivers with excellent credit scores and a drunk driving conviction.
During our interview, I mention the example of flood insurance in flood-prone areas, which are expanding due to the combination of climate change and widespread urbanisation. “This is a good example,” she replies:
As Big Data gets better and better, and predictions improve, insurance companies will be able to establish that this house has very little chance of being flooded while that house has a very good chance of being flooded. Immediately, insurance rates for those that have a good chance of being flooded will become unaffordable. The end result will be that only those who don’t need insurance will be able to afford it, which defeats the whole purpose of insurance. And the exact same thing will happen to health care in the United States, once Trump and the Republicans remove the Affordable Care Act’s clause protecting people with pre-existing conditions.
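The mechanism O’Neil describes, sharper prediction destroying the pooling on which insurance rests, can be shown with back-of-the-envelope premium arithmetic (all numbers below are invented for illustration):

```python
# Toy sketch of risk-based pricing (illustrative numbers only):
# as flood predictions sharpen, each premium converges to that house's
# individual expected loss, and risk-sharing disappears.

def premium(p_flood, rebuild_cost, loading=1.2):
    """Actuarial premium: expected annual loss times an overhead loading."""
    return p_flood * rebuild_cost * loading

rebuild_cost = 250_000  # hypothetical rebuild cost

# Coarse prediction: everyone in the city shares a pooled 1% annual risk.
print(premium(0.01, rebuild_cost))   # roughly 3,000 a year: insurance works

# Sharp prediction: the low-lying house is singled out at 20% annual risk.
print(premium(0.20, rebuild_cost))   # roughly 60,000 a year: uninsurable
```

Once the prediction is good enough, the houses that most need coverage are priced out of it, which is exactly the failure of purpose O’Neil points to.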
Another knot in the stomach. The idea of algorithms classifying people as “uninsurable” may evoke a ham-fisted sci-fi dystopia, but it is the basic mechanism through which the insurance industry operates today. In an accelerating world marked by the rise of authoritarian politicians and the proliferation of natural disasters due to climate change, it is not hard to imagine governments resorting to similar tools to decide who is worthy of protection and who is not, who can access basic resources and who must fend for themselves.
But surely algorithms can also be used for good, I suggest. “Algorithms could theoretically help us solve problems,” O’Neil replies. “The problem is who owns them and what they are predicting. Let’s just be generous and do a thought experiment. What could go right?”
I mention the Cape Town water crisis. Between 2015 and 2017, the South African city experienced a severe drought that threatened to leave its reservoirs dry. City officials were forced to plan for “Day Zero”, when the municipal government would have to cut off the taps and rely on water distribution centres to provide citizens with 25 litres of water per person per day. Algorithms could play a critical role in determining where and how to guarantee that all citizens have access to water, as well as which types of use to promote and which to penalise. At the same time, however, algorithmic risk assessments could also be used to identify conflict-prone neighbourhoods with past histories of protests or gang activity, in order to deploy the South African Defence Force and police personnel in those areas.
“Well, it makes sense to be thinking about that,” she replies:
Obviously, if you do have enough water, the important questions are how you distribute it fairly and how you prevent a black market from emerging. I don’t hate them for considering these scenarios. I would simply hope that the number one goal is to make sure there is enough water for everyone. In the end, how we respond to climate change is not something that can be decided by an algorithm. It’s pure politics.