An increasing proportion of our decisions is influenced or directly determined by algorithms and computational resources. Even when their results are presented as objective and independent, these technologies are commonly opaque and do not allow us to examine how they operate. The members of Taller Estampa propose projects that help us to re-appropriate these tools, based on reflexive analysis and on using their own mathematical methods.
In recent years we have been witnessing a constant trickle of news on artificial intelligence, machine learning and computer vision. We are told that machines learn, see, create… and all this builds up a discourse based on novelty, on a possible future and on a series of worries and hopes. It is sometimes difficult to figure out in this landscape which are real developments and which are fantasies or warnings. And, undoubtedly, this fog that surrounds them forms part of the power that we grant, both in the present and on credit, to these tools, and of the negative and positive concerns that they arouse in us. Many of these discourses may fall into the field of false debates or, at least, of the return of old debates. Thus, in the classical artistic field, associated with the discourse on creation and authorship, there is discussion regarding the status to be granted to images created with these tools. (Yet wasn’t the argument against photography in art that it was an image created automatically and without human participation? And wasn’t that also an argument in favour of taking it up and using it to put an end to a certain idea of art?)
Metaphors are essential in the discourse on all digital tools and the power that they have. Are expressions such as “intelligence”, “vision”, “learning”, “neural” and the entire range of similar words the most adequate for defining these types of tools? Probably not, above all if their metaphorical nature is sidestepped. We would not understand them in the same way if we called them tools of probabilistic classification or if instead of saying that an artificial intelligence “has painted” a Rembrandt, we said that it has produced a statistical reproduction of his style (something which is still surprising, and to be celebrated, of course). These names construct an entity for these tools that endows them with a supposed autonomy and independence upon which their future authority is based.
Because that is what many of these discourses are about: constructing a characterisation that legitimises an objective, non-human capacity in data analysis. In other words, an old sleight-of-hand and power game in which things escape our reach and even (and this is cynicism, not magic) their own reach, embodied in supposedly objective and independent operations. The classic example in the Anglo-Saxon context is their use to calculate the cost of a medical insurance policy. Here, until this neoliberal measure arrives (presented as what we need because we want to be a “smart” city, or whatever the next thing is that they tell us we want to be), we understand the situation perfectly: it is put forward not as one in which dialogue or discussion is possible, but as the imposition of the result of a network’s analysis. As in so many cases, perhaps that only makes evident what already happens: if this cost is calculated by a company employee, we doubt that they will be particularly open to dialogue, that the calculation will respond to criteria they are willing to explain to us, or that they will accept those criteria being questioned. But whether it is done via formulas applied by workers or is the output of a neural network fed with the insurance company’s data and the data of the person applying for the insurance, in both cases it is so because somebody has accepted that this method works. And it is here that there is a responsibility that is not opaque only because it wants to be: the internal functioning of the network may be opaque, but the decision to use it, and how it has been trained, should not be. Opacity is not the main characteristic of artificial intelligence, but rather a characteristic of power.
If metaphors are one of the fields of power of these tools, the regulatory context in which they are developed should not, under any circumstances, be overlooked. They are tools created and fully celebrated within the ideology of mass data. By this we are referring to the idea that everything should leave a trace and be automatically archived, and to the fact that, as the expression “data mining” assumes, data are proposed as a natural resource available for exploitation. This metaphor is significant as an expression of a capitalist fantasy: data would be a natural resource that, unlike those that have already been sucked dry, would never run out but rather, in a delirium typical of fantasy, would only increase as long as we happily agree to coexist with it and acquire more and more devices that monitor us. It is as though the system had built its own fairy-tale bottomless pit. Thus, one of the uses of object-detecting artificial vision tools is none other than the automatic annotation of images, which means it is not only textual information that is vulnerable to being automatically monitored but also visual information, until now paradoxically opaque beyond our eyes.
We now find ourselves at what is probably the point of the first cultural reception of these tools. From their development in research fields, and from the applications already derived from it, we are moving on to their presence in public discourse. It is in this situation and context, in which we do not fully know the breadth and characteristics of these technologies (meaning fears are more abstract and diffuse and, thus, more present and powerful), that it is especially important to understand what we are talking about, to appropriate the tools and to intervene in the discourses. Before their possibilities are restricted and solidified until they seem indisputable, it is necessary to experiment with them and reflect on them, taking advantage of the fact that we can still easily perceive them as something in creation, malleable and open.
In our projects The Bad Pupil. Critical pedagogy for artificial intelligences and Latent Spaces. Machinic Imaginations we have tried to approach these tools and their imaginary. In the statement of intentions of the former, we expressed our desire, in the face of the regulatory context and the metaphor of machine learning, to defend the bad pupil as the one who escapes the norm. And also how, faced with an artificial intelligence that seeks to replicate the human on inhuman scales, it is necessary to defend and construct a non-mimetic one that produces unexpected relations and images.
Both projects are also attempts to appropriate these tools, which means, first of all, escaping industrial barriers and their standards. In a field in which mass data are an asset within reach of big companies, working with quantitatively poor datasets and non-industrial computing capacities is not just a necessity but a demand.
Among the strategies that guide our practice, one of the main ones is reflection: we use artificial intelligence tools to direct attention towards the tools themselves and their processes. When we are told that an artificial vision network “sees”, it goes without saying that it sees whatever it has been trained to see. In other words, the vocabulary of the network is fundamental to what it can see and say, and that vocabulary is a choice made by whoever builds the network. Making this link in the chain evident was one of our objectives, whether by nosing through the vocabularies of existing tools and datasets (if you want to see what a “Wagnerian” looks like, go to ImageNet and, when you have stopped laughing, go and see how they have done the same with the category “bad person”) or through various détournements in which we have substituted the original vocabulary with others that reflect on taxonomies themselves (for example, the famous classification imagined by Borges for a Chinese encyclopaedia).
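To make this link concrete, here is a minimal sketch, our own illustrative code rather than the code behind these projects, assuming a recent PyTorch/torchvision and a classifier pre-trained on ImageNet: the network can only ever answer from the fixed vocabulary it was trained with, and a détournement amounts to renaming the entries of that vocabulary (the file path and the alternative label list below are hypothetical).

```python
# Minimal sketch: a pretrained classifier always answers from its fixed
# vocabulary; swapping that vocabulary is a simple form of détournement.
# Assumes torchvision >= 0.13; "any_photo.jpg" is a placeholder path.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                 # resize, crop, normalise
imagenet_vocabulary = weights.meta["categories"]  # the 1,000 trained labels

# Hypothetical alternative vocabulary, in the spirit of Borges's
# Chinese encyclopaedia (one name per class index we care to rename).
borges_vocabulary = [
    "those that belong to the emperor",
    "embalmed ones",
    "those that are trained",
    "suckling pigs",
]

def describe(path, vocabulary):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    idx = int(probs.argmax())  # there is no "I don't know": argmax always picks a label
    return vocabulary[idx % len(vocabulary)], float(probs[idx])

print(describe("any_photo.jpg", imagenet_vocabulary))  # what it was trained to see
print(describe("any_photo.jpg", borges_vocabulary))    # the same point, renamed
```

Whatever the image, the network returns one of its labels with some confidence; what changes in the détournement is only the name we attach to that point of its vocabulary.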
We are told that neural networks are experts in what they do. If this is true, then they are compulsively so, because they don’t know how to do anything else. This compulsiveness is an excellent tool for multiplying classifications and relations to the point of paroxysm. Thus, a network trained in artistic styles refers everything it sees to that vocabulary, and one trained to recognise Cindy Sherman or Joan Fontcuberta does no more than multiply their presence everywhere.
In our latest project, Latent Spaces. Machinic Imaginations, we have focused on the field of image generation through generative adversarial networks or GANs. In other words, on the capacity of these networks to produce images similar to the corpus of examples that we provide. This field habitually focuses on a photo-realist tendency, but a practical and poor aesthetic of these tools (with datasets quantitatively inferior to industrial ones and with less calculation potential) leads us towards other imaginaries and visual textures. Here too, we have wanted to make the most of strategies that seem to us to be inherent to the tool, such as that of excess; whether applied to scientific illustration (in a highly coded world where every image must be significative) or to investigate contemporary imaginaries (for example constructing a dataset of examples associated with the Mona Lisa, i.e., an image versioned a thousand times and that is iconic for art and tourism). Also, the technological metaphors themselves move into the foreground; the “latent space” name refers to how, once trained, a GAN network is conceived. It is understood as a multidimensional system of coordinates that we could explore and where each image that it can generate corresponds to a point of this system. For this reason, in various experiments we have worked on the option of splitting a real space into a virtual space like, for example, the corridors in The Shining.
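To illustrate this way of conceiving the tool, the following is a minimal sketch in PyTorch, again our own illustrative code rather than the models trained for Latent Spaces: a stand-in (untrained) DCGAN-style generator maps each latent point to one image, and a straight line drawn between two points becomes a “corridor” of generated frames.

```python
# Minimal sketch of the "latent space" idea: a GAN generator maps each
# point z of a coordinate space to one image, so exploring the space is
# choosing paths between points. The generator below is untrained (its
# images are noise); a trained model would turn the same walk into a
# corridor of related images.
import torch
import torch.nn as nn

LATENT_DIM = 128

class TinyGenerator(nn.Module):
    """Stand-in DCGAN-style generator: latent vector -> 64x64 RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 256, 4, 1, 0), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(-1, LATENT_DIM, 1, 1))

generator = TinyGenerator().eval()

# Two arbitrary points in the latent space...
z_start = torch.randn(1, LATENT_DIM)
z_end = torch.randn(1, LATENT_DIM)

# ...and a straight "corridor" of intermediate points between them.
steps = torch.linspace(0.0, 1.0, 16).view(-1, 1)
corridor = z_start + steps * (z_end - z_start)

with torch.no_grad():
    frames = generator(corridor)  # one image per latent point
print(frames.shape)               # torch.Size([16, 3, 64, 64])
```

The mechanics are the whole point here: every image the generator can produce is a coordinate, and a filmed corridor can be re-traced as a path through those coordinates.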
What moves us in these projects is the quest to understand and employ these tools as a basic strategy for escaping the mystification that surrounds them, and for avoiding the discourses and industrial barriers that distance them from us and offer them to us already predetermined. We must also be able to laugh at them and use them against the grain; in other words, to have them within our reach and not the other way around.