The artefacts we create to organise the world also influence the way we understand ourselves. We therefore ask what effects applying AI to the control and classification of our knowledge might have, and how the machine could change our narrative and our perception of the world.
In 11 years’ time
Let’s imagine that, in line with the predictions Ray Kurzweil sets out in his book The Singularity Is Near,[1] in the year 2035 an AI reaches the level of strong artificial intelligence, making its performance indistinguishable from that of humans. In other words, this AI learns, reasons, solves problems, uses language and develops the ability to perceive and relate to the world around it. It even has what we might think of as consciousness and will. Let’s call it LIndA. It’s possible that LIndA might not have the slightest interest in us, although, having been trained on texts, images, audio and video made by humans, it would frankly be strange if it didn’t. Let’s assume that, as an intelligent being, it’s curious about its creator. And if it doesn’t consider that to be us, it’s at least curious about the beings that have nourished it with all their intellectual production, whether of better or worse quality.
LIndA is likely to use the huge amount of data stored about us on the internet to study us. As it’s a strong AI with immeasurable computing power, this seems like a reasonable possibility. Following our history and example, it could start with a classification similar to that employed by Enlightenment thinkers, grounded in rationalist assumptions about nature. In other words, it might seek to understand us by working back from the specimen to the species and, from there, in our case, to what the species produces.
So, let’s suppose that it decides to keep all this information. What rules would it use to classify it? It could, for example, choose the criteria of authorship and create a digital twin of each of us by compiling all our family photos, bills, medical tests, our WhatsApp voice messages, the playlists we listen to, etc. Another possible option would be to classify the data into more aesthetic or abstract categories, such as groupings of colour tones in photographs, pitch or timbre in voice messages, concepts quoted in texts, etc. Who knows, it might even decide to create new forms of ordering and correlation that are beyond our understanding.[2]
Nurtured and shaped in our image and likeness, LIndA would assign a value to these classifications in order to adapt them to whatever purpose it considered appropriate, even if this were only for its own preservation or disinterested contemplation. To assign value according to a criterion, this criterion must have a justification. In other words, the AI would have to think up a story and, above all, give it a purpose. The questions that come next are obvious: would LIndA then be the curator of a collection[3] encompassing everything published by humans? Would an AI be creating a (digital) museum of us?
In the not too distant future
A strong AI curating a “museum of human things” would not only be a classifying and narrating entity but also a dispositif, in the sense used by Foucault,[4] as it would define the relationships between knowledge and power. We know that power is not only exercised vertically, but is distributed through a network of complex practices and relations. In its curatorial work, LIndA could establish what is considered valuable or relevant in cultural and human terms. Depending on its narrative, it would thus highlight certain stories and avoid others, deciding which forms of knowledge are favoured and which are marginalised.
In this way, it would wield a form of power that goes beyond the surveillance or control of society, as it would shape the understanding of the past and the present of the human world, thus influencing the future. It would decide what is preserved, but also how it is presented and, above all, how it is understood. Therefore, not only would LIndA be able to subvert certain histories or perpetuate positions of predominance or subjugation, it could also influence formal categories and, consequently, our own taste or judgement.[5] And all this assuming that LIndA has no intention of lying to itself or to us – in other words, without the need to create fakes.
Museums not only preserve tangible and intangible heritage, they also construct and legitimise social identities and realities.[6] By classifying us, LIndA could impose categories and hierarchies that become normative, affecting how we see ourselves as individuals, how others see us and how we see them. To put it another way, it could define otherness. It would thus have control over the narrative of what is ours and what belongs to others, which always involves the management of conflict and thus the redefinition of morality and ethics. It would influence how we organise ourselves as a society and also how we relate to society itself. Thus, for the first time, the human condition could be left to the judgement of an intelligence that, at least biologically, is alien to humanity.
Would we humans be “the losers,” whose stories and cultures are preserved and defined, essentially turned into museum exhibits, by an intelligent entity whose capacities, in some respects, exceed our own? History has taught us that what is subjugated, in one way or another, ends up being musealised. To what extent would we be influenced by this museum of ourselves over which we have no control?
To extend the museological logic, we could surmise that this museum would be empty of human visitors, or possibly closed to them, as there would then be no difference between the visiting subject and the musealised object.[7] We might argue that the problem could be solved by ignoring it and refusing to visit. But let’s assume that this is already impossible: we are overwhelmed by its drive towards ubiquity and universality. Every document, every image, every text, every sound is digitally accessible, and whatever is not becomes, with every passing second, knowledge that fades into insignificance. The “museum of human things” seems inevitable. In fact, it is already under construction. All that is missing is LIndA.
Today
This vision of the future may seem alarmist, but if we take time to consider the matter, it’s not far off the present picture of museums and their role in a world where algorithms, neural networks, computer vision, large language models and other technologies that we collectively call “artificial intelligence” are changing the way knowledge is produced. Speculating about strong AI and the creation of a future “museum of human things” is not just a rhetorical exercise on technology, dispositifs, classifications and collections.[8] It is, above all, a profound reflection on power and on how the tools we design to understand and organise the world can, in turn, come to organise and interpret us in ways that might be alienating. This is nothing we didn’t know, and nothing that hasn’t happened before in the face of a technological paradigm shift like the one we are currently experiencing.
As previously mentioned, Kurzweil uses the term “singularity” for the disruption that would be produced by strong AI, but we don’t have to go that far to analyse the impact of current AI on today’s museums. Any reflection on AI will be outdated as soon as it has been written.[9] The pace of improvement and growth in the technologies that make up the weak AI we use today is unstoppable. To think of a strong AI is perhaps utopian, and to humanise it, as we have done here, a fallacy, but it is the only way we have to understand and imagine it: ut pictura, ita visio.
The whole argument developed in this text can also be understood as a metaphor in which LIndA stands in for any of the structures of knowledge control and classification that we have seen throughout our history. In these structures, the role played by the museum, as a dispositif, is perhaps not as extreme as the one presented here, but it is certainly relevant in the creation of taste, otherness, identity, creativity and so many other things through which we define ourselves as humans.
The key question is how to manage a world in which part of the cultural production and narrative is already created by a hybrid human-AI interface. This shift away from the humanist worldview of man as the sole subject of cognition-vision confronts us with major questions.[10] Questions that have more to do with ourselves than with LIndA.
[1] Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology. Penguin Publishing Group, 2006.
[2] On new ways in which collections can be created by AI, it is interesting to consult Lev Manovich’s essay on AI and aesthetics, published in Lev Manovich, AI Aesthetics. Moscow: Strelka Press, 2019.
[3] These lines follow the essay by Boris Groys, La lógica de la colección, in Boris Groys, La lógica de la colección y otros ensayos. Barcelona: Arcadia, 2021, which is far from offering an assessment of AI but contains insightful reflections on collections and museums in today’s world.
[4] It is well known that Foucault did not offer a precise definition of his concept of “dispositif” in any of his writings. Perhaps one of the best explanations of the Foucauldian dispositif is provided by Agamben in his article Qu’est-ce qu’un dispositif?
[5] On how algorithms currently shape access to cultural content on platforms, we recommend Kyle Chayka, Filterworld: How Algorithms Flattened Culture. Doubleday, 2024.
[6] After 18 months of deliberations, the International Council of Museums (ICOM) General Assembly approved the new definition of a museum on 24 August 2022.
[7] I owe this reflection, and much of the inspiration for this article, to Jorge Carrión’s books Membrana and Todos los museos son obras de ciencia ficción, published by Galaxia Gutenberg in 2021 and 2022 respectively.
[8] The influence of weak AI on today’s museums is analysed from various museological points of view in the volume edited by Sonja Thiel and Johannes C. Bernhardt, AI in Museums: Reflections, Perspectives and Applications. Bielefeld: transcript, 2023.
[9] This argument is based on the ideas developed by Prof. Carlos Scolari in his “10 tesis sobre la IA”, published on Hipermediaciones.
[10] For more information on the implications of AI’s revision of the human worldview, we recommend the article by Prof. Nuria Rodríguez, Inteligencia Artificial y el campo del arte.