Rethinking AI: Distributed Cognition and Expanded Corporeality

A defence of AI as a commons and of the need to collectively intervene in its development.

The Lavery Electric Phrenometer, an automated phrenological measuring device, invented and patented by Henry C. Lavery. 1907 | Hulton Deutsch | Public domain

More than the refinement of algorithms, the development of Artificial Intelligence (AI) has been driven by the ability to manage and analyse larger volumes of data. Following the logic that this data is produced collectively, we need to rethink the essence of AI as a commons and its relationship to our bodies and cognitive processes.

A whole host of problems arises when we talk about Artificial Intelligence (hereafter ‘AI’). The first has to do with the idea of intelligence in a broad sense: even before we attribute it to an artificial entity, the notion itself is problematic, even when applied only to human beings. In the Western philosophical tradition, at least, intelligence is related to the notions of intellect and understanding, and as such to the logical and mental activity of conscious knowledge. Many discussions of this concept include a critique of mind-body dualism, pointing out that cognitive activity encompasses not only a purely psychological-mental dimension, nor a necessarily and exclusively conscious one, but also sensorial, emotional, affective and unconscious aspects rooted in physical bodily processes. This view opens up a critique of the concept of intelligence which implies that cognition should also be attributed to certain material processes and non-human agents, including technical artefacts and systems.

Along this line of thinking, which advocates a broader understanding of intelligence, we find proposals such as that of N. Katherine Hayles, who stresses the cognitive continuity between the conscious dimension of human beings and non-conscious parts of the self, such as certain cellular, chemical and digestive processes. When we eat, our digestive system operates on what we could call procedural knowledge: it separates nutrients from potentially toxic material, which it eliminates, and distributes the energy those nutrients provide to the parts of the body that need it. And all this happens without our being the least bit aware of it. Yet it is this somatic knowledge that makes every other kind of conscious cognitive activity possible. Nor does it occur only within the confines of the human body: it can also be attributed to non-human agents, such as plants and technical systems. In her book Unthought, Hayles argues that:

Consciousness occupies a central position in our thinking not because it is the whole of cognition but because it creates the (sometimes fictitious) narratives that make sense of our lives and support basic assumptions about worldly coherence. Cognition, by contrast, is a much broader capacity that extends far beyond consciousness into other neurological brain processes; it is also pervasive in other life forms and complex technical systems.[1]

It is important to note that these layers of “non-conscious cognition” are not, however, external to consciousness; rather, their relationship constitutes complex cognitive assemblages distributed at different scales and between different agents. These assemblages include interfaces, communication circuits, sensors, agents, processors, storage media, distribution networks, and human, biological, technical and material components. And they are constantly being reorganised and reconfigured: they are not defined, stable networks, but processes in constant transition, to which elements are added and removed while the connections between them are rearranged.

N. Katherine Hayles, “Rethinking Thinking: Material Processes and the Cognitive Nonconscious” | The Qualcomm Institute

We propose thinking of AI from this perspective of distributed cognition, so as to avoid the Promethean, anthropocentric and grandiloquent imaginaries that tend to associate it with a moment of great revelation, the technological singularity, in which machines could acquire self-awareness. The truth is that AI is a far more prosaic thing that forms part of our everyday lives, from digital assistants like Siri to personalised recommender systems. Therefore, instead of seeing AI as something autonomous from, analogous to or superior to humans, thinking of it as one more agent within assemblages of distributed cognition can take us beyond the logic of human-machine dichotomy, competition or replacement, towards desired forms of relationship and interaction that take advantage of the differential cognitive capital of both, in synergies that are complementary rather than exclusive, cooperative rather than competitive.

To approach our relationship with technical cognitive assemblages from this paradigm, we may first need to broaden our spectrum of corporeality and recognise the extent to which we are already cooperating in their development. Although our idea of intelligence has expanded beyond mind-body dualism, we continue to conceive of the body exclusively from a biological-somatic paradigm. But corporeal reality can also be considered within broader frameworks that take into account the complex relationships through which somatic bodies combine with other dimensions that are not only organic but also technical. In this sense, we can think of data as a second body, an exosomatic body, which lies outside the organism and yet has a relationship of interdependence and co-constitution with the somatic level. And it turns out that our “data bodies” are extremely important when it comes to training AI: the more data an artificial intelligence has to train with, the more accurate its generalisations will be, and the more complex and sophisticated the patterns it can identify.
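The scaling claim at the end of this paragraph can be made concrete with a toy sketch (our illustration, not part of the original argument; all names and numbers are arbitrary assumptions): even a model as simple as a least-squares line, fitted to noisy data, recovers the underlying pattern more accurately as the number of training examples grows.

```python
import random

random.seed(0)

def make_data(n):
    # synthetic examples scattered around a "true" pattern, y = 3x + 0.5
    xs = [random.uniform(-1, 1) for _ in range(n)]
    ys = [3.0 * x + 0.5 + random.gauss(0, 0.5) for x in xs]
    return xs, ys

def fit_line(xs, ys):
    # ordinary least-squares estimate of slope and intercept
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def error(slope, intercept):
    # how far the learned parameters are from the true ones (3 and 0.5)
    return abs(slope - 3.0) + abs(intercept - 0.5)

def avg_error(n, trials=10):
    # average the error over several datasets of size n
    return sum(error(*fit_line(*make_data(n))) for _ in range(trials)) / trials

err_small = avg_error(20)      # trained on 20 examples each time
err_large = avg_error(20000)   # trained on 20,000 examples each time
```

Run it and `err_large` comes out far smaller than `err_small`: the same algorithm, given more data, generalises better; the statistical point that drives modern AI in miniature.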

In fact, the latest advances in technical cognition are based more on the quantity of data and on storage and processing capacity than on algorithmic prowess, as the big tech companies would have us believe. In her article “The Steep Cost of Capture” (2021), Meredith Whittaker illustrates this reality using the case of AlexNet, an algorithm that is highly efficient at predictive pattern recognition and that won the ImageNet Large Scale Visual Recognition Challenge in 2012, marking a key moment in recent AI history. However, as the author says: “The AlexNet algorithm relied on machine learning techniques that were nearly two decades old. But it was not the algorithm that was a breakthrough: It was what the algorithm could do when matched with large-scale data and computational resources”.[2] And so it is the cognitive potential implicit in the aggregation of our bodies of data that is generating the latest advances in AI.

Abolish Silicon Valley | Wendy Liu

This means we can think of AI as a product of collective effort that is being subjected to “a commercialized capture of what was previously part of the commons” or “privatization by stealth, an extraction of knowledge value from public goods”, as Kate Crawford suggests.[3] Matteo Pasquinelli, for his part, speaks of a “regime of cognitive extractivism”[4] to describe the colonial relationship between corporate AI and the production of knowledge as a commons. Therefore, instead of crediting large corporations with developing astounding technologies, we should view their activity as a form of pillage and plunder that, by placing private interests first, prevents these technologies from reaching their full social potential. And this matters not only because of the enormous benefits of some specific applications, but because AI has become part of what Marx called the “general conditions of production”: those technologies, institutions and practices that shape the environment of capitalist production in a given place and time. This has led experts such as Nick Dyer-Witheford to speak of “infrastructural AI”:

If AI becomes the new electricity, it will be applied not only as an intensified form of workplace automation, but also as a basis for a deep and extensive infrastructural reorganization of the capitalist economy as such. This ubiquity of AI would mean that it would not take the form of particular tools deployed by individual capitalists, but, like electricity and telecommunications are today, it would be infrastructure – the means of cognition – presupposed by the production processes of any and all capitalist enterprises. As such, it would be a general condition of production.[5]

The concept of “means of cognition” signals the replacement of human perception and cognition by a technological infrastructure intermeshed with the means of production, transport and communication. In the face of this, Dyer-Witheford and his co-authors propose a “communist AI”, which would consist not of automating production and then establishing a universal basic income (as the accelerationists propose), but of expropriating AI-capital, developing new forms of collective ownership of AI and applying AI to the collectivisation of other sectors. We believe that this might come about through recognition of AI as the result of aggregating the cognitive potential of our bodies of data, and thus as a computational public utility that should be subject to democratic control.

We have previously proposed that the principles of reproductive justice be applied to the field of technological sovereignty. If we accept the postulate of data as a second body, we can claim the right to abort unwanted AI and denounce the abuses of the large corporations that infringe on our bodily autonomy; but we can also guarantee the means for AI to develop in accordance with our collective interests and needs. This involves decoupling technological development from the logic of capital rather than focusing solely on possible socially useful applications: to unleash technology’s transformative potential, it must serve the public good rather than private profit. In Abolish Silicon Valley,[6] Wendy Liu proposes some measures aimed at this goal: reclaiming entrepreneurship as a public service for non-capitalist purposes; earmarking publicly owned investment funds for non-profit enterprises (broadening access to finance); developing new types of business ownership, such as workers’ cooperatives with control over production; improving working conditions in the tech industry and empowering employees; establishing a progressive wealth tax; and, ultimately, expropriating companies with excess profits.

The purpose of these measures should be to regain autonomy and control over AI as both an expanded body (resulting from the aggregation of our bodies of data that extend beyond the limits of our skin) and as part of an assemblage of distributed cognition (i.e. as a constituent element of our mind that extends beyond the limits of our skull). Making this link explicit may help us shift from viewing AI as a purely technical achievement, linked to the development of algorithms and independent of ourselves, to seeing it as a commons to be managed collectively, one that should serve more commendable purposes than facial recognition or targeted advertising. As such, it is not a question of condemning it as a potential threat to our species (whether as an evil superintelligence ready to annihilate us or as robots that are going to take our jobs), nor of uncritically celebrating it as a neutral technology capable of solving all our problems. It is a question of intervening in its development, evaluation and implementation by reclaiming AI as an extension of our bodily reality and cognitive processes.

[1] N. Katherine Hayles, Unthought. The Power of Cognitive Nonconscious (University of Chicago Press, 2017).

[2] Meredith Whittaker, “The Steep Cost of Capture”, Interactions, XXVIII (6), December 2021, p. 52.

[3] Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021).

[4] Matteo Pasquinelli and Vladan Joler, “The Nooscope Manifested: Artificial Intelligence as Instrument of Knowledge Extractivism”, KIM research group (Karlsruhe University of Arts and Design) and Share Lab (Novi Sad), 1 May 2020 (preprint forthcoming for AI and Society).

[5] Nick Dyer-Witheford, Atle Mikkola Kjøsen and James Steinhoff, Inhuman Power: Artificial Intelligence and the Future of Capitalism (Pluto Press, 2019).

[6] Wendy Liu, Abolish Silicon Valley: How to Liberate Technology from Capitalism (Penguin Random House, 2020).

All rights of this article reserved by the author
