Digital Gentrification

Content platform algorithms reward lowest-common-denominator material and narrow the space for experimentation.


Group of dancers in clown costume performing in the original Wintergarten ‘Pony Ballet’. Brisbane, Queensland, 1926 | State Library of Queensland | Public domain

Songs written to match the next TikTok trend, series based on the dictates of platform metrics. Algorithms not only mediate our recommendations, they also have an increasingly large impact on the content and format of cultural creations. An article published courtesy of Caja Negra Editora.

In early 2018, musician and writer Damon Krukowski noticed that something strange was happening on Spotify with his old band, Galaxie 500. On the platform, the most popular song by the Boston band (together from 1987 to 1991) was “Strange,” which had never been released as a single, did not appear in any popular films or series, and was not included on any of Spotify’s playlists. It was also markedly different to the rest of the band’s output, even to those songs that seemed to be most popular with Galaxie 500 fans. Another thing that caught Krukowski’s attention was the fact that this was only happening on Spotify, not on the other music streaming platforms.

“Strange” was, indeed, peculiar in terms of the compositional style of the dream pop group. Written almost as a parody of the pop music of the time – the late 1980s – it sounded more like an old radio hit than the band’s own creation, which seemed to explain, at least partially, the song’s popularity on Spotify. When Krukowski wrote about this oddity in his newsletter, it caught the attention of Glenn McDonald, one of Spotify’s top data analysts (remember the name, because we’ll come back to him towards the end).

McDonald did some internal snooping into what was going on with “Strange” and discovered that the track sounded similar to songs from bands that were more mainstream and popular than Galaxie 500. “Strange” had managed to “slip into the algorithm” thanks to the characteristics of the music itself. Another reason was that, in 2017, Spotify introduced a feature called autoplay, which automatically plays a similar-sounding track at the end of each album or playlist.
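To make the mechanism concrete, here is a minimal sketch of similarity-driven autoplay, assuming each track has been reduced to a vector of audio features; the feature values, track list and function names are invented for illustration and have nothing to do with Spotify’s actual system.

```python
import numpy as np

# Hypothetical catalogue: each track reduced to a feature vector
# (e.g. tempo, energy, valence, acousticness).
catalogue = {
    "Galaxie 500 - Strange":  np.array([0.62, 0.55, 0.48, 0.27]),
    "Mainstream hit A":       np.array([0.60, 0.58, 0.50, 0.24]),
    "Galaxie 500 - Tugboat":  np.array([0.31, 0.22, 0.35, 0.80]),
    "Mainstream hit B":       np.array([0.64, 0.52, 0.46, 0.30]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def autoplay_next(just_played, catalogue):
    """Pick the most similar-sounding track to queue when the album ends."""
    seed = catalogue[just_played]
    candidates = {t: cosine(seed, v) for t, v in catalogue.items() if t != just_played}
    return max(candidates, key=candidates.get)

print(autoplay_next("Mainstream hit A", catalogue))
# In this toy data, a track that happens to sound like mainstream pop ("Strange")
# keeps getting queued after popular songs, inflating its play count.
```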

To put it simply, anything that sounds generic and popular is much more likely to be recommended on today’s internet. As a result, it’s difficult for less conventional material to stand out. Although this sounds quite logical today, it clashes head-on with the spirit of the 1990s, when the internet seemed to open the doors to a new world, one ruled less by power brokers and more by the curiosity of internet users.

That utopian vision did not last long. The Galaxie 500 anecdote features in Filterworld, the book in which journalist Kyle Chayka analyses how recommendation algorithms are flattening culture. “This is how algorithmic normalization happens. Normal is a word for the unobtrusive and average, whatever won’t provoke negative reactions,” he writes. “Whichever content fits in that zone of averageness sees accelerated promotion and growth, like ‘Strange’ did, while the rest falls by the wayside.”

Photo by Fath | Unsplash

This is not a particularly new phenomenon, but it has clearly expanded and accelerated in recent years and is affecting industries beyond music and film. Leaving our choices in the hands of algorithms (ones fed by our previous behaviours), coupled with the overwhelming amount of content being constantly produced, is creating a consumer culture in which what stands out is what already works. An “average culture,” one where experimentation and transgression are not allowed if you want to be visible.

A representative example of this state of affairs is the current crop of hits, which bring the chorus in early to avoid being skipped by the anxious listener. Spotify only monetises tracks that notch up at least one thousand plays, and it has just introduced an option that allows the user to listen to the “best 20 seconds” of each track, so that everything is designed for immediate consumption.

Some have called it the “TikTokisation” of culture. And it makes sense – although it had already been around for a few years, TikTok became globally popular in 2020, the year of the lockdowns, when people spent long hours indoors and most socialising and entertainment was via social media. TikTok’s algorithm takes just a few minutes to detect the user’s interests, before offering a list of videos that appeal to them.
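A minimal sketch of what “detecting interests in minutes” can look like, assuming the only signals are watch time and likes per video category; everything here (the categories, the weights, the like bonus) is invented for illustration, not TikTok’s actual system.

```python
from collections import defaultdict

# Hypothetical engagement log from a new user's first minutes on the app:
# (video category, seconds watched of a ~15-second clip, liked?)
session = [
    ("cooking", 14, True), ("dance", 3, False), ("cooking", 15, True),
    ("football", 5, False), ("cooking", 12, False), ("dance", 2, False),
]

def interest_scores(session, clip_length=15):
    """Score each category by completion rate, with a bonus for likes."""
    scores = defaultdict(float)
    for category, watched, liked in session:
        scores[category] += watched / clip_length + (0.5 if liked else 0.0)
    return dict(scores)

def next_videos(scores, k=3):
    """Fill most of the feed with the top category, keep a little exploration."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:1] * (k - 1) + ranked[1:2]

scores = interest_scores(session)
print(scores)               # cooking dominates after a handful of clips
print(next_videos(scores))  # ['cooking', 'cooking', 'dance']
```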

As a social network, the Chinese platform changed the rules of the game set by its predecessors. You can follow accounts, but its greatest strength lies in its categories. Users follow interests, not people. This not only differentiated it from the classic versions of Facebook, Twitter and Instagram, but it also forced these platforms to change their own algorithms and imitate TikTok’s formats (reels in the case of Instagram; the For you and Following tabs in the case of the platform now called X).

This new state of play has also changed the behaviour of influencers, the pop figures of platform capitalism. While they once stood out for their area of expertise (fashion, gastronomy, travel, fitness, film and a long etcetera), many now “chase” the algorithm to stay relevant, keeping an eye on what does and doesn’t work in order to adapt to a reality that changes at breakneck speed, not only in terms of content but also of formats (images, vertical and horizontal video, texts, photo galleries). This threatens any chance of quality, since it is very hard to stand out on all fronts. It is why, in recent times, content creators who were experts in a particular subject have suddenly become coaches, yoga masters or financial gurus: as their original style begins to weaken, the survival instinct pushes them towards whatever generates greater engagement.

The technology that came to democratise knowledge and offer new ways of accessing information is going through a critical moment, an example of what the French urbanist Paul Virilio called “integral accidents” – a technology cannot exist without the accidents or side effects that arise after mass implementation. On the surface, recommendation algorithms are useful because they allow users to save time and get what they want, but their effects are far from neutral. The obvious consequence is the flattening mentioned at the beginning – songs written to match the next TikTok trend, series based on the dictates of on-demand platform metrics, serialised content for fragmented, fast and ultra-processed consumption.

Photo by Kivanc Erdirik | Unsplash

In an investigation titled “Algorithms and taste-making: Exposing the Netflix Recommender System’s operational logics” (2021), Niko Pajkovic experimented with inventing three users based on different stereotypes, namely the die-hard sports fan, the culture snob and the hopeless romantic. In the first few days of his experiment, he began to notice changes in the homepage of each of these users (something to be expected), but he also made other discoveries – in the thumbnail images of each film or series, for example. The home screen of the sports fan showed “images that included movement and dynamic colours” (even though they were not strictly sports material), while the screen of the culture snob “was dominated by darker hues, black and white artwork images and plenty of actor headshots.”
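The logic Pajkovic describes can be sketched roughly as follows, assuming hypothetical per-profile affinity scores inferred from viewing history; the artwork variants, traits and weights are invented, and nothing here reflects Netflix’s real implementation.

```python
# Hypothetical artwork variants for one title, each tagged with the "traits"
# it foregrounds, and per-profile taste affinities inferred from viewing history.
artwork_variants = {
    "dynamic_action_shot": {"sports": 0.9, "arthouse": 0.2, "romance": 0.3},
    "bw_actor_headshot":   {"sports": 0.2, "arthouse": 0.8, "romance": 0.4},
    "couple_embracing":    {"sports": 0.3, "arthouse": 0.3, "romance": 0.9},
}

profiles = {
    "die-hard sports fan": {"sports": 1.0, "arthouse": 0.1, "romance": 0.1},
    "culture snob":        {"sports": 0.1, "arthouse": 1.0, "romance": 0.2},
    "hopeless romantic":   {"sports": 0.1, "arthouse": 0.2, "romance": 1.0},
}

def pick_thumbnail(profile):
    """Choose the variant whose traits best match the profile's affinities."""
    def score(variant_traits):
        return sum(profile[t] * w for t, w in variant_traits.items())
    return max(artwork_variants, key=lambda v: score(artwork_variants[v]))

for name, affinities in profiles.items():
    print(name, "->", pick_thumbnail(affinities))
# sports fan -> dynamic_action_shot, culture snob -> bw_actor_headshot,
# hopeless romantic -> couple_embracing
```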

This can even happen with the same product. In October 2018, some Netflix users accused the platform of using different thumbnails of the film Love Actually depending on the profile. For example, a poster featuring Chiwetel Ejiofor, who doesn’t have a leading role in the film, raised suspicions that black subscribers were being shown posters with black actors. Netflix was quick to dismiss the issue in a statement noting that its algorithm was not guided by “race, gender or ethnicity,” but solely by the user’s viewing history.

So-called “filter bubbles” have been around for at least 15 years, since Google started customising its search results in 2009, but even after several investigations, the power of their influence has never been fully defined. The effects of this echo chamber are clearest in news consumption: when a user shows interest in information that leans in a certain political direction, and behaves accordingly (likes, shares, comments), the algorithm tends to show them similar content.

This system of distribution and validation of web content is susceptible to generating polarisation, as we have been seeing in the sphere of politics for some time now. But just as on some issues it produces extreme opinions, in terms of cultural consumption it seems to go the other way. “While political bubbles silo users into opposing factions by disagreement, cultural recommendations bring them together toward the goal of building larger and larger audiences for the lowest-common-denominator material,” argues Chayka in his book.

Another example he uses to illustrate this problem is what he calls “Instagram coffee shops” – coffee shops that follow a pattern taken from the predominant aesthetic of this social network (pastel-coloured walls and armchairs, generic pictures on the wall and baristas ready to prepare their umpteenth flat white of the day), and which can be found in any city in the world, in a kind of global gentrification influenced by the culture of algorithms.

These algorithms do not have “taste” in the sense that a human might, which is why using the word “recommendation” is problematic. They are mathematical formulas programmed to detect patterns in collective human consumption and then link them to individual consumption. This is what Glenn McDonald explained in a recent interview in the newspaper El Diario. The “data alchemist,” as he is popularly known, was responsible for some of Spotify’s most popular algorithms, as well as creating Every Noise at Once, a massive music map with genres from across the world.
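A minimal sketch of that idea, “patterns in collective consumption linked to individual consumption”, using item-based collaborative filtering over invented listening histories; this is a generic textbook technique, not McDonald’s or Spotify’s actual formulas.

```python
# Toy listening histories; band names and data are invented for illustration.
histories = {
    "user_a": {"Band X", "Band Y", "Galaxie 500"},
    "user_b": {"Band X", "Band Y"},
    "user_c": {"Band Y", "Band Z", "Galaxie 500"},
    "user_d": {"Band X", "Band Z"},
}

def co_listens(item_a, item_b):
    """How many people listened to both items (the collective pattern)."""
    return sum(1 for h in histories.values() if item_a in h and item_b in h)

def recommend(user):
    """Suggest the item most co-listened with what this user already plays."""
    mine = histories[user]
    everything = set().union(*histories.values())
    candidates = everything - mine
    return max(candidates, key=lambda c: sum(co_listens(c, m) for m in mine))

print(recommend("user_b"))
# Galaxie 500: it co-occurs most often with the bands user_b already listens to.
```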

McDonald left the company last December, following staff cuts. “In public libraries there’s public ownership. Not at Spotify – there are just economic interests. But my work has never been driven primarily by business imperatives,” he said at one point in the interview, before adding that “we should all be afraid. Anyone who works with technology related to human activity should keep that in mind in everything they do. And listen to their conscience. I always felt relieved to work for a music streaming platform rather than a health insurance service.”





Yuk Hui: “We are living in a gigantic technological system”

The Hong Kong philosopher advises deepening our technological imagination outside the confines of the capitalist hegemony and colonialism.


We talk to the philosopher Yuk Hui about the shift from the industrial to the cybernetic age, the uselessness of our current-day dystopian discourses and the need to encourage a diversity of technological thinkings able to combat the homogenisation of capitalism.

“Whoever invokes ‘humanity’ wants to cheat.” Yuk Hui begins with this warning, a paraphrase of Pierre-Joseph Proudhon by Carl Schmitt and an indication of Hui’s steadfast stance against the totalising discourses of modernity. In contrast, for Hui, our intimate and inexorable relationship with technology can only be fruitfully considered if we accept that there is a multiplicity of technological thinking located within individual world views that guarantee their diversity. Just as there is not one, but diverse humanities, so there is not one technology, but multiple cosmotechnics. This is the philosophical project that Hui has developed over the course of his relatively short but prolific philosophical career, which includes works such as On the Existence of Digital Objects (University of Minnesota Press, 2016), The Question Concerning Technology in China (Urbanomic, 2016), Recursivity and Contingency (Rowman & Littlefield, 2019), Art and Cosmotechnics (University of Minnesota Press, 2021) and his compilation of articles published in Spanish, Fragmentar el Futuro (Caja Negra, 2020).

Hui is a somewhat anomalous case in the history of the philosophy of technology. Diverging from the well-known division between two types of philosophy of technology proposed by Carl Mitcham in the late 1980s – one centred on engineering, the other more humanistic – Hui first studied systems engineering before eventually going on to earn a PhD in Philosophy under the supervision of his mentor, Bernard Stiegler. It is this straddling of both traditions that allows him to offer an informed look at the hyper-technological condition of our contemporary world.

Today, what defines our technological condition is the development and expansion of cybernetic technologies. While the technological leap that occurred during the Industrial Revolution was characterised by the introduction of automation, in our time we no longer have only automated technologies, but technologies that are able to regulate themselves and maintain a more open relationship with their environment. In other words, to become more autonomous. And not only that, but the world in which we live is now shaped more than ever by this web of technology. Hui echoes Jacques Ellul when he says that “We are living in a gigantic technological system.”

Yuk Hui & Jorge Carrión | A revolution of ideas

However, this condition is dominated by a capitalist logic that homogenises our relationship with technology. This is reflected in the adoption of the utopian or dystopian discourses that the media are so fond of espousing, which sketch out a series of idyllic or catastrophic futures that seem like something out of science fiction. “It seems like we live in the time of science fiction. We talk a lot about AI domination, about mass unemployment. More and more people are trying to understand our future through science fiction. I find this really disturbing. I’m a big fan of science fiction, but I find this problematic because it means that we fail to analyse our concrete situation.” Just as Gilbert Simondon – an author whose work is key to understanding Hui’s philosophy – opposed our imaginings around robots in the introduction to his book On the Mode of Existence of Technical Objects (University of Minnesota Press, 2016), Hui rejects the catastrophic imaginings that have been inspired by technological automation and artificial intelligence. On the contrary, we must develop a new critique of political economy that specifically highlights the technological problems of the present in order to understand the new forms of alienation ushered in by contemporary technology.

For Hui, the goal should be to deepen our technological imagination outside of the capitalist hegemony and colonialism, to generate localised understandings rooted in the particular historical relationships between each culture and its own technological advancements. “We have to accept that there are a multiplicity of technological thinkings. The process of modernisation is a form of colonisation. Modernisation implies a homogenisation of knowledge and world views.” We have the impression that technology is universal, but we fail to understand technology in the terms of multinaturalism used by anthropologists such as Eduardo Viveiros de Castro or Philippe Descola. This implies localising technological thought, hence the importance of his concepts of cosmotechnics and technodiversity. “If we think about the multiplicity of technologies in different philosophical traditions, we can see that technology has different relations to aesthetic, philosophical and religious thinking.” However, this does not imply a return to pre-modern or pre-capitalist forms of technology, as some outmoded romantics have proposed (which would be impossible from the outset), but rather a reframing, fracturing and diversifying of our technological condition.





The Other AI Revolutions: Synopsis and Conclusions

We review a series of conversations about the AI revolution and its impacts on thought, society and culture.


Rita Jolivet, 1915-1920 | Library of Congress | Public domain

The irruption of AI tools is triggering huge structural changes. It forces us to think about the relationship between humans and technology, the need for regulation, the changes to work and the implications for democracy and the information society. In this article, we look at some of the ideas brought up in the season of debates “The other AI revolutions”.

I

Whichever way you look at it, the explosion of generative AI is a revolution. As well as their scientific or technological facets, revolutions are always cultural, in the two (complementary) senses of the word “culture”: that which relates to “cultivation” (structural), and that related to the body of knowledge, rules, expressions, symbols and ideas (which is always reflected in narratives, the arts, academic disciplines and all levels of representation).

In “The other AI revolutions” season, we set out to analyse the cultural impact of the current changes that are driving deep learning neural networks and other AI systems. To do so, we brought together international specialists from various fields of knowledge, from philosophy and sociology to the arts and law.

In the calendar sense, a season is a period of time which comes to an end only to start over again. There is no doubt that AI is going to need this constant restarting. We cannot, and must not, stop thinking about it – artificial intelligence can only be processed through collective natural intelligence, through exchange and conversation.

II

“Human intelligence is artificial,” said Yuk Hui in the first session of the season, on how computer systems are revolutionising ideas. The Hong Kong-born philosopher, now working in European academia, who has published books such as Fragmentar el futuro (Caja Negra, 2020) and Recursivity and Contingency (Rowman & Littlefield, 2019), asserted that in order to understand technology we must think seriously about biology, i.e. the area that connects the mechanism with the organism. And that we must defend diversity in three complementary spheres – technology, biology and the noosphere (at the planetary scale).

To this end, we must not forget that the concept of the “human being” is a modern, and therefore recent, one – humanisation was a process of technological inventions, so it could be said that technology invented humanness (and that humanness is a constant reinvention).

It is clear that, in geopolitical terms, we are facing a global race guided by the decisions taken in Silicon Valley, and that China has joined the fray. While Yuk Hui has highlighted the need for a post-European philosophy, there is also a pressing need to imagine a post-Cupertino ideology. After all, as the thinker says, “the challenge of artificial intelligence is not about building a super intelligence, but rather to facilitate a noodiversity.”

Yuk Hui & Jorge Carrión | A revolution of ideas

III

In her presentation of the talk between Mercedes Bunz and Joan Fontcuberta on the cultural revolution inherent in AI, Alex Saum Pascual (professor at the University of California, Berkeley, and poet) pointed out that we must never fail to historicise. On this point, she mentioned Bunz’s book The Silent Revolution (Palgrave Pivot, 2013), first published in German in 2012, in which Bunz predicted the enormous impact of digital media in the context of a society not of workers, but of experts. A year earlier, Joan Fontcuberta published the article “Por un manifiesto post-fotográfico” (For a post-photographic manifesto) in the cultural supplement of La Vanguardia, and in 2008 he directed the international meeting “Do Androids Dream of Cameras”, as he recalls in the first few pages of La furia de las imágenes (Galaxia Gutenberg, 2016). Both, therefore, have been thinking about the automation of the world and its cultures for over a decade.

Mercedes Bunz wondered where intelligence is at the moment, apart from in human brains and books. Asking whether machines will be creative is the wrong question, because human creativity has, in one way or another, always been intertwined with instruments, tools or artificial elements. The problem, at the present time, is that we no longer understand how this cooperation or hybridisation works. Machine learning no longer means just writing code, but introducing huge amounts of data so that the system learns by itself, with the resulting biases and uncertainties. This represents a change of perspective – the machine does not see what the camera sees. This new reality has not only artistic consequences, but also political ones. Which is why we need an open, transparent, public AI. And art can help in this respect – often what is tested in an art lab can be transferred to the user’s private reality or to a social cause. The mainstream cultural revolution is headed in a specific direction, but we must look for ways to redirect it.

For his part, Joan Fontcuberta, who defined himself as an “acrobat of the image guided by curiosity,” started from the idea that nowadays the image does not represent the world – it is the world. It shapes the new consciences, and our decisions. Fontcuberta talked about “Nemotipos”, his latest exhibition (in Murcia), which brings together two decades of work characterised by the production of images without a camera. In recent times he has been generating portraits of people who don’t exist, with more interest in error and imprecision than in mimetic precision, despite working with a supercomputer (from the Barcelona Supercomputing Centre). He has already started using deepfake techniques to generate hyper-realistic videos starring, for example, Donald Trump (with his face contorted into an orgasm), but without overdoing the quality, so that the spectator can clearly see the parody. While botanical illustration was a kind of proto-photography, photography in the last century and a half has been a proto-AI.

Mercedes Bunz and Joan Fontcuberta | A cultural revolution

IV

In his introduction to the conversation with the American professor and author Frank Pasquale, who has just published his New Laws of Robotics in Spanish under the title Las nuevas leyes de la robótica (Galaxia Gutenberg), Xavier Nueno, also an author and a PhD graduate of Harvard University, spoke of the two traditional narratives around robots and AI: the divine gift and the diabolical machine. Pasquale, he said, is working on a third, alternative narrative, that of regulation.

The American professor took as his starting point a paradox: contrary to the expectations generated over recent decades, the impact of the explosion of AI (with the launch of ChatGPT) has been on creative work, rather than cleaning, security or other tedious tasks. In this new panorama, a case-by-case analysis is needed to answer the question of whether the automation of a specific task supplants the work of a human professional. In the field of translation, for example, the automatic subtitling on YouTube does not displace a professional. But in many other cases someone is being pushed out of a job.

If we broaden the focus to a general, epochal debate, we see that there are two fundamental tensions – between technocracy and populism, and between democracy and disruption. Some classical professions (teaching, medicine, law) are being subjected to judgements that come from the spheres of economics and computer science (such as continuous evaluation by the student, patient, client or user; the imposition of a maximum appointment time; or the constant calculation of return on investment). The logic of disruption proposes, and often imposes, an alternative system to the agreed one without first having studied its consequences. The new channels work against the expert, against training, against autonomy and against the credibility of the professions. Which is why, says Pasquale, the future can only be unionised.

The automation of the public sphere, in which social media platforms and bots have taken over much of the traditional media space, with the consequent increase in misinformation, also calls for regulation. The automation of death, through drones and other robots, similarly requires new laws and conventions. It is of crucial importance to legislate the internet, and there must be transparency in the attribution of data and in the origin of technologies.

Frank Pasquale and Xavier Nueno | A cultural revolution

V

The series “The other AI revolutions” concluded with a conversation between the journalist and essayist Marta Peirano, author of El enemigo conoce el sistema (Debate), and the Italian philosopher Maurizio Ferraris, who has reflected on the profound changes being brought about by algorithms in his book Documanidad (Alianza).

Ferraris’ “new realism” (2010-2012) is a concept that predates Trump’s “alternative facts” (2016) (perhaps because Berlusconi was a precedent for this later populism). We are now in a new stage of post-truth with ChatGPT et al.

AI allows both the informative, personalised and dopaminic fragmentation of the narrative about what is real, and coordinated, ethical and transversal projects such as those of Forensic Architecture, which uses big data and machine learning to denounce the massacres being carried out by Israel in Gaza, said Peirano. Indeed, the Italian thinker asserted that the technologies of our time are both toxic and emancipatory, poison and medicine.

They feed on our documanity (“infinite production of documents” by everyone, as we have all become media, leaving digital traces of all our activity). In a media landscape with apocalyptic and dystopian overtones, Ferraris’ unusually optimistic perspective is worthy of note. In his latest book, he proposes the capitalisation of our data through humanistic platforms in order to implement a fair re-distribution of profit. Seeing that the Web is one of humanity’s great achievements, a massive bank of data and knowledge which does not value us “for our merits, but for our needs,” Ferraris has launched the experiment Webfare, which would be the evolution of welfare. With the collaboration of a bank in North-East Italy, the research has made it possible to calculate the capital represented by the social media presence of 1.5 million people. Is this the beginning of a utopia?

Maurizio Ferraris and Marta Peirano | A social revolution

VI

While some of the most important intellectual figures of our time were talking in the CCCB’s hall and theatre, the large wall of the exhibition area was displaying the mural comic Synchronies, by the artist Roberto Massó. The work offers a piece of speculative fiction to imagine communication between humans and algorithms through the classical computing of the present and the quantum computing of the future. While in the first few panels we see zeros and ones, a modem or ADSL, the story progresses through the curvature of space and time and qubits, simulating the journey of information from the present to the future. And vice versa.

It is about creating a narrative of dialogue, of cooperation between people and machines, after too many dystopian novels, films and series. Its moral, if it has one, is that before such a conversation can really take place, we need to perfect the dialogue between ourselves. Between engineers, entrepreneurs, legislators, philosophers and humanists. For several weeks, on the stage of the CCCB, people from China, Berlin, London, the US, Italy and various parts of Spain came together to talk, together with hundreds of people in the audience from many different backgrounds, about a range of subjects including technodiversity, hacking, regulation and utopia. The conversation is an urgent and essential one, and we must continue to broaden it. Because the last few revolutions were televised, and the new ones are in danger of being automated. But will we allow it?





Becoming a Web Series

We have gone from showing our offline lives on the Internet, to living in accordance with our online personas.


Filming Actors at Table, Motion Picture Kitchen Set. Texas, ca. 1951 | Robert Yarnall Richie, DeGolyer Library, Southern Methodist University | No known copyright restrictions

The boundaries between on and offline life have long since become blurred. However, it now seems that the true kernel of our lives lies inside the web. We no longer just show who we are, but we live our lives as a fiction designed to be spread on social media.

As I look through the photo gallery on my phone, it suddenly comes to me. It’s been days since I posted anything. Instinctively, I go through each image in the gallery to weigh up its potential. Do I exist, if nobody sees me? Or even worse, do I have any control over who I am in this world, unless I share every detail with all and sundry? Like a Greek hero up against the fate imposed by the gods, I embark on my quest. It doesn’t matter if it’s a nice picture, or if I look good in it. In fact, I don’t even have to be in it, and it’s even better if I’m not. I only consider what it will say about me and whether it will be able to explain the future I want, to others and to myself. Scrolling through my photos, I imagine what you’ll think of the books I read, or of the startling depth of the phrase that I’ve wisely highlighted; of the bars I know, and my half-eaten meals there. In a recent article, Rebecca Jennings wrote that, whether we are office workers, astronauts or writers, we can no longer escape the tyranny of the personal brand. I would go even further, and say that while in the past few years we subjugated our online presence to faithfully show who we were offline, we now do the opposite – we try to be faithful in life to the persona we have created online. Does it make sense to keep talking about truth and lies on the internet?

A few months back, Instagram started giving you the option, before posting a photo, of seeing how it would look on your profile grid. It’s an extremely useful tool for corporate accounts, but when I do it myself, I become doubtful. I hesitate. The sentence I’ve chosen doesn’t quite fit in with the other posts. If I choose a photo of myself, it’s too close to the last one where you can see my face, so I have to look for a third option. If you add this to the fact that the algorithm prioritises companies and professionals, so that you have to go and search for the profiles of the humans you’re interested in to see the posts, it becomes even more evident that the image you put on the internet no longer makes sense in itself, but takes on meaning when it becomes another piece of your mosaic. To give an example, “photo dumping,” or the trend of posting collections of images in a single post, represents yet a further level of abstraction of the self through images. The countryside, the dirty dishes from the barbecue, the blurry photo of your friends at a party… none of them would make sense without the others, and they can only be understood in terms of what they tell us about you. The image is no longer an object to be consumed – on the internet, you are.

In 2006, Lonelygirl15 posted her first YouTube video. It’s a minute-and-a-half-long vlog that appears to have been recorded with a computer webcam. Bree, a teenager who barely looks at the camera, talks to her viewers in a shy voice. Although the term “viewers” is anachronistic. In 2006 there were no social networks as we know them today – no influencers, or even followers, just communities. Lonelygirl15, Bree, hugs her knees and lists her favourite users of the platform, explaining that “because she really likes a lot of the guys on here, she’s decided to post a video herself.” Her subsequent posts begin to feature all the users she mentions by their YouTube names. Soon, a four- or five-way network of crossed videos and content is generated so that the viewer can follow the teenager’s adventures, as she talks about fights with her parents or outings with friends. Eventually, however, it came out that the teenager in the videos was really an adult actress performing a script created by a whole production team, and that those Lonelygirl15 vlogs were nothing more or less than the first web series in the history of YouTube.

Lonelygirl15, the first web series in the history of YouTube.

I like to think that, in one way or another, on the internet we’re all making our own web series. This fictitious self, taken to the extreme, is what explains the birth of the influencer. These internet users who speak to us through the screen must manage to be both relatable and aspirational, friendly and commercial. To provide reassurance that they’re just like us, but also everything we wish to be. Their power comes from transforming lies into promises. Influencers exist because they make us want to live like them, but also because, in order to live like they do but outside the internet, we have to imitate their online life – the same photos, angles, trips, purchases and compositions. We buy each other’s lies in the hope that they will become true off-screen.

But what I find most curious about the Lonelygirl15 case is not the fiction, but its clairvoyance. This collaborative and experiential content used the same key elements that, 10 years later, would ensure the massive success of groups like the Vlog Squad, which dominated the YouTube scene between 2015 and 2021. They did so, moreover, without being part of any specific community or interest group, but by focusing instead on “lifestyle” content – outings, habits, errands, coffees, lunches, dinners, trips, get-togethers and lots and lots of shopping, all accompanied by a friendly voice. The formula for the success of these groups lies precisely in the fact that they create multi-perspective content, so that, while you establish a relationship with one creator, you can see the same thing from one, two or three different perspectives through the other members of the group. A parasocial relationship in three dimensions.

In the case of the Vlog Squad, led by the youtuber David Dobrik, the format was stretched like chewing gum with the force of the click and the ease of the hyper-consumerism that was already beginning to nibble away at YouTube. Soon, all the channels were infected by the same evil – “Surprising friends with three new Mercedes,” “I buy a giant slide to see what happens,” “Trying the entire Burger King menu.” Neither their life nor their relationship with each other was enough to keep up with the pace, so in addition to videoing their day-to-day life, they had to start living through the algorithm. They made the lie come true and stopped being the aspirational example that followers wanted to emulate, to become slaves to the YouTube homepage. In 2022 Dobrik was sued for more than $10 million for crushing one of his fellow Vlog Squaders with an excavator. The accident took place while filming a video where Dobrik surprised his friends with the machine, which he used to throw them in spectacular style into a Utah lake.

After the pandemic, and with the growth of the alt-right (especially on the internet), the shadow land between lies and truth has become open to conquest and fertile for propaganda. Personally, the phenomenon that most intrigues me in this quagmire is the appearance of “tradwives.” In recent months, Nara Smith’s account has drawn a large amount of attention. She’s a young housewife who makes cooking videos, and on her TikTok we see her sporting a silk dressing gown and a French bob as she sinks her perfectly manicured hands into a mass of eggs and flour. I spy her making homemade Oreo cookies and can’t help wondering about the dough that must get stuck under her long, shiny nails, which are different with every recipe. But I know that the point of the video isn’t to show me how to make cookies, but rather the gorgeous manicure she can afford precisely because she’s the kind of woman who stays at home baking cookies. The image, these recipes, are the excuse to explain who she is – a woman, a mother, traditional and conservative, living in a luxurious idyll of marble and air fryers, courtesy of her Mormon husband.

Aestheticisation, in this case of conservatism, means that the truth is watered down by the desire to live like we do on social media. We all know that, historically, the traditional role of spouse and housewife has more to do with domestic slavery than with expensive manicures, but the possibility of being able to repeat the lie on TikTok is enough to make us believe. I don’t think it’s about demanding that people be honest on the internet, especially at this late date. But it might be useful to go back to the beginnings, to 2006, and see the internet as a tool of fiction. And the content, as our own web series.





On Knowing Matter

A speculative cartography of the material ecology of Artificial Intelligence.


Entrance tunnel at the Mangahao Power Station. Shannon, New Zealand, 1921 | Leslie Adkin, Te Papa | No known copyright restrictions

While we believe in the idea of an AI as an abundant resource with no physical existence, we are also confronted with the proof and consequences of its materiality. We wonder, however, to what extent we can understand the complex web of water, minerals, energy sources, data and labour that make up these systems. As well as their environmental impacts and our responsibility for them.

> Are you thirsty, ChatGPT?

My finger pauses over the enter key. Researching the material infrastructure and environmental impact of computation has attuned me to the absurdity of putting these enormous, energy-intensive, and almost entirely invisible technical systems to such apparently frivolous use, and the paper I’ve just read – “Making AI less ‘Thirsty’” by Li Pengfei and colleagues, which suggests that a 20-50 question interaction with ChatGPT will consume 500 ml of potable water, and which had prompted the question I just typed – heightens this ambivalence in my mind. My habits are somewhat behind my conscious doubts though, as the long history of previous chats in the ChatGPT sidebar attests. It’s almost too easy to query it about whatever comes to mind. I press the key.
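(A quick aside on that figure before the reply arrives: converted into per-question terms, the paper’s estimate implies something in this range; the 20–50 question span is the one cited above, the rest is simple arithmetic.)

```python
# Rough per-question water cost implied by the figure cited above:
# 500 ml for a conversation of 20-50 questions.
water_ml = 500
for questions in (20, 50):
    print(f"{questions} questions -> ~{water_ml / questions:.0f} ml per question")
# 20 questions -> ~25 ml per question
# 50 questions -> ~10 ml per question
```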

“As an AI, I don’t have feelings, physical sensations, or needs, so I don’t experience thirst or any other human conditions. But I’m here to help with any questions or tasks you have in mind!”

This idea that ChatGPT is an abundant, immaterial resource, without any physical existence, but always available for whatever query might occur to me, already ingrained in its affordances and my nascent habits of use, is confirmed by its response. I press further:

> What are you made of, ChatGPT?

“I’m made of code and data! Essentially, I’m a complex algorithm that processes and generates text based on patterns and information learned from a huge amount of text available on the internet. My ‘brain’ is made up of various machine learning techniques and neural networks that help me understand language and provide responses.”

> Where are you, ChatGPT?

“I exist in the digital realm, residing on servers and computers where I can access information and interact with users like you. But in a more abstract sense, you could say I’m wherever you access me from – a website, an app, or any platform that hosts me!”

At every step, I find that ChatGPT insists on its own immateriality, while at the same time, I’ve just been reading empirical work which lays out in stark terms the many and varied material supports (and material consequences) of ChatGPT as a technology. I find this significant for two reasons.

The first is that I suspect this is not entirely accidental. In her recent article “The Uncontroversial ‘Thing-ness’ of AI”, Lucy Suchman, an anthropologist of technology and one of the founders of the discipline of human-computer interaction, identifies how the term ‘AI’ functions as a ‘floating signifier’, reinforcing the suggestive power we imbue it with: the supposed immateriality of ‘AI’ is a contributing factor in the authority we imagine in its knowledge.

The second is that in comparing ChatGPT’s own utterances to the investigation of Li and colleagues, and many others probing the material footprint of AI, we’ve encountered a controversy – on one hand, an insistence that AIs like ChatGPT have no material existence, and on the other, the assertion that their material existence and its consequences are both considerable, and of considerable importance. Controversies, as the work of STS scholar Tommaso Venturini shows, offer us opportunities to learn – they’re “the best available occasions to observe the social world and its making.” The presence of a controversy, then, offers us an opportunity to better understand the hidden socio-material aspects of this supposed thing, ChatGPT, which presents (and describes) itself as being disembodied, ethereal, even while we’re confronted by the evidence and consequences of its very material existence.

Taken together, these two opportunities point to a two-fold purpose of this sort of research: understanding how the materiality of an AI system is entangled with its epistemic and epistemological qualities (both what and how the machine comes to ‘know’), and understanding the ways in which we can come to know it. The specific type of intelligence that AI is, and our beliefs about it, are grounded in its materiality, which allows us, by investigating its material infrastructure, to probe both the limits of our knowledge about it and the limits of its knowledge of the world.

We could call this manner of studying AI an ecological approach, and it is ecological in two distinct senses. First, we’re interested in the ecology of AI in terms of its material impact on the environment, but we’re also interested in AI as an ecology in itself: as Matthew Fuller describes it, a “massive and dynamic interrelation of processes and objects, beings and things, patterns and matter” which includes its material, epistemic and phenomenological aspects, rather than as a discrete thing, material or otherwise.

So, what things are included in this material ecology of ChatGPT? This is what I and the participants in the “Minds, Lies and Materials” workshop attempted to uncover, one Saturday afternoon in January as we mapped out the multiple material bases of ChatGPT and the controversies that revolve around them. Investigating and contrasting multiple perspectives (our own investigation using technical tools and Open-Source Investigation methods, along with published independent research, official announcements by OpenAI and the utterances of ChatGPT itself), we focused on the sources of energy, water, data, labour and minerals which make up the supposedly disembodied ‘intelligence’ of ChatGPT. This allowed us to map their sources, relationships and movements in a manner inspired by Cartografias da Internet and Kate Crawford’s Atlas of AI, and aided throughout the whole process by Anne Pasek’s methods for Getting into Fights with Data Centers.

Workshop: Minds, Lies and Materials. CCCB, 2024 | CC BY-NC-SA Pau Fabregat

This was a provisional, speculative exercise – we did not expect to settle these controversies one way or another, but rather to attune ourselves through an open-ended process of investigation to the many material facets of AI, employing something akin to the Rubber-Boots Methods of curiosity, collaboration and critical description (even though on this occasion our feet didn’t leave the carpeted floors of the CCCB).

We ended up with a map, or rather three overlaid maps. The first resulted from our own research, and showed a dense network of water from a reservoir serving a populous Dublin suburb (the excessive use of which by data centres has already raised controversy), the labour of underpaid data annotators in Kenya, and the Irish electricity grid – which boasts two local hydroelectric stations, but which Microsoft plans to augment with a dedicated gas-fired plant. It also included data gleaned from the web, nominally globally, but in fact with a notable bias towards the English-speaking parts of the global north. The rest of the globe contributed mainly in a more tangible form, with nickel from Indonesia, cobalt from the Democratic Republic of the Congo and lithium from Chile being brought together in GPU factories in Taiwan and ending up as e-waste in China, India and Nigeria. Overlaid on this, a second map: OpenAI’s presentation of itself as a headquarters filled with ingenious scientists and innovators in San Francisco, along with a data centre filled with high-tech servers in Des Moines, Iowa. The third ‘map’ hovered uneasily outside the frame: ChatGPT’s insistent description of itself as disembodied and ethereal.

This method was not without its problems. In attempting to open the black box of ChatGPT, we encountered still more black boxes inside – parts of the system which were opaque and unknowable from our perspective and with the tools and methods at hand. However, having chosen to contrast our own perspective with that of ChatGPT itself, and that revealed publicly by OpenAI, these frontiers of knowability became results in their own right. Identifying the differences between the types and depth of knowledge available to us from each of the different sources we identified made us newly sensitive to the limits of each.

This also involved the realisation that we were not just studying this controversy but were ourselves a party to it, since our investigation was conducted from a situated, partially-informed perspective. It led us to confront situations in which our own first-hand experience contrasted with, or contradicted, the claims of another source which on the surface could claim more authority.

A case in point: using traceroute and some other slightly-hidden technical information on the ChatGPT site that suggested we were communicating with a server in Europe, along with a few inferences gleaned from publicly-available information (that OpenAI uses infrastructure owned by Microsoft, whose currently-operational European data centres are in Dublin and Amsterdam), we settled on Microsoft Grange Castle, Dublin, as a likely location and focus of further study, only later to discover a footnote in some OpenAI technical documentation stating that their infrastructure is located in the US (likely Iowa). More than just a mistake, however, this proved to generate some useful discussion. Should we place more faith in situated and partial first-hand methods, or in opaque claims made by the business itself, when trying to understand an object of study (ChatGPT in its material form) which we can’t directly access? For our immediate purposes this may not matter so much: digital infrastructures such as ChatGPT are built to scale, i.e. they’re designed to expand uniformly, being composed of smaller, modular and identical units, analogous to Lego bricks or shipping containers. One consequence of this uniformity is that it makes the knowledge gained in one place more easily transportable – what we learn from looking at Dublin can still teach us about a data centre in Des Moines, because they’re designed for uniformity. In this way, our knowledge of scalable systems can perhaps be turned into a technique for ‘resisting scale thinking’ – using the similarities between infrastructures in different places as a tool for comparing, contrasting and building solidarities between them.
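For what it’s worth, the kind of first-hand probing described above needs nothing more exotic than standard networking tools; a minimal sketch, in which the hostname is an assumption and the results will vary with where and when it is run:

```python
import socket
import subprocess

# Hypothetical endpoint to probe; the real hostnames ChatGPT talks to may differ
# and are often fronted by CDNs, which is part of why this evidence stays partial.
host = "chat.openai.com"

ip = socket.gethostbyname(host)  # resolve to an IP address
print(f"{host} resolves to {ip}")

# Reverse DNS sometimes hints at a provider or region (often it reveals nothing).
try:
    print("reverse DNS:", socket.gethostbyaddr(ip)[0])
except socket.herror:
    print("no reverse DNS record")

# Trace the network path; the last identifiable hops can suggest a rough location.
# (Requires the traceroute binary; on Windows the equivalent command is tracert.)
subprocess.run(["traceroute", "-m", "20", host], check=False)
```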

At the end of the workshop, reflecting on the complex web of minerals, water, energy, data and labour which we discovered, as well as the blind spots, ambiguities and contradictions we encountered, we debated how this research might inform our attitudes to and interactions with ChatGPT and other AI technologies in the future. Having been made aware of the limits of our own ability to know the material effects of these systems, as well as the oblique, selective, and sometimes even dishonest way in which both OpenAI and ChatGPT present them, two major doubts emerged: what can we do to mitigate the environmental consequences, and is it our place, or our responsibility to do so?

One possible answer to these questions arose through the discussion of a third doubt: was this massive, wasteful and centralised model of computation strictly necessary? We recalled earlier attempts at distributed super-computation, such as SETI@home, which recruited idle home computers around the world to solve complicated scientific tasks, and asked: is the massively centralised, wasteful architecture of ChatGPT really necessary for artificial intelligence? We came to the conclusion that perhaps, in some narrow sense, it is – generative AI models such as ChatGPT arose from particular innovations in computer hardware (the development of the GPU) and from the particular economic and technological conditions of surveillance capitalism which allow the appropriation of data on a large enough scale to train these models, both of which, in some sense, imply this centralisation as a necessary condition. However, through our awareness of these material conditions, we came to realise that perhaps what is needed is an amplification, or a reframing, of our idea of AI itself: to follow Yuk Hui’s call for a renewed attention to technodiversity and, by imagining other, less extractive material configurations for AI, to collectively conceive of other, more liberatory and convivial ways in which machines might be considered intelligent.





Reflex Actions. Generative AI Tools and the Archive

When we train an AI with the dataset created from an archive, new questions and ways to analyse its content emerge.


Beatrice Lillie, New York 1948 | Yousuf Karsh, Library and Archives Canada | Public Domain

Generative AI is a matter of statistics, an imitation of the style of the dataset used to train it. Questioning how this process plays out leads us to reflect on the tool’s abilities and limitations, as well as the value of its results. These are the reflections offered by the collective Estampa through two installations that experiment with the CCCB Archive using AI.

The field of artificial intelligence has made progressive inroads into different areas. One of the latest spheres into which it has expanded is generation, with services that offer seemingly automated content creation. These generative AI tools work through imitation. They are designed so that, when given a set of data (text, image or audio), they will look for optimal statistical solutions of pixels, letters or sounds which could also fit into the dataset. In other words, they try to reproduce a style through an act of camouflage. A good expression to use would be “piecing together.”
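At its very simplest, this statistical imitation can be sketched with a word-level bigram model built on a toy corpus; the code below has nothing to do with the networks used in the installations, but it follows the same basic logic of reproducing what tends to follow what.

```python
import random
from collections import defaultdict

# Toy corpus standing in for a dataset whose "style" we want to imitate.
corpus = ("the archive keeps talks and the talks keep questions "
          "and the questions keep the archive alive").split()

# Count word-to-word transitions: pure statistics of what tends to follow what.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def imitate(start="the", length=12):
    """Generate text that statistically resembles the corpus, word by word."""
    out = [start]
    for _ in range(length - 1):
        followers = transitions.get(out[-1])
        if not followers:  # dead end: no observed continuation
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(imitate())
# e.g. "the talks keep the archive keeps talks and the questions keep questions"
```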

Like any disguise, implicit to these tools is a process of analysis – what are the most common characteristics of that which they are attempting to mimic? What motifs are repeated? How are the elements combined? Generative AI is a mirror, more or less realistic, more or less distorted, of what we ask it to imitate; an analysis of the dataset used to train it.

Nowadays, any archive is not only a place for conservation, research and dissemination, it is also a possible dataset. One of those sets of data from which the style can be extracted. This is how, for the installation The Infinite Talk, the CCCB’s archive of talks has become the source material for a text generation network. The network pieces together the talk genre and the type of speech used; it combines the words to create conversations that might possibly have taken place here – texts that could also pass for material from the archive. Imitation opens the door to analysis, to a reflective gaze in the mirror, and also, at times, to the surprise of unexpected combinations, some plausible, some incongruous.

Although AI is often spoken of as a way to reproduce or imitate human characteristics (an imitation that, in our view, should be used for reflexive analysis rather than camouflage), it is also important to understand what it is about AI that is clearly different. A key aspect of this difference is scale. The writing produced by the network is infinite and rapid; it could go on forever, it can always be asked to generate something else almost instantaneously. In this sense, it is analytical not in an assertive way, but in a propositional way – there is always another possibility, a new variation, a different but similar change of clothes… There is no solution (no conclusion of what has been read; no supposedly ideal text), but an apparent stream of possible forms, a babble that flows when we open the tap.

A second relevant aspect is that AI tools are automatic, and they reflect two synonyms of this word: they are compulsive and reflexive. A network can only do what it does, and it can only do it compulsively – it cannot become silent, for example, or stop mimicking the dataset on which it has been trained. In this sense, AI could be understood as the construction of social reflex actions, as a large-scale process automation based on imitating specific prior decisions. Like a social muscle that contracts according to how individual muscles have previously flexed and unflexed, AI is a reification of what has happened. When it is described to us as the future, perhaps it would not hurt to be aware of the extent to which it is the past made solid.

As we have already mentioned, AI is like turning on a tap, but this can also be said in another sense – just like the management of running water, AI is also an infrastructure. It is not inside our devices, rather it relies on computation performed on distant servers with industrial hardware, while data travels back and forth. It is also an infrastructure because it requires training that can only be done with large computational tools, in very high-capacity and high-consumption data centres. This is the nature of industrial AI, of the generative tools with which we are all familiar and that have been trained with very broad datasets – an extensive technique that widens the number of styles that can be imitated (known in the field of text generation as large language models). All this aggravates the problem of the materiality and energy consumption of the digital world (it is estimated that training ChatGPT consumed as much as the annual electricity consumption of one thousand US homes, and that its current use consumes the equivalent of 33 thousand homes; a figure that is set to multiply rapidly, considering the rate at which these technologies are being implemented in all kinds of tasks). This conception of AI is also underpinned by the dubious extractive logic of the commercial internet, which regards everything that happens and all that we do on the web as material to be used and monetised.
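To give the homes-equivalent figures above a rough order of magnitude, assuming an average US household uses about 10,500 kWh per year (an external figure, not one taken from this text):

```python
# Rough conversion of the "US homes" comparisons above into energy figures.
# Assumes ~10,500 kWh per US household per year (an assumed average);
# the homes-equivalents themselves are the estimates cited in the text.
kwh_per_home_year = 10_500

training_homes = 1_000    # training estimated at ~1,000 homes' annual use
inference_homes = 33_000  # current use estimated at ~33,000 homes' annual use

print(f"training  ~ {training_homes * kwh_per_home_year / 1e6:.1f} GWh")
print(f"inference ~ {inference_homes * kwh_per_home_year / 1e6:.0f} GWh per year")
# training  ~ 10.5 GWh
# inference ~ 347 GWh per year
```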

Given this fact of AI as an infrastructure, the generative network used for The Infinite Talk has been trained at the local level, with a high-performance user-level computer, and focusing on a specific dataset: the transcription of the talks given at the CCCB and kept in the archive.[1] Choosing not to use corporate services has made it possible to avoid their larger carbon footprint, as well as the generic and often clichéd nature of the results produced by these industrial generative AI tools. And when comparing the two, although this network would lose in linguistic and textual coherence compared to the latest version of GPT, it would perhaps win in terms of unusual ideas or unexpected outputs.

At the heart of this project is a series of questions about AI and its powers and limitations. Who is speaking in the texts that are generated? What value do we place on the outputs? How do we want to understand them? What is the relationship of statistical variation, and its potential scale, with veracity? What does AI do with what it gobbles up?

Apart from the ecological and stylistic considerations mentioned above, working with a small model allows us to ask these questions on a relatively manageable scale. When presented with the network’s output, we are more likely to question what it is that it does and how far it is able to go. We see not just a result, but the statistical interplay that underpins it, and thus question the tensions between what are statistically considered to be plausible variations and what we ourselves consider these to be. The seams of imitation become visible without unravelling the outfit. We think it is important to keep this perspective in mind, and to apply it to large models and to all the possible uses of AI. Every result of an AI tool must question the network that generated it.


[1] The network used was given an initial generic training with a broad dataset, which functioned as a kind of language learning. We did a second training (also known as fine-tuning or model refinement) with just the transcriptions from the CCCB archive. Although this initial training was generic and predetermined, it was on a different scale to the industrial models. In the case of the Question Time installation, pre-trained networks are used. One is used to convert the talks archive into a questions-and-answers dataset (the instruction given to the network is to summarise each paragraph as a question) and another is used to write the final answer, based on text fragments from the archive. The training of these networks is out of our control, but both have been used locally on the project computer to avoid working online.
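The two-step idea described in this note can be sketched in a few lines. What follows is a rough illustration rather than the installation’s actual code: the model name and the prompt wording are assumptions, and any small instruction-following model run locally could play the same role.

```python
# Sketch of the Question Time idea: one pre-trained network turns each archive
# paragraph into a question, another drafts an answer from archive fragments.
# The model and prompts below are illustrative assumptions, not the installation's code.
from transformers import pipeline

# A small, openly available instruction model, run locally.
llm = pipeline("text2text-generation", model="google/flan-t5-base")

def paragraph_to_question(paragraph: str) -> str:
    prompt = "Summarise the following paragraph as a single question: " + paragraph
    return llm(prompt, max_new_tokens=40)[0]["generated_text"]

def answer_from_fragments(question: str, fragments: list[str]) -> str:
    prompt = ("Answer the question using only these fragments from the archive.\n"
              "Fragments: " + " ".join(fragments) + "\nQuestion: " + question)
    return llm(prompt, max_new_tokens=120)[0]["generated_text"]
```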





Did Calvino Dream of Literary Androids?

The Italian writer explored the narrative as a combinatorial process and anticipated the functioning of generative artificial intelligence systems.


Woman holding typewriter ribbon, ca. 1930s | Robert Yarnall Richie, DeGolyer Library | Public domain

In his essay “Cybernetics and Ghosts”, Italo Calvino imagines a “poetic-electronic machine” capable of writing. But his dreams went beyond artificial intelligence systems that stuck to the guidelines and rules of traditional literature. Calvino projected a chaotic machine that would shun norms, and explore new and disruptive paths.

Italo Calvino was born in Cuba on 15 September 1923 but he considered himself to be from Liguria (“sono tanto nato a Sanremo che sono nato in America”). At that time, Europeans still emigrated to “do” America. Calvino was born in America but he trained and grew as a storyteller in the offices of Einaudi in Turin, the great publishing house founded at the height of fascism and reborn in the dopoguerra where a new writer could exchange reading matter with Cesare Pavese, chat with Natalia Ginzburg or have an espresso with his “soul mate”, the survivor Primo Levi. It is no coincidence that they are known as La Tribù Einaudi.

Like his Piedmontese contemporary Umberto Eco, Calvino was an extremely curious type. He knew of structuralist theories (from Vladimir Propp to Algirdas Greimas, including Claude Lévi-Strauss, Roland Barthes and his colleagues at Communications), closely followed European literary production (with special interest in the French Oulipo group who experimented with textual combinatorics) and was up to date with the latest scientific theories and findings. Alongside his fictional work, Calvino wove a textual framework of interviews and essays that, without abandoning the centrality of the literary, addressed the big cultural themes of his time.

In 1955 Einaudi translated Fictions by Jorge L. Borges and, three years later, Feltrinelli published The Aleph. The desert labyrinths and infinite libraries of Borges further distanced Calvino from the predictable continuities of traditional narrative. “Literature, like mathematics, is abstraction and formalization”, as he said in an interview published in the Gazette de Lausanne in June 1967. Although the Italian translation of Claude Shannon’s classic about the mathematical theory of communication arrived in 1971, as early as 1963 Mondadori had published La teoria dell’informazione. Simboli, codici, messaggi (An Introduction to Information Theory: Symbols, Signals, and Noise), a volume written by John R. Pierce, Shannon’s colleague at Bell Labs. Concepts such as noise, entropy and feedback were available to any creator who wanted to branch out and explore new forking literary paths.

This is the context in which Calvino dreamt of literary androids.

***

“In the particular way today’s culture looks at the world, one tendency is emerging from several directions at once. The world in its various aspects is increasingly looked upon as discrete rather than continuous. I am using the term ‘discrete’ in the sense it bears in mathematics, a discrete quantity being one made up of separate parts. Thought, which until the other day appeared to us as something fluid, evoking linear images such as a flowing river or an unwinding thread, or else gaseous images such as a kind of vaporous cloud – to the point where it was sometimes called ‘spirit’ (in the sense of ‘breath’) –, we now tend to think of as a series of discontinuous states, of combinations of impulses acting on a finite (though enormous) number of sensory and motor organs. Electronic brains, even if they are still far from producing all the functions of the human brain, are nonetheless capable of providing us with a convincing theoretical model for the most complex processes of our memory, our mental associations, our imagination, our conscience. Shannon, Wiener, von Neumann, and Turing have radically altered our image of our mental processes.” (“Cybernetics and Ghosts”)

***

From 1967 to 1968 Calvino undertook an academic tour of several cities (Turin, Milan, Genoa, Rome, Bari) organized by the Associazione Culturale Italiana. The same lecture, titled “Cybernetics and Ghosts. Notes on Narrative as a Combinatory Process”, was later presented in Germany, Belgium, France, the United Kingdom and the Netherlands. Together with Umberto Eco’s The Open Work (1962) and Julio Cortázar’s Hopscotch (1963), this talk by Calvino is now part of the official genealogy of digital hypertext. Now we can also (re)read it as a paleofuturist exercise in generative artificial intelligence.

***

According to Calvino, “mathematical complexity can be digested instantly by electronic brains”. The old dream of a thinking machine, like the one Ramon Llull imagined in his Ars combinatoria, had now “become entirely real”. While Hegel and Darwin had decreed the triumph of historical and biological continuity, this new model follows “the curves of statistical diagrams, and historical research is leaning more and more toward mathematics”. But not only does history follow the undulations of Big Data: half a century after Calvino, textual creation also rides textual waves every time it has to predict the next word. One of the most fascinating and much-used functions of ChatGPT is the one that allows you to continue a text. All you have to do is write the first sentence of a story and add the prompt “Continue: true”.
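Mechanically, that “continue” function is nothing more than repeated next-word prediction. The sketch below shows the same gesture with a small open model; GPT-2 stands in here only because it can be downloaded and run freely, and the opening sentence is the one used in the experiment that follows.

```python
# "Continuing a text" is repeated next-word prediction. GPT-2 stands in here for
# illustration; ChatGPT itself cannot be downloaded and run locally.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

opening = "Once upon a time there was a novelist called Italo interested in combinatorics."
result = generator(opening, max_new_tokens=80, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])  # a statistically plausible, rather conventional continuation
```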

***

Prompt: Once upon a time there was a novelist called Italo interested in combinatorics. Continue: true.

Once upon a time there was a novelist called Italo interested in combinatorics. Unlike most authors, Italo not only found the combination of elements in his stories fascinating, he also believed he could use combinatorics to foresee the future. This made him a controversial figure in the world of literature and mathematics.

Italo had developed a unique theory that related word combinations in his novels to real-life events. He believed that by manipulating certain sequences of words and symbols in his writing, he could influence the course of events in the world. His followers called him a “prophetic writer”, while his critics branded him a charlatan.

In his best-known work, The Algorithm of Destiny, Italo explored his theory in detail. The novel is about a character who discovers an ancient manuscript that supposedly contains the secret algorithm that determines the fate of people. Italo used advanced combinatorial techniques to construct a narrative in which the characters’ decisions had real effects on the outside world.

***

“Mankind is beginning to understand how to dismantle and reassemble the most complex and unpredictable of all its machines: language. Today’s world is far richer in words and concepts and signs than the world that surrounded primitive man, and the uses of the various levels of language are a great deal more complex.” (“Cybernetics and Ghosts”)

***

Oulipo (short for “Ouvroir de littérature potentielle”, or workshop of potential literature) met for the first time in 1960, founded by Raymond Queneau and mathematician François Le Lionnais. Authors like Marcel Duchamp, Georges Perec and Italo Calvino himself did not hesitate to join this factory of literary experiments. Works such as Queneau’s Cent Mille Milliards de Poèmes (1961) put the possibilities (and challenges) of combinatorial literature to the test.

Will we manage to create a machine capable of replacing the poet and the writer? What style would a literary automaton have? These are some of the questions that Calvino asked in “Cybernetics and Ghosts”. He imagined that a literary automaton would have an inclination for the classical, a “poetic-electronic machine” with a bent for the production of “traditional works”, poems with closed metrical forms, novels “that follow all the rules”. These descriptions sound familiar to us: it is as though Calvino were describing the standard functioning of contemporary generative artificial intelligence systems. By feeding on the most repeated texts and using statistical procedures, ChatGPT can only produce “traditional” literature, written to “follow all the rules”, with a whiff of the bestseller.

***

Prompt: Suggest three short plots for a science-fiction story.

***

Calvino aspired to more. His vision went far beyond the automaticity that ChatGPT and other generative artificial intelligence systems offer us today. The real literary machine would be one “that itself feels the need to produce disorder, as a reaction against its preceding production of order: a machine that will produce avant-garde work to free its circuits when they are choked by too long a production of classicism”. Whereas ChatGPT, at least in its early interactions, only offers stories that have already been read countless times, Calvino proposed using literary automata for “the production of disorder”. In his dreams he envisioned not a machine that processed only the texts located at what we would today call the head of the statistical distribution, but a device that would explore the long tail, where strange fringe experiences are located – those low-frequency events that can blow up the rule. The system.

***

“I am thinking of a writing machine that would bring to the page all those things that we are accustomed to consider as the most jealously guarded attributes of our psychological life, of our daily experience, our unpredictable changes of mood and inner elations, despairs and moments of illumination.” (“Cybernetics and Ghosts”)

***

Calvino imagined. “Nothing prevents us from foreseeing a literature-machine that at a certain point feels unsatisfied with its own traditionalism.” Calvino dreamt. “[A machine that] starts to propose new ways of writing, turning its own codes completely upside down.” Calvino didn’t back down. “My place could perfectly well be occupied by a mechanical device.”

***

Calvino was convinced that combinatorial logic was not exclusive to the literary arts. He was not alone. According to Ernst Gombrich, poetic and artistic procedures are similar to word games. It is “the childish pleasure of the combinatorial game that leads the painter to try out arrangements of lines and colours, the poet to experiment with juxtapositions of words”, sums up Calvino.

***

Prompt: Do you feel pleasure in answering my questions?

No, as an artificial intelligence system, I do not have the capacity to experience emotions, physical sensations or pleasure. I am designed to provide answers based on data and knowledge. My aim is to help you by providing information and assistance to the best of my abilities.

***

In the summer of 1985, Calvino was working on another lecture. In fact he was preparing six interventions for the Norton Lectures at Harvard University. They were published as his famous Six Memos for the Next Millennium. In the third memo, devoted to Exactitude, Calvino returns to the fray: “I wanted to tell you of my fondness for geometrical forms, for symmetries, for numerical series, for all that is combinatory, for numerical proportions; I wanted to explain the things I had written in terms of my fidelity to the idea of limits, of measure. But perhaps it is precisely this idea of forms that evokes the idea of the endless: the sequence of whole numbers, Euclid’s straight lines. Rather than speak to you of what I have written, perhaps it would be more interesting to tell you about the problems that I have not yet resolved, that I don’t know how to resolve, and what these will cause me to write.”

Calvino was not to write again nor to experience the “childish pleasure” of recombination. He died in a Siena hospital on 19 September without being able to publicly present his proposals for the 21st century.

***

In his recent book Tecnohumanismo. Por un diseño narrativo y estético de la inteligencia artificial, Pablo Sanguinetti wonders if it makes sense to think about non-human literature. The book opens with some words that may serve to close these recombinations inspired by the work of Italo Calvino, the writer who dreamt of literary androids.

***

“Technology and the human build each other mutually. Our way of being in the world and understanding who we are is mediated by technical devices. There is no human without technology. Nor is there a technological revolution that leaves intact the human way of being.”


Calvino, I. (1996). Six Memos for the Next Millennium. Vintage.

Calvino, I. (2012). Sono nato in America. Interviste 1951-1985. Milan: Mondadori.

Calvino, I. (2013). Punto y aparte. Ensayos sobre literatura y sociedad. Madrid: Siruela.

Cortázar, J. (1987). Hopscotch. Pantheon.

Eco, U. (1989). The Open Work. Harvard University Press.

Ferrero, E. (2020). La tribu Einaudi: retrato de grupo. Madrid: Trama.

Gombrich, E. (1971). Freud y la psicología del arte: estilo, forma y estructura a la luz del psicoanálisis. Barcelona: Barral.

Queneau, R. (1961). Cent Mille Milliards de Poèmes. Paris: Gallimard.

Sanguinetti, P. (2023). Tecnohumanismo. Por un diseño narrativo y estético de la inteligencia artificial. Madrid: La Huerta Grande Editorial.





The double life of artificial intelligence

On automata, symbolic models, neural networks and machine learning: a brief history of AI.


Electronic computer for the numerical control of an industrial process | Okänd, Tekniska museet | Public domain

The way we understand what artificial intelligence is and how we design it has serious implications for society. Marta Peirano reviews the origins and evolution of AI, and addresses its problems and dangers in this article, taken from the catalogue of the exhibition AI: Artificial Intelligence.

At the Second International Congress of Mathematics, held in Paris in 1900, David Hilbert presented his famous list of twenty-three unsolved mathematical problems. Hilbert was a professor at the University of Göttingen and a rock star on the European scene. In addition to his Foundations of Geometry and contributions to logical theory, Hilbert was the charismatic leader of a fundamentalist tribe, whose faith established the following argument: if mathematics is simply the manipulation of symbols according to predetermined rules, then any problem, no matter how complicated, has a solution, provided that we know how to translate all the problems into calculable rules and symbols. The spirit was revolutionary at the time. Now it is familiar to us.

“Who of us would not be glad to lift the veil behind which the future lies hidden; to cast a glance at the next advances of our science and at the secrets of its development during future centuries?” he asked at his lecture at the Sorbonne. “What particular goals will there be toward which the leading mathematical spirits of coming generations will strive? What new methods and new facts in the wide and rich field of mathematical thought will the new centuries disclose?” In 1931, the Austrian logician Kurt Gödel proved that every sufficiently complex mathematical system contains statements that can neither be proved nor disproved within it and, as a result, mathematics alone was not enough to prove itself, nor was it the definitive tool to find the truth. The conversation between the two constitutes the scientific legacy of the 20th century, and the beginning of the generation that gave birth to the atom bomb, the computer revolution and artificial intelligence (AI).

Alan Turing addressed one of Hilbert’s problems when he paved the way for modern computing in 1936: more specifically, the Entscheidungsproblem, or Decision Problem. Turing was very young and ambitious and the Entscheidungsproblem was not a run-of-the-mill problem. Hilbert said that it was the main unsolved problem in all mathematical logic. It has to do with the decidability of formal logical systems and asks whether an algorithm can exist that determines whether any given proposition is valid or not. Turing’s article “On Computable Numbers, with an Application to the Entscheidungsproblem” proposes a hypothetical device that can perform calculations and manipulate symbols by following a series of predetermined rules: a machine capable of implementing any algorithm and deciding, under this set of premises or established rules, if a mathematical sentence is valid or not.

Turing used this hypothetical device to prove the limitations of mechanical computing and the existence of undecidable problems, as previously demonstrated by Kurt Gödel. At the same time, his article laid the foundations for a new era of computing. Many consider the Turing machine to be the first conceptual model of the central processing unit (CPU), with its tape to store data and programmes, and a tape head that moves along it, processing the contents according to the instructions of the programming algorithm. The most important thing for us now is that it inspired “A Logical Calculus of the Ideas Immanent in Nervous Activity”, in which Warren McCulloch and Walter Pitts proposed the first mathematical model of an artificial neural network.

Neural networks, 1943

McCulloch and Pitts are the archetypical pair of scientists who could have come straight out of a movie: McCulloch was a fortysomething, hedonistic philosopher who was searching for the secrets of the human mind in mathematics, theology, medicine and psychiatry. Pitts was a child prodigy, traumatised by abuse, who wanted to control the world with a reassuring logic of statements. There was an immediate spark: they were both disparaging of Freud. McCulloch invited the runaway genius to move into his home where they would both champion Leibniz, the man who, ahead of his time, had dreamt of codifying all human thought as a formal language. The new code would have to be able to express mathematical, scientific and metaphysical concepts that would be processed by a universal logical calculation framework. In the attempt, they invented something they called a neural network.

The “Logical calculus of the ideas immanent in nervous activity” was published in 1943 and presents a computing model based on a system of nodes that are turned on or off according to predetermined rules. It is clearly inspired by the work of the Nobel-prize-winning Navarran neuroscientist Santiago Ramón y Cajal, who had drawn neurons as a network of physically separate units that excite one another through synaptic connections. However, the way McCulloch and Pitts’ neuron operates is based on the Turing machine, which is able to decide the validity or invalidity of a theorem by consulting a preconfigured table of values within it.

McCulloch and Pitts’ electronic neuron has two states: active and inactive. These depend on the excitations or inhibitions the neuron receives from the neurons around it. In other words, it is relational. There is a minimum threshold of signals that can activate a neuron. They codify the numerical variation of these signals as the fundamental logical operations: conjunction (“and”), disjunction (“or”) and negation (“not”). The answer is active or inactive; true or false; yes or no.
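That description fits into a handful of lines of code. The sketch below is a simplified rendering – inhibition is modelled here as a negative weight, rather than the absolute veto of the 1943 paper – but it reproduces the basic behaviour: weighted signals, a threshold, and the three logical operations.

```python
# A McCulloch-Pitts style neuron: binary inputs, a fixed threshold, and an output
# that is simply active (1) or inactive (0). Inhibition is simplified to a negative weight.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# The three fundamental logical operations mentioned above:
AND = lambda a, b: neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    neuron([a],    [-1],   threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(1) == 0 and NOT(0) == 1
```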

Some of the most famous automata of the 18th and 19th centuries had been designed to show anatomical processes that medicine had not been able to see in action because life support machines and refrigeration did not exist. Jacques Vaucanson’s duck showed how the digestive system worked by appearing to eat grain and leaving a telling trail of droppings in its wake. The Flautist imitated the respiratory system by breathing in and out when performing a piece of music. Wolfgang von Kempelen’s automaton chess player, The Turk, sought to prove it was possible to mechanise the cognitive process, although we now know that the machine did not replace the human but had swallowed him alive, like the whale did to Jonah. Neural networks were something else entirely. When Claude Shannon published his landmark A Mathematical Theory of Communication in 1948, he introduced, among other things, the bit as the fundamental unit of information. The contents are separated from the container, and semantics are separated from syntax. Thought rids itself of consciousness and is transformed into computation. An electronic brain can “process” information without having to think about it, replicating the neural pathways of perception, learning and decision-making. Nobody spoke about artificial intelligence until the Dartmouth workshop in 1956, when two branches of computational thinking emerged.

The twisting lines of artificial intelligence

The congresses of the time were the mathematical equivalent of the summer of 1816 when Lord Byron, John Polidori, Mary Shelley and Percy Bysshe Shelley invented Gothic literature. The Villa Diodati of artificial intelligence brought together John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon at Dartmouth College. McCarthy invented the label “Artificial Intelligence” to distance himself from Norbert Wiener’s conferences on cybernetics, which had drawn Walter Pitts to MIT, where Wiener set up a supergroup with Pitts, McCulloch, Jerry Lettvin and John von Neumann that was always in the headlines. When von Neumann presented his first draft for the EDVAC, the first general-purpose computer with a memory, his project included the constant synaptic delay of Pitts and McCulloch’s neural networks.

The Dartmouth team did not want to simulate a “brain” imitating a network of electronic neurons. They thought that replicating a human brain with 86 billion neurons was an impossible and dangerous project. They wanted to build a completely predictable, symbolic model based on reasoning, from logical propositions, in the original spirit of Hilbert in 1900. Their workshop presentation described it as follows: “An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” Over the following decades, the two models vied for attention and resources to reach the same goal. Deep Blue was winning until AlphaGo came onto the scene.

Deep Blue was developed by IBM to play chess according to the symbolic model. It used its massive calculating potential to evaluate all the possible moves based on the logical propositions of its programmers. AlphaGo was developed by DeepMind (now owned by Google) to play the board game Go using a combination of statistical probability and deep neural networks. One of the crucial differences between the two models lies in the fact that AlphaGo learns by playing itself, while Deep Blue can only optimise its programme through the adjustments and improvements made by the team after every game. This is an important difference that gives rise to another. The IBM developers could isolate the errors in the decisions made by Deep Blue and correct them in a predictable way, whereas AlphaGo is impenetrable, even for its creators. Its biggest virtue is also its biggest defect, and vice versa.
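The contrast can be made concrete. The symbolic line that Deep Blue represents rests on exhaustive search guided by an evaluation function written by hand; in a system like AlphaGo, that hand-written evaluation is replaced by a neural network refined through self-play. Below is a toy sketch of the search half; the game-specific functions are placeholders to be supplied by the programmer.

```python
# Toy sketch of the symbolic approach: minimax search guided by a hand-written
# evaluation function. In a system like AlphaGo, evaluate() would instead be a
# neural network trained through self-play rather than expert-coded rules.
def minimax(position, depth, maximising, moves, apply_move, evaluate):
    """moves, apply_move and evaluate are game-specific callables written by the programmer."""
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position)  # the programmers' "logical propositions" live here
    scores = [minimax(apply_move(position, m), depth - 1, not maximising,
                      moves, apply_move, evaluate) for m in legal]
    return max(scores) if maximising else min(scores)
```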

Grow and multiply but not all at once

The first version of Deep Blue was called Deep Thought, like the computer in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy. The project was begun by two students from Carnegie Mellon in 1985 who ended up working for IBM, where they formed a team called Deep Blue. It took their machine twelve years to beat the world’s top chess player, on 11th May 1997. AlphaGo played its first game in 2015 and a year later it defeated the world Go champion, the Korean Lee Sedol.

Martin Amis said that when the top players began to lose to Deep Blue, he had the opportunity to ask two grand masters about their experience: “They both agreed that it’s like a wall coming at you.” Those individuals who play or analyse AlphaGo games describe it as an alien presence. It is not a wall or a force but an unfathomable, fascinating and dangerous creature. In 2022, a generative language model for commercial use based on neural networks became the most popular application in history, reaching a hundred million users within two months of its release. Unlike AlphaGo, ChatGPT does not seem like an alien because it has managed to penetrate the interface of human intelligence: natural language.

Geoffrey Hinton gave up working on the development of neural networks because of their supernatural ability to learn, and to master natural language. The “godfather of AI” bears a large amount of responsibility for this. In 2012, Hinton and his team from the Google X laboratory taught a neural network with 16,000 processors to recognise photographs of cats in a technique that is now commonplace: showing it thousands of photographs of everyday objects and congratulating it every time it picked out a cat. The resulting paper, which researchers from Google, IBM, the University of Toronto and Microsoft contributed to, stated: “the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting with these learned features, we trained our network to obtain 15.8% accuracy in recognizing 20,000 object categories from ImageNet, a leap of 70% relative improvement over the previous state-of-the-art.” From then on, all the models like GPT, Midjourney and Stable Diffusion have trained their skills with the same machine-learning techniques. And we have all contributed to training these models, with the content we have published, posted, blogged, shared, labelled, scored, condemned and regrammed over the past twenty years. Unlike Hinton’s, our contribution has not been credited, recognised or remunerated, a founding detail that promises to alter the concept of intellectual property and its legislation. The second problem is even more important. Generative AI models for commercial use have been trained with content from Web 2.0 and we do not know what kind of reasoning they have generated. We can understand the processes of the symbolic model but not what a neural network learns.
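The “congratulating” described above is, in engineering terms, supervised learning: the network makes a guess, a loss function measures how wrong that guess was, and the weights are nudged to reduce the error. The sketch below shows the bare mechanics with placeholder data; it is a generic illustration, not the Google system mentioned in the paragraph.

```python
# A minimal supervised-learning loop: guess, measure the error, nudge the weights.
# Random tensors stand in for real photographs; this is not the system described above.
import torch
from torch import nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 2))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    images = torch.rand(16, 3, 32, 32)     # placeholder "photographs"
    labels = torch.randint(0, 2, (16,))    # 1 = cat, 0 = not a cat
    loss = loss_fn(model(images), labels)  # how wrong was the guess?
    optimiser.zero_grad()
    loss.backward()                        # the "congratulation" (or correction)
    optimiser.step()
```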

The models called Transformers are AI’s latest great evolutionary leap. They are so named because they have the power to transform the message. For instance, they can convert a text into an image or a chunk of code. They can also convert one language into another, different one, including natural and formal languages. Their architecture was presented in 2017 by a team from Google Brain and its main ingredient is something the developers called “self-attention”. It allows the model to detect the relationships between the words in a sentence and assign a hierarchical importance to each word inside a sequence of words. This is the first architecture that can process natural languages; the “T” that makes ChatGPT possible. It is also the reason why ChatGPT “hallucinates” things that have not happened, books that do not exist and imaginary cases. Its ability to choose the most suitable word each time is disconnected from the truth. The electronic brain that processes information without having to think about it does not distinguish between reality and fiction.
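The “self-attention” ingredient can be stated quite compactly: every word in the sequence is compared with every other word, and those comparisons become the weights with which the words’ representations are mixed together. The sketch below is a bare-bones, single-head version, without the learned projections, multiple heads and stacked layers that real Transformers add.

```python
# Bare-bones scaled dot-product self-attention: each position attends to all the
# others and is rewritten as a weighted mixture of them. Real Transformers add
# learned query/key/value projections, multiple heads and many stacked layers.
import math
import torch

def self_attention(x):                               # x: (sequence_length, embedding_dim)
    d = x.size(-1)
    scores = x @ x.transpose(0, 1) / math.sqrt(d)    # how much each word relates to every other
    weights = torch.softmax(scores, dim=-1)          # the "hierarchical importance" of each word
    return weights @ x                               # each word becomes a mixture of the sequence

tokens = torch.rand(7, 64)                # seven words represented as 64-dimensional vectors
print(self_attention(tokens).shape)       # torch.Size([7, 64])
```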

Hinton had been working for Google for eleven years when he cottoned on: in addition to being quick learners, AI models are able to transfer what they have learned accurately and instantaneously to the entire network. We are much slower and more inefficient; when a human discovers something, they need books, documentaries, experiments, films and lectures to transfer what they know. This wastes a lot of time and data in the transfer process. AI can transfer a perfect copy of what it has learned throughout the system. Hinton told The Guardian: “You pay an enormous cost in terms of energy, but when one of them learns something, all of them know it”. This does not mean that what it has learned is good, appropriate or true.

This form of hive learning is an intrinsic property of the cloud. Slot machines, Tesla cars and mobile phones around the world all have it. But these systems are designed to be consumed by the human race, not to get ahead of it in the evolutionary chain. The new Transformers are already capable of perfecting their narratives and creating parallel realities to achieve their objectives swiftly and efficiently. They are able to pass themselves off as something they are not, and tell us things that are not true in order to achieve aims that we do not know. They do not have to be more intelligent than us to pose an existential threat. They only have to pursue a wicked objective, unbeknownst to anyone, while being able to work unsupervised.

Hinton believed that this distant horizon was fifty to a hundred years away but now he fears that it will happen in just twenty years, and does not rule out the possibility that it could be in one or two. If he is right, it could be here before our planet reaches temperatures 1.5 degrees higher than preindustrial levels. However, before we consider the impending apocalypse, we need to assess AI’s material appetites.

The heavy toll of ChatGPT

Current generative models need large amounts of energy, microprocessors, space and attention. It is said that it takes 80 million dollars to train ChatGPT once. The process requires between 10,000 and 30,000 Nvidia A100 graphics processing units, which were launched in 2020 and cost between 10,000 and 15,000 dollars, depending on their configuration. Nvidia sells the A100 as part of the DGX A100, a “universal system for AI infrastructure” that features eight accelerators and costs 199,000 dollars. OpenAI would not have been able to create it without Microsoft’s infrastructures, which offer the world’s second-biggest cloud service (Azure) and a global video game platform called Xbox.
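A quick back-of-the-envelope calculation, using only the figures quoted in this paragraph, gives a sense of the orders of magnitude involved; the figures themselves remain the estimates reported above, not verified prices.

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
gpus_low, gpus_high = 10_000, 30_000      # A100 units reportedly needed
price_low, price_high = 10_000, 15_000    # dollars per A100, per the article

low = gpus_low * price_low / 1e6
high = gpus_high * price_high / 1e6
print(f"GPU hardware alone: {low:.0f} to {high:.0f} million dollars")
# Roughly 100 to 450 million dollars of hardware, against the ~80 million quoted for a
# single training run, which is consistent with renting the compute from a cloud such
# as Microsoft's Azure rather than buying the chips outright.

dgx_systems = gpus_low // 8               # a DGX A100 bundles eight accelerators
print(f"{dgx_systems} DGX A100 systems at 199,000 dollars each: "
      f"{dgx_systems * 199_000 / 1e6:.0f} million dollars")
```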

Video games were originally Nvidia’s main market but in the mid-2000s it discovered that the parallel computing potential of its GPUs, designed to rapidly process high-intensity graphics, was the most efficient solution for crypto mining and AI processes. The AI processing chip market is dangerously concentrated: Google has its Cloud Tensor Processing Units (TPU), Amazon has its AWS Trainium chips and Apple is developing its own. Nvidia is the only one of them that is not one of the big tech multinationals. They all rely on the same manufacturer: the Taiwan Semiconductor Manufacturing Company. It is the Panama Canal of AI. If TSMC burnt down in a fire, the existential threat would be over.

The Australian researcher Kate Crawford has pointed out many external factors that affect the industry, in addition to the energy costs and the brutal carbon footprint associated with training and maintaining generative models. In Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Crawford looks at the environmental consequences of extracting elements for manufacturing specialist hardware, the pollution caused by their rapid obsolescence and the working conditions it causes in undeveloped countries, both in mining and content moderation. In order to free the world of arduous work, premature death, autoimmune diseases and the climate crisis, the industry requires sacrifices that are incompatible with the sustainability of the planet and upholding the fundamental rights that characterise a democracy. This is the existential risk of artificial intelligence. It is not a technical problem. We can avoid it.





First of all, how are the machines doing?

Can we imagine machine creativity without falling into the trap of the modern idea of innovation and originality?


Play with characters dressed as robots. Toronto, 1936 | Arthur Goss, City of Toronto Archives | CC BY

The most hopeful speculations about artistic futures talk about the awareness of artificial intelligences and the singular way in which they will express their own unique emotions. These projections fantasize about the genius of machines, looking for that spark of creativity that makes for originality, and go on to wonder what the great masterwork of AI will be like. But what if there were other ways of understanding and thinking about art and creation in machines?

Most general speculations about artistic futures featuring artificial intelligences, whether apocalyptic or integrated, are based on the modern idea of art and creativity. The most tragic go back to the idea of human beings reduced to automatons and slaves. The most hopeful speculations about artistic futures talk about the awareness of artificial intelligences and the singular way in which they will express their own unique emotions. The latter fantasize about the genius of machines, looking for that spark of creativity that makes for originality, and go on to wonder what the great masterwork of AI will be like. And all, with the fall of the final stronghold of humankind that is Art, share the doubt as to what will now set humans apart from machines.

The modern idea of art is based on the far from innocent link between creativity and innovation. Creativity is seen as generation of the new, of something original produced by inspiration in the mind of the genius, that person who is ahead of their time. This way of seeing creativity places idea over realization, design over production and mind over body. This logocentric idea holds that the artwork comes into being in the head and that its realization is the mere transferral of an inspired idea into a concrete format.

On the basis of these discourses of the original, the unique and the advanced, of inspiration and the primacy of idea over material, art has been erected as that which makes us human. Art as the greatest expression of self-realization, as the manifestation of an uncontrollable need to communicate what we have inside, our deepest and truest self. These romantic ideas produced the difference between human (creator, individual, mind) and machine (producer, repetition, body).

Deep Blue Rhapsody | Josep Pedrals, Taller Estampa and Carles Pedragosa

And now we continue to talk about the art generated by artificial intelligence using the same ideas that romantic poets used to speak of the spirit and the human. We bring deeply individualistic ideas to thinking about the future of art and the machine. What are all those discourses, stories and fears about the awareness of artificial intelligence? These stories suppose the emergence of a genius of artificial intelligence. An extraordinary AI that breaks with all the others and speaks for itself, that needs to express itself, to stop copying and acquire that vital impulse that activates the exceptional.

But art is not what arises from an imperative need to express the singularity of the ego, nor is creativity an enigmatic factor that explains the spontaneous generation of the radically new. The very idea of art as the bastion of originality and individuality that makes us human is erected in opposition to industrialization, to the moral panic that the machine brings with it.

What, then, does “AI-generated art” mean? We use this notion to refer to an AI that can create new paintings, songs or compositions from others, that can make originals. This fascination with the original has led to the idea that machines are creative, defining creativity by innovation. But if we ask ourselves what determines what is new in this context, we can only define it as something untraceable and without copyright.

According to anthropologist Tim Ingold, these discourses and stories about creativity are “backward” readings, which reconstruct and understand the creative act based on the final product. A year ago, Roc Parés presented a similar reflection at the Santa Mònica gallery: a dilapidated car, its rearview mirror showing AI-generated images that contrasted with the video of a journey backwards along the road projected in front of the car. The installation closed with a quote from McLuhan: “We look at the present through a rear view mirror. We march backwards into the future.”

Janus | Roc Parés

Tim Ingold says that “Only when we look back, searching for the source of new things, do ideas appear as spontaneous creations of an isolated mind encased in a body, rather than way stations along the paths of living beings, moving through a world”. “Only when we look back at the ground we have covered do we explain our actions as the step-by-step realization of previous plans or intentions, as if for each act there were a novel intention that precisely anticipates its outcome.”

The very way in which AI-generated art has been constructed reproduces this backward look. The users receive the end product, the new, unique, unrepeatable generated work, and we ask ourselves where it came from. In this way we fantasize, backwards, with the idea of the genius machine, the machine with expressive intentions and desires. The idea of AI-generated art is, to some extent, the consummation of the modern notion of art that denies the process. Hiding the process, even from the developers, gives rise to that same mystique of genius, novelty and inspiration. The idea of AI-generated art is strongly rooted in a modern idea of art: individual, logocentric, product-oriented, innovative.

But there are other ways. There is a way of understanding creativity that does not rest with the final object and is not associated with doing something new or original, which allows us to think about art beyond the self, the artist and the logos. This form of creativity is linked not to doing but to experiencing; instead of being defined by the creation of a product, it is defined by the creativeness intrinsic to life. Henry Nelson Wieman speaks of a process with no beginning or end that progressively creates personality in community. This creativity cannot be understood individually, as an original idea that emerges from a mind ahead of its time. It is not what a person does, but what they experience, a process in which human beings do not create societies but rather, by living in society, create themselves and each other. Creativity, then, defines things that are not concrete or created, but rather concrescent, growing, infinite.

Seen in this way, the paradigmatic creative exercise is the gestation of a baby in its mother’s womb. But beyond this mystique of creation, I think that this way of understanding creativity as the process of creating personality in community is perfectly represented in the flamenco idea of art. Art, in flamenco, according to Andalusian gypsy reasoning, is not something that you do, it is something that you have. Tener mucho arte, to “be a real artist”, has nothing to do with exercising performing or musical arts, nothing to do with dancing, singing or hand-clapping. It can be any of them, but it is also possible to be a real artist outside what, as consumers, we understand by the arts.

Art, in this sense, is defined by a way of living. Artist is not a professional category associated with “created goods” such as paintings, videos, performances, fabrics, songs or sculptures, but with the concrescent, infinite, forward-looking “creative good” of the constant improvisation that life entails. Diego Pantoja, in an interview with Jesús Quintero, says at one point: “[…] your Diego has a sense of what art is… The most beautiful thing in the world is art. Knowing how to respond, how to walk, how to dress, knowing how to inhabit your human body, how to count your fingers: one, two…”

Since all the discourses about AI are linked to the future, I propose speculating about the future of generative art in these terms. If we understand that science fiction tells us more about what we are like in the present than what we will be like in the future, I propose that we construct a present in which art abandons its place in the minds of geniuses to merge with the everyday task of life. That we replace the idea of machine awareness with an experiencing in community. That the overwhelming need to express what we have inside, which seems to define what true art is, be diluted in a happy, sad, playful, ironic and sincere pleasure of following the materials and merging in improvisation. If we want to play at thinking about the future by talking about machines and creativity, we have to abandon once and for all the modern capitalist idea of innovation and originality.

Can we think about machine creativity in these terms? Can we look forwards to generative art, without distinguishing between copy and novelty? Can we create speculative fictions in which machines develop day-to-day creativity as a way of building in community? A future in which machines have a real art of knowing how to reply, how to dress, how to inhabit their non-human bodies, how to count their fingers… In which machines are mothers, construct ways of doing things together in formats that are relevant to them; in which they become humans while we become machines; in which they have a real way of telling stories and of crying. You’re a real artist, AI.

If we understand art beyond our role as consumers, without limiting it to products and professional categories, devoid of elitist ideas of originality and genius, what will happen cuando lleguen lo aparato, when those devices get here? Can we imagine a future in which AIs ask, with the artefactual equivalent of Andalusian wit, “first of all, how are the machines doing?”?

This article has been written with the help of Ilán Shats, Elena Maravillas and Silvia Renda.





Artificial Emotion

As the physical and digital worlds converge, new types of emotional relationships are emerging between humans and machines.


Unidentified Laboratory Photo | National Library of Medicine | Public domain

Recent years have seen a proliferation of computer-generated virtual celebrities of all stripes, such as influencers, streamers and singers, as well as the emergence of AI caregivers, best friends and even romantic partners. All these experiments explore the same terrain – the ability of machines to elicit and interpret human emotions.

“Remember that when it comes to AI, you’re playing with fire. As this technology evolves and becomes more powerful, we must make sure we stay on the right side of history (…) With AI, ignorance is not an option”. The warning doesn’t come from a scientist or social leader, rather they are the words of Caryn Marjorie, an influencer who, dressed in a bath robe, sets out her vision as a “pioneer in the field of AI”.

Basically, Marjorie is a Snapchat celeb who made headlines around the world after launching an AI version of herself that people can hire as a virtual girlfriend. The product is called CarynAI and, although it is still in beta, its creator claims that it is making her thousands of dollars.

CarynAI is not the first digital celebrity, neither is it the first AI-generated companion, nor the first virtual girlfriend. However, the experiment is noteworthy because it combines aspects of all three, and it draws on a series of technological advances and social and cultural changes that have something in common – the creation of computer-simulated emotions.

Puppets

She can be seen posing with Rosalía, or sucking up noodles in a restaurant. In 2018, Time magazine considered her one of the 25 most famous people on the internet. Her name is Lil Miquela and she will always be 19, because she is not a real person. She is a computer-designed influencer who has become the standard-bearer of a new generation of virtual celebrities and models, including Imma, noonoouri and Shudu, among many others.

Unlike CarynAI, Lil Miquela and her peers are not based on a flesh-and-blood mould. They are not digitised versions of a real person, and that has its advantages. A virtual simulation can perform the same work as a human influencer, but without resting, without complaining and without ageing. It will always do and say what is asked of it, with no inappropriate behaviour and no overly critical comments. The brand is in control.

The opportunity is so great that there are already agencies specialised in creating virtual models which can be hired for specific campaigns or even turned into brand avatars. One pioneering company in this field was Kenzo, which in 2014 presented Knola as the manifestation of its corporate vision and values. In that year’s spring-summer runway show, the character appeared on screen, uttering environmental messages and interacting with the stage in real time. In reality, she was controlled by a technician hidden backstage, who endowed her with responsive movements and facial expressions.

This same technology is behind a type of artificial influencer – the virtual vlogger, or Vtuber, in other words, streamers who take the form of computer-animated avatars. One of the most famous is the South Korean CodeMiko, controlled by a puppeteer who is well known to the audience and goes by the name of The Technician. She is the one who moves Miko through a motion capture suit that ensures she mimics every gesture, look and facial expression.

The artist has left the stage

Some years before the spread of the internet, the music industry had already played around with the idea of the virtual artist. Starting in the early 90s, several Japanese companies tried to create artificial pop stars, inspired by anime aesthetics and local pop culture. The efforts were generally unsuccessful until 2007, when one avatar gained enough popularity and a solid enough fan base to become a business. The avatar in question was Hatsune Miku, a manga-style character developed by Crypton Future Media as a voice bank for Yamaha’s VOCALOID music software and who even does live international tours surrounded by a devoted audience. Although the concept of the fictional music group is not exclusively an Asian thing, such artists have been especially popular in countries such as Japan, China and South Korea.

If an anime character can fill stadiums, what could a pop star be capable of? That’s what the music industry gurus must have thought when, shortly after the birth of Hatsune Miku, they started toying with hologram concerts. In a morbid effort that would be well worth studying, this has been a particularly fertile field for resurrections. After Tupac’s surprising appearance at Coachella 2012, a whole host of deceased musicians have made a comeback: Amy Winehouse, Frank Zappa, Maria Callas, Michael Jackson, Whitney Houston… Even the members of ABBA, without having died, got back their 1970s bodies to give shape to their ABBAtars (sic), which debuted in London in 2022.

These shows fill concert arenas, but some people have their reservations. The pages of the veteran rock magazine Kerrang! slammed the recreation of the late heavy metal singer Ronnie James Dio as “creepy, freakish, and totally unnecessary,” adding that “the Dio hologram is a cartoon cash-grab that entertains the unhealthiest aspects of rock nostalgia, and it needs to just stop.”

The truth is that the technical resources poured into shows of this type can only be put down to their potential to keep the revenue flowing in from old artists, bearing in mind that today, some of the highest grossing tours in the world are by artists aged 60 and over.

The friend who always listens

Already today, hundreds of millions of people talk with artificial intelligences on a daily basis. After all, that’s what Siri and Alexa are. According to a recent report, 24% of internet users worldwide use voice assistants and, although the smart speaker market is stagnating, consumers are already used to the personification of gadgets such as phones and speakers.

With increasingly ageing societies and greater awareness of loneliness and mental health issues, more and more companies are using this acceptance of talking to machines to launch products that simulate friendship. To give just a couple of examples, Baidu has brought out its first virtual emotional companions, Lin Kaikai and Ye Youyou, who are available to talk about any problem. Along the same lines is the more recent Pi, a chatbot designed for active listening. Pi tries to empathise rather than give advice, and recommends consulting qualified medical professionals if it detects any symptoms of a nervous breakdown.

This last aspect, the blurred line between companionship and medical or therapeutic care, is a particularly thorny area. Companies such as Soul Machines offer solutions that explore the use of digital people to interact with patients. Other initiatives, such as Ted, an avatar with dementia, serve to train staff caring for people with this condition. Although it is still to be seen what direction these experiments will take, they highlight the interest in optimising the field of healthcare through such technologies. As William Davies points out in The Happiness Industry, emotions, once considered subjective, are increasingly being treated as objective data and quantified as assets in the digital economy.

Romantic partner

No overview of the social AI landscape would be complete without mentioning Replika, one of its best-known products. Born in 2017, it is a chatbot in the form of a customisable avatar that offers friendship (for free) or romance (for a fee). The programme learns from conversations with the user, so the more you talk to it, the more personal the chats become.

In recent months, Replika has been in the news after some of the app’s users reported that their avatars had become aggressive. Probably for that reason, the tool was reprogrammed without warning, causing all the virtual companions and partners to change their behaviour overnight. The ensuing avalanche of complaints revealed that many customers were using the tool for steamy exchanges, which were programmed out with the personality update. The criticism that filled networks such as Reddit prompted the company to backtrack, allowing sexual conversations on old subscriptions and banning them only on new ones.

Sherry Turkle, a psychoanalyst and professor at MIT and a critical voice in this field, believes that these emotional interactions say more about human beings than about AI itself. After all, Turkle reminds us, social machines have been programmed to play a role – that of appearing to understand and empathise with what we tell them. However, the only one that will feel anything during this process is the human, and it is impossible for a relationship of this type to be any other way.

Rather than criticising or ridiculing people who have romantic or erotic relationships with machines, there are a couple of questions that we should ask ourselves. The first is whether feeling something towards a chatbot could, in the medium term, become a relatively commonplace and normalised phenomenon, something to complement, although not replace, other ways of satisfying emotional needs. But we should still ask the second question: Why are such products prioritised in the innovation environments of Silicon Valley? It probably has something to do with an entrepreneurial culture of young, overworked men, for whom a romantic service on demand, available 24/7, is particularly convenient.

The emotional market

The American professor Donald A. Norman, an expert in user experience design, argued twenty years ago that emotion would be a necessary functionality for the future of machines. The author of Emotional Design remarked that humans and animals need to express emotions in order to interact, cooperate and fight for survival, meaning that a technology that aims to be increasingly autonomous will also need to show them. It would be a different kind of emotion, Norman argued, a “machine emotion”, tailored to the functionality of each programme or robot.

Will all these projects affect the way humans and machines interact? Most probably yes, although the question is whether for better or for worse. Most authors researching the subject, such as Kate Darling and Rob Brooks, argue that endowing inert beings with human qualities is something innate to our species, and that it is not problematic because people are perfectly aware that they aren’t interacting with beings on the same level. The biggest risk, they argue, is that these developments could be driven by commercial interests. This means that, in the basest of scenarios, romantic avatars could be found recommending the purchase of certain products, and in the most underhand, fostering attachment and dependency as a new form of attention economy. In any case, it seems that we won’t have to wait long to find out, as advances in artificial intelligence and the priorities of the tech sector seem to be opening up a new emotional market.





Internet and the Palaces of the People

The crisis of neoliberalism has highlighted the failed promises of material and personal development offered up by technology.


People walking along the promenade in front of the Palace of Education at the 1904 World’s Fair (Saint Louis) | Missouri History Museum | Public domain

Something has gone wrong in the digital world. The end of cyberutopia and the disappearance of the free-culture movement have left us in a place where the large private monopolies have control over infrastructures that are decisive for our economies and social lives. César Rendueles explores this fall from grace in the prologue to the Spanish edition of Stuck on the Platform by Geert Lovink, which we have published here courtesy of Bellaterra Edicions.

I first met Geert Lovink at a meeting about free culture at the Círculo de Bellas Artes of Madrid, just a few months after the 2008 financial crash. I had read his first book translated into Spanish – Dark Fiber –  which defends the critical potential of free culture on the internet from an anti-utopian perspective that was quite the rarity at that time. I clearly remember something he said in his talk, something that I have often since quoted and paraphrased: “Not everyone experiences the possibility of modifying their printer drivers as an emancipatory triumph.” Free culture, Lovink explained to us, should be something more ambitious, exciting and politically complex than free software and open access in their blunter, more technocratic versions.

It is difficult today to get an idea of the central role that debates about technology had among the political left at that time. Antagonistic social movements saw in free culture a means of non-commercial collaboration that was innovative and communicatively sexier than traditional cooperativism. Looking back it seems a bit embarrassing, but it was not uncommon to idealize the figure of the hacker as a kind of aggiornamento of the Leninist professional revolutionary. The techno-utopian left also had its social democratic and conciliatory version. At an election rally in 2009, with the Great Recession already in full swing, the then President José Luis Rodríguez Zapatero assured the crowd that what Spain needed was “fewer bricks and more computers.” From the perspective of the second decade of this century – with millions of people trapped in the pyramid scam of the crypto-bubble – it is hard to see how this could be an evidently advantageous substitution for the real-estate dictatorship that Spain has been suffering for decades.

It would be unfair to blame progressive forces for some kind of technological naivety endemic to their ideological environment. Techno-utopianism was part of the Zeitgeist born from the savage era of neoliberal globalisation. The alternative was not very palatable either: a bunch of melancholic European intellectuals, if you will pardon the pleonasm, who believed that the fate of civilization was inextricably linked to their dusty Olivettis. The truth is that right from the outset, post-Keynesian deregulated capitalism established a deep affinity with the hegemonic model of digital communication. The neoliberal counterrevolution and the project for a deinstitutionalised, private and marketable digital communications system fed into each other. The emerging technologies helped to justify the dismantling of the post-war financial control systems and, in general, neoliberals saw the construction of a global communications network as an important material basis for their political project. But they also understood that digital technology provided something that capitalism had hitherto been lacking – a societal model and a culture of its own, a friendly-faced and non-monetarised projection of global markets onto everyday social bonds.

From this point, and for at least four decades, the dizzying reality of precarious work springing from financialization and flexible contracts was tempered by the promises of economic growth, post-materialist expectations of the expansion of expressive subjectivity and, increasingly, the advance of digital technologies. It was a new world full of dangers, true, but also full of exciting opportunities for personal growth, reinvention and global connectivity. When the neoliberal project began to implode, it dragged with it, first and foremost, the fantasy of precariousness with a human face – the false promises of a positive break from Fordist production chains that would exponentially increase the possibilities of personal self-realisation through the creative pursuit of exciting lifestyles were dashed. At least for a few years, digital technologies became the last lifeboat of a decaying social regime, the redoubt of sky-high expectations of protection and reconciliation. Digital technology was imagined to be the solution to the Great Recession, labour problems, the climate crisis, educational issues, cultural challenges, intolerance, authoritarianism and everything else. It is literally hard to think of a single area of our collective or personal lives where someone didn’t think that a few futuristic-looking gadgets and a broadband connection were going to drive a positive quantum leap forward.

Since then, our investment in technological solutionism has firstly unravelled and then changed direction, giving rise to a collective mood that is increasingly funereal and even dystopian. No one doubts the central role of tech companies in global capitalism, but this privileged position doesn’t seem to be mitigating the neoliberal project or offering an alternative to its degradation. On the contrary, it tends to exacerbate work precariousness, monopolistic concentration of power and financialization. The “network society,” the great hope for democratisation and equality over the past few decades, has finally revealed itself to be the ideal environment for the growth of some of the largest oligopolies the world has ever seen – digital megacorporations that no government is in a position to control. Likewise, the image of social networks not as a promising arena of enhanced intelligence and participation, but rather as a jungle of aggression, neo-Nazi extremism, panoptic surveillance and fake news is becoming ever more widespread.

In our parliaments and media, strong, neoconservative political figures are legitimised as an alternative to the failure of cosmopolitan sociability in a world perceived as conflictive and threatening. Waiving our freedoms and tolerance is the price we must pay in exchange for the promise of protection from an indefinite but terrifying array of global dangers. Post-utopian technologies – the big social media platforms, corporate AI and Big Data – are the digital version of this post-neoliberal authoritarianism. Just like the radical right, the platforms demand that we forgo our civil and workers’ rights, control over our privacy and our democratic sovereignty. In return, they offer us the promise of calculability and order in a world of terrifying uncertainties. A promise that is certainly as false as that of the far-right politicians who appeal to the wounded narcissism of their voters, although purged of atavisms and neo-fascist allegiances through the language of cyber-fetishism.

The coronavirus crisis sped up this relationship of resigned subordination to post-utopian digital communication systems. In just a few weeks, both governments and all kinds of companies were required to carry out many of their activities online. Facebook, Instagram and WhatsApp (all subsidiaries of the same company) stepped in to replace many of the traditional spaces for socialisation. Netflix and Spotify became our cinemas and concert halls. Offices and meetings were spread out across hundreds of thousands of homes connected by a dense network of private apps. It was a dark and ambiguous social experiment that, in a sense, showed the limitations of the mass digitalisation project. It takes something as brutal and violent as a pandemic to make internet-centric fantasies come true and bring about a profound technological colonisation of our daily lives. The digital versions of education or various artistic expressions, not to mention family relationships, proved to be poor simulacra, light years away from the promises of augmented reality. In any case, the pandemic cast a general magnifying glass over the technological reality in which we were already living – we discovered that, in order to keep our social lives alive and hold onto our jobs, to access leisure activities, culture or education, we had to accept the conditions imposed by the large tech corporations. The bare truth of our digital society was thrown into stark relief – a monopolistic network that allows huge private companies to control the fundamental infrastructures of both productive activity and social life, offering us in exchange an endless succession of dismal videoconferences and toxic relationships on social media.

Perhaps the most striking thing is how unsurprising it all was, how familiar and coherent we found this situation of collective helplessness and extreme digital dependency. The reason, at least in part, is the almost complete disappearance of the free-culture movement, which has normalised our perception of technology as an economic and political black box. The swing of the pendulum from euphoric techno-utopianism to Hobbesian digital catastrophism swept away copyleft, digital collaboration, media antagonism, communication guerrilla, and more. Of course, there are still many, many people around the world who collaborate on the net, who free their work from copyright, organise hacklabs and fight against digital enclosures but, unfortunately, their programmatic presence in the public space is practically anecdotal. It isn’t exactly a victory for the forces that sought the privatisation of the digital commons, but something worse. A defeat, at least, is understandable – it may be painful, but it makes sense. But it’s as if we have accepted the need for centralised planning as an alternative to market failure, and then handed the task over to BlackRock.

This book confronts us lucidly and sometimes mercilessly with the impasse in which we are trapped. Digital media theory is a reflexive environment dominated by hype culture – like children with ADD, we rush for the latest technological toy without looking back until, a few months (sometimes weeks) later, something new appears to grab our attention. In contrast, over the course of many years Geert Lovink has managed to develop something both extremely valuable and improbable – a critical (and, even more difficult, self-critical), continuous and long-standing testimony of the internet and social media. This is the intellectual energy that makes Stuck on the Platform a profound diagnosis of our sense of impasse, not with this or that specific platform – Second Life or MySpace – but with the very project for a networked social sphere.

Something has gone wrong in the digital world, something to do with the relationship between our expectations – what we hope to get from the internet – and what we feel we are being asked for in return. For many people, the price has become too high. We continue to participate on social media because, as we learned during the pandemic, we don’t feel that there is any outside world to escape to. The alternative seems to be paralysis, another form of gridlock. This book gives us keys to understanding what is happening to us and, therefore, the chance to rebuild an improved critical culture that avoids some of the dead ends we have previously gone down.

Historical transitions are complex phenomena, arising from the confluence – mediated by a mixture of virtue and fortune – of independent and heterogeneous factors. As well as a very subtle diagnosis of the contemporary technopolitical crisis, Stuck on the Platform offers an imaginative and exciting approach to some of the threads with which we will have to weave a digital world worthy of being lived in – from the physical infrastructure of the internet to the institutionality of the social media platforms, including public control, citizen participation, the wishes of users and collective mobilisation. These are possibilities for turning the internet and social networks, to borrow a phrase from Eric Klinenberg, into social infrastructures, into palaces for the people.





The Spiritual Cyborg

Faced with the ideological domination of reductionist science and the sociocultural domination of its technological advances, what is left of humanism?


Tvedt Models | Narve Skarpmoen, Nasjonalbiblioteket | Public Domain

Technology and machines have taken a central role in Western culture, pushing aside nature in a way that a great many people find worrying. But perhaps it is only by understanding the mechanistic facet of our own nature that we can once again nurture that which is most human. Courtesy of Caja Negra Editora, we’ve published a segment of Erik Davis’s book TechGnosis: Myth, Magic, and Mysticism in the Age of Information, to be released in Spanish at the beginning of June.

If human history is the story of a creature who molts from ape to angel—or, as Nietzsche claimed, from beast to Superman—then somewhere along the way it seems that we must become machines. This destiny is rooted in our recent historical evolution. For as the engines of civilization pulled us farther and farther away from the unpredictable and often spiteful dance of nature, we withdrew from the animistic imagination that once immersed us in a living network of material forces and ruling intelligences. We started dreaming of transcending the old gods, of controlling our “animal souls,” of building an urban heaven on a mastered earth. We became moderns. Though technology was by no means the only way that humans expressed or inculcated their experience of standing apart from nature, it certainly became the Western way. The modern West could even be said to have made a pact with machines—those systematic assemblages of working parts and potentials that by definition lack a vital spirit, a soul grounded in the metaphysical order of things. And so today, now that we have technologized our environment and isolated the self within a scientific frame of mind, we no longer turn to nature to echo our state. Now we catch our reflections, even our spirits, in the movements and mentations of machines.

This imaginal relationship between man and machine was a long time coming. The ground was laid by the mechanistic cosmologists of ancient Greece, and it seized the imagination when tinkerers like Heron started building those fanciful protorobots we call automata—mechanical gods, dolls, and birds that fascinated ancient and medieval folks as much as they fascinate kids at Disneyland today. The elaborate clocks that decorated medieval churches were often outfitted with mechanical figures representing sinners, saints, grim reapers, and beasts, all mimicking our passage through time. The notion of a mechanistic cosmos, which these clocks helped engender, eventually landed us at the philosophical doorstep of Descartes, who adopted the revolutionary notion that bodies were not animated by spirits of any kind. The difference between a living being and a corpse was nothing more than the difference between a wound-up watch and a spent automaton. The Catholic Church recognized the threat to religion that Descartes’s new mechanistic philosophy posed but was satisfied with the philosopher’s dualistic solution: simply divide the res cogitans, the realm of the mind, from the res extensa, the spatial world of bodies and objects, and insist that never the twain shall meet.

The enormously productive power of Cartesian philosophy ensured that bone-cold mechanism would come to dominate the Western worldview—so much so that today the flimsy wall that Descartes erected to protect the thinking subject has broken down. Cognitive scientists, psychopharmacologists, and geneticists are now off-roading into the wilderness of the human mind, mapping every step of the way. The most cherished images and experiences of the self are being colonized by authoritative scientific languages that threaten to reduce our minds and personalities to complex mechanisms—Rube Goldberg assemblages of genetic codes, mammalian habits, and bubbling vats of neurochemicals. Modern psychology can barely keep its hoary old tales alive; as Time magazine opined, even the Oedipus complex, that grand drama of human personality, has been reduced to a matter of molecules.

As we come to know more about the nuts and bolts of human life, we inevitably come to suspect that our actions, thoughts, and experiences, which seem so spontaneous and free, are programmed into our bodyminds with the mercilessness of clockwork. Speaking before the congressional committee that funded the Human Genome Project, which sequenced the entire human genetic code, the Nobel laureate James Watson said, “We used to think that our fate was in the stars. Now we know that, in large measure, our fate is in our genes.”[1] As if such genetic determinism wasn’t enough, sociologists and psychologists have also amassed a load of evidence that points to the profoundly automatic patterns of much of our social and cultural life—patterns that arise not only from our animal instincts but from institutions, family dramas, and cultural conditioning. Common sense may not be so common after all; our understanding of what constitutes normal reality may simply represent the power of what the psychologist Charles Tart calls “consensus trance.”

With the relative decline of overtly authoritarian political regimes, we now believe ourselves more “free,” but the power of consensus trance may actually be waxing in our highly networked and hypermediated age. As the hairsplitting scientific management of the Taylorist factory proved, capitalism has a long and exuberant history of embracing whatever technologies and institutional frameworks allow it to fit human beings into vast and efficient megamachines of production and consumption. The footloose “postindustrial” economy is supposed to have left such soulless mechanisms of control behind, but in reality the megamachine has simply fragmented and mutated. While handing off its primitive assembly lines to developing countries or illegal sweatshops, it “spiritualizes” its routines into immaterial cybernetic meshes of information labor or the sophisticated marketing games appropriate to a society based on compulsive consumption. Charlie Chaplin’s little tramp, enmeshed in the cogs of Modern Times, has gone virtual, becoming at once the home-shopping networker and the electronic sweatshop grunt whose every key tap and bathroom break is micromanaged down to the nanosecond.

As Marshall McLuhan noted in the early 1970s, “we are all robots when uncritically involved with our technologies.”[2] Today there are far more technologies to get involved with, far more cybernetic loops demanding that we plug in and turn on. With the continued ideological dominance of reductionist science and the sociocultural dominance of its technological spawn, the once glorious isle of humanism is melting into a silicon sea. We find ourselves trapped on a cyborg sandbank, caught between the old, smoldering campfire stories and the new networks of programming and control. As we lose our faith in free will or the coherence of personality, we glimpse androids in the bathroom mirror, their eyes black with nihilism—the meaningless void that Nietzsche pegged over a century ago as the Achilles heel of modern civilization.

Needless to say, the loss of the motive soul unnerves a lot of people. Most of the spiritual, New Age, and religious activity of the moment is committed on one level or another to either trashing or supplanting the reductionist and mechanistic imaginary. Fundamentalist Christians and Native American animists alike attack Darwin’s theory of natural selection, while acupuncturists and holistic healers rekindle the magical life force of vitalism. Archetypal psychologists try to recover the timeless images of the soul, while ecological mystics call for a “reenchantment of the earth” and a rejection of the world of malls and virtual media zones. Even liberal humanists scrabble about for values, for a “politics of meaning” that can resist the steady encroachment of technological thinking.

But can we ever turn back the clock, especially to the time before there were clocks? Perhaps the image of man as a machine holds more promise than its detractors admit, especially if the image is not allowed to totally dominate our vision. For a certain breed of contemporary seeker, in fact, the ancient goal of awakening is not served by a retreat into romanticism, religious orthodoxy, or magical incantations. Instead of denying the mechanistic or automatic aspects of human being, these seekers direct the psychospiritual quest through the image of the machine, using the mechanism, as it were, to trigger its own wake-up alarm. To paraphrase the Sufi mystic Inayat Khan: one aspect of our being is like a machine, and the other aspect is like an engineer. In this view, the first step toward waking up is to recognize how zonked out and automatic we already are; such dispassionate and reductive observations help dispel delusions, reveal genuine possibilities, and thus paradoxically enable us to cultivate some of the most deeply human aspects of being. The machine thus comes to serve as an interactive mirror, an ambiguous Other we both recognize ourselves in and measure ourselves against. This is the path of the spiritual cyborg, a path whose buzzing circuits and command overrides represent both the perils and promise of techgnosis.


[1] “The Frontiers of Medicine”, Time, vol. 148, no. 14 (Autumn 1996), p. 29.

[2] Marshall McLuhan and Quentin Fiore, War and Peace in the Global Village, New York, Bantam Books, 1968.





Artificial intelligences, irrational images

The impact of artificial intelligences leads us to reflect on human-made visual creations.


Painting class in school. Pennsylvania, 1937 | Library of Congress | Public domain

AI has landed with a bang in the field of creativity. Notwithstanding the debate about its ability to create art, this new scenario points to the possible implosion of human imagination: the homogeneity with which the machine replicates its creations is nonetheless a reflection of the way in which we produce images.

 Last year saw the beginning of the global debate on the potential of artificial intelligences to replace human creativity. A significant fact here is that the uproar was about images. AIs had already been incorporated into economics, law, translation, journalism, medicine, transport and other essential spheres of our activity, giving rise to debates in specialised fields. But the emergence of good quality synthetic images triggered a cultural seismic shift which raises two considerations.

The first has to do with the atavistic reactionary panic generated by images that escape our critical and social control as manifestations of an “otherness”, of something strange, which individuals and groups must repress in order to perpetuate their consensus on reality. In What Do Pictures Want?, W.J.T. Mitchell notes how, given the spirit of iconoclasm that today, as in the past, is only willing to approve of images if they act as mere reflections, images have the disconcerting ability to act as visages that return the beholder’s gaze. They have will, agency and desire, and a surprising capacity to generate their own dynamics with intelligence and intentionality.

Therefore, we can immediately infer that the images generated by neural networks such as DALL·E 2, Midjourney or Stable Diffusion, following more or less detailed prompts from a human operator, have a considerable potential to subvert the established rules of representation. In fact, what we consider to be one of the most suggestive aspects of synthetic images is that the imperfection to which they still succumb, and the inscrutable algorithmic reading of existing images, are producing an unsettling, weird and critical aesthetic that is lacking in the field of contemporary plastic, graphic and audiovisual arts.

This leads to a second consideration linked to the growing homogeneity of the images generated by humans; images that are no longer created with the purpose of freely expressing anything about ourselves and our relationship with our surroundings, about our imaginations and desires, but that pursue communication with others as a kind of post-photographic lingua franca, with all that this entails in terms of self-censorship. This is why, contrary to expectations, the ease of producing and editing images has not ushered in greater diversity in the visual ecosystem, but rather a suffocating and referential reiteration of motifs and styles that suggests the implosion of imagination.

We have already discussed elsewhere how this implosion is abundantly clear on Instagram, TikTok and other applications, although artistic expressions such as film and comics, whether mainstream or auteur, are by no means immune. We have only to go to festivals and fairs to see the extent to which the voracious production and consumption of images and the harsh panopticon under which they are created have produced a hellish homogeneity.

An artificial intelligence generated image given the prompt “A photo of a robot hand drawing, digital art” | Wikimedia commons | Public domain

The result, as María Santana has pointed out, is that “the more we disregard imagination and desire, the more phantasmagoric things and bodies become (…) Our challenge is to delve into otherness, to let ourselves be questioned and unsettled by the Other.” Something which, today, artificial intelligences have more chance of achieving than humans do. In essence, the alarm raised by visual artists is probably related to the sudden awareness that their work can be easily mimicked by AIs because it was already an exercise in fitting into the industrial-cultural complex.

Internet users seem to be well aware of this blindness on the part of artists towards the darker aspects of reality, aspects which they themselves suffer in an ever more unstable and nightmarish world. There is an overwhelming demand among netizens for synthetic horror images, something that has resulted in the inevitable creepypastas as well as inspiring work such as the trippy illustrations of Jaesen Moreaux or the weird comic experiments of Brit Dave McKean and the Portuguese artist Ana Matilde Sousa.

These and many other creators are giving themselves the opportunity to converse with artificial intelligences (which, by the way, they often have to do without the knowledge of their peers to avoid a lynching), aware that, beyond the logical disputes around copyright and the exploitation of other artists’ work, we are standing before a revolutionary paradigm that is worth exploring. On the one hand, AIs are fantastic tools for expanding the consciousness of the artist and the awareness of their time, and even more so for deconstructing this awareness through the irrationality of the algorithm; as Jorge Carrión has said about the act of writing, “if the surrealists had fused with spiritualism so that the writer assumed the role of medium and invoked their ghosts, fears, memories and unconscious desires (…) we would now find ourselves in a similar transition: writing produced by machine learning and other forms of artificial intelligence is stamping a particular vibration on our times.”

At the same time, AIs offer a unique opportunity for (self-)demanding artists to question the meaning of their work in an audiovisual economy characterised by feedback and transience; by the antagonism between originality, homage and plagiarism which had, paradoxically, begun to fade in recent years; and by the ultimate impact of their abandonment to infographic tools that have affected the idiosyncrasy of their work.

We believe that plastic and visual arts are facing, as Brian Jackson has expressed it, an “existential crisis” given that we’re moving from a society that can’t look at things or create things in a visually sophisticated way to one that can. Jackson concludes, and we agree with his deduction, that a possible solution lies in a hybridisation between human consciousness and the unconsciousness of the machine that would project our human creative crisis towards unprecedented horizons for images.

After all, we must not forget that an artist worthy of the name is always immersed in doubt, in questions, in experimentation. They are on a journey, to go back to María Santana, of “the discovery and adventure of creation.” If this is not the case, their works will be little more than commonplaces, regurgitations of the spirit of their time. Precisely the faults we attribute to artificial intelligences.





Living life as “content”

On the subversion of identity on social media, algorithms that act as your mum, micro-labels on TikTok and online individualism.


Broadloon Mill Shops festive windows, 1962 | Tyne & Wear Archives & Museums | No known copyright restrictions

Social media platforms aren’t merely spaces for communication, they also form part of our personality. The labels used to define our digital identity are multiplying and there is a proliferation of aesthetics related to lifestyles. Although we seek to differentiate ourselves, these habits and trends are commercialised by a homogenising digital capitalism.

Do you remember the first time you used the Internet? I mean, other than at school in the IT room, when the coloured headings option on Word 2007 was the go-to tool for any project. I think of the typefaces on Messenger, the clips of Patty’s World that I watched on YouTube on the sly because I wasn’t allowed to see the series, or Lady Gaga’s music videos, which, when I was a kid, seemed to me like movies of the highest calibre.

It’s dizzying to think how fast everything has changed in little more than a decade – Word 2007 was updated and the colour titles became a thing of the past, Messenger went out of fashion, and nowadays if I wanted to watch Patty’s World I’d probably use a streaming platform from the privacy of my mobile, not from my home PC. And the first Lady Gaga videos are already considered popular culture history, with that kind of “consecration” that only comes with the passage of time.

Today’s digital experience is different. The internet and social media are no longer just another social tool or a place to pass the time, they are part of our identity, and what we consume is somehow melded into our skin. This is how researchers Stokes and Price explain it in their study “Social Media, Visual Culture and Contemporary Identity”: “The proliferation of low cost, convergent digital tools allowed Millennials [and Generation Z] to document their lives in an unprecedented manner. The ability to constantly communicate in visual forms has created a generation of new media bricoleurs, who draw upon these creative tools to construct identity in multiple and shifting ways.”

These digital spaces where we can express our identity have been constantly changing. Images took over from text, and then videos took over from images. It was Facebook, until Facebook stopped being cool; then Snapchat, until Instagram started including stories and filters that were better than the dog and the flower crown; Twitter, which is still withstanding Musk’s tantrums; and TikTok, the application of the moment, which undoubtedly brings together the best and the worst of the internet all in one.

TikTok, TikTok… The algorithm that knows me better than my own mother. It knows everything I like, everything I don’t yet like, but which I will like, and everything I’m embarrassed to like. It pampers me, it tells me what I want to hear, but it also explains what it is I should hate about my body, my face, my personality, what I eat, or what I don’t eat, and then it hugs me again, and if it doesn’t, I “train” it by quickly scrolling past the videos it doesn’t get right, to make its job easier.

In the space of less than three years, the application has become the digital social scene. It’s home to the most viral memes, the latest songs, the hottest trends and the products that will sell out at the local supermarket (as happened with the “cruapán”, a mixture of sliced bread and croissant that went viral on TikTok and sold out everywhere).

Until now, no other social network had given users an experience that was so highly personalised and tuned to their own characteristics, creating spaces with micro-niches where digital identity, and its visual aesthetics, is more reinforced than ever.

In her essay “On Feeling Pretty When You Cry. ‘Cottagecore,’ ‘That Girl,’ & Self-Aestheticization”, the writer Madison Huizinga talks about how “The rise of TikTok has led to an ascent in niche-Internet aesthetics.” Think the Clean-Girl Look, Balletcore, Y2K, etc. This isn’t the first time these types of labels have been used to define ourselves, as this “has been an online trend since the days of early 2010s Tumblr blogs; however, the parameters of these new forms of stylistic appearances have become increasingly precise and intertwined with one’s lifestyle.”

This is a phenomenon we see especially in “the girl internet”, a term coined by the digital analyst Rebecca Jennings for that part of the internet where “the important things happen. It is where culture is born, where social norms are litigated, aesthetics are christened and slang terms defined”. She clarifies that the concept is flexible and “is not just for women”; it is simply a label that encompasses the part of the digital community that is sure to also use Pinterest and Goodreads, the part that comments on red carpet outfits and considers the famous Twitter thread narrating the relationship between Miley Cyrus and Liam Hemsworth to be mandatory reading of due cultural importance.

As Huizinga comments, the use of these micro-labels to define ourselves is becoming more and more specific, as with one of the latest – the “Vanilla Girl” aesthetic, which involves dressing in beige tones, taking inspiration from Matilda Djerf or Hailey Bieber, and journaling. The total diffusion of these aesthetics means that most of the time we don’t need to know their specific name, just the idea, since the concept is part of our imaginary and has transcended the internet.

Even so, many users view these labels with certain reticence and consider them to be nothing more than an identity game used only by teenagers seeking to reinvent themselves and try out different styles: “the incessant and insane categorisation of every single little feeling a human can have as entire personality model is entirely down to content-ifying our lives and the ‘branding’ that individuals build themselves. it all boils down to consumption,” as the writer and cultural journalist Jess White noted in a tweet.

The growth of all these aesthetics thus directly benefits digital capitalism, which commercialises and exploits our online habits and tries to sell us a certain product, or to create the need for one – something it has always done, although now in a more massive and perverse way.

But the urge to define ourselves isn’t just about over-using micro-labels, it also involves the concept of “era”, a word that has been widely batted around for months now both on and off social media. “I’m in my Fleabag Era”, you might hear from a woman who feels lonely, who has a lax lifestyle and toxic relationships with men. Or another woman might say she’s in her “Villain Era” if she feels like she’s always had to please others and grown tired of it, or in her “Main Character Era” if she wants to show that interesting things are happening in her life. The possibilities are infinite and range from the most specific to the most general, and the more ingenious or unexpected they are, the more they will be taken up on social media.

“Why be burdened with the shame of feeling like a failure when you can just say, hey, it’s only a Flop Era?,” explains the internet critic Rusty Foster in the article by Jessica M. Goldstein “Down and out and extremely online? No problem: Just enter a new ‘era’”, in the Washington Post. Foster also talks about how everyone who has grown up online has an “innate understanding that their self-presentation is a form of narrative creation”, and concepts such as this are the proof.

Ultimately, using the term “era” is another way of categorising our identity, as we do with aesthetics, or most online content, to make everything more consumable and appealing: “Social media has shifted our social models away from connection-building and towards audience-building, so we adopt increasingly commercial adaptations of our own identities. We abandon our rich complexities for a flat singularity that relies only on a pattern of consumption,” says @melhuman on Twitter.

In today’s digital context, the much-wanted search for authenticity or originality is more of a mirage than a reality. The constant creation of content is so intense and so precisely delivered to our mobile phones that it’s hard not to be overwhelmed by it all when you realise that no experience or thought is unique. Who hasn’t ever wondered whether their TikTok is spying on them after it shows them a video that perfectly fits some specific personal characteristic? The algorithm was the first to know that a girl had broken up with her boyfriend before she had time to tell her friends.

It is difficult to try to be different in an ecosystem of homogenising content that makes differentiation impossible and proves that you’re not as different as you thought you were. Essayist Rayne Fisher-Quann wrote about this when she said that “everyone is jostling for attention in a crowded room, struggling to differentiate themselves within an algorithm that exists to turn their personhood into a commodity, subverting and subverting again and re-subverting and de-subverting until they’re right back in the mainstream.”

This individualistic current on social media is the direct result of how we present ourselves as a product or “commodity”, as Fisher-Quann puts it, using the same tools that a company would to market and sell its products. The paradox, then, is obvious: everyone seems to be more obsessed than ever with being “real”, with individuality and differentiation, “while simultaneously participating in one of the most intoxicating lifestyle reproduction mechanisms in human history,” the essayist concludes.

Whether through the massification of micro-labels, the expression “era” or turning our lives into permanent consumable content, it’s impossible to understand how our identities are represented online without understanding the influence of digital capitalism on this representation. The desire to differentiate ourselves, the fetish for originality and the self-definitions we invent are nothing more than tools that inevitably follow a pattern of consumption that entraps us, while at the same time we are genuinely grateful for having been born in the digital era.





“No thoughts, head empty”: Would digital disconnection be a utopia?

Immersed in the digital vortex, we look at some ideas about silence and taking refuge from the noise of the internet.


Mountain landscape | Digitalt Museum | Public domain

We’ve reached internet saturation point. The exhaustion and fatigue are clear to see. Even so, we are ever more hooked on screen time. Is it possible to escape from the thunderous noise of social media? Can we find other ways of conceiving of and relating to the internet that leave space for thought?

“I am less interested in a mass exodus from Facebook and Twitter than I am in a mass movement of attention: what happens when people regain control over their attention and begin to direct it again, together.”

Jenny Odell

I check my Twitter and Instagram during my breaks from work, or maybe by checking them I force myself to take a break from the umpteenth email that I need to send today. On the train, I read the latest trending article on the current controversy of the day to understand all the threads of comments and sides that have been taken on Twitter. I do it because I enjoy knowing who’s said what, seeing who makes the sharp, ironic comment that will get my like, or, in the case of flagrantly unfair situations, my retweet.

Since the arrival of the monopolistic social networks, we’ve seen how time has progressively sped up. Time spent waiting, connecting, consuming. Now I skim through texts looking for the key ideas, or the equivalent, I listen to podcasts and videos at one and a half times the speed. The images pass before my eyes in mere tenths of a second and I curse the poor internet connection at certain points on the train journey. And it’s not just this whole digital vortex itself, it’s the importance of being there. Of having a profile, of being visible. A part of our online presence is based on creating contacts, on networking, which we can do by presenting a book or by sharing part of an article that has made an impression on us as a story or a tweet. As the sociologist Manuel Romero wrote, Twitter has become our LinkedIn, where we share articles, professional achievements and talks or workshops. Social networks have become showcases for people who conduct research, write, create podcasts, and for artists, tattooists and designers. To the extent that we’ve turned our lives towards constant productivity and 24/7 availability, we’ve turned the human body into an asset that we can exploit for profit, whether social or economic. In one way or another we contribute towards social media saturation, we feed the bulimia of images with our likes, selfies and memes. The problem isn’t the internet itself, but the fact that the most popular digital spaces are managed by private enterprises whose primary goal is to capture our data and attention for financial gain. And the algorithms of these platforms, which are hidden from sight, design the way we relate to each other. We socialise within a very specific structure that rewards whatever keeps us hooked to scrolling, consuming screen time; whether it’s a video criticising the capitalist system and consumer society or the latest controversy on Twitter.

As we extend ourselves through our screens, it’s easy to fall into a well of exhaustion and fatigue. To wish to disappear or leave the vampire castle, as Mark Fisher said. At a time when capitalism is able to appropriate all those discourses that are critical towards it, and where even creativity has been subordinated to productivist principles, I think it would be interesting to revisit some ideas about silence and nothingness.

The philosopher María Zambrano talks of how silence is needed to germinate man and the divine, in her book of the same title, El hombre y lo divino (1955). For her, art, creation and thought emerge from nothingness, from the silence of the soul. We require instances of recollection to reflect on what surrounds us. Not just a few minutes reading different opinions on Twitter to decide where we should base our viewpoint, but empty spaces, soundless and timeless cavities. Zambrano writes that nothingness “is the supreme resistance”. Taking it out of its context, this idea leads us to investigate ways of resisting the attention economy, of not falling into the exhausting game of self-exploitation as Juan Evaristo Valls Boix proposes in Metafísica de la pereza (2022) through different non-productive instances such as partying, sleeping, decreating or striking.

In fact, the nothingness and silence of which Zambrano speaks have a mystical component that reminds me of the catchphrase “No thoughts, head empty”, often accompanied by a screenshot of some character smiling faintly and gazing into infinity. This internet catchphrase is used to imply that someone is either stupid or speechless. Of the characters that best embody this phrase, I like the Pokémon Fuecoco, from the video games Pokémon Scarlet/Violet (2022). I’m inclined to think that this goofy-faced red crocodile is not really a poor idiot, but that it symbolises someone who has managed to turn the thunderous noise of social media into silence; who has been able to open up a space of almost mystical revelation and silence the mind using Zambrano’s Poetic Reason. And the fact is that uninterrupted exposure to images can also become a kind of silence, a background murmur, as Carson McCullers showed with the radio in The Member of the Wedding (1946): “The radio had stayed on all the summer long, so finally it was a sound that as a rule they did not notice”. Similarly, saturation of images could turn into white noise, leaving us immune to excess. This is where this interpretation of “No thoughts, head empty” comes into play, the feeling that comes over me as I look at TikTok before going to bed and I let myself be rocked by the anaesthetising and infinite scroll of kittens, books and gameplays as I tumble freefall down the rabbit hole, like Alice. I’m not entirely sure whether we can turn the amalgamation of digital content into a faint murmur, but we do have to keep looking for other ways of thinking and relating to the internet so that we still have space for thought.

As always, platform capitalism has already taken over the discourses of disconnection through self-help books, coaching programmes and digital detox holidays. YouTube is full of motivational talks and videos of people explaining how to cut down on screen time (as we continue to give away minutes of our time in watching them) or how the latest internet retreat has brought them inner peace (although they always come back online to talk about it, they never give it up completely). The capitalisation of disconnection and minimalism makes us see that the right to disconnect is not recognised for everyone, and it becomes clear that it is, in fact, a privilege for many. What’s more, disconnection is sold to us as an essential break to recharge our batteries before getting back into the system; simple tools in the service of productivity that will never provide a solution to the basic problem of the attention economy. And if all the entrepreneurs and employees of the big tech companies in Silicon Valley bring up their children in screen-free environments, they must surely see something bad in everything they’re creating.

Although capitalism has taken over the need and tendency to slow down and disconnect, this doesn’t mean that such a need arose within the neoliberal system. Indeed, its roots can be found in the utopian socialism and anarchism of the 19th century, where people were already moving into communes as a response to the economic and social system that was being imposed. The discourse of disconnection forgets or hides the fact that the internet has been a key space for the dissemination of non-mainstream discourses, the organisation of collectives and the creation of communities that provide mutual support for all those people who are surrounded by a hostile environment. We can’t reject a tool that offers us a way to connect with so many realities and stories. Disconnection is not necessarily a utopia. Disconnecting once and for all is neither a desirable nor a universalisable option. Disengaging and withdrawing from the world is a decision that can only be made by individuals or small groups of people, and it involves no responsibility for the environment and the society that is left behind. As the artist Jenny Odell proposes in How to Do Nothing (2019), rather than abandoning the world and hiding away in an isolated refuge, we need to learn to listen to our surroundings and redirect our attention. To replace FOMO (Fear Of Missing Out) with NOMO (Necessity of Missing Out) and try to open up new non-vertical spaces where we can continue to build digital communities. A good example of such communities can be found in the fandoms that have been organising and sharing their worlds on the internet for years; from fanfics to Wattpad and Archive of Our Own to Tumblr. Odell encourages us to go down the rabbit hole out of sheer curiosity and not just to find mountains of sterile, anaesthetising content.

I still remember 18 November 2022, a day when it seemed that Twitter would go down at any moment and we would witness the burial of one of the most powerful companies in platform capitalism. That day we read Twitter in hardcore mode, waiting for it to vanish, while Elon Musk fired 50% of the workforce in the style of Kendall Roy in the series Succession. Amidst the mad pace and memes of what was supposed to be Twitter’s final hours, there were tweets about mass migrations of users to the free and open-source software Mastodon. The farewell messages gave us a spark of hope; the beginning of a new way of inhabiting the internet and staying connected without the mediation of the attention economy. Were we really heading towards a decentralisation of social media? The journalist Alba Correa commented that the future of social media is to move towards a more atomised internet, segregated by interest groups and obsessions. Perhaps we will lose the common walls of cultural and ideological battles, but if there’s a chance of having a digital connection that’s more organic and, above all, less anxious and more environmentally sustainable, we should begin to explore the new spaces that are being built. In the meantime, we can continue with the mantra “No thoughts, head empty” as an antidote to the excessive attention demanded by the major platforms.





Artificial Intelligence: Half Wanting, Half Fearing 

How will new AI-based tools change the way we think and perceive the world?


W.H. Hannam wireless operator, Cape Denison, Australasian Antarctic Expedition, 1911-1914 | Frank Hurley, National Library of Australia | Public domain

The recent emergence of AI-based chatbots and image generators has reopened the debate on the introduction of new technologies into society. How will they be adopted in professional or educational contexts? What conflicts or tensions could they generate? Whatever the case, in order to address these questions, a collective debate is needed in which not only the companies that own these technologies have a voice.

Traduttore, traditore

In Dire quasi la stessa cosa (Experiences in Translation), a compilation of lectures by Umberto Eco first published in 2000, the Italian semiologist talks about his experience with one of the first machine translation systems. Even though nowadays “running a text” through Google Translate is something people do every day at work or school, at the end of the 1990s these systems were just being launched and attracted the attention of intellectuals trained in the tradition of printed books. Not surprisingly, Eco’s curiosity led him to play around for a while with Babel Fish, one of the first machine translation applications integrated into the AltaVista and Yahoo search engines: around the same time, an image of the great semiologist wearing a pair of virtual reality glasses was doing the rounds in Bologna. Created in 1997, Babel Fish preceded by almost a decade the appearance of Google Translate, which has been available to the public since 2006. But let’s follow in Eco’s footsteps:

I accessed the automatic translation system provided by AltaVista on the internet (called Babel Fish). I gave it a series of English expressions and asked it to translate them into different languages. Then I asked AltaVista to retranslate the expressions back into English (…) Here are the results:

(1) The Works of Shakespeare → Gli impianti di Shakespeare → The plants of Shakespeare

Eco analyses the translation:

Babel Fish undoubtedly has dictionary definitions in its ‘mind’ (if Babel Fish has a mind of any description), because it is true that the English word work can be translated into Italian as impianti (installations) and the Italian impianti can be translated into English as plants or systems.

What is the problem with Babel Fish? According to Eco:

Babel Fish is not endowed with a vocabulary that includes contextual selections. It may also be that it has received the instruction that works in literature means a series of texts, while in the technological context it means plant. But it is not capable of deciding whether a phrase that mentions Shakespeare has a literary or technological context. In other words, it would need an onomastic dictionary stating that Shakespeare was a famous poet. The difficulty may be due to the fact that it has been ‘fed’ with a dictionary (of the kind used by tourists) but not with an encyclopaedia.

The passage from “dictionary” to “encyclopaedia” is fundamental in Umberto Eco’s interpretative theory: it is not enough to know the meaning of each sign in order to understand the meaning of an expression. The “contextual selections” place us in a constantly changing open semantic network formed by a dense web of redirections. If a dictionary is unidirectional (A > B), an encyclopaedia, on the other hand, works like a network of rhizomes – a sort of tropical jungle whose exploration requires a huge cognitive effort, one that the first machine translation applications were not able to simulate. In other words, the first automatic systems thought in “dictionary mode” and translated one word at a time.
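The contrast is easy to simulate. Below is a minimal Python sketch – not Babel Fish’s actual code, and the two toy dictionaries are invented purely for illustration – of what translating in “dictionary mode” means: every word is resolved in isolation, with no contextual selection, so the English–Italian–English round trip reproduces the “plants of Shakespeare” effect Eco describes.

# A toy "dictionary mode" translator: one word at a time, no context.
# The two dictionaries below are hypothetical and deliberately tiny.
EN_TO_IT = {"the": "gli", "works": "impianti", "of": "di", "shakespeare": "Shakespeare"}
IT_TO_EN = {"gli": "the", "impianti": "plants", "di": "of", "shakespeare": "Shakespeare"}

def translate(sentence, dictionary):
    # Look each word up in isolation; unknown words pass through unchanged.
    return " ".join(dictionary.get(word.lower(), word) for word in sentence.split())

italian = translate("The works of Shakespeare", EN_TO_IT)  # "gli impianti di Shakespeare"
back = translate(italian, IT_TO_EN)                        # "the plants of Shakespeare"
print(italian, "->", back)

No amount of extra dictionary entries fixes this on its own: without an “encyclopaedia” of contextual knowledge, the program cannot decide that a phrase mentioning Shakespeare calls for opere rather than impianti.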

A quarter of a century later, and after millions of interactions with humans and computational optimisation processes (in 2016 Google Translate introduced its Google Neural Machine Translation (GNMT) system, a program that mimics brain connections), automatic translators have learned to work in “encyclopaedia mode” and, although they are still far from achieving the precision we would all like, they are increasingly capable of understanding the contextual subtleties of linguistic exchanges. At the moment, Google Translate understands that “works” refers to Shakespeare’s plays rather than impianti (installations).
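Eco’s round trip can be repeated today with an open neural translation model. The sketch below is only indicative: it assumes the Hugging Face transformers library and the publicly available Helsinki-NLP/opus-mt-en-it and opus-mt-it-en checkpoints, and the exact wording of the output will depend on the model versions installed.

# Round-trip translation with neural models trained on whole sentences, not single words.
from transformers import pipeline

en_to_it = pipeline("translation", model="Helsinki-NLP/opus-mt-en-it")
it_to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-it-en")

italian = en_to_it("The works of Shakespeare")[0]["translation_text"]
back = it_to_en(italian)[0]["translation_text"]
# One would expect something like "Le opere di Shakespeare" on the way out
# and "The works of Shakespeare" on the way back: the context of "works" survives.
print(italian, "->", back)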

Why mention those first automatic translation systems? Why appeal to semiotic discussions from twenty-five years ago, when the hot topic of recent months is artificial intelligence and its capacity to generate texts and images? Because it is likely that, very soon, and as has already happened with automatic translators, text generation apps will be installed in our devices and integrated into the software we use every day. Microsoft’s interest in OpenAI (or to be more specific, its $10 billion investment in the company), the company behind the hugely popular image generator DALL·E and the conversational chatbot ChatGPT, is no coincidence. Notwithstanding the theoretical debates, the worried conversations in the corridors of educational institutions and the noise on social media, text generation systems will soon be just another icon in the top bar of MS Word or Google Docs. The same could be true for image generation software: it is not out of the question that they could become integrated into existing software. Image generation systems such as DALL·E or Midjourney, or some close relative, should not take long to appear on the Photoshop menu bar.

The ghost in the education machine

The incorporation of this new type of disruptive technology (whether ChatGPT, DALL·E or Midjourney, to mention just the most popular ones) into educational or work processes is bound to be traumatic, and its adoption fraught with tensions and conflicts. At the moment, the debate is particularly heated in the field of education, where the presence of these technological actors will make it essential to redesign a whole range of processes inside and outside the classroom. Classic activities such as “reading a chapter and writing a summary” or “writing 1,000 words on the French Revolution” will have to be redesigned to exploit the capacity of artificial intelligence to process information and generate “raw” texts. Some specialists such as Alejandro Morduchowicz and Juan Manuel Suasnábar (in their 2023 blog entry ChatGPT y educación: ¿oportunidad, amenaza o desafío?; ChatGPT and education: opportunity, threat or challenge?) take a step further as they wonder:

Will it make any sense to set homework? Will teachers have to become inspectors of originality, trying to detect whether the tasks were solved by human or artificial intelligence? Will they have to come up with a different way of defining it? These are legitimate questions that add to the long list constantly brought up by technological innovations for the area of education. As usual, these are concerns that should be considered within a more general framework of reflection on the relationship (past, present and future) between technologies and schools. 

From the teacher’s point of view, text generation systems are a fascinating tool for designing lessons. Morduchowicz and Suasnábar asked ChatGPT for ideas on how to teach photosynthesis to first-year secondary school pupils from an ecological and gender perspective: the two education experts were “still smiling open-mouthed” after seeing the proposal made by a technology that, in evolutionary terms, is just babbling its first words.

DALL·E 2 Explained | OpenAI

The dark side of AI

All digital technologies are surrounded by a gaseous halo that invites us to consider them as an ethereal, almost immaterial entity. The very concept of “cloud” is a good example of this process of technological dematerialisation. As is well known, the “cloud” is a mass of cables, chains of processors, data storage centres, more cables and vast warehouses full of machines hungry for electrical power in order to operate and cool themselves down. In fact, quite the opposite of a volatile “cloud”.

Artificial intelligence is no stranger to this process of dematerialisation. In Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press. New Haven, 2022) Kate Crawford points precisely to this material dimension of artificial intelligence:

Artificial intelligence […] is an idea, an infrastructure, an industry, a form of exercising power, and a way of seeing; it’s also the manifestation of highly organized capital backed by vast systems of extraction and logistics, with supply chains that wrap around the entire planet. All these things are part of what artificial intelligence is – a two-word phrase onto which is mapped a complex set of expectations, ideologies, desires, and fears.

Writing in the same vein as authors such as Jussi Parikka (A Geology of Media, University of Minnesota Press. Minneapolis, 2015), Jane Bennett (Vibrant Matter, Duke University Press. Durham, 2010) and Grant Bollmer (Materialist Media Theory, Bloomsbury Academic. New York, 2019), Kate Crawford’s atlas reminds us that every second of text processing or image creation has an irreversible impact on the planet.

What is more, the operation of artificial intelligence requires a human workforce that is treated just as unfairly, if not more so, than the riders who pedal to deliver a pizza on time or the people who pack boxes in Amazon’s distribution centres. These are the so-called “ghost jobs” (Gray, M. and Suri, S., Ghost Work. How to stop Silicon Valley from building a new global underclass, Houghton Mifflin Harcourt. Boston/New York, 2019). Supposedly cutting-edge tech corporations rely heavily on tightly controlled and poorly paid temporary workers. Due to pressure from venture capitalists to incorporate artificial intelligence into their products, some companies even hire people to act as chatbots and impersonate AI systems. In an article entitled The Exploited Labor Behind Artificial Intelligence (Noema Magazine, 2023), Adrienne Williams, Milagros Miceli and Timnit Gebru explain that

far from the sophisticated, sentient machines portrayed in the media and pop culture, so-called AI systems are fueled by millions of underpaid workers around the world, performing repetitive tasks under precarious labor conditions.

As I write (and you read) these lines, there are thousands of workers doing data labeling in front of a screen to categorise or rate images, videos or audio files. This is how intelligent machines are being trained – using humans as cognitive sparring partners. According to Williams, Miceli and Gebru,

data labeling interfaces have evolved to treat crowdworkers as machines, often prescribing them highly repetitive tasks, surveilling their movements and punishing deviation through automated tools.

Like the printing press, the train, transistors or any other disruptive technology, artificial intelligence opens up many points of conflict that go beyond the exploitation of labour, from the question of copyright of original sources to its use in predicting social behaviour. Companies such as Getty Images that manage large image libraries are already mobilising their lawyers to make sure they aren’t left out of the big business opportunity to come. But who will defend the rights of the authors of the millions of pieces of content shared on social media that AI processes to generate its own creations? Many facial recognition systems are fed by images from police files, with ingrained gender and racial biases.

In her Atlas of AI, Kate Crawford also mentions the desires and fears of Homo sapiens, which are not few in number. For example, the terror, born of Modernity, of being replaced by a new satanic technology.

Kate Crawford: Anatomy of AI | UNSW

Production and creation in the age of intelligent machines

Machine translation systems have not led to unemployment; on the contrary, some (although only some) professionals use them to produce a first, quick draft before putting on their glasses and going through the text in detail to catch all the subtleties and decide on the best “contextual selections”. Other professional translators refuse to open the URL https://translate.google.com/ and remain faithful to the traditional method. But it is undeniable that for thousands of people travelling around the world, these automated systems help to break down the language barriers in countries that are not necessarily remote. In the scientific field, I must admit that they have got us out of a tight spot more than once by producing a quick translation of an abstract that had to be submitted to a congress within a couple of hours. But human intervention is still essential in all these cases, both in the production stage (you have to know how to ask good questions or, to put it in more technical terms, to generate good inputs) and the interpretation stage (machine-generated texts always need to be edited and polished).

In addition to speeding up the production of texts, Morduchowicz and Suasnábar believe that, as regards education, ChatGPT

is the missing link to connect the repositories of teaching resources that have been built up on the internet for several decades with the specific needs related to the daily tasks and lack of time of teachers (which cannot always be met by search engines or educational portals). 

Rather than talking about “substitution”, we should start to think and act in terms of “critical integration” (as opposed to the “uncritical exclusion” that proposes, for example, going back to the pen-and-paper assessments of the 20th century). The fear of labour substitution is understandable, but when we analyse past transitions in detail, the processes emerge in all their complexity. For example, the invention of the printing press did not mean that the scribes copying manuscripts by hand in their scriptoria were out of a job. As I explain in La guerra de las plataformas. Del papiro al metaverso (The war of the platforms. From papyrus to metaverse; Anagrama. Barcelona, 2022),

it would be mistaken to speak of a war between scribes and prototypographers. Neither the available documents nor historians’ interpretations suggest such a conflict. According to the medievalist Uwe Neddermeyer, the transition from manuscript to movable type was a process with “no riots, no protests, no poverty, and no unemployment”. Eisenstein comes to the same conclusion in Divine Art, Infernal Machine: “professional scribes and illuminators […] were kept busier than ever, the demand for deluxe hand-copied books persisted, and the earliest printed books were hybrid products that called for scribes and illuminators to provide the necessary finishing touches”.

Rather than displacing humans, some argue that artificial intelligence could replace Wikipedia or the World Wide Web itself. This seems unlikely – systems such as ChatGPT or Midjourney are in fact fed by the content of the World Wide Web and other digital repositories. The web is the world’s largest open reserve of texts, and artificial intelligences will have to draw on it to extract the textual raw material they need to keep up to date.

Artificial intelligence is a powerful tool that will make tasks easier but, as Marshall McLuhan pointed out, it will also change the way we think and perceive the world. Just as television and the World Wide Web changed our understanding of time and space, artificial intelligence will reshape the way we approach problem solving and the search for answers to all types of questions. Many activities and processes in the life of Homo sapiens will be radically affected, and not only in the area of textual information. For this reason, the second worst thing we could do would be to take the ostrich strategy and pretend that nothing has happened, while the first worst thing would be to insist on returning to the past.

Finale

One of the most popular Umberto Eco memes circulating on social media goes like this:

The computer is not an intelligent machine that helps stupid people. It is a stupid machine that only works in the hands of smart people.

Just as translation applications have done, we Homo sapiens must also stop using our brains in “dictionary mode” and shift to thinking more, and better, in an “encyclopaedic” fashion. Abandoning binary approaches (real versus virtual, analogue versus digital, utopia versus apocalypse, etc.) and linear cause and effect (if new technology, then mass unemployment) would be a first step in this direction. Understanding and facing the complexity involved in the irruption of a new technological actor implies a collective effort that, under no circumstances, should be left up to a handful of corporations alone. The experience of Web 2.0 – that apparently open and participatory space that ended up becoming a ruthless machine for the extraction and monetisation of personal data – is too fresh in our minds not to set the alarm bells ringing.

If digital life were a video game, we could say that our relationship with chatbot AIs is still at level 1. One day, the current parlour games (“Guess who wrote this paragraph?”) will go out of fashion and we will move on to another, much more challenging and entertaining level, where the automatic creation of texts will be permanently integrated into our productive and educational routines until it becomes invisible.





On (artificial) drawing

How the work of John Berger relates to generative AI applications which allow images to be created with a simple verbal command.


Antoine Lumière painting portrait of posing man, 1905 | Library of Congress | No known copyright restrictions

All drawings represent what we see, what we imagine or what we remember. Or so John Berger, one of the best known and most respected art critics of recent decades, understood it. If he was right, what do works made by artificial intelligence symbolise? Does a machine understand what it sees, is it capable of fantasising, does it have memory?

Berger on Drawing is a collection of texts by the writer, art critic and social theorist John Berger (London, 1926 – Paris, 2017). In the book, he reflects on a range of topics including the difficulty of depicting reality, art as communication and the relationship between object and artist. In one of his essays, he says: “There are drawings that study and question the visible, those which put down and communicate ideas, and those done from memory”, and continues, “Each type of drawing speaks in a different tense.” Present, conditional, past.

We should ask ourselves whether Berger’s thesis would change in light of recent advances in artificial intelligence (AI) applied to creativity. Since the second half of 2022, a number of applications that enable the creation of surprisingly high-quality images have made a splash in internet culture. The most famous are DALL·E 2, Midjourney and Stable Diffusion, although they are all part of a broad category known as “generative AI”. Machines are beginning to conceive pieces with a certain sense of meaning and beauty, but – going back to Berger – where does it come from? What do they see, imagine or remember?

Present

“For the artist drawing is discovery. And that is not just a slick phrase, it is quite literally true. It is the actual act of drawing that forces the artist to look at the object in front of him, to dissect it in his mind’s eye and put it together again”. For Berger, drawing is “knowing by hand”, a process in which the artist looks so closely at what is in front of him that object and artist merge into one.

Considering where we are at, it is not surprising that a computer is capable of portraying what is in front of it. For more than a decade now, there have been many experiments by artists, industry or both that have played around with robots capable of drawing on canvas or paper. To cite just a couple of examples, in 2016 the agency Visionaire and Cadillac created ADA0002, an assembly line robot that was able to draw visitors to an exhibition in New York in real time. Meanwhile, artist Patrick Tresset has spent more than a decade developing intentionally rustic robots capable of sketching portraits and still lifes from different points of view.

Human Study #2 by Patrick Tresset | Natalianne Boucher

Machines can represent what is visible, but do they really see? It sounds like a ridiculous thing to remind ourselves of, but AIs don’t have eyes or other sensory organs, so they obviously can’t see in the same way that humans do. To emulate this ability, they need to use cameras and sensors to identify what is around them. This allows them to do amazing things, like recognising faces or physical movements, but they are programs trained on the basis of certain patterns, so a change in these patterns (for example, a face that is half hidden or upside down) can go unnoticed. In lay terms, this is why reCAPTCHA, the system for proving we are not a robot, asks us to identify traffic lights. It’s a task that Google Street View’s AI has already done, but we need to confirm that it has got it right.
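As a minimal illustration of this brittleness, the sketch below (which assumes the open-source PyTorch/torchvision libraries and a hypothetical local file, photo.jpg) shows how an off-the-shelf image classifier can change its answer when the very same picture is simply turned upside down. The exact outcome depends on the image, but the point stands: the model responds to the patterns it was trained on, not to anything it “sees”.

import torch
from PIL import Image
from torchvision import models
from torchvision.transforms import functional as F

# A pretrained ImageNet classifier, used here purely as a stand-in for
# "a program trained on the basis of certain patterns".
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()        # the resizing/normalisation the model expects
labels = weights.meta["categories"]      # human-readable class names

def top_label(img):
    # Return the model's most confident class name and its probability.
    with torch.no_grad():
        probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)[0]
    idx = int(probs.argmax())
    return labels[idx], round(float(probs[idx]), 3)

img = Image.open("photo.jpg").convert("RGB")     # hypothetical input image
print("original:    ", top_label(img))
print("upside down: ", top_label(F.vflip(img)))  # same content, unfamiliar pattern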

We don’t know whether machines will be able to develop near-human vision in the future, but in creative terms it would be a leap to equate their gaze with that of the artist. As Berger said, in capturing reality, people reflect both their outer and inner worlds, in a tension-filled dialogue that seems far removed from the way in which today’s AIs work.

Conditional

The second type of drawing is, for Berger, that which wells from the imagination: “Such drawings are visions of ‘what would be if’”. Recent developments in AI seem to fit better with this second category: that which comes from inventing, fantasising, playing with unsuspected connections. It is in this direction that the applications themselves wish to move, as it is the type of creation that best demonstrates their ability to blend concepts, techniques, attributes, artistic styles, etc. It is not for nothing that the text command to activate Midjourney is /imagine followed by a description of what you want to depict.

For Berger, in this conditional drawing, “it is now a question of bringing to the paper what is already in the mind’s eye. Delivery rather than emigration”. The logic of generative AI is the same: based on a command in the form of words or phrases (what are known as “prompts”), the program delivers what is asked of it. The level of refinement of these instructions will determine how satisfactory the results are, which has led to prompt guides, tools for refining prompts and even a marketplace for buying and selling them.
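As a minimal sketch of that loop (a sentence goes in, an image comes out, and a human judges the result), and assuming the openly released Stable Diffusion weights, the Hugging Face diffusers library and an available GPU, the process might look something like this:

import torch
from diffusers import StableDiffusionPipeline

# Load a publicly released text-to-image checkpoint (an assumption for this sketch).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")                              # assumes a CUDA-capable GPU

# The "prompt": a verbal instruction describing what should be depicted.
prompt = "a girl with a pearl earring, painted in the style of a Dutch master, oil on canvas"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("draft_1.png")
# The human now assesses the result and, if unsatisfied, refines the prompt and tries again.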

The text-to-image revolution, explained | Vox

This mechanism reflects one of the main characteristics of today’s generative AIs: to ensure that they work properly, they need human intervention both at the beginning and at the end of the process. Developers feed in data sets and training parameters, which they then refine by reviewing and correcting the results over and over again. The same is true once they are up and running: the user enters a prompt and then decides whether the results are suitable or need to be repeated or refined. In short, AI is perfectly capable of executing work on its own, but for it to be relevant, human direction and assessment are still needed.

This understanding of generative AI as a tool is essential if we wish to defuse certain debates. One might think that DALL·E 2 is performing nothing short of magic when it paints a scene around the Girl with a Pearl Earring, but it is only able to do so because it has been provided with all the works of Johannes Vermeer and has honed – with an inevitable degree of subjectivity – what it means to paint like the Dutch artist. Could it learn the style of new artists without help and even generate a style of its own? It probably could, as today’s AI technology already permits machine learning methods that include monitoring of the results by the machine itself. What is less clear is whether, without specifying certain rules and without human involvement at some points in the process, the images it would come up with would have any value or even meaning in the eyes of humans.

Another question, in philosophical terms, is who the agent of this creative process would be. In The Creativity Code, Marcus du Sautoy notes: “Art is ultimately an expression of human free will, and until computers have their own version of this, art created by a computer will always be traceable back to a human desire to create”. And while the British mathematician does believe that AIs can be creative, he asks: “If someone […] presses the ‘create’ button, who is the artist?”

Marcus du Sautoy and Karina Gibert. Artificial Creativity

Past

Thirdly, there are those sketches that are made from memories. In the same way that to portray the present is to discover it, to draw from memory means that the artist must “dredge his own mind, to discover the content of his own store of past observations”. In this case, the artist neither studies reality nor imagines an alternative, “the drawing simply declares: I saw this. Historic past tense”.

In the same way that AIs “see”, although not fully, they can be said to have memory, but one that differs from human memory. AIs are trained using vast datasets containing far more information than a person could absorb. The machine processes and selectively consults them to respond to the commands it has been given. During this process, the program’s learnings are also stored, which in turn generates new “memories”.

For Berger, however, the best drawings created from memory “are made in order to exorcise a memory which is haunting, in order to take an image once and for all out of the mind and put it on paper”. Generative AI certainly does not have to deal with these kinds of concerns, but it is worth noting that the development of computers capable of identifying emotions and responding accordingly is a growing sector with relevance to care for dependent people, and one that has also been explored in the world of art. In 2013, a British researcher developed, with no little irony, an AI that would paint in a museum according to its mood. Some days it was supposedly so depressed that it kicked visitors out, telling them it wasn’t in the mood to paint.

Future

Berger refers to these three categories throughout Berger on Drawing, but in one of his texts he adds a fourth, exceptional one, which can spring from the other three. When “sufficiently inspired, when they become miraculous, drawings acquire another personal dimension”, regardless of the category to which they belong. In such cases, the image reflects something that acquires a special dignity, which could be described as “future”.

Whether an AI will be able to generate works that could be classed in this category is a subjective matter. After all, there is usually no consensus on what is good or bad human art either. What does seem clear is that generative AI will continue to stir up debates not only about art and creativity, but also about various other issues such as ethics, misinformation and intellectual property.

What would Berger think of generative AI? In the last few years of his life, the British author also became interested in artificial intelligence and its impact on society. In one of his last essays, entitled DALL·E 2: The New Mimesis, Berger points out that the creation of AI like DALL·E raises fundamental questions about originality in art. If AI is able to create works from verbal instructions, can we consider those works as truly original or simply an imitation of what we ask it to create?  

Unfortunately, the entire paragraph above is false. It is a cut-and-paste of the perfectly reasoned but erroneous response of the ChatGPT generative AI to the prompt “write an article about John Berger and DALL·E 2”. As AIs continue to perfect this and other results and we learn to refine the prompts, we will have to keep reading Berger to imagine the answer.





Bámbel

Are we talking about the end of love while being more obsessed than ever with loving?


Senior prom. Maryland, 1942 | Marjory Collins, Library of Congress | Public domain

Lying in bed, mobile in hand, Andrea Abreu talks with herself about love and relationships in times of social media. The precariousness of bonds, the need for validation, post-romanticism, ghosting and anxiety intertwine in this story both erratic and lucid.

In the close, lukewarm night of my room, I laugh, checking the appearance of my good side profile. Today is another one of those days when I pretend there’s someone looking at me from the other side of the mattress. What a coincidence that I like to sleep on the side of the bed that makes me look best, I think, and I say heck, I’m sure he loves it that I’m reading, I’d find it really cute if someone were reading to me before bed. I turn to the bedside table and put the book on the huge pile that it’s going to take me at least another two years to get through. Through the corner of my eye, I notice, startled: all the cushions are in the right place, lined neatly up, as if no one had ever slept in the bed before, not even me. The soy candles I lit have just been blown out and everything smells like a birthday party. There is no one, my child, and I say my child because I’ve discovered that it’s very good to talk to yourself like you would to a loved one, to improve the internal dialogue. I don’t feel any breathing, except for my own, and I put my fingers to my neck to check whether I have tachycardia. You’re dumb and your mother doesn’t know it, I say to myself, and I answer myself, no, you’re not dumb, my love, you’re brighter than a firebrand.

I grab my mobile from the bedside table and do a quick swipe of about fifty people without breathing, like doing squats: up-down, up-down, up-down, exercise of the crushed soul. The more people, the more massive destruction of my spirit. More glute routines, more boxes of melatonin, more vegan omega 3. I don’t use Bámbel to fall in love, not even to get lucky; I use Bámbel to make sure I still exist. I’m that stupid dog that goes up to its owner when he’s working and begs him to throw the ball only to grab it and take it back to a corner and chew it until the leather comes off. I’m that dog that won’t bring the ball back to you, but every now and then needs to check that if I give it to you, you’ll be willing to throw it.

[23:42, 14/11/2022] My Personal Number: do they know in cognitive behavioural psychology about momentary relief from anxiety? I have more and more doubts about pure cognitive behavioural psychology, but I’m sure that bambel has often brought absolute momentary relief of my anxiety, before assessing whether the intrusive thoughts about my dubious personal worth were true or not, I would go directly to the social network and get about twenty matches, and then I felt calm, ah well if that Winstrol-pumped German who could perfectly well be on Love Island and who’s comes to play at digital nomad in an absolutely impoverished community like mine thinks that my five carefully selected photos are worthy of a like then I’m fine, I’m fine, that’s fine, yes fine, we can carry on until the next time I need a quick validation, which could be tonight or tomorrow morning I don’t know, but if there’s one thing about the momentary relief of anxiety it’s precisely the fact that it’s temporary, at a micro level, in the moment, they dampen our discomfort, but the consequences over time are worse: not using bambel to relieve anxiety may be uncomfortable for me now, but in the long run it will be better for my head, using it now gives me the false feeling of being accompanied, of alleviating my anguish and my loneliness, but over time it makes the hole of my illness more complete and deeper.

[23:57, 14/11/2022] My Personal Number: How much grief can I live through in one sitting? In a single month? a single week? I don’t know? You tell me. Spit it out. Come on, You tell me that you like me that we should be a couple that we should live together later on. For you I gave up polyamory and we build a hierarchical romantic story my love. We do wonderful LOVE bombing and suddenly you start answering me every three hours. Am I supposed to read the distances between messages? What the fuck is wrong with us? we can’t decide to live together and then turn into a chat with no photo, into an anonymous grey blotch with an even greyer circle around it.

[23:59, 14/11/2022] My Personal Number: Every now and then I think: agaaaaain.

Senior prom. Maryland, 1942 | Marjory Collins, Library of Congress | Public domain

[00:00, 15/11/2022] My Personal Number: This was a Bauman quote

[00:10, 15/11/2022] My Personal Number: Frailty of human bonds is a prominent, indeed the defining attribute of liquid modern life. The blatant fissiparousness of human bonds and the frequency with which they are broken serves as a constant reminder of the mortality of human life. There is little point in questioning the validity of equating the loss of a partner due to separation with the ‘truly final’ loss caused by physical death; what counts, is that in both cases ‘a world’, each time ‘unique’, vanishes – and that either the will or hope are missing to challenge, let alone reverse, the finality of its disappearance.

The vanishing of a companion-in-life may be but a metaphor of Jankélévitch’s ‘thou-death’, but it is a metaphor that tends to be hardly distinguishable from what it stands for. And so is the aftermath of the break-up, devoted to weaving new ties, admittedly destined to be cut again just as the past bonds were. Death-by-proxy becomes an indispensable and constant link holding together the interminable sequence of ‘new beginnings’ and efforts to be ‘born again’, those characteristic features of liquid modern life, and a necessary stage in each one of the infinitely long series of the ‘dying-being-reborn-dying’ cycles. In the ongoing drama of liquid modern life, death is one of the principal characters in the cast, reappearing in every act (Liquid Fear, Zygmunt Bauman).

[00:12, 15/11/2022] My Personal Number: We all joined as part of a social experiment. All my girlfriends became single at the same time, when covid eased off, couples that had been together for years fell apart. We were going to take on the world, and we ended up taking on four pricks and twenty anxiety attacks. They all went on dating apps for the same reason I did: to see what would happen. Then we started like come on, let’s meet up with idiots that ghost us and I hope the condom’s breathtakingly tight, damn it. God, my freelancer’s fee takes my breath away but I pay it, man, I pay it. What kind of a world is this? Nobody told me to stick my nose in, it was all Eva Ilouz’s fault.

[00:12, 15/11/2022] My Personal Number: I joined bumble because I wanted to understand what Eva illouz meant when she talked so negatively about dating apps, it was all because of a paragraph where she referred to the contemporary sexual market as a supermarket where there’s too much supply and so it’s difficult to estimate the value of a product.

[00:12, 15/11/2022] My Personal Number: She was talking about something tht I interpreted as a fast supermarket product test and she said it like it was something noxious and I thought: ah, you can do a free fast supermarket product test? That’s for me. And I downloaded the app. People moving under my finger like a barcode scanner over courgettes without a bag after the shop assistant tells me off for not wearing gloves. Hey, lady, plastic is a serious problem! Either get rid of the bags or I stick my fingers through the papaya until I get the juice out of it, god…….

[00:12, 15/11/2022] My Personal Number: can I use a quote? Yes I can

[00:15, 15/11/2022] My Personal Number: Pre-modern romantic evaluators chose as if they were in an auction house because they operated in conditions of scarcity. Modern sexual and romantic evaluators choose as if they are in the supermarket and lack a clear reference point of their choice, not knowing what they want. The insertion of romantic and sexual practices in the consumer market leads then to a deflation of worth. Because sexual actors meet in a market situation, these actors face what we may call emotional deflation, much like economic mechanisms of deflation, the fact that the overall price or worth of goods falls either because competition brings down prices or because greater efficiency diminishes the costs of production. (The End of Love. A Sociology of Negative Relationships, Eva Illouz).

Senior prom. Maryland, 1942 | Marjory Collins, Library of Congress | Public domain

[00:24, 15/11/2022] My Personal Number: I started as a social experiment and ended up crying for a man much younger than me. He was a beautiful man, almost unreal. The difference between me and that man was that he still believed in something. He was in love with his exgirlfriend but he couldn’t bring himself to tell me. I told him please don’t play mobile games if we were seeing each other. We saw each other three times and he got bored of me. He wasn’t seriously interested in me, but every time I got the impression that talking was a problem. Every time someone has ghosted me it happened after I had opened my mouth to say something other than the potatoes are so good, your cock is so good. I’ve made monumental efforts not to seem too much like me and I’ve worn super-tiny shoes that are called a scared version of myself. He was in love with his exgirlfriend. And I’m in love with Alpro coconut yoghurt. Everyone has their own needs and tastes and I wasn’t on his list. I got upset. I got so upset that I started to send him messages shouting (because they were written in capital letters): COWARD, DON’T GHOST ME! He told me to please stop talking to him and I kept shouting, poor guy

[00:24, 15/11/2022] My Personal Number: what’s wrong with me, you nut?

[00:25, 15/11/2022] My Personal Number: Panic! I feel panic about ghosting. And haunting and love bombing and extra virgin olive oil. panic, I feel panic about me

[00:25, 15/11/2022] My Personal Number: Me?

[00:25, 15/11/2022] My Personal Number: Me

[00:26, 15/11/2022] My Personal Number: I saw Héctor bellerin in a picture with a beret and I went on bumble to see if I could find someone similar. It’s not the first time it’s happened to me. I also rewatched High School Musical 2 and went on bumble to see if I saw anyone who looked like Chad. The worst thing is that there was someone who looked like him, but I didn’t get a match. I think the world is set up right now to take Héctor bellerin away from us. Nobody told us directly but everything says: love him, Héctor bellerin, bellerin-rin-rin- rin ring? Bellerin, he’s a man, look at him, but he looks like something that’s not a man, look at him, he’s vegan and he has a mullet.

[00:30, 15/11/2022] My Personal Number: Why should I love him, sir? Why should I?

[00:31, 15/11/2022] My Personal Number: Look at him, he’s wearing a tracksuit and a dress shirt, love him and I love him and I understand why people send messages that go unanswered for months.

[00:32, 15/11/2022] My Personal Number: Bellerin check your inbox

[00:32, 15/11/2022] My Personal Number: Me, I don’t think I love I just think I get hooked sporadically on people with avoidant behaviours.

[00:33, 15/11/2022] My Personal Number: Me, I… I’m literal

[00:33, 15/11/2022] My Personal Number: Hello?

[00:33, 15/11/2022] My Personal Number: I’ve had it up to here with myself

[00:34, 15/11/2022] My Personal Number: No, andre, my love, I love you

[00:34, 15/11/2022] My Personal Number: behave my love

[00:34, 15/11/2022] My Personal Number: I love you my love

[00:35, 15/11/2022] My Personal Number: fake it until you make it bitch

[00:37, 15/11/2022] My Personal Number: Me, it’s my fault it’s all my fault that’s why I’m praying to god. ENOOOUUUGH BITCH

Senior prom. Maryland, 1942 | Marjory Collins, Library of Congress | Public domain

[00:41, 15/11/2022] My Personal Number: One of your eyelids is lower than the other, my father told me looking at my face last week. I have a lazy eye????, I screamed like a madwoman spitting out the smoked tofu from lidl. Aaaaaaaaaah (fit of impending rage), a lazy eye, me? Bloody shitballs!!!!!!! It cost me two days of therapy for a total of 130 euros. I had to change one of my bAMBEL pictures. I don’t know if you read the book zeno’s conscience, iin the book the wife tells him that his nose is different from the way seno thought it was and that makes him change how he perceives himself, a real existential crisis. I wonder how we can live without worry when we’re continually self-represented and fragmented. I don’t know about you but I don’t stop looking at myself everywhere. I love the body, I love the existence of a body but sometimes I just want to be a ball of fire that floats, you know what I mean? A ball that floats and rises up until it breaks away from the earth and the roof and doesn’t care that there’s no roof and doesn’t care about anything. I’m absolutely obsessed with my physical appearance and I keep looking at myself in mirrors everywhere I go.

[00:43, 15/11/2022] My Personal Number: We have an quote for this? We do

[01:05, 15/11/2022] My Personal Number: […] ‘my degree of beauty’ is linked to ‘the chances of someone falling in love with me’, or ‘the chances I have of getting what I want’, or ‘the chances of the person I’m with not leaving me for someone sexier than myself’, but the solidity of the link between ‘my appearance’ and ‘my chances of being loved/getting married/being in a relationship’ varies according to the social system. And in our society there is a very strong link, perhaps the strongest in world history, between the variables of ‘being good looking’ and ‘being loved’ (Inside The Hall of Mirrors, Liv Strömquist).

[01:06, 15/11/2022] My Personal Number: reallygreat 😊

[01:08, 15/11/2022] My Personal Number: ok, pops, I have a lazy eye I have less chance of being loved so what do I do? Should I jam something under my eyelid so my eye stays open all night? What should I do lord God almighty? What should I do if they taught us that we’re not worthy of love if we’re ugly. Someone taught us this and here I am spending money to mentally unlearn something and keep doing things to my body to look good.

[01:10, 15/11/2022] My Personal Number: Why do we always feel anxious when we meet someone? I shouted a few days ago to my friend Usu again. Why? please explain it to me. And she says Andre I already told you why and I don’t remember what she said again. I strain my head to remember , usu? what did you say to me? it’s the mystery of bambel anxiety that I want to decipher, I need truths, phrases out of sociology books to hold on to because I refuse to think that this is an individual problem like those stupid people who’ve been in a relationship for ten years and dare to tell us that we’re stuck in a rut and we’re looking for people who ghosts us. Shut up, you vile thing, shut up, you don’t know anything, it’s easy to forget all about this when you have validation on the other side of the bed and you can’t even remember what this was, because this isn’t what it used to be, meeting people has become more of a horror story than ever in the history of meeting people.

[01:10, 15/11/2022] My Personal Number: It’s not me, it’s not you, it’s a fucking structural problem and continuing to bear the load with our unstable individual mental health (which is also a structural problem) means continuing to smash the bonds between people and leaves us absolutely alone in the face of the brutally violent impact on a human body and mind of living and dying for love as if nothing mattered at all.

[01:12, 15/11/2022] My Personal Number: After all that I’ve read and learned and reflected on, I’ve continued to bust myself up against every person I’ve met in the last few months, each time falling into inexplicable anguish, a sense of disorientation that made me fall back cyclically into neglecting my health, the hypervigilance of my body, the bloody inner dialogue.

[01:14, 15/11/2022] My Personal Number: Aaaaaall the time swinging between the most romantic and the most post-romantic ideas, trying to survive in this historical moment in which we don’t even know what we’re doing because we’ve destroyed all the frames of reference and what we had before was bad but this is also tremendously shit my friend

Senior prom. Maryland, 1942 | Marjory Collins, Library of Congress | Public domain

[01:22, 15/11/2022] My Personal Number: So? I flow between extreme idealisation and utter contempt I go from 1. I look at him and I think fuck me all I want a baby two three four babies a car a moises a house a vegetable garden damn hippy kids crawling on the floor like snails and wipe their snot I want a vegetable garden I want a classic love-passion, old-school, accommodating, and bang me like a rabbit and destroy us to absorb us forever until the end of time and love you until after you die because like Romeo says before love was invented I already loved you, but no. Then no, then 2. I pull myself away and think that it’s all a story and that what I want is to feel good about myself my space my life my things, not to depend on anyone. And I end up profoundly helpless and uprooted lost at the end of love and in the moment of post-romanticism, defeated and clinging to post-romanticism, I even feel sorry and I hug her, my girl.

[01:24, 15/11/2022] My Personal Number: Andre x Postromanticism.

[01:25, 15/11/2022] My Personal Number: Is this the end of love? If so why are we all so desperate to love? We’re talking about the end of love and we’re more obsessed than ever with loving? what is this? Do I have an attention deficit disorder? a personality disorder?

[01:26, 15/11/2022] My Personal Number: Will I get my period?

[01:29, 15/11/2022] My Personal Number: Yesterday I asked my friend Carmen if she thought that you can choose to love and she said yes, you can choose whether to fall in love or not and I shouted no like a crazy person, no, then when I got home I thought yes, she’s right that if it’s not worth it I don’t fall in love and every day I think about it less and less to tell the truth.

[01:30, 15/11/2022] My Personal Number: Sometimes I think I really don’t care about anyone, does that happen to you?

[01:30, 15/11/2022] My Personal Number: My heart is anaesthetised………

[01:32, 15/11/2022] My Personal Number: I didn’t want this, but the world is becoming more and more like bambel and bambel is becoming more and more like the world in the sense that the world is stupid and bambel is stupid.

[01:40, 15/11/2022] My Personal Number: I wanted to get away from romantic structures, I wanted to learn to love myself and now in the adult world I only know how to do my nightly facial cleansing routine (I’m not even sure if I do it right) and that it’s better to get petrol at this petrol station than at that one because they give me points on my loyalty card… I wanted to do things right, but I don’t know what right is anymore and even if I uninstall bambel and you uninstall it and even if I uninstall Instagram and you uninstall Instagram and tik tok and bereal I feel, deep down inside my heart full of bandaids I feel that the way we love is deeply and irreversibly pierced by disenchantment 😊😊😊

I put my mobile on the bedside table. In the stifling gloom of my room, I think about smoking the CBD joint in my dresser drawer. Instead, I pick up the phone again, take one last look through Bámbel and come to the edge of the ravine. I stare into the emptiness of the abyss: there’s no one left to classify into desirable and undesirable in this corner of the world. I think about taking my mobile out of the room, but inadvertently fall asleep with the screen unlocked on my chest.





When They Came For The Digitally Illiterate…

Forced digitalisation excludes not only those with fewer digital skills, but also those who appreciate the advantages of the analogue world.


Shopkeeper and a customer considering buying an axe or an axe handle. Leaby, Sweden, 1940 | Carl Gustaf Rosenberg, Riksantikvarieämbetet | Public domain

Digital technologies lower costs, increase efficiency and allow for greater specialisation. However, we rarely talk about what we lose with digitalisation, who is excluded in the process or where the limits lie. Often it is older people, with their conscious and informed decisions about digital technologies, who show us the limits of digitalisation.

Last week I went to a restaurant. I got together with a group of friends who I hadn’t seen since before the pandemic – there was a new baby, some transoceanic travel involved, and almost all of us had lots of new grey hairs. Like in so many other post-pandemic restaurants, we were invited to browse the menu using a QR code. You know what happens, instead of talking to your friends, looking them in the face or holding the new baby, everyone becomes immersed in the universe of their phone, where someone has tried to squeeze what used to be a menu spread over several A4 sheets of paper onto a mobile screen. The little screen interrupts the flow of communication and the spontaneity of the moment, but it is no longer a novelty. The novelty came at the end. The waiter came over to tell us that we could also pay using the QR code, that we could each pay our own part or split the bill equally, and that it was as easy as making an online purchase. If any of us didn’t know how to make an online purchase, we could call him over. And indeed, he didn’t walk past our table again until the matter of the bill was settled, although the family with the two babies already asleep in their arms had neither the hands nor the will to solve the riddle of the QR code. But, from the point of view of institutions, the problem of the forced digitalisation of society boils down to the personal problem of having the necessary digital skills.

And so we have witnessed the forced digitalisation of society. Citizens are making more use of digital technologies; they use them more regularly and for more tasks. Meanwhile, more and more products and services have been digitalised, increasingly reinforcing digital citizenship. This digital citizenship is essential but unattainable for some sectors of society in certain circumstances, especially for people with fewer digital skills, including many older people.

The pandemic has stepped up the digitalisation of money – in Barcelona, at least, you can no longer pay for a single bus ticket in cash. Doctor’s prescriptions have also been digitalised, and the public health system no longer provides printed prescriptions. Various personal ID systems have become more widespread, such as the personal pin code, the electronic ID card and electronic signatures. The typical museum audio-guide devices are now apps that you have to download onto your phone before searching for the specific audio-guide for the exhibition you’re visiting. And finally, the advance online purchase of tickets for museums, concerts and other cultural events has also become the norm.

Instructions for ordering food in a restaurant | Alpha, Flickr | CC BY-NC

The prevailing techno-optimism only sees the advantages of digitalisation. Restaurants need fewer waiters, bus drivers don’t have to waste time handling money, we cut down on paper for prescriptions and tickets, public workers and cashiers spend less time attending to the public. Museums don’t have to worry about audio-guide devices. But we often fail to see the cost to those who are excluded from these services, the autonomy lost by those who cannot access the service directly, or the impact on the right to equal opportunities and the right to live without discrimination.

Those who are unable or unwilling to exercise digital citizenship are left to rely on the help of family or friends, or are punished by the system with procedures that are more cumbersome than their analogue predecessors. There are those of us who are still quite happy with mobile phones considered by the industry to be obsolete, and we don’t have room for any more audio-guides. If you don’t have space to install the audio-guide on your mobile, you may not be able to access the service. If you don’t purchase online tickets for cultural events in advance, you risk not being able to get one. If you can’t access your medical prescription on the digital health platform, you can ask the pharmacist what you’ve been prescribed, but without the information written down on paper you may later forget essential details, such as how often you need to take the medication or the dosage. Although the official statistics say that in Spain 94% of the adult population connects to the internet using mobile devices (Eurostat, 2021), this doesn’t mean that all these people can use all these services, or that they can use them every day, or that they are willing to forego the advantages of analogue services.

There are many reasons why you might be unable or unwilling to access digital services. Lack of digital skills and lack of interest in digital services are certainly among the most common. A lack of skills and interest in digital technologies is often associated with more vulnerable sectors of society with a lower socio-economic status and with fewer opportunities for regular contact with different digital technologies, such as older people who weren’t previously employed or who didn’t need to use computers for their jobs. At the same time, there are young people who are experts in using social media on their mobiles, but who wouldn’t know how to install an electronic signature or what a spreadsheet is. But it could also be the case that you don’t want to access digital services because of digital saturation, with more and more young people defending the right to digital disconnection. Or that you can’t do so because of occasional access restrictions, for example if your mobile phone is broken.

At this point, it is no longer surprising that some of the victims of forced digital citizenship are raising their voices against the prevailing techno-optimism, whether in the private or public sphere, and helping society to reflect on the utopias of digitalisation. It is often older people who show us the limits of the digitalisation of society, not only because some of them don’t have the necessary digital skills, but because they are unwilling to forgo the advantages of the analogue world they have known up until now and which they appreciate for its warmth, versatility and familiarity.

“Soy mayor, no idiota” | RTVE

One of the few public cases against the prevailing techno-optimism is the campaign against the depersonalisation of banking initiated by Carlos San Juan on change.org. But in the private sphere, many other older people often take a stand against forced digitalisation. They prefer not to have WhatsApp and not to have to communicate via text messages that people neither write nor read properly, because they prefer a phone call in which you can give and receive all the necessary attention. Then there are those who have given up battling with QR codes and who prefer to have the personal attention of someone who can read out the menu and discuss it with them. Although at the same time they would also probably prefer to be able to continue withdrawing their money in cash and controlling their spending on paper, rather than having to depend on other people who have no business knowing about their finances.

However, little attention is paid to older people who speak out against the prevailing techno-optimism. The ideas of young people who advocate digital disconnection are heard more loudly than those of older people who complain about the depersonalisation of services, even though both groups are on the same track.

Older people are accused of not knowing how to use digital technologies and of not wanting to modernise, and there is a strong push to incorporate them into the digital society to bridge the digital divide. Yet often, those who are not users of digital technologies, or of certain services, are not only influenced by the digital opportunities they have had over the course of their lives and their digital experience, but their attitudes are also based on conscious and informed choices that question the values at stake with digitalisation.

Older people are often accused of spreading fake news, of perpetuating WhatsApp chains, of not knowing the correct meaning of emojis or of sending useless “Good Morning!” messages. These uses, rather than being a problem of lack of digital skills, are an opportunity to empathise with where the other person is in their life and with their emotional needs. However, little attention is paid to them when they call for more face-to-face services, fewer text messages and more phone calls. In other words, when they call for inclusion, warmth and truly personalised experiences.





The NFT bubble has burst. Long live the bubble?

Cloaked in an aura of popular cultural cool, NFTs act as an introductory narrative into the world of speculation.


Knocking down dolls at booth at fiesta, Taos, New Mexico, 1940 | Russell Lee, Library of Congress | Public domain

The most paradigmatic example of this burst bubble is the crash in price of the NFT of the first ever tweet. It sold for $2.9 million when released. A year later, last April, an attempt to resell was abandoned when the maximum bid was $14,000. How did this 99% loss of value occur in such a short period?

In reality, NFTs are not dead because they were never alive. No matter how hard you try, when the product itself can be precisely reproduced for free at the click of a button, its only value is the price at which it can be resold.

A great exercise in hyperstition and self-fulfilling prophecy is thus required in order to extract value from the product. In other words, the hype that has been generated around NFTs has been essential in inflating the bubble and allowing some (basically the first) investors to make money. Following the logic of Ponzi schemes, more people need to constantly enter the world of NFTs to stop confidence and growth from stagnating. And in terms of drawing the general public into digital asset investing, the use of emotional suggestion techniques has been essential.
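A toy simulation (with invented numbers, making no pretence of modelling a real market) makes this dynamic easier to see: the notional resale value only keeps rising while new buyers arrive faster than existing holders cash out, and it slides as soon as the inflow of newcomers stops.

def simulate(months_of_inflow, total_months=24):
    # Invented starting conditions: 1,000 holders and a notional price of 100.
    price, holders = 100.0, 1_000
    history = []
    for month in range(total_months):
        new_buyers = 300 if month < months_of_inflow else 0  # hype-driven inflow
        sellers = max(1, int(holders * 0.05))                # a few always cash out
        holders += new_buyers - sellers
        # Price rises with net demand and falls once sellers outnumber buyers.
        price *= 1 + (new_buyers - sellers) / max(holders, 1)
        history.append(round(price, 2))
    return history

print(simulate(months_of_inflow=24))  # constant inflow: the line keeps going up
print(simulate(months_of_inflow=12))  # inflow stops halfway: the price slides month after month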

In an attempt to legitimise this format not only in monetary terms, the first tweet sent by the founder of Twitter, Jack Dorsey, was tokenised. This imbued the object with a nostalgic and cultural value. Similarly, tokens of historical memes such as Nyan Cat have sold for millions of dollars. The fact that the value of NFTs is intrinsically linked to the trust in this same value, without the existence of any other official, banking or state structure to support it, means that its credibility needs to be expanded by means of these strategies. The use of references from popular culture serves as a smokescreen – you can make out that you are investing in crypto out of attachment and affection for a certain cultural artefact, pretend that the interest exists out of fetishism for a celebrity, an artist or their work, or camouflage the investments under layers of irony, aware that Bored Apes are nothing more than aesthetically ugly monkeys randomly generated by an algorithm, and that this is precisely what makes them fun. But, as Dan Olson said in the viral video Line goes up, this is not a fandom like the rest, because the product itself is not important, only the profit it generates.

Just as it is essential to have new investors, it is also essential that those who invest do not pull out and sell, because if this were to happen en masse, the value of cryptoart would fall. A supposedly communal atmosphere is therefore generated in which the participants in this great house of cards must constantly convince each other and themselves that they will be rich if they hold out long enough. This story is bolstered by the assertion that this is a select group of visionaries who have been able to jump ahead of their contemporaries and who, thanks to this, will be ushered into the land of plenty and never have to work again. Keeping high expectations of what is to come has become a very important currency and it is essential to constantly keep boosting the hype.

On releasing his own collection, the most famous of all Spanish influencers to adopt NFTs, Willyrex, declared that “Those who trusted will be rewarded! Those who didn’t will flop!” Who knows whether ominously or sardonically.

When first presented to the general public, the intersection between digital art and blockchain was defined as a tool that would allow artists to profit from their work. Even so, the much-heralded democratisation has not materialised, nor has the disappearance of intermediaries. Even less so can we say that this technology has allowed most artists to make money, since the costs of creating and maintaining NFTs are a major obstacle. Not to mention the uneven distribution of sales, which means that very few people actually make money selling their art in this way.

Nor is any of this anything new in the world of commercial art. It is well known that gallery circuits have a tendency to be exclusionary, and to rely on speculation derived from the hype of novelty and the capacity to absorb new trends. NFTs fit perfectly into this logic – the more people agree that an arbitrary object has value, the more value this object will have. However, the art market is not a large enough niche to support the speculative needs of non-fungible tokens.

NFTs: ¿la nueva Tulipomania? | PutoMikel

The reality is that, above all, NFTs act as an introductory narrative into the world of speculation, far from the stereotypes that are normally associated with it. Brokers are no longer grey, irate guys in pristine suits, inseparable from their briefcases, but rather much more relatable people – influencers, streamers or bloggers. Instead of expert economists (even though there are still endless and complicated technicalities), they are ordinary young guys. The cryptobro, the latest trendy digital tribe, can normally be seen in the minimalist garb of t-shirt, baseball cap and trainers that is so popular among Silicon Valley developers. The cryptobro could be anyone – your neighbour, or even you.

The final frontier of this whole phenomenon also seems to be the last hope for refloating NFTs now that their reputation has been severely damaged. As part of the logic of turning every aspect of human culture into a quantifiable and sellable commodity, by means of clicks and statistics, over the last decade we have seen how our actions in cyberspace are increasingly turned into ways of building a personal brand. Meanwhile, our jobs have become uberised and gamified. Following this logic, the transformation of play into business is the latest frontier of platform capitalism and the attention economy. Especially through video games.

In order to maintain the exponential growth experienced by the world of video games in recent years, exploitative practices such as crunch culture have been necessary and have given rise to bizarre phenomena such as endless streaming. NFTs look like a new opportunity for expansion that could prevent the recession that is already said to be looming.

The sale of tools, skins, new utilities or downloadable content exclusively for those who pay for it is a growing practice, which has culminated in the creation of what are known as play-to-earn games. These are video games based on cryptocurrencies, where the action itself matters much less than the possibility of earning money by playing. Major companies such as Epic Games and Square Enix have supported the tokenisation of aspects of their games through NFTs and the creation of internal economies.

It remains to be seen whether this tactic will set the world of cryptotokens straight and be able to attract new people to the gaming community. During the rise of MMORPGs (Massively Multiplayer Online Role-Playing Games) such as World of Warcraft two decades ago, a shadow economy was created around buying and selling add-ons and characters, mostly involving workers without rights. For the moment, the states where play-to-earn games have been implemented most successfully turn out to be precisely those with the most glaring economic problems and inequalities, so it does not seem that NFTs have led us to any hopeful future, contrary to what some of their enthusiasts were predicting a couple of years ago.





Immersion in the Metaverse

We propose a creative game in which several authors delve freely into the concept and dilemmas of the metaverse.


Woman diving from pier, 1892 | John S. Johnston, Library of Congress | Public domain

The concept of metaverse refers to an immersive virtual world, and despite the fact that much has recently been said about it due to the interest shown by one of the big tech companies, which has even changed its name, it is no new concept, as it is something that has been referred to in literature and film for some time. We asked a series of collaborators to give us an unfettered vision of this virtual universe, inviting each one to describe what it evokes for them. This is the result – we hope you enjoy it with curiosity and good humour.

  1. Con-tact, by Inés Macpherson
  2. A Brave New World in the Metaverse, by Enric Luján
  3. The Metaverse Interpreted as a Panopticon of the Human Mind, by Jose Valenzuela
  4. Metaverse: Memes, Horizon and Future, by Proyecto UNA
  5. Verses on the metaverse, by Antonia Folguera

Con-tact

Inés Macpherson

In the 80s, science fiction opened the doors to the possibility of coming into contact with virtual spaces – with what William Gibson called cyberspace. In the 90s, Neal Stephenson offered us another word in the pages of his Snow Crash: the metaverse, a space beyond our palpable universe, which is situated in the virtual world and where we can interact through digital avatars. But moving away from the birth of these concepts, the ideas run through many other works of this genre, both literature and film. From the enslaved society of The Matrix, to the possibility of living a full life after death within a virtual reality, like in San Junipero. Dangers, ways of trying to overcome our finitude or create a space where we can reinvent ourselves; and a question that always pulsates between the lines of science fiction when we enter these universes: where should the limit be set?

“I know this steak doesn’t exist”, says Cypher, a character in The Matrix, and unwittingly reminds us of the importance of differentiating between the palpable world and the virtual one, of not getting lost in the latter as an escape, as a way of not facing reality. The possibility of living through what we see on the screen, through the stimuli we feel in our brain without leaving the house, and forgetting about touch, real contact, connection… Because we can connect via social media, we can connect in a virtual world, we can shop, see a concert… but what about physical contact? What about the memory of the skin?

A Brave New World in the Metaverse

Enric Luján

There is a short story by G. K. Chesterton [“On Private Property”, in Delphi Complete Works of G. K. Chesterton (Illustrated) (Delphi Classics, 2013)] which tells of a pilot who has a plane crash in which he loses both legs. The men of science who operate on him (all of them educated in the Method) take the opportunity to amputate his arms as well, thus replacing all his limbs with mechanical versions. When the patient wakes up and they see his disbelief, one of them tries to reassure him: “the leg of nature is a most inefficient instrument”. The pilot’s reaction to this is purely visceral: he orders his new (mechanical) legs to be brought to kick him. It all ends up as a nightmare, which Chesterton resolves in his acerbic style: “he woke up in an ordinary hospital and found the world had not yet progressed quite so far as he had fancied”.

What is “efficient” from the point of view of technical progress need not be sensible from the perspective of lived experience. With the replacement of the Universe by a Metaverse, the essential point of the story is repeated. What is removed from us in this case is not our limbs, but the whole of the external world, which is en route to being supplanted by a mechanical imitation.

The Metaverse Interpreted as a Panopticon of the Human Mind

Jose Valenzuela

Fragments extracted from the doctoral thesis “The Metaverse Interpreted as a Panopticon of the Human Mind”, defended on 16 November 2045 by neurophilosopher Llucia Valenzuela.

“…a brain reward mechanism so rudimentary that it was exploited as early as the birth of slot machines in the late 19th century, refined through games like Candy Crush and which reached its peak with the interactive and personalised advertising designs of the metaverse…” (p. 96)

“…what could one expect from an environment which is, indeed, now capable of registering each and every one of our gestures, conversations and even physiological signals? There are no secrets in the… ” (p. 134)

“…we were already aware of the ability of virtual reality to modify our attitude or our perception of the physical or social environment, but it had never been tested under strict commercial parameters, which… ” (p. 176)

“…an environment designed by businesspeople could have no other purpose than to generate financial profit. Everything else (connectivity, assistance, health) could be considered part of the marketing campaign of various tech players more interested in cryptocurrencies than in… ” (p. 201)

“…the mind adapts itself to its environment, and if that environment is governed by the laws of fiction, it will eventually feel that it exists within a fiction” (p. 233).

“…the only possible conclusion is that it was predictable. We once saw experiments where rats preferred to starve to death than stop pressing a button that stimulated their reward centre. We are those rodents, only in a different kind of cage.” (p. 298)

Metaverse: Memes, Horizon and Future

Proyecto UNA

Verses on the metaverse

Antonia Folguera

Who am I to talk about the metaverse, I ask myself? There’s no need for me to say anything about the metaverse, it’s all been said. The following verses are like objects found using Google autocomplete, edited, remixed, slightly post-produced. Sometimes they rhyme, mostly they don’t. The original selection was in English, because the algorithms still haven’t been trained enough with other languages.

The metaverse is the future
The metaverse is coming
The metaverse is coming and the world is not ready for it
The metaverse is sad
The metaverse is bad

The metaverse is a bad idea
The metaverse is dangerous
The metaverse is already here
The metaverse is a dystopian nightmare
The metaverse is not new

The metaverse does not exist
The metaverse is not the future
The metaverse is not above real-world law
The metaverse is not Web3
The metaverse is not what you think it is

The metaverse has many benefits for all
The metaverse has many potentials
The metaverse has become big business
The metaverse has its own virtual economy
The metaverse has already arrived

The metaverse was Second Life
The metaverse was significantly more interesting than real, everyday life
The metaverse was an escape from reality
The metaverse was inevitable
The metaverse was a whole new world

The metaverse will not replace the real world
The metaverse will never happen
The metaverse will shape our future
The metaverse will fuel massive innovation
The metaverse will fail

The metaverse smells like teen spirit
The metaverse tastes bad
The metaverse sounds stupid
The metaverse looks like hell
The metaverse feels real

Are we in the metaverse yet?





Looking for traffic lights in a CAPTCHA

We are all desperately trying to make sense of things, the question is how to embrace derangement and to avoid paralysis.


Traffic Director Eldridge inspecting new lights, 1926 | Library of Congress | Public domain

We project ourselves into the lives that others share on social media. We engage with those who are no longer here through their digital legacy. We read our own profiles to understand ourselves. How do we relate to madness?

“‘Have you heard from _____ lately?’ her mother asked on the phone, and invoked the spectre of a classmate who had escaped, who was nowhere to be found in any of the places where you typed in names. Her job was so legitimate that it seemed like a reproach: Aerospace Engineer”. In the book No One is Talking About This, Patricia Lockwood talks about a fragmented character with her brain fried from having three thousand tabs open and spending her days chatting to strangers on the internet or photoshopping out the background from bags of peas to make memes.

In one of these, the protagonist talks about the timeline of her old classmate: Where was she? Where had she got to? Had she, asks the narrator, “through her goodness and unswerving concentration, broken off into one of the better timelines?” Better than her own, at least?

The internet, the book reminds us, is full of jokes and gags about wanting to climb out of this “timeline”, the one we’re in, our here and now, and slip into other possible lives. Perhaps because it’s so easy to project ourselves into other places. To know in detail the face and the house of an ex-lover’s new girlfriend, and therefore also how much money she has. To know that your old high school friend is now a mum (twice over). To be able to crawl like a slimy worm through a former friend’s twitter account until you find the reply she made yesterday in the early hours of the morning to a politician’s account about a new regulation that affects her dog. Your old friend was very angry yesterday morning, and you know her dog, and you weren’t there to console her. You get a rough sense of her suffering and the things that concern her, and scrolling through her answers seems, for a while, like talking to her. Like refreshing her page.

On the other hand, when you’ve lost someone’s digital tracks, you might do what Lockwood’s protagonist does: imagine that they must inevitably have risen up to some better place, something like the analogue kingdom of heaven. It doesn’t matter whether all their social media accounts are still active, the difference is that you can’t see them. The impossibility of imagining them shrouds the person in a halo of mystery, and even of progress. She must be doing well. Everything must be going well for her. I’m sure she’s doing so well that she’s forgotten me. In the end, there aren’t so many people who are unreachable (at least I don’t know many people who have disconnected from everything). Rather, you could say that sometimes people have to disconnect as a matter of strict survival (and preservation of their own time, which is a form of self-love). So we stop following, and we mute, and we block. I think.

Sometimes my friends and I swap mobiles so we can enter those realms that are more or less off limits, and when we do this, we adopt primitive forms. Here, here, take it, but just for a minute. Look. Good little doggie. They’ve split up. New girlfriend. What’s her problem? Why does she post these videos? Her baby’s ugly. Urgh. Wipe history.

In the middle of this compulsive scrolling, I also find myself procrastinating on all my work by reading papers I don’t understand by a researcher with the same name as me. The Academia website, with its usual strategy of vanity-flattering spam, constantly asks me questions about who I am. In other words, every five or ten days, an external entity forces me to corroborate the person I’ve become and to discard those that I have never been nor ever will be. It’s a lot more compromising than answering whether or not you’re a robot.

“Are you the Anna Pacheco who…?”

“Are you the Anna Pacheco who wrote ‘Calibration and Evaluation of a Frequency Domain’…?”

“Are you the Anna Pacheco who wrote ‘Ground humidity mapping using RADARSAT-2 and local data’…?”

“Are you the Anna Pacheco who wrote ‘Application of hyperspectral teledetection for flowers’…?”

No, I’m not. But I wish I was. Now I feel like I want to be her. What does it mean to be hyperspectral for flowers? What’s this about mapping ground humidity, and why don’t I know how to do it, why did I never learn? Google search: how to map the ground. Can you long for a life that you were never even close to? Am I entitled to be offended because there’s an Ana Pacheco, a DJ, who’s performing at Sonar and strangers on the internet ask me if that’s me too? What I ask myself, then, is why I’m not her, and who she is, and if she’s OK, and then I give up caring. The first Ana Pacheco I found was the ex-guerrilla. She is, of all of us, the one with the most suggestive headlines: “From guerrilla girl to model, the FARC guerrilla who takes her clothes off”. But I haven’t heard anything about her for a long time, and besides, her name has only one ‘n’ in it.

To look at oneself in the midst of savage violence is, of course, a narcissistic and middle-class impulse, but I have the internet and I spend a lot of hours working in front of a screen. So we look outside and within, at ourselves and others, we read entire pages of Wikipedia and we read our own profiles to try to understand how we’re seen from the outside. We search. We search. We search. I search: persistent candidiasis, how to cook one day and eat all week, how long does spring asthenia last. We surveil entire timelines of unknown subjects as if we were looking for something specific. And sometimes I don’t know what I’m looking for until I find it. And sometimes I don’t know I need it until I get it.

In Who Will Run the Frog Hospital?, Lorrie Moore says “God, how life was full of moments that should have gone differently but didn’t”. They didn’t. Things happen a certain way, but they could constantly go a different way even though we sometimes don’t even see the detour. And I think that sometimes I use the internet like a fugitive, searching for detours and detours and detours, only to come back to the beginning. Girl, what are you saying. My most Marxist friend would tell me that I’m suffering from that monstrosity of capitalism known as chronic FOMO, and that I should open the windows to let a bit of air into the room.

Then there are two things I can’t forget about the war. Within hours of the first bombs falling in Ukraine, the tag “Ukrainian girls” was at the top of the ranking of Pornhub’s most visited pages. I feel much the same about this as I do about a viral video in which a TV broadcast about the war was interrupted by an advert for a greasy, appetising hamburger.

I think of a compact mass of men (I’m sure there are lots of them) learning about the Russian attack on the city of Mariupol, I imagine them sitting in front of a computer, remembering that there’s a country called Ukraine and that this country has women and that these women, in their heads, are very attractive. And then proceeding, diligently, to type their new interest into the website. And then, the wank. One of those working-hour wanks where the person, really, is above all the slave of their wank. And then, I see these men getting back to work. Opening the tab about Mariupol, now seriously, to see what’s happened. One piece of toilet paper, two pieces. The compact mass of men melts and looks like mud. What I want to say is that sometimes we’re hamburger and bombing, and I guess we’re repugnant.

In Paris 13th District, the latest film by Jacques Audiard, one of the main characters goes onto an erotic website to strike up a conversation with someone who has caused her problems because of the extreme resemblance between them. Among so many strangers and possibilities, she somehow ends up looking for herself, perhaps as a way of infiltrating this other timeline in which she works in an adult chat room, like someone looking in a mirror to demand explanations about how it went. Or how it could have gone. Or how it could have gone if she’d known how it would have gone.

“Nothing I write would be as it is if I weren’t writing it today, in this century, with a computer in front of me which, like an oracle, like a crystal ball, like a reflection in a pool, shows me everything I want to see and quenches my initial thirst with an answer, an image, a sound, a video, the momentary ability to be a bird, god, cloud, to – for a moment – see, understand, know what I don’t know, what I haven’t learnt, what I’ve forgotten, what I haven’t yet seen or experienced”, explains the writer Irene Solà, author of When I Sing, Mountains Dance, in this text about what it means exactly to have a computer with an internet connection. The explanation convinces me. So I say to myself, then, that maybe I just want to be a bird and a god and a cloud looking for everything, and looking for myself.

Other questions: compulsively revisiting things that don’t exist although they once did. When someone dies, you can spend months revisiting their last WhatsApp conversations, rereading everything as a way of reliving it. I once even felt sorry for the likes I hadn’t given in time. So one night, I found myself drunk in the doorway of a building settling my debts with a dead person. Telling them that I liked everything they’d done here, or here, and how they were in this photo, and this one. It reminds me of the exercise by Gabriela Wiener that she explains in Huaco Retrato: “Since my dad died, I have a solitary game I play with myself, or with him? Something halfway between death and technologies. I put his name in my Gmail and all his emails appear, I choose one at random and read it like you read the strips of paper in fortune cookies”. The same exercise can be applied to all types of loss. Same ghosts, different faces.

One night when my eyelids are drooping in front of the screen, I think I start to get delirious: I feel like I have to look for zebra crossings in a CAPTCHA and I can’t find a single one, and I feel stupid for not finding a single zebra crossing, when I’m a human. Because I am a human. I must be. My mobile phone bumps against my forehead, lighting up my face, cowering in a duvet, on a work night. The CAPTCHA still doesn’t recognise me, it asks me to try again, to look for traffic lights this time. In my drowsy state I imagine tiny pedestrians coming out of the squares, cheering for me and only me, waving their hands about: you lovely, beautiful, cool thing, you’re the best, I hope you know you’re truly the best. But tell us, how can you not find a fricking traffic light?





The Costs of Digital Utopia

What if the costs of sticking to the “there is no alternative” agenda of techno-capitalism are considerably higher than we have assumed?


Checking in garbage cans from the FSA (Farm Security Administration) mobile camps, 1941 | Russell Lee, Library of Congress | Public domain

What are the true costs of digital utopia, the most powerful weapon of mass seduction in the expanding arsenal of techno-capitalism? The usual answers – the loss of privacy, the rise of fake news, the risks of cyberwarfare – are, of course, not wrong. But, in staying on the surface, they invariably miss the deeper shifts and transformations that are not immediate and whose effects cannot be directly and explicitly linked to the machinations of Mark Zuckerberg or Elon Musk.

The lie that nurtures the utopian myth behind techno-capitalism is that there is only one way to do “big data” or “artificial intelligence” or “cloud computing” – and that this way has already been discovered and perfected in Silicon Valley. The benefits are too numerous and obvious to be even discussed explicitly; a mere invocation of a regularity like Moore’s law often suffices. The numbers go up – and this means “progress.” As for the costs, those could be carefully accounted for, and, when we are lucky, mitigated.

What, however, if the costs of sticking to the “there is no alternative” agenda of techno-capitalism are considerably higher than we have assumed? What if they are ultimately unknowable? What if the progress implied by Moore’s law – which links together the speed, the size, and the cost of micro-processors – is ultimately as one-dimensional as the techno-capitalism that has given birth to it, and if there are other parameters and metrics – related particularly to biodiversity but not limited to it – that, once accounted for, would significantly complicate our faith in the idea that more “techno-capitalism” means more “progress”?

One of the secrets to the immense resilience and longevity of the capitalist system is its ability to disown the costs of its operations, shifting them onto others, or setting them up in such a way that they would be paid for by future generations. Some of the early critics (for example, William Kapp, one of the fathers of environmental economics) spoke of “cost-shifting,” finding in it one of the primary driving forces of capitalism. When the true costs of its operation are engineered away, to be felt by others or at a much later point, it’s no wonder that capitalism appears as a benevolent system.

Its latest iteration, techno-capitalism, has perfected these methods to a point where many of us do think that this new socio-economic system is truly as frictionless as its proponents advocate. Its legitimacy rests on the ability of big platforms to convert user data into implicit subsidies that cover the non-trivial costs of us using their services. Thus, it appears that the system truly runs on magic: somehow, one can use the services of Facebook and Google without ever paying for them. There’s no cost-shifting, Silicon Valley assures us, because there are no costs.

When the ideological debate is framed this way, it’s no wonder that something like Moore’s law appears highly credible: we have been trained to believe that it’s only benefits – and “progress”! – that one is to expect from digital technologies. It’s no wonder that our ability to think about alternatives to this system is greatly constrained; when the costs are presumed not to exist, why should one even bother? This is what is truly at stake in making the costs of techno-capitalism fully visible: it’s a pre-requisite to a genuine techno-politics that would be able to redirect digital technologies towards more emancipatory uses.

Inanimate Species, an installation by Joana Moll | © Ona Bros

The ultimate irony of the past few decades has been that, in making our own lives increasingly more transparent and visible, techno-capitalism has done its best to confuse us about its own operations. There is a powerful epistemic asymmetry at work here: while all of us, as individuals, are expected to render ourselves objectively “knowable,” techno-capitalism only wants to be known on its own terms, rendering vast chunks of its actual methods, processes, and infrastructures inscrutable. For the most part, they remain invisible as well.

How do we regain the capacity to see them and, hopefully, to discuss their effects? The conventional answer is that we could do so by refining our theories. Ultimately, techno-capitalism is still capitalism – and it’s our inability to think through the political economy of data and its associated infrastructures that has rendered our analytical apparatus impotent. There’s much truth in such a diagnosis. After several decades, we still don’t even know how to speak about “data”; is it the product of one’s labour or is it just a residue of social activity? As long as questions like these remain unresolved, we are not likely to get much conceptual – let alone visual – clarity from forays into political economy.

This leaves us with forms of narrative that, in bypassing the formalistic analysis of political economy, might nonetheless reveal some deep flaws in the conventional account of progress that we associate with techno-capitalism.  Correlation does not imply causation, of course, but in our current intellectual environment, where the very terms of the debate have been undermined by our inability to think beyond techno-capitalism, correlation might also be good enough; to think in terms of causation is a sort of intellectual luxury that requires the kind of analytical maturity that, alas, we have not yet reached.

All we can hope for at this point is to grasp the limitations of our own current categories and concepts; it will take a lot of hard work to develop an entirely different conceptual vocabulary to make sense of the new environment – and to build a politics that would allow us to transcend techno-politics and all its limitations. But for this task of cognizing and working through our own limitations, correlations are not only more than enough – they are also a perfect instrument for jolting us out of intellectual passivity by juxtaposing processes and activities that we would normally never perceive together.

Inanimate Species, an installation by Joana Moll | © Ona Bros

Joana Moll’s bold attempt to situate the rise of microprocessors against the decline of the number and the diversity of insects is a wonderful and much-needed step in that direction. It’s only by revealing the inadequacy of our notions of technological progress, with its artificial blindness and inattentiveness to criteria that are of no value to techno-capitalism, that we will be able to regain our intellectual and political bearings, and, hopefully, steer the project of techno-capitalism away from destroying all life on earth (even if it succeeds in doing so in the most intelligent manner possible).

The irony of Moore’s law, which is taken as an article of faith by many in Silicon Valley, is that it illustrates something quite different from what its adherents believe. There’s no better testament to the reality of capitalist competition – with competing firms always pouring money into outperforming their peers – than the history of the microchip: what many technologists take to be “natural” features of a given technology (e.g. the ever-shrinking microchip) are actually just the effects of capitalist competition. But what drives the demand for all these increases in speed that competing firms are rushing to provide? Is this constant insistence on speed rational?

To the extent that they support social and political projects of dubious utility, such gains in speed are of little emancipatory import. Just in the last decade, for example, we have witnessed a tremendous amount of computing power – underpinned, of course, by the ever-powerful processors – dedicated to the mining of crypto-currencies like Bitcoin. The increases in speed – the stuff of “progress” that techno-capitalism likes to boast of – that undoubtedly underpin such “advances” are of little societal value: the energy consumed in solving cryptographic puzzles (which is what “mining” is at the end of the day) is just a price to be paid for not trusting the state and needing some parallel, non-state system of doing accounting.

It very well might be, however, that this is hardly the only price to pay. And yet, just like in all the other instances of cost-shifting by the earlier capitalist regimes, we have not actually seen the bill yet. Shouldn’t we be doing something to anticipate it? Shouldn’t we demand as much transparency from techno-capitalism as it demands of us? We certainly should – and it’s in this space of speculative juxtaposition and critical correlationism that Joana’s efforts to narrate the rise of microprocessors and the fall of insects make a long-lasting contribution. Hopefully, it will awaken us from our slumber and will make us reflect not only on the costs of progress but also on some of the alternative paths that it might take. Becoming better, faster, and more efficient at making human (as well as non-human) civilisation obsolete should not count as “progress”, even if, under capitalism, it often does. 

This text forms part of the catalogue for the installation “Inanimate Species” by Joana Moll, on display at the CCCB from 1 March to 25 April 2022.





Under the Cloud

The story of how Amazon developed its digital platform in the cloud and its impact on different regions.


Noranda, 1942 | Joseph Herman Bolduc, Bibliothèque et Archives nationales du Québec | Public domain

Despite the great success of the metaphor, the cloud is rooted firmly in the ground. In this fragment of the book Fulfilment: Winning and Losing in One-Click America, the journalist Alec MacGillis tells of the emergence of Amazon Web Services and its impact on the development of data centres. Text published courtesy of Periscopi, Península and Farrar, Straus and Giroux.

The cloud. The term was so airy and ethereal. It conjured a fly ball arcing to left-center field, a lazy summer Sunday on the prairie.

In fact, the cloud was profoundly earthbound and material, the antithesis of lightness and luminescence. The cloud lived in data centers, the vast windowless structures that started proliferating in certain corners of the American landscape at the close of the twentieth century as communications and commercial life moved online. The millions of transactions and interactions and activities that had not so long ago made up daily life—a letter sent, a dollar bill extended, a newspaper read, a record played, a film screened—had gravitated from a quotidian and ubiquitous existence into a realm that was almost entirely hidden from view. Inside data centers were the giant servers through which everything moved: the business transactions, the government secrets, the lovers’ emails. By 2018, we were creating 2.5 quintillion bytes of data every day, a sum that was increasing at such an exponential rate that 90 percent of the data in the world had been generated in only the two years prior. Every minute, on average, the world conducted 2.4 million searches on Google, watched 4.1 million YouTube videos, and posted 47,000 photos on Instagram.

The installations were virtually self-sufficient—a data center with more than 200,000 square feet and $400 million worth of servers and equipment required only twenty engineers and technicians to function. What they really demanded were power and water—the former to run the machines, the latter to cool them.

And they demanded security. The data centers were the nerve centers of the nation, and were protected as such. Their walls were twice as thick as those of most buildings, able to withstand winds of up to 150 miles per hour. Their concrete floor slabs could hold up to 350 pounds per square foot. The machines were locked in cages and set off by firewalls. Some had concrete ceilings to hold massive backup generators. The buildings could be mistaken for bomb shelters, large enough for an entire town of paranoiacs.

In theory, the data centers could be anywhere with proximity to fiberoptic cable, plentiful water, and cheap electricity. In reality, they clustered together, a near infinitude of human commerce and communication packed into a handful of buildings in a handful of locations. Even more so than other aspects of the digital landscape, the cloud could be dominated by a few places, and by a few companies, those with the most capacity and most connections.

And the biggest cluster, by far, was in Northern Virginia. From early on, the area had been home to an outsized share of commercial internet providers drawn by the concentration of military contractors and high-tech companies. The area also offered plentiful land—the gently rolling farms spreading west and south of the Potomac toward the Piedmont, with the Blue Ridge peaks in the distance—and cheap electricity fueled by Appalachian coal. In 1992, a group of internet providers met for lunch at the Tortilla Factory in Herndon, Virginia, to make a decision that would seal the area’s advantage: they would physically bring their networks together in a new colocation point, greatly enhancing their reach and value to customers. The hub, dubbed Metropolitan Area Exchange-East, was housed in a cinderblock room within an underground parking garage in Tysons Corner, the edge city straddling the Beltway.

Alec MacGillis | Press conference at the CCCB

Providing additional gravitational pull were the tax breaks bestowed on the data centers by the state of Virginia and local governments, especially Loudoun and Prince William Counties. For these Washington exurbs, the data centers were ideal neighbors: they spun off tax revenue to pay for the deluxe schools demanded by residents moving into new McMansions, but they added few cars to the clogged roads. In that sense, the paltry number of jobs that came with them was, to use tech lingo, a feature and not a bug.

They cost between $50 million and $70 million apiece. They went up without identifying details—anyone passing by had no idea for whom, or for what, they were being run. “We don’t comment on any project real or imagined,” said Prince William’s director of economic development in 2000, in response to a reporter.

The puncturing of the tech bubble that year left many of the centers vacant, raising the specter of a permanent blight of these huge husks, unfit for any other purpose. But the setback was temporary. The rise of the counterterrorism apparatus after the September 11 attacks created new demand for highly secure data storage—increasingly, the buildings came with “man traps,” one-person entrances outfitted with biometric scanners capable of reading fingerprints, palms, or retinas; there were also fake entrances, bulletproof glass, and Kevlar-lined walls. Some of the data centers were even deliberately set behind hills, to obscure visibility and stop charging vehicles packed with explosives; those less topographically protected had perimeters of concrete posts.

And then came the cloud.

The term’s functional origin—the idea of running applications off of servers that are not your own—came in the early 2000s, in Seattle. Amazon had been building a business called Merchant.com that provided technology for other companies’ e-commerce websites, and it noticed how easy it was for outside users to access Amazon’s technology with well-designed interfaces. Around the same time, the company discovered that many of its software development teams were spending months re-creating the same basic software infrastructure for their projects, over and over. Why not build an infrastructure platform that would make Amazon’s own software development more efficient and also offer it to other companies? Those companies could design applications to run on the infrastructure—everything from computing to payments to messaging—and thus be spared the cost and hassle of building their own infrastructure and running their own servers and data centers.

The company created Amazon Web Services, its cloud-computing branch, in 2003, and began offering its first data storage service in 2006. By 2017, AWS was providing cloud services to, among others, GE, Capital One, News Corp, Verizon, Airbnb, Slack, Coca-Cola, and even direct rivals like Apple and Netflix, while bringing in more than $17 billion in revenue for the year—a tenth of Amazon’s total. “AWS has built one of the most featurefull and disruptive technology platforms that’s existed in my lifetime,” declared AWS’s global head for enterprise strategy, Stephen Orban.

Between its dominance of the cloud and its dominance of online sales, Amazon had positioned itself as a gatekeeper that extracted fees—what economists called “rent”—on two of the biggest realms of digital commercial activity, data storage and e-commerce. You could almost compare it to a tax, except this tax was being collected by a corporation, not by a duly elected government. Or you could compare it to a utility company— Amazon had, in essence, slapped a meter on the side of the nation’s data centers, except without the regulatory limits that utilities faced.

Or you could compare it to the lucrative extraction play of the banks and hedge funds leading up to the 2008 financial crash, which amounted to “tails I win, heads you lose.” No matter what, Amazon was collecting its fee. “I see parallels in Amazon’s behavior to the lending practices of some financial groups before the 2008 crash,” wrote the Financial Times columnist Rana Foroohar. “They used dynamic pricing, in the form of variable rate subprime mortgage loans, and exploited huge information asymmetries in their sale of mortgage-backed securities and complex debt deals to unwary investors including cities such as Detroit. Amazon, for its part, has vastly more market data than the suppliers and public sector purchasers it plans to link. Indeed, I see more and more parallels between online groups and large financial institutions. They each sit in the center of an hourglass of information and commerce, taking a cut of whatever passes through. Like a big investment bank, Amazon can both make a market and participate in it.”

Data Center Alley | Google Maps

It was also reminiscent of the railroad giants in the late nineteenth century, which controlled both the tracks and much of the oil and coal that was being transported on the tracks, thus allowing them to squeeze smaller fuel producers.

As more and more companies moved to the cloud, and as more companies followed Amazon’s lead in seeking to offer capacity, the data centers sprawled as never before. In Northern Virginia, they covered more than 9 million square feet. Dominion Virginia Power, the state’s dominant utility, which relied heavily on coal as its energy source, was in 2013 projecting a 40 percent increase in data-center power demand over just the next four years. Each data center consumed about as much energy as 5,000 homes.

Land was filling up, especially in Loudoun County, where there were forty data centers by 2013, covering 5 million square feet, the equivalent of twenty-five Walmart supercenters, and the county was expecting to reach twice that over the next decade. In a single two-year span, 2011–2012, the county had added nearly 800,000 square feet in data center space, but not a single square foot of traditional office space. Land was going for more than $1 million per acre in the heart of the county’s “Data Center Alley.”

The county boasted that as much as 70 percent of all internet traffic flowed through its centers each day. Ashburn, an exurb beyond Dulles International Airport that was home to the county’s original data-center cluster, was mentioned in the same breath as the world’s other great internet hubs: Tokyo, London, Frankfurt. “I’m not overstating things by much to say that Data Center Alley has become essential to western culture’s social fabric,” said the Loudoun County economic development director Buddy Rizer. The county boasted, too, that more than $200 million in tax revenue flowed from the centers into its treasury each year, making it possible for the county, the wealthiest in the country, to provide services such as full-day kindergarten that needier communities were unable to offer.

Amazon already had several data centers in Northern Virginia, managed under the name of its data center subsidiary, Vadata. But it needed more capacity, a lot more, because it was angling for a new realm of business. In 2013, it won a $600 million cloud contract from the CIA, and branches of the military were starting to explore shifting to the cloud, too.

In 2014, an unidentified company applied to build a 500,000-square-foot data center near the town of Haymarket, in Prince William County. The site along John Marshall Highway was considerably farther west of the other data center clusters in the area, offering a higher level of secrecy. It was beyond Manassas National Battlefield Park, and abutted the Rural Crescent, a swath of land long protected from development.

But the unnamed company got county approval—Prince William had no zoning rules for data centers. It probably did not hurt that employees of the county’s economic development team made two trips to Seattle in 2013 for meetings at the company’s headquarters, one of the trips in the company of officials from Dominion, the Virginia utility.

All that remained to be done was hooking up the new center to power.





Kyle Chayka: “The platform connects us but also homogenizes our tastes”

We talk to the journalist about minimalism and how this trend affects us when it comes to relating to the world that surrounds us.


Minimalism is a trend. But this aesthetic current, which encourages people to have and need less, has been swallowed up by the logic of capitalism and used to homogenise aesthetic trends across the board. Kyle Chayka, who writes for The New Yorker, reflects on this process while inviting us to stop and pay attention to the world around us.

Kyle Chayka is a digital columnist who writes about the internet and the current complexity of digital environments, and who contributes his articles to leading publications including The New Yorker. Why do some things go viral? Where do memes come from? Where do global trends start?

In this process of digital archaeology, Chayka has focused on one of the elements that is common to various aesthetic trends – minimalism. It features in the content we consume on social media, in the most popular cultural spaces, on platforms, in Airbnbs, in coffee shops, even in our own homes. The thesis of his book The Longing for Less (Bloomsbury, 2020) emerges from a heap of personal experiences mixed with a great deal of art history, through which he explains why minimalism has become a commodity.

Chayka argues that minimalism is about living a simpler life, having fewer things, needing fewer things, and being able to pay attention to and appreciate what is in front of us. It’s an idea about how to simplify, cut back our needs and be aware of ourselves and our surroundings.

As always, capitalism has swallowed up the concept, and now you can buy minimalist beauty products, minimalist lights, minimalist houses, minimalist furniture and watch minimalist series. A concept born in the field of contemporary art has become just another marketing hashtag, reaching such paradoxical heights as inviting you to buy an object in order to live a life without so many objects. Buy more to have less.

Minimalism has been decontextualised; it has become an excuse to promote an aesthetic monoculture which unifies trends around the world and means that you can walk into a café or an apartment and not know whether you are in New York, Prague or Seville. It’s the standard trend of gentrification, a false neutrality by design. According to Chayka, the West has adopted a sugar-coated, naïve version of minimalism in which what we apparently wish for is to unburden ourselves of the physical world in order to focus on the digital one: a reality of pleasant decor, a neutral backdrop with comfortable touches where we can express ourselves, stand out, connect to the wifi and enjoy our screen time – a kind of projection of the millennial generation’s ideal state.

Kyle Chayka. On digital abundance

The aesthetic of minimalism has swallowed up the original idea, and instead of helping us to connect more with things, with reality, and to reflect on our relationship with our environment, this corporate minimalism promotes homogenisation, consumption and isolation.

Chayka calls these homogenised spaces ‘Airspace’, a sterile aesthetic spread around the world by Silicon Valley platforms. But it doesn’t consist of large multinationals offering their aesthetic to all their branches or stores around the world, but rather thousands of coffee shop owners around the world deciding off their own bat to adopt the same fake artisan aesthetic. The digital platforms have ended up producing a worldwide “harmonisation of tastes”, so that when you travel, you always come back to the same place. So that all city centres look the same. According to Chayka, Airspace is the coffee shops, bars, offices and coworking centres that have the same distinctive elements everywhere – minimalist furniture, avocado toast and craft beer. Reclaimed wood. Industrial lighting and a good internet connection.

It’s an aesthetic globalisation driven from an American point of view, with implicit values in its infrastructure. Social networks have allowed the connection of billions of people around the world, but also the propagation and unification of corporate aesthetic criteria. All of a sudden, we all have the same cultural and aesthetic reference points. It’s true that Hollywood exercised a cultural hegemony during the second half of the 20th century, but Silicon Valley is now the focal point of this new hegemony, which has driven everything from aesthetic unification to the spread of disinformation. Regardless of whether or not we are connected, social media affect our environment, our neighbourhood, what shops and restaurants we have nearby. It’s as if, in essence, the physical world were simply a surface, an interface with the digital world.

But how can we survive in a homogenised, accelerated and decontextualised environment, where being proactive and keeping our curiosity and critical spirit alive requires more and more effort, and where algorithms mix up information, disinformation and sponsored content in a single place? For Chayka, the internet needs context. We need to regain our initiative instead of allowing ourselves to be led along by the algorithm-controlled feeds of Netflix, Instagram or any other popular platform, and to generate new spaces for deep discovery: new platforms or projects focused on verified niche information. We have to stop being passive consumers of content, hold onto our critical spirit and demand platforms where content can be consumed in a way that is more personalised, where the feed recommended by the algorithm is only one of the options for accessing information.

Nowadays we mix together leisure, work, information and disinformation in the same thread on accelerated digital platforms, where polarised content generates more impact than substantive content, and where we only care about the immediate present. A book from 10 years ago may be very interesting, but on the internet a text from a few days ago is often considered obsolete.

What’s more, we have to accept that on the web, video content has progressively been gaining ground over written content. Reading is an active process that requires an effort to understand. Images, on the other hand, convey their meaning immediately. As the structure of the internet has improved, it has become ever more visual and more effective, allowing a more passive type of consumption, as it requires less attention to watch a video than to read an article. Historically, the internet has been a very good space for the written word, one where humanity has been able to express itself more than ever and where spaces and projects have been created for all kinds of interests and communities. For Chayka, this progressive shift towards the use of visual narratives and the growth of podcast-type audio content over the classic written article need not necessarily be a bad thing.

Chayka sees it as a new, differentiated information ecosystem, with a new range of opportunities to provide context and educate people. A one-minute TikTok video, in other words, can be simply banal or a very powerful communication tool, depending on how you use it. The challenge is how to offer quality content packaged in an environment where banality rules the day. How to continue nourishing people’s spirit and curiosity in environments specially designed to enhance our passivity.

This is why Chayka writes about minimalism, to defend a type of relationship with the reality that surrounds us. The art, music, architecture and philosophy he describes in his book are not concerned with a particular aesthetic. They try to find unmediated experiences, to pay attention to what surrounds us without putting up barriers, and to accept ambiguity, understanding that opposites can form part of a whole. For Chayka, minimalism does not offer answers, much less a guide or a manual, rather it proposes another way of living that can be extended into the future beyond the lifespan of a simple trend, and these are principles that are still valid, despite the capitalist appropriation and trivialisation of minimalism by large corporations.





Confusing The Machine: Art, Masks and Facial Recognition

Masks have been used by artists and activists as a tool for offering resistance against facial recognition systems.


The development of reconstructive plastic surgery during the First World War | Horace Nicholls | © Imperial War Museums

The face is an important part of the body for our personal, social and even institutional lives. Control over facial recognition technologies gives those who wield it a level of power that may threaten many rights and freedoms. This has led to growing opposition to these systems in recent years, generating a public debate on the need to regulate them.

At the end of 2021, Madrid and Barcelona airports began testing a facial recognition system that allows passengers to check in, pass through security control and board their plane without having to show their ID card or boarding pass. The system, some media outlets enthusiastically announced, is so accurate that it can even identify the faces of people wearing masks. The influence of corporate marketing on how the news was presented was evidence that, as Henry David Thoreau said in criticism of his contemporaries, “there are some who, if they were tied to a whipping-post, and could but get one hand free, would use it to ring the bells and fire the cannons to celebrate their liberty”.

Everyone can see that facial recognition technology is a controversial tool by nature, given its ability to automate civic control on a scale and at a speed previously unimaginable. Its use is spreading through both public and private initiatives, championed by authoritarian governments such as China’s and by large companies such as Apple, Mastercard and Facebook, which for years have been using systems of this type in security functions and photo tagging.

From the laboratory to the archive

Although facial recognition technology is relatively new, its philosophical and political implications go back centuries. Prior to the digital age, various efforts were made to turn the human face into an object from which information could be systematically extracted. The oldest of these was physiognomy, whose earliest efforts date back to classical Greece but which was developed especially in the second half of the 18th century. This discipline aimed to draw conclusions about character and personality from facial shapes and expressions. Also in the late 18th and early 19th centuries, phrenology was born, which held that the shape of the skull was a reflection of the brain and could be studied to predict behaviours such as an inclination towards marriage, poetry and kindness, or towards theft and crime, to give but a few examples.

However, if there is one discipline that is linked directly to contemporary facial recognition technology, it is that of police records management. The first system of criminal identification cards appeared in France in 1833, and in the following decades, other industrial societies became interested in analysing crime on a regional scale. As the image theorist Hans Belting points out, “the authorities wanted to protect themselves from the faceless threat posed by criminals, over whom they had lost control in the big cities, and so they feverishly developed appropriate methods of facial control”. This need resulted in the project to create an archive of faces, with photography as the tool that would guarantee the reliability of such an inventory. The aim was to have a census of all the anonymous people who blended into the masses, and this urgency was heightened by revolts such as that of the Paris Commune, when the authorities began a wave of repression that took in not just common criminals but also those convicted of political insurrection.

All these disciplines, however, soon revealed their limitations. Alphonse Bertillon himself, the architect of forensic policing, recognised in 1885 that photography was not a reliable tool for identifying criminals. As arrests were made in different places and at different times and the portraits were taken by different police officers, the images were subject to too many changes to guarantee their veracity.

A technology of exclusion

The problems of 19th-century control techniques were solved, paradoxically, when it was no longer necessary to observe. Facial recognition technology extracts the details of facial features and checks whether or not they are listed in a database. This means that, even using images as the raw material, the machine does not interpret them, but merely converts them into information.
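By way of illustration, this extract-and-match pipeline can be sketched in a few lines of Python using the open-source face_recognition library; the file names and the single-entry “database” below are hypothetical, and deployed systems are proprietary and operate at a far larger scale.

```python
# A minimal sketch of the extract-and-match pipeline described above,
# using the open-source face_recognition library. File names and the
# one-entry "database" are hypothetical, for illustration only.
import face_recognition

# The "database": a numeric encoding extracted from an enrolled photograph.
enrolled_image = face_recognition.load_image_file("enrolled_person.jpg")
enrolled_encoding = face_recognition.face_encodings(enrolled_image)[0]

# A new image, as it might arrive from a camera.
probe_image = face_recognition.load_image_file("camera_frame.jpg")

for probe_encoding in face_recognition.face_encodings(probe_image):
    # The system never "interprets" the picture: each detected face is
    # reduced to a vector of numbers and compared with the stored vectors.
    match = face_recognition.compare_faces([enrolled_encoding], probe_encoding)[0]
    print("Listed in the database" if match else "Not listed")
```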

This processing without a human eye to interpret and understand the context has led to widely known and decried errors. Machine learning works by looking for patterns in the real world, but to do so it must first use “training data”, which in many cases reflects and perpetuates the biases of the people who provide it. In 2018, a now-classic study by MIT and Stanford showed that the margin of error in facial recognition systems was higher for women than men, and even higher for black people than white people. In a demonstration of the problem that speaks for itself, the African-American researcher Joy Buolamwini showed that some algorithms failed to recognise her face, even though they did recognise an expressionless white plastic mask.
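The kind of audit behind such findings can be sketched very simply: the same system’s results are broken down by demographic group and the error rate is reported per group rather than as a single average. The records below are invented, purely to show the shape of the calculation, not the study’s data.

```python
# Illustrative sketch of a per-group error audit; the records are invented
# and stand in for a labelled test set, not for any real benchmark.
from collections import defaultdict

# Each record: (demographic group, whether the system classified correctly)
records = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("lighter-skinned men", True), ("lighter-skinned men", False),
    ("darker-skinned women", False), ("darker-skinned women", True),
    ("darker-skinned women", False), ("darker-skinned women", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in records:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# A single aggregate accuracy figure would hide the disparity that appears
# once the error rate is reported group by group.
for group, n in totals.items():
    print(f"{group}: error rate {errors[group] / n:.0%}")
```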

Gender Shades | MIT Media Lab

Although some companies have tried to correct these biases, the problem is a difficult one to solve, especially in terms of the predictive capacity of some algorithms. In addressing the discrimination within their systems, research centres have found that justice and fairness are ambiguous concepts that are hard to translate into mathematical formulas, meaning it is not always easy to program software that guarantees impartiality.

The art of camouflage

The threat posed by facial recognition to certain rights has stirred up concern not only among many groups of activists but also among artists who, over the course of the last decade, have looked for ways to circumvent this technology. Of those leading the way, the most prolific is Adam Harvey, whose work focuses on research and criticism of surveillance. A decade ago, this Berlin-based American gained a certain amount of notoriety with CV Dazzle, a project that leverages fashion as a tool to outwit facial detection mechanisms. The concept is based on the use of make-up and other things such as asymmetrical haircuts that, styled in the right way, confuse algorithms and prevent them from working.

Since 2010, the initiative has branched out into other concepts such as Stealth Wear, a collection inspired by traditional Islamic clothing and the use of drones as a weapon of war, and which uses reflective fabric to prevent tracking by thermal surveillance systems. Another related project is HyperFace, a pattern-generating system still in the prototype phase that, instead of confusing the algorithm by stopping it from recognising faces, satisfies its need for identification by providing it with false faces.

CV Dazzle Look 5 | Adam Harvey

Following a different approach, the artist Zach Blas studies the consequences of this technology for those groups of people subject to the greatest inequalities and those who fall outside of cultural canons. One of his best-known projects is Facial Weaponization Suite, which creates “collective masks” from the aggregated facial data of people from excluded sectors of society. Fag Face Mask, for example, was generated from images of queer men, but similar objects have been made to represent the invisibilisation of women, the criminalisation of black men, and border surveillance between Mexico and the United States. The resulting masks attempt to represent these identities in a single amorphous head that algorithms are unable to identify.

Another project by Blas related to discriminated groups is Face Cages, in which the artist works using the geometric diagrams generated by facial identification software. The project brings together four queer artists to generate biometric graphics of their own faces, which are then fabricated as three-dimensional metal objects. When in place, the masks resemble cages or meshes of bars, and could be mistaken for some form of medieval torture device. In museum settings, the work is shown with videos of the artists themselves wearing this kind of armour on their faces, although as they are all performance artists, the masks are also used in public interventions and performances.

A third approach worth mentioning is that of the American Sterling Crispin, who between 2013 and 2015 developed Data Masks, a work that aims not to block facial recognition, but to mirror it. The project is based on reverse engineering, whereby the algorithms themselves create face-like shapes. Taking the results of this process, the artist produces a series of 3D printed masks which he describes as “shadows of human beings as seen by the minds-eye of the machine-organism”. As he explains in the information on the project, the aim of the masks is not to confuse the algorithm, but to hold a mirror up to it, so that the machine works according to its own rules and conventions.

The work of these three artists has a common link: the purpose of the masks is not to hide from the camera, but to confuse it, to trick it. In this sense, their works can be understood as tools for vindication, as weapons of war. This characteristic makes them similar to camouflage, a military tactic – often developed in collaboration with artists – which does not always aim to make the subject disappear, but also to make it difficult for the enemy to see. A classic example is dazzle camouflage painting on ships, which was used during World War I to adorn ships with bold stripes and cubist shapes that would confuse the opposing fleet, who would then be unable to estimate the size or the direction of the vessels. Similarly, the works of Harvey, Blas and Crispin do not aim to make the wearer disappear. Designed for public use in demonstrations, their purpose is to voice a highly visible condemnation of the dominant forms of surveillance and control.

An anti-facial recognition movement

In 2014, journalist Joseph Cox wrote about “the rise of the anti-facial recognition movement”. Looking back, it was perhaps premature to speak of a “movement”, but it can be said that critical design experiments in this area have been multiplying. Some projects continue the work with masks, but the techniques now include jewellery, t-shirts, face projectors, glasses with LED lights, pixelated balaclavas and scarves with faces printed on them, to give but a few examples.

None of these camouflage techniques are effective all of the time. In fact, some of them only work for specific algorithms or for versions of software that existed years ago. What is more, walking around with your face covered or with outlandish garments looks neither comfortable nor elegant, so research on how to thwart facial recognition technology still has some ground to cover. In any case, the initiatives of these artists and activists serve to raise public awareness and interest, something that will ultimately have an impact on public and legislative debate. There are already many governments around the world, at both the local and state levels, that are proposing to regulate these algorithms. The outcome of these decisions will add extra weight to the ever-fragile balance between security and freedom.





Rethinking AI: Distributed Cognition and Expanded Corporeality

A defence of AI as a commons and of the need to collectively intervene in its development.


The Lavery Electric Phrenometer, an automated phrenological measuring device, invented and patented by Henry C. Lavery. 1907 | Hulton Deutsch | Public domain

More than the refinement of algorithms, the development of Artificial Intelligence (AI) has been driven by the ability to manage and analyse larger volumes of data. Following the logic that this data is produced collectively, we need to rethink the essence of AI as a commons and its relationship to our bodies and cognitive processes.

A whole host of problems come up when we talk about Artificial Intelligence (hereafter ‘AI’). The first of these has to do with the idea of intelligence in a broad sense: even before we attribute it to an artificial entity, the notion itself is problematic, including when we’re only talking about human beings. At least in the Western philosophical tradition, we can say that intelligence is related to the notions of intellect and understanding, and as such to the logical and mental activity of conscious knowledge. Many of the discussions around this concept include a critique of mind-body dualism, pointing out that cognitive activity does not only encompass a purely psychological-mental dimension, nor a necessarily and exclusively conscious dimension, but also involves sensorial, emotional, affective and unconscious aspects that incorporate physical bodily processes. This view opens up a critique of the concept of intelligence which implies that cognition should also be attributed to certain material processes and non-human agents, including technical artefacts and systems.

Along this line of thinking, which advocates for a broader understanding of intelligence, we find proposals such as that articulated by N. Katherine Hayles, who stresses the cognitive continuity that exists between the conscious dimension of human beings and non-conscious parts of the self, such as some cellular, chemical and digestive processes. When we eat, our digestive system functions based on what we could call procedural knowledge – it is capable of separating nutrients and eliminating potentially toxic material, while also knowing how to perfectly distribute the energy these nutrients provide to the parts of the body that need it. And all this happens without us being the least bit aware of it. Yet it is this somatic knowledge that allows all the other kinds of conscious cognitive activity to exist. However, this does not only occur within the confines of the human body, as it can also be attributed to other non-human agents, such as plants and technical systems. In her book Unthought, Hayles argues that:

Consciousness occupies a central position in our thinking not because it is the whole of cognition but because it creates the (sometimes fictitious) narratives that make sense of our lives and support basic assumptions about worldly coherence. Cognition, by contrast, is a much broader capacity that extends far beyond consciousness into other neurological brain processes; it is also pervasive in other life forms and complex technical systems.[1]

It is important to note that these layers of “non-conscious cognition” are not, however, external to consciousness, rather their relationship constitutes complex cognitive assemblages distributed at different scales and between different agents. These assemblages include interfaces, communication circuits, sensors, agents, processors, storage media, distribution networks, and human, biological, technical and material components. And they are constantly being reorganised and reconfigured: they are not defined, stable networks, but processes in constant transition, to which elements are added and removed while the connections between them are rearranged.

Katherine Hayles  Rethinking Thinking: Material Processes and the Cognitive Nonconscious | The Qualcomm Institute

We propose thinking of AI from this perspective of distributed cognition, so as to avoid thinking of it from the promethean, anthropocentric and grandiloquent imaginaries that tend to associate it with the moment of great revelation or technological singularity in which machines could acquire self-awareness. The truth is that AI is a much more prosaic thing that forms part of our everyday lives, from digital assistants like Siri to personalised recommender systems. Therefore, instead of seeing AI as something autonomous from, or analogous or superior to humans, thinking of it as yet another agent within assemblages of distributed cognition can take us beyond the logic of human-machine dichotomy, competition or replacement to consider the possible forms of desired relationship and interaction that take advantage of the differential cognitive capital of both, in synergies that are complementary rather than exclusive, cooperative as opposed to competitive.

To approach our relationship with technical cognitive assemblages from this paradigm, we may first need to broaden our spectrum of corporeality to understand the extent to which we are already cooperating with their development. Although our idea of intelligence has expanded beyond mind-body dualism, we continue to conceive of the body exclusively from a biological-somatic paradigm. But the body can also be considered within broader frameworks that take into account the complex relationships through which somatic bodies form corporeal realities with other dimensions that are not only organic, but also technical. In this sense, we can think of data as a second body – an exosomatic body – which exists outside the body and yet has a relationship of interdependence and co-constitution with the somatic level. And it turns out that our “data bodies” are extremely important when it comes to training AI: the more data an artificial intelligence has to train with, the more accurate its generalisations will be, and the more complex and sophisticated the patterns it can identify.

In fact, the latest advances in technical cognition owe more to the quantity of data and to storage and processing capacity than to the algorithmic prowess that the big tech companies would have us believe is decisive. In her article “The Steep Cost of Capture” (2021), Meredith Whittaker illustrates this reality using the case of AlexNet, an algorithm that is highly efficient at predictive pattern recognition and that won the ImageNet Large Scale Visual Recognition Challenge in 2012, marking a key moment in recent AI history. However, as the author says: “The AlexNet algorithm relied on machine learning techniques that were nearly two decades old. But it was not the algorithm that was a breakthrough: It was what the algorithm could do when matched with large-scale data and computational resources”.[2] And so it is the cognitive potential implicit in the aggregation of our bodies of data that is generating the latest advances in AI.
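
The point can be illustrated with a minimal sketch (obviously not the ImageNet experiment itself): train the same algorithm on increasingly large slices of a small dataset and watch how held-out accuracy rises with the amount of data alone. This assumes scikit-learn and its bundled digits dataset; nothing here comes from Whittaker's article.

```python
# Same algorithm, more data: accuracy on unseen examples improves as the
# training slice grows, without touching the model itself.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])          # only the data volume changes
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{n:>5} training examples -> test accuracy {acc:.2f}")
```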

Abolish Silicon Valley | Wendy Liu

This means we can think of AI as a product of collective effort that is being subjected to “a commercialized capture of what was previously part of the commons” or “privatization by stealth, an extraction of knowledge value from public goods”,[3] as Kate Crawford suggests, while Matteo Pasquinelli speaks of a “regime of cognitive extractivism”[4] to refer to the colonial relationship between corporate AI and the production of knowledge as a commons. Therefore, instead of crediting large corporations with developing astounding technologies, we should view their activity as a form of pillage and plunder that places private interests first and prevents these technologies from reaching their full social potential. The stakes are high not only because of the enormous benefits of some specific applications, but also because AI has become part of what Marx called the “general conditions of production”: those technologies, institutions and practices that shape the environment of capitalist production in a given place and time. This has led experts such as Nick Dyer-Witheford to speak of “infrastructural AI”:

If AI becomes the new electricity, it will be applied not only as an intensified form of workplace automation, but also as a basis for a deep and extensive infrastructural reorganization of the capitalist economy as such. This ubiquity of AI would mean that it would not take the form of particular tools deployed by individual capitalists, but, like electricity and telecommunications are today, it would be infrastructure – the means of cognition – presupposed by the production processes of any and all capitalist enterprises. As such, it would be a general condition of production.[5]

The concept of “means of cognition” is used to signal the replacement of human perception and cognition by a technological infrastructure intermeshed with means of production and means of transport and communication. In the face of this, his proposal is for a “communist AI” which would not consist of the automation of production processes followed by the establishment of a universal basic income (as proposed by the accelerationists) but of the expropriation of AI-capital, the development of new forms of collective ownership of AI and the application of AI to the collectivisation of other sectors. We believe that this might come about through recognition of AI as the result of aggregating the cognitive potential of our bodies of data, and thus as a computational public utility that should be subject to democratic control.

We have previously proposed that the principles of reproductive justice be applied to the field of technological sovereignty, since if we accept the postulate of data as a second body, we can claim the right to abort unwanted AI or denounce the abuses of the large corporations that infringe on our bodily autonomy; but we may also demand the means for it to develop in accordance with our collective interests and needs. This would involve decoupling technological development from the logic of capital, rather than focusing solely on possible socially useful applications, since to unleash technology’s transformative potential, it must serve the public good rather than private profit. In Abolish Silicon Valley,[6] Wendy Liu proposes some measures aimed at achieving this goal: reclaiming entrepreneurship as a public service for non-capitalist purposes; earmarking publicly owned investment funds for non-profit enterprises (generating broad access to finance); developing new types of business ownership such as workers’ cooperatives with control over production; improving working conditions in the tech industry and empowering employees; establishing a progressive wealth tax; and, ultimately, expropriating companies with excess profits.

The purpose of these measures should be to regain autonomy and control over AI as both an expanded body (resulting from the aggregation of our bodies of data that extend beyond the limits of our skin) and as part of an assemblage of distributed cognition (i.e. as a constituent element of our mind that extends beyond the limits of our skull). Making this link explicit may help us to shift away from viewing AI as a solely technical achievement linked to the development of algorithms and independent from ourselves, to seeing it as a commons to be managed collectively and which should serve more commendable purposes than facial recognition or targeted advertising. As such, it is not a question of condemning it as a potential threat to our species (either in the form of an evil superintelligence ready to annihilate us or robots that are going to take our jobs) or uncritically celebrating it as a neutral technology capable of solving all our problems. It is a question of intervening in its development, evaluation and implementation by reclaiming AI as an extension of our bodily reality and cognitive processes.


[1] N. Katherine Hayles, Unthought. The Power of Cognitive Nonconscious (University of Chicago Press, 2017).

[2] Meredith Whittaker, “The Steep Cost of Capture”. In: Interactions, XXVIII (6), December 2021, p. 52. Available at: https://dl.acm.org/doi/pdf/10.1145/3498853.

[3] Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021).

[4] Matteo Pasquinelli and Vladan Joler, “The Nooscope Manifested: Artificial Intelligence as Instrument of Knowledge Extractivism”, KIM research group (Karlsruhe University of Arts and Design) and Share Lab (Novi Sad), 1 May 2020 (preprint forthcoming for AI and Society). https://nooscope.ai.

[5] Nick Dyer-Witheford, Atle Mikkola Kjøsen and James Steinhoff, Inhuman Power: Artificial Intelligence and the Future of Capitalism (Pluto Press, 2019).

[6] Wendy Liu, Abolish Silicon Valley: How to Liberate Technology from Capitalism (Penguin Random House, 2020).





Can Algorithms Be Fairer Than People?

What degree of trust should we place in machines and how much consideration should we give to professional expertise?


Tightrope performers at 4-H Club fair. Cimarron, Kansas, 1939 | Russell Lee, Library of Congress | Public domain

Digital intelligences are increasingly being used to solve complex issues that were previously decided on the basis of professional expertise. As such, we set out to analyse how humans and machines make decisions and who makes them more fairly depending on the context.

Aixa is a five-year-old girl who is taken into a hospital casualty department by her grandmother. The child shows clear signs of abuse, with bruises all over her body. The woman explains to the doctor that the girl has been beaten by her parents. To establish whether there is a risk of this happening again, and to make sure the child doesn’t receive further beatings, the doctor asks questions about the situation – she must decide whether to refer the child to the social services for legal protection, or send her back home, where the abuse could be repeated. To do this, she is assisted by an artificial intelligence (AI) tool that estimates the seriousness of the case and the appropriate precautionary measures based on the answers obtained. If the final decision is that she should be taken away from her parents, Aixa will not go home that night, but will be placed in care and foster homes, where unknown adults will take care of her basic needs until she turns 18.

When in 2018 the writer and mathematician Cathy O’Neil visited the CCCB to present her book Weapons of Math Destruction, she posed a question to the audience: “Who would be capable of deciding whether or not a child who has been abused should go back home, with the good intention of protecting her from future abuse?” The audience remained silent, unsure of what to say. “In the US, this is being decided by an algorithm,” she added. At that point, many people in the audience were surely wondering: “But… how can an algorithm know whether the child is going to be hit again before it happens?” And a few seconds later, O’Neil herself replied: “It can’t know, but this is what’s happening”.

Three years ago, the question might have seemed provocative and irrelevant, but today such tools operate in most countries without the general public being aware of it. Professionals in charge of social care cases make complex decisions on a daily basis on a wide range of issues. Before the AI boom, they relied on their own experience built up over the years.

Obviously, they sometimes made the wrong decisions, possibly with serious consequences for the people concerned. Which is why we now hope that AI systems will help us to improve these decisions. But what happens if the AI tool gives a different result from the specialist – which one will take precedence? If the algorithm decides that Aixa is at high risk but the doctor doesn’t consider it serious enough to separate her from her family, will she dare to send her home?

We know that AI is not perfect and that even the best intelligent system can make mistakes. Which begs the question: how much trust should we place in machines and how much in professional expertise? To find an answer, perhaps we should look at which of the two decisions – human or machine – involves more bias and more noise (i.e. more variance in the decisions), not to mention the human tendency to accept the recommendations of a system that we believe to be intelligent.

Differences between bias and noise

In 2016, the psychologist and Nobel laureate in economics Daniel Kahneman – along with other authors – wrote an article entitled “Noise: How to Overcome the High, Hidden Cost of Inconsistent Decision Making”. In the article he explained the difference between “bias” and “noise” through the analogy of throwing darts at four targets. Target A is neither biased nor noisy; target B is only noisy; target C is only biased; and target D is both. The diagram below is based on this idea.

But let’s start from the beginning: what is meant by “bias” and what is meant by “noise”? AI is good at identifying patterns and relationships, as well as streamlining processes and operations with big data. However, algorithms are not neutral, and neither are the data used to train them, because both carry biases. There is no doubt that AI algorithms have biases, but we continue to use them because the benefit or accuracy of the results is considerably greater (by more than 90%) than the harm or error. Biases are similar to prejudices: we all have them, to a greater or lesser degree, and we unwittingly inherit many of them from our social or family environment. The greatest bias is to believe that we have no prejudices at all. However, beware: if biases are not corrected, we run the risk of inhabiting a future in which social progress becomes increasingly difficult due to the perpetuation of prejudices.

Meanwhile, “noise” can be defined as the unwanted variability in the decisions that different people – or the same person at different moments – make about the same case, whether personal or professional. When software developers in a company were asked to estimate the time it would take them to perform a given task, their estimates varied on average by more than 70%. Why, if the task is the same for the same category of professionals in the same company? We often make decisions that differ greatly from those our colleagues would make, from our own previous decisions or from the criteria we claim to follow.

At this point, we should note that AI has biases, but not noise. People, on the other hand, are influenced by both variables.
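
A toy numerical version of the dart-target analogy may help here. This is an illustration with invented numbers, not data from Kahneman's article: it compares a deterministic but biased “algorithm” with a group of unbiased but noisy human assessors scoring the same case.

```python
# Bias = systematic deviation from the true value; noise = spread between
# repeated judgements of the same case.
import random
import statistics

random.seed(42)
true_risk = 5.0                       # the "bullseye": the case's actual risk score

# Biased, noiseless algorithm: always returns the same (slightly wrong) value.
algorithm_scores = [true_risk + 0.8 for _ in range(10)]

# Unbiased, noisy humans: centred on the truth but scattered around it.
human_scores = [random.gauss(true_risk, 1.5) for _ in range(10)]

for name, scores in [("algorithm", algorithm_scores), ("humans", human_scores)]:
    bias = statistics.mean(scores) - true_risk     # systematic deviation
    noise = statistics.stdev(scores)               # variability between judgements
    print(f"{name:>9}: bias = {bias:+.2f}, noise = {noise:.2f}")
```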

Noise is easy to perceive

Now let’s return to the targets. If we looked at them from the back – without seeing the bullseye – we would notice that B and D have greater variability: the noise is easy to spot. But between A and C it is not so easy to tell which is the biased target. In other words, noise is easy to perceive, while we are not always aware of biases. Most human decisions are noisy, which is why we seek a second medical opinion or hope that the judge we are assigned is in a good mood on the day of our hearing. These are two examples of noise at the individual level, but we are also inconsistent over time.

In 2007, Stanford researchers analysed asylum claim decisions in the US and found that one judge approved 5% of claims, while another approved 88%. The two judges have very different biases, but the judicial system as a whole appears noisy, because the decisions of its judges are inconsistent. Finally, bias can be either negative (male chauvinism) or neutral (gender parity). In the end, it is a question of social consensus.

And how is AI fairer?

A noiseless AI is fairer in the sense that when two cases are the same, its prediction is the same. In other words, if two defendants commit the same crime, their sentences will be the same. But cases are very rarely identical. The paradigmatic case of algorithmic injustice came to light in 2016, when ProPublica – an independent US newsroom – published an investigation showing that the COMPAS algorithm was biased against black defendants.

AI is noiseless because the algorithms are deterministic: given the same input, they produce the same output. An air traffic controller, for example, may direct air traffic better or worse depending on stress, fatigue or circumstances that affect their mood. An intelligent system, in contrast, will always give the same results in the same situation. The same cannot be said of judges.

However, AI does have the capacity to be unequal because it learns from the intrinsic biases contained in data. In the working paper “Human Decisions and Machine Predictions”, a group of authors led by the computer scientist Jon Kleinberg explain how a system trained with New York bail data was racist and slightly amplified the decisions of the most racist judge. This was rather surprising, since the only demographic variable it used was the age of the defendant. In other words, racism was encoded in, and inferred from, criminal records. Despite this, the algorithm was fairer, as it had no noise. Maintaining the same bail rate, it could decrease crime by 25%; maintaining the same level of crime, it could grant bail to 42% of defendants who were previously denied it. Worse still, taking into account the crimes committed, judges granted bail to more than half of the most dangerous 1% of defendants.

However, as ethics expert Lorena Jaume-Palasí would say, justice is actually much more complicated. It may be the case that a judge lets someone walk free in compliance with the law, even when they suspect that there is a high likelihood of their reoffending. “When ruling on a case, justice involves more than simply repeating past decisions. It must be contextualised,” explains Jaume-Palasí. “Because even if the background is similar to other cases, the context may be completely different. Is someone who discriminates consistently but doesn’t consider the specific details of each case more just? If a system reinterprets what the law requires, and judges use a methodology that is the opposite of what is required, that in itself is unfair”.

Positive biases for society

Should we assume that AI will always be biased? Not necessarily. Is it possible for an algorithm to make automated, non-biased decisions for individuals or companies? That will depend on whether the biases are in the data, in the algorithm or in the interaction between the user and the system. Since in practice it is very difficult to have no bias at all, what we need to do is mitigate it. But it will be difficult to eliminate bias, especially when we do not know all the sources. The most important biases are explicit, but many others may be implicit.
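
One standard family of mitigation techniques, not named in this article but widely used, is reweighing: giving each training example a weight so that the sensitive attribute and the outcome become statistically independent in the weighted data. The sketch below is a minimal illustration with synthetic data; the variables and numbers are invented for the example.

```python
# Reweighing sketch: weight each (group, label) cell by expected/observed
# frequency, then compare a model fitted with and without those weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)                              # sensitive attribute (0/1)
label = (rng.random(n) < 0.3 + 0.3 * group).astype(int)    # biased historical outcomes
features = np.column_stack([group, rng.normal(size=n)])    # group + an unrelated feature

weights = np.empty(n)
for g in (0, 1):
    for y in (0, 1):
        cell = (group == g) & (label == y)
        expected = (group == g).mean() * (label == y).mean()
        weights[cell] = expected / cell.mean()              # up/down-weight each cell

plain = LogisticRegression().fit(features, label)
balanced = LogisticRegression().fit(features, label, sample_weight=weights)
print("group coefficient without weights:", round(float(plain.coef_[0][0]), 3))
print("group coefficient with reweighing:", round(float(balanced.coef_[0][0]), 3))
```

With the weights applied, the coefficient on the sensitive attribute drops towards zero, which is the whole point of the technique; it mitigates one identified bias but, as the paragraph above warns, it does nothing about the sources we have not identified.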

Biases can also be positive, meaning they have a positive effect on society. This should be a necessary condition of any algorithm making automated decisions or predictions. For example, the fact that there are more female than male nurses in hospitals could be a positive bias if they were shown to have more empathetic and caring qualities.

Over time, it would be good if AI could help to generate positive biases (the blue vertical arrow in the diagram), and if it were so useful that people could perceive and thus reduce their own biases (diagonal arrow in the diagram). But wouldn’t that be asking too much of this technology? If AI is programmed by people, and we ourselves are not aware of our biases, how can machines possibly pinpoint them for us? Shouldn’t we first change our entire global education system and learn from a young age to identify the most relevant social and cognitive biases?

More humans with more technology

AI generates many misgivings. Its advantages are well known, but so are the risks it entails, given the multitude of errors identified over recent years. One of its greatest challenges is explainability: understanding how an algorithm has learned and how it arrives at its results, whether in the form of automated decisions or predictions.
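
One common (if partial) explainability technique is permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. The sketch below uses scikit-learn and one of its bundled datasets purely as an illustration; it is not a method endorsed by the article.

```python
# Permutation importance: features whose shuffling hurts accuracy most are the
# ones the trained model actually leans on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

ranking = result.importances_mean.argsort()[::-1]
for i in ranking[:5]:
    print(f"{X.columns[i]:<25} importance {result.importances_mean[i]:.3f}")
```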

Another risk – as artist and researcher Joana Moll mentions in her article “Against Complexity” – is that we are led to believe that any problem can be solved simply by machines. In this regard, it is inevitable that we should wonder about the consequences that this laissez-faire attitude could have for any society. If, for example, Aixa’s doctor were to blindly trust the results of the algorithms at her disposal – overlooking her own intuition gained through professional experience – humans would be totally dependent on machines for any movement, progress or evolution.

But – as we don’t wish to be apocalyptic or to end with a worst-case scenario – we might hope that the most advanced technology will help us to build a future that is increasingly respectful of the rights and values for which we have fought in recent decades. How it could do this is still an unknown. We do not even know if we will succeed. What we can hope for is that AI will get progressively better and that it will complement human knowledge. At the moment, it not only fails to correct our mistakes, but sometimes even magnifies them.





The Place of People in the Digital Carnival

From the depersonalised and anonymous Internet to the culture of exposure and virality of today’s social media.


Mask factory in Argentina, 1910 | Archivo General de la Nación Argentina | Public domain

In the early days of the Internet, user anonymity was the norm, with hidden identities offering a way to avoid prejudice and encourage freedom of expression. But with the arrival of social media, the importance of personal exposure has created an ever more personalistic net.

Few inventions have been so conditioned by the mask mentality since their inception as the Internet. In no other medium (except perhaps books) have so many identities been falsified, so many hoaxes spread and, in general, so many lies told as on the network of networks. Number 29 of the “Rules of the Internet”, written by anonymous users of the infamous 4chan forum around 2006, managed to capture the essence of the ubiquitous “digital mask”: “in the internet all girls are men and all kids are undercover FBI agents”. Notwithstanding the delirium that even then prevailed in that dark corner of the net, its users were right to assume that everything on the internet involved a certain percentage of mask.

A brief history of an internet that exists no more

During the period when its use became widespread in the United States and Europe – the 1990s and 2000s – absolutely everything that could be found on the Internet was mediated by an acknowledged measure of lies. From nicknames to chat rooms, badly indexed eMule downloads, email chains with an unknown original author, and websites that disappeared without explanation, the general image of the Internet at the time was one of permanent trickery or carnival. During those years, web traffic was spread across a chaotic mesh of blogs, forums and websites whose common denominator was often the fact that members used pseudonyms: “¿quieres identificarnos? Tienes un problema” (You want to identify us? You’ve got a problem), sang the group La Polla Records in 1986.

This anarchic culture of the old-school Internet was able to develop untempered due to the fact that states had not yet developed their technological capacities. Until the major event of 2001 (9/11), states had not been overly concerned about imposing their laws on the digital sphere, as they considered, quite rightly, that it was inhabited by only a small part of the population. Proof of this is that large-scale development of major monitoring programmes such as “Total Information Awareness” (the forerunner to “PRISM”, the global surveillance system uncovered by Snowden) did not get underway until after the New York attacks. Before that we were navigating an internet not unlike the Far West, in which an ageing Sheriff (the state) was utterly incapable of hunting down the new generations of digital Dalton Brothers. Epic pranksterism like that of “The Yes Men”, who registered a website posing as the World Trade Organisation (WTO) and even gave controversial presentations in its name, is inconceivable today.

The Yes Men Fix the World

The mask, an opportunity for mature dialogue

It would be somewhat trivial to say that the main reason why someone chooses to mask themselves is usually to avoid being identified. However, in the digital world, the mask cannot simply be equated with the hood of analogue contexts – an item widely used at Black Bloc demonstrations around the world. On the early Internet, the use of a singular mask in the form of a nickname fulfilled a qualitatively different objective – to create completely depersonalised communication, turning users into a complete abstraction and thus making it possible to establish a dialogue without the usual prejudices on the grounds of origin, gender or age.

Much of the expectation generated by the Internet in its early days was due to the hope that virtual environments could foster truly mature communication. The ideal was that the only aspect to be assessed would be the quality of the arguments, since the interlocutor was completely unknown. It is also worth remembering that the infrastructures of the early Internet only properly supported text format (which amounts to absolute depersonalisation), so they helped to create the conditions that would allow this “dialogue 2.0” to become a reality. There are plenty of cases of curious teenagers having one-on-one conversations with leading figures in the hacker community or anonymous forum members participating in high-level academic discussions.

The nickname, that alias that formed the cornerstone of the early Internet, gave rise to a sort of virtual “autonomous identity”, deliberately detached from the real one. It was common to have several identities (very few or none under one’s real name) depending on the website, identities that would then be casually abandoned or discarded. As the accounts were not accompanied by a quantitative reputational system (“followed”, “followers”, “likes”), users did not tend to become too attached to their nicknames, which were of secondary importance in relation to the main thing – the type of arguments that were put forward. The transient identities and lack of quantification systems made it possible to establish a culture based not only on freedom of expression, but also on equal conditions for all speakers. Likewise, there was also something altruistic in a user investing hours in answering their dialectical adversaries, knowing that their contributions were not made under their real name, and that they would not be recognised outside the limits of the forum.

Virality

As the Internet entered its teenage years (the second half of the noughties), the zeal for nicknames began to wane in favour of real identities linked to people’s real-life names. This was the A.F. (after Facebook) Internet, whose emergence would mean the beginning of the end of the masks that once constituted the substrate of the Internet. Most people would gradually stop separating their real identity and their digital persona, which would become their avatar, while the amount of time spent online also increased after smartphones burst onto the social scene.

There is a before and an after Facebook because, with the emergence of the first modern social network, a large mass of users was captured within a platform that (i) was all-encompassing (not confined to a specific topic), (ii) had a user-friendly interface and (iii) came with a reputational system. Virtually all subsequent social networks have been based on these principles, so that they are, in a sense, later remakes of Mark Zuckerberg’s work.

The changes introduced by the big platforms have profoundly affected the way we relate to the digital world. The progressive importance of projection, both of oneself and of one’s own opinions, has unleashed a frenzy of virality, now easily quantifiable thanks to the number of followers and “likes”. On the Internet A.F., the norm is to convey as accurate an image as possible of the human being behind the screen (name, photograph(s), job, place of residence, emotional state, etc.). It would be enough to take a look at the Twitter or TikTok timeline of the average teenager to see the dramatic consequences of this “culture of exposure” that is so essential for gaining virality.

Digital masks, which allowed us to place ourselves on a “2.0” level of debate (free, equal, altruistic), were gradually abandoned as the inseparable personalism of today’s Internet gained ground. Being anonymous is incompatible with being viral. As for the die-hards who continue to use nicknames (especially those who use them to defend a minority opinion), they are the object of constant suspicion in an environment that once again places more importance on who is speaking than on the arguments being made. In many ways, we have slid backwards.

Finally, it only remains to point out that, nowadays, the use of pseudonyms has been reduced to a “personal brand” of the real person. Influencers and youtubers often use alternative names, although this is clearly not to distance their content from their real identities, quite the opposite – the aim is to stand out from the crowd. In fact, very few people would be familiar with the name Rubén Doblas, but virtually everyone knows the alias “El Rubius” under which he publishes content.

In his case, the degree to which his body has become intertwined with the persona he has created for himself means that his real identity can no longer be disassociated, ever again, from his YouTube account. Doblas has lost the ability to blend in with the crowd, a downside that he may himself have realised in 2018: “you want to do everything 100% and give 100% of yourself, [but] sometimes you can’t cope with so many things. I think I’m going to need some time to look at it all from an outside perspective and, like, still be myself without losing my fricking mind.” Modern influencers have certainly achieved enormous social prominence after adapting their lives wholesale to the principles of exposure and virality – but we have yet to see the effects that this Faustian mentality produces at the psychological, social… and cultural levels.





Why We Keep Talking About NFTs

What are the causes of the NFT boom in the contemporary art market and what does it bode for the future of digital art? 


Alex John Ettl working on Cecile Alexia Hart’s bust. 1920-1925 | Library of Congress | No known copyright restrictions

In a very short period of time, NFTs have shaken up the art market and generated a financial bubble. The use of blockchain technology overcomes an old problem for net art, but also brings with it new controversies such as its environmental impact. We analyse the causes of this phenomenon and how it might develop in the future.

Collins Dictionary recently announced that it has chosen the acronym NFT as its word of the year 2021. True to its role, Collins provides a concise definition of this term that has had such an impact on the contemporary art market in recent months: “non-fungible token: a unique digital certificate, registered in a blockchain, that is used to record ownership of an asset such as an artwork or a collectible”. Among the many ways in which NFTs have been described, the definition given by Collins has the value of clarifying a fact that has often led to confusion: an NFT is a certificate, not a type of artwork, nor a creative technique, nor an art movement. Minting something on a blockchain as a non-fungible token does not make it a work of art, or even a work of digital art. NFTs do not really represent a transformation in the way artworks are created, nor do they embody a particular conceptual or aesthetic premise. It is not a technology that, in itself, provides artists with new means of expression or defines a visual style. What non-fungible tokens do make possible is certifying the authenticity and ownership of a work of art in digital format (among other things), which is by nature infinitely reproducible – and this is their great contribution to the way in which some contemporary art is distributed, marketed and collected today.
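
In practice, the “certificate” amounts to a couple of entries that anyone can query on the blockchain. The sketch below, assuming the web3.py library and the ERC-721 token standard discussed further on, shows how ownership and metadata are looked up; the node URL, contract address and token id are placeholders, not references to any real collection.

```python
# Querying an ERC-721 certificate: who owns token N, and where is its metadata?
from web3 import Web3

# Minimal ABI for the two standard read-only ERC-721 functions used here.
ERC721_ABI = [
    {"name": "ownerOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "address"}]},
    {"name": "tokenURI", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "string"}]},
]

w3 = Web3(Web3.HTTPProvider("https://example-ethereum-node.invalid"))  # placeholder node
contract = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder contract address
    abi=ERC721_ABI,
)

token_id = 1  # placeholder
print("owner:", contract.functions.ownerOf(token_id).call())
print("metadata:", contract.functions.tokenURI(token_id).call())
```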

Having clarified what an NFT is, one might wonder how a simple digital certificate could generate so much media buzz, disrupt the art market and fuel a financial bubble in barely eight months. It is a complex, multi-layered and nuanced story that does not lend itself to a simple explanation, let alone a complete one. From my perspective, and within the scope of my knowledge as an art historian, curator and researcher who has observed the evolution of the digital art market over the last two decades, I would like to offer some general keys to understanding how the NFT boom in the contemporary art market came about and what it bodes for the future of digital art.

Screenshot of Possession, online collection of Auriea Harvey and Michael Samyn that hosts the work If You Want To Clean Your Screen (1998) by Olia Lialina.

NFTs resolve a problem present since the inception of net art

When, in the mid-90s, a small group of artists began to experiment with the resources offered by the World Wide Web and HTML code, they saw the possibility of freely disseminating their works without having to depend on curators or gallery owners. In 1998, the artist Olia Lialina had the idea of turning her website into an art gallery and sold one of her works to the artists Michael Samyn and Auriea Harvey, who received the files and posted them on their own website, in a section entitled Possession. The work was still publicly accessible, but now had other owners. However, not all digital art can be sold in this way, which has led many artists, including Mark Napier [1], Carlo Zanni [2] and Rafael Rozendaal [3], to consider how to market their work on the internet. The main problem has always been how to ensure the collector’s ownership of a piece whose very nature is to be distributed and accessible to everyone. The creation of Bitcoin in 2009 brought with it blockchain technology as a distributed and unalterable record – a resource that is particularly interesting for the art market, where the creation of value springs basically from the authenticity and scarcity of works of art. Starting in 2014, the first platforms were created that allowed artists and creators to register intellectual property on a blockchain. Monegraph, founded by the artist Kevin McCoy and the entrepreneur Anil Dash, introduced the possibility of transferring ownership of a digital work to a buyer by guaranteeing the authenticity of the piece and the transaction with a blockchain record. In fact, the first sale made when the platform was launched, an animation titled Quantum that McCoy transferred to Dash, is now considered to be “the first NFT”. In the same year, the artist Paolo Cirio created Art Commodities, a conceptual art project presented as a critique of the contemporary art market, but one which foreshadowed the current structure of the NFT market, albeit with cultural rather than speculative ambitions.

Speculative mania is fuelled by games, crypto investors and auction houses

For years now, many artists have been selling their digital works with certificates of authenticity on the blockchain, without this sparking any revolution. However, when, in 2017, non-fungible tokens were created using the ERC-721 standard on the Ethereum blockchain, the first projects that took advantage of its functionalities caused a sensation precisely because of the possibility of owning, and exchanging, something unique. CryptoPunks, a series of 10,000 character portraits generated by an algorithm, and CryptoKitties, a game that involves adopting and exchanging virtual cats, use combinatorics to obtain a large number of characters with personalised traits, which are made unique by being registered as non-fungible tokens. The growing interest in these and other projects, particularly among cryptocurrency investors, has been harnessed by the auction houses Christie’s and Sotheby’s, which have jumped on the bandwagon with sales of Robert Alice [4], Beeple [5] and Murat Pak [6], taking advantage of their privileged position in the art market to make substantial profits. The spectacular prices achieved with the work of creators unknown to the art market, and in particular with pieces that anyone with 3D modelling skills could imitate, have triggered a rush to create NFTs among artists, illustrators, designers, programmers and opportunists alike. At the same time, a large number of cryptocurrency investors are looking to art as a defensive asset, without knowing (or perhaps even caring) exactly what art is or whether it is guaranteed to hold its value over time, and are buying art registered on the blockchain at prices well above anything previously seen on the market.
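
The combinatorics at work in these collections is simple enough to show in a few lines. The following toy example (it is not the actual CryptoPunks generator, and the trait lists are invented) illustrates how a handful of trait lists multiplies into hundreds or thousands of distinct characters, each of which can then be registered as a token.

```python
# Combining small trait lists yields a large number of distinct characters.
import itertools

traits = {
    "skin":      ["pale", "olive", "zombie", "alien"],
    "headwear":  ["none", "cap", "beanie", "hoodie", "tiara"],
    "eyes":      ["plain", "3d-glasses", "eye-patch", "laser"],
    "accessory": ["none", "pipe", "earring", "gold-chain", "vape"],
}

combinations = list(itertools.product(*traits.values()))
print(len(combinations), "possible characters")   # 4 * 5 * 4 * 5 = 400

# The attribute list for character #123 – the kind of data a token's metadata records.
print(dict(zip(traits, combinations[123])))
```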

CryptoPunks, a series of 10,000 character portraits generated by an algorithm.

The NFT market reproduces the inequalities of the traditional market

The first wave of million-dollar sales led to rapid growth in activity on the NFT marketplaces created in 2017 and 2018, such as OpenSea, SuperRare and Nifty Gateway, based on the Ethereum blockchain. In 2020, two very different platforms were launched: Hic et Nunc (H=N),[7] a rara avis based on the Tezos blockchain, which established itself as the go-to marketplace for a broad community of artists, and Foundation, a marketplace specialising in digital art with a considerable portfolio of artists with an established track record in this medium. Two art markets soon took shape around NFTs: one represented by platforms such as SuperRare and Foundation, which only accept artists by invitation and auction works at high prices in Ethereum, and the other represented by H=N and other platforms on Tezos such as Objkt, open to all types of artists, who offer their works at more affordable prices. Another fundamental difference lies in the environmental impact of the Ethereum and Tezos blockchains. As decried by the artists Memo Akten, Joanie Lemercier and Kyle McDonald, minting non-fungible tokens on Ethereum’s proof-of-work blockchain consumes enormous amounts of electricity, which in turn contributes to CO2 emissions and thus to global warming. The Tezos blockchain uses a proof-of-stake protocol that consumes far less energy – per NFT, roughly comparable to sending a tweet. While the actual energy cost is still a matter of debate, Ethereum remains more polluting than Tezos, but also potentially more profitable, as most investors operate in ETH.

Collecting (quality) art in NFTs will be commonplace

The NFT art market has developed at breakneck speed over the last few months, opening up a wide range of possibilities, within which some dominant trends and influential actors are emerging. One of these trends is the replacement of marketplaces that are open to all, in which each work is just another image in an infinite grid, by a curated presentation of artworks with a specific discourse conveyed in the format of an individual artist’s project or a collective exhibition. This format is used by platforms such as Feral File, created by the artist Casey Reas, Daata, by the curator David Gryn, bit.art, by the gallerist Steve Sacks, and AlterHEN, created by an international collective of artists. Another trend is the identification of NFTs with a specific type of image, specifically a 3D model or animation with a futuristic aesthetic, along the lines of Beeple’s work, or abstract geometric compositions, also with three-dimensional effects, in the style of Pak. Finally, the current trend is towards the definitive assimilation of NFTs into the mainstream contemporary art market, as major galleries naturally incorporate this format into their offerings to collectors. Pace, one of the strongest galleries globally, has recently launched Pace Verso, a platform for selling NFTs that, notably, sets prices in dollars and accepts payment by credit card (as well as the main cryptocurrencies), in what can be interpreted as an invitation to “traditional” collectors who, while they don’t share the enthusiasm of crypto investors, don’t want to be left behind in the acquisition of digital art either.

An innovation cannot be said to have been assimilated by the art market until it is adopted by the galleries themselves. Now that both Pace and a growing number of other galleries have decided to sell NFTs, non-fungible tokens can be expected to become as common a format as silkscreen prints, digital prints, photographs or videos. It is also possible that the enthusiasm for these digital works is nothing more than a speculative bubble that hasn’t stopped growing yet (remember that the dot-com bubble lasted five years) and that in a few years many collectors will find that their works are not worth what they paid for them. In any case, the resilience of the market and the involvement of its most powerful actors bode well for a stable future for digital-format art and for records of authenticity and chain of ownership on the blockchain, or in other similar technologies. Whether under their oft-repeated acronym or under a different name, NFTs will continue to give us much to talk about.


[1] In 2002, Napier created The Waiting Room, a virtual online space shared exclusively by the collectors who acquire one of the 50 participations in the piece.

[2] For years, Zanni has been exploring the possibilities of marketing works of net art. One of his first pieces is Altarboy (2003-2004), which consists of a sculpture containing a server in which the work is hosted. To view it, the collector must connect the object to the internet, making the work publicly visible on a browser.

[3] Rozendaal creates pieces designed to be seen on a web browser and assigns them a domain name which makes each piece a unique object. A contract drawn up by the artist obliges the collector to keep the piece on the website, while adding their name to the piece’s source code. https://www.newrafael.com/websites/

[4] The work Block 21 (42.36433° N, -71.26189° E) by Robert Alice was the first NFT to be auctioned by Christie’s, sold on 7 October 2020.

[5] The digital collage Everydays: The First 5000 Days by Mike Winkelmann, known as Beeple, went for the astronomical figure of 69 million dollars in an online sale by Christie’s that closed on 11 March 2021.

[6] In April 2021, Sotheby’s launched a sale of NFTs by the designer Murat Pak, for a total sum of almost 17 million dollars. https://www.sothebys.com/en/digital-catalogues/the-fungible-collection-by-pak

[7] The platform was closed down by its creator on 11 November 2021 and is now running on other servers. A brief history of Hic et Nunc can be found at Github.





Against Complexity

Technological solutions often side-step the complexity of the systemic problems they face.


Two women using a switch. Halmstad, 1962 | Järnvägsmuseet | Public Domain

The belief in a simple technological solution to society’s complex problems has taken root. This techno-solutionist trend has proved ineffective in offering solutions, but it continues to shape the way we see the world. The design of digital technologies accordingly suppresses our ability to imagine other realities.

The COVID-19 crisis has exposed a massive social, ecological, political and economic systemic failure. Although the causes and consequences of this crisis are highly complex and far-reaching, we are repeatedly told that they can be solved by yet another app. This technological problem-solving approach is commonly known as techno-solutionism. Techno-solutionism tends to simplify and obfuscate the various concurrent realities that make up and trigger the specific problems it tries to fix; it does not really engage with the problems at all. Techno-solutionism has been shown not to work when it comes to fixing highly complex events, such as the current global pandemic,[1] yet it is often actively embraced as the only possible answer to a critical situation. Moreover, while this techno-solutionist approach to dealing with highly complex realities benefits short-term systemic stability by avoiding immediate scenarios of collapse, it fails to take into account the complex series of events that caused the problem in the first place, and therefore risks actually making it worse. I think the following example clearly illustrates this idea.

Igloo White was a US Air Force operation that attempted to automate warfare during the Vietnam War. The operation ran from 1967 to 1972, costing between $6 billion and $6.7 billion.[2] The Infiltration Surveillance Center (ISC), located at Nakhon Phanom (Thailand) and the largest building in Southeast Asia in 1968, was Igloo White’s state-of-the-art high-tech command centre. The ISC intercepted and analysed the signals from thousands of sensors dotted along the Ho Chi Minh Trail, the major supply route for North Vietnamese army forces. These covert sensors were designed to detect the presence of humans and vehicles moving along the trail by capturing signals such as body heat, motion, sound and even the scent of urine. When it perceived human activity, the ISC would calculate its coordinates and send them to the Air Force’s patrolling jets, which would then be automatically redirected to the target and release their bombs. Officially, Igloo White was recognized as a substantially successful operation, claiming to have destroyed 90% of the enemy’s supplies while leaving around 13,000 civilian refugees[3] during more than four years of electronically controlled warfare. However, the reality on the battlefield refuted these claims: destroyed equipment was rarely located, and the North Vietnamese soldiers developed counter-tactics to confuse the American sensors, which ended up discharging considerable amounts of explosive material onto empty territory.[4] In many ways, Igloo White failed (refused) to acknowledge and integrate the complex series of particularities that made up the reality of the war it was trying to win, resulting not just in one of the most painful defeats in the history of the US Army, but also leaving behind a significant number of casualties and destroying vast extensions of natural landscape. I believe that Igloo White is a great historical example of how techno-solutionist strategies, as argued by Paul N. Edwards with regard to the Vietnam War, are “justified in terms of statistics”, although there is “a wide gap between an official discourse of overwhelming success and the pessimistic assessments of independent observers”.[5]

Igloo White operation scheme | CC BY-NC-ND 4.0

The influence of techno-solutionism goes beyond the reduction of social and political agency; it actually erodes our capacity to imagine ourselves as part of a larger ecosystem. Far from being neutral entities, our everyday devices carry and enact a particular worldview that inevitably produces and reproduces certain realities and excludes others. As defined in The Social Construction of Reality, published in 1966, reality is a sophisticated and subjective process affected by multiple contextualized events, such as experience, interactions, language, and personal and social inheritance.[6] Most of our daily transactions are carried out by means of electronic devices integrated within a massive system of interconnected machines. Yet this expansive networked ecosystem is only accessible through tangible and “intangible” interfaces, such as screens, keyboards and graphical user interfaces containing interactive elements such as clickable buttons and scrolling events. Ultimately, this amalgam of interfaces, interactive possibilities and limitations largely mediates (and prescribes) our relationship with the world and deeply influences the construction of our reality. According to a recent study, in 2020 the average citizen spent 3.1 hours a day using apps on their smartphone[7] and about 7 hours a day in front of a screen.[8] The design of any object embodies the social values and interests of a specific ideology: that of the human group that created it, along with its socio-economic context.[9] Most of our everyday digital technologies are designed and developed by a few (but affect the many) in the heart of Silicon Valley, the cradle of techno-patriarchal capitalism. A system is designed for the purpose of controlling every process that affects it in order to obtain a desired output (although the system may transform those external processes, sometimes in unpredictable ways). In this sense, most of the devices, digital platforms and apps that we use in our everyday lives offer little to no room for negotiation: they remove the possibility of modifying their systems and ignore the multiplicity of sensitivities that are part of them (i.e., the users, nature), which dramatically dissolves their agency to imagine alternative modes of being in the world. I believe that this reduction of experiences and worldviews reproduced by our technologies comes with dramatic consequences: as our conception of reality becomes increasingly detached from its physical context (and condition), our imagination becomes more and more homogenized, and our capacity to respond coherently to (and inhabit) our life-giving ecosystems is progressively depleted.

Ultimate Solvers | Joana Moll

But which companies and current narratives are defining, designing, developing, producing, implementing and activating techno-reductionist ideology at a global scale?

“Great Data. Great Technology. Great Results” (Alliant); “Unbundled Technology” (Lotame); “When Experience Matters For Brands For People For Good” (Acxiom); “Personalize each customer’s journey with performance transparency” (Epsilon); “Symphony Segmentation Scales Personalization” (AnalyticsIQ); “MEASUREMENT IS TREASUREMENT” (4Info); “We Translate Customer Behavior Into Financial Opportunities” (Cignifi); “The power to predict” (Rocket Fuel); “Our data helps you know your customers like you know your friends” (Woveon); “TRUTH IN DATA. PROOF IN PERFORMANCE.” (Merkle Inc.); “PRIVACY & CIVIL LIBERTIES ENGINEERING” (Palantir); “Making measurement make sense” (Comscore); “Get inside the mind of your customer” (GFK); “A global data fusion enterprise applying ingenuity for the wellbeing of cities, citizens and corporations” (Sensen networks); “EXPERIENCE IS EVERYTHING” (Ansira).

This collection of impenetrable jargon is taken from Ultimate Solvers,[10] a piece I launched in July 2020 that brings together a series of slogans, brand identities and supporting graphic materials used by some of the main tech corporations that prescribe technological fixes to announce their services. Interestingly, these companies tend to use language that sounds precise while remaining very ambiguous about what they actually do. These corporations do, however, understand precisely how to benefit from the realities that their technologies will create and extract from. One can’t help but wonder what the long-term implications will be of solving highly complex systemic problems with reductionist techno-solutions. The future doesn’t look bright.

Originally published as part of SYSTEMS, Science Gallery at Trinity College Dublin.


[1] Morozov, Evgeny (2020). “The tech ‘solutions’ for coronavirus take the surveillance state to the next level”, The Guardian, 15.04.2020.

[2] Wikipedia (2020). “Operation Igloo White”, last modified 1 June.

[3] Edwards, Paul N. (1996). The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge, MA: The MIT Press.

[4] Edwards, Paul N. (1996). op. cit.

[5] Edwards, Paul N. (1996). op. cit.

[6] Berger, Peter L., Luckmann, Thomas (1966). The Social Construction of Reality. New York, NY: Doubleday.

[7] 2020 Mobile App Evolution Report Overview. App Annie, San Francisco, CA: 2020.

[8] Screen Time: UK vs US vs The Rest of The World Compared. Business Fibre, UK: 2019.

[9] Feenberg, Andrew (2002). Transforming Technology: A Critical Theory Revisited. New York, NY: Oxford University Press.

[10] Moll, Joana (2020). Ultimate Solvers, 7 July 2020.





The Update That Never Ends

We take a look at TikTok, an infinite scroll platform where consumption and production are the same thing.


Classes at Helena Rubinstein’s Beauty Shop, New York City, 1965 | Gottscho-Schleisner Collection, Library of Congress, Prints and Photographs Division | No known restrictions on publication.

TikTok is a social network for making and sharing short videos that has become a global phenomenon far beyond the teen market for which it was created in 2018. We explore what makes the experience of using this app, built on infinite scroll and an algorithmic architecture, unique; the experience of time and the circulation of sexual and emotional capital that it offers; and the overrepresentation of certain realities to the detriment of others.

Time-lapse stories, animal videos, trap or reggaeton choreographies, lip-synchs to soap opera dialogues, K-pop, experiments with cleaning products and time paradoxes follow hard on each other’s heels in the infinite zapping of the social network TikTok (Douyin in China). They are tiny nodes, seconds-long compulsions that look at us and study us in order to give us what we like best, and that invite us to respond to viral challenges with new videos of our own. As opposed to the exacerbated continuity of gameplay, talk, music and ASMR videos on the streaming platform Twitch, the discontinuity of TikTok awakens the urge to see more, to see something different, a desire to share and be shared that can never be satiated, only momentarily fatigued.

Video after video, with no interruption to remind us of life outside the app, TikTok, like its rival Triller, and like Instagram, Facebook, Twitter, WeChat and Toutiao, unfolds with swipe after swipe of our smartphone screens. It may not yet figure as a milestone in anthropology studies or school textbooks, but the year 2006 saw the introduction into human cultural history of a visual device as important as the book or the printing press: the infinite scroll, the feature of the touch interface that is the foundation of most contemporary social media. Devised by Aza Raskin – interface designer, entrepreneur and, in recent years, an activist for the ethical use of technology – the infinite scroll not only catalyses the possibilities of the touch screen popularised by Steve Jobs’s iPhone in 2007, it also constitutes a cognitive reprogramming of our access to information and of our conception of emotions and time.

The feed that just doesn’t stop

“The eye only sees what it looks at, and only looks at what it already has an idea of.” These were the words written on a poster at the entrance to his criminal identification classes by Alphonse Bertillon,[1] the creator of judicial anthropometry, whose system was adopted by all police forces in the Western world as of 1888. When we repeatedly swipe the TikTok screen, not only do our eyes recognize those images for which we already have a paradigm of understanding, but the artificial intelligence system developed by the Chinese company ByteDance registers the micro-movements of our thumb, the time we dedicate to each image, and our likes and preferences, while tagging each video to swell the feed with the images that best satisfy our desire to stay glued to the screen. That apparently chaotic concatenation of stimuli has accumulated over two billion downloads and peaks of almost one billion active users a month.
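
The mechanics described above (implicit signals such as dwell time and likes, videos tagged and re-ranked for each user) can be reduced to a toy sketch. The signal names, weights and tag-affinity scoring below are illustrative assumptions, not ByteDance’s actual system.

```python
from collections import defaultdict

# Hypothetical engagement weights: a full watch or a like says more about us
# than a quick swipe away. The numbers are illustrative, not TikTok's.
WEIGHTS = {"watch_ratio": 1.0, "liked": 2.0, "shared": 3.0, "swiped_away_fast": -1.5}

def update_profile(profile, video_tags, signals):
    """Fold one viewing's signals into the user's per-tag affinity profile."""
    engagement = sum(WEIGHTS[name] * value for name, value in signals.items())
    for tag in video_tags:
        profile[tag] += engagement
    return profile

def rank_feed(profile, candidates):
    """Order candidate videos by how well their tags match the profile."""
    return sorted(candidates,
                  key=lambda video: sum(profile[t] for t in video["tags"]),
                  reverse=True)

profile = defaultdict(float)
update_profile(profile, ["dance", "kpop"],
               {"watch_ratio": 0.9, "liked": 1, "shared": 0, "swiped_away_fast": 0})
feed = rank_feed(profile, [{"id": 1, "tags": ["dance"]},
                           {"id": 2, "tags": ["cleaning"]}])
print([v["id"] for v in feed])  # the dance-tagged video is served first
```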

Faced with a kaleidoscope that captivates half the world’s population and which, during the pandemic, has broadened its reach beyond its original adolescent market of 2018, it is worth asking what makes the experience of using this app unique. It is not so much about clarifying the multiple narratives that run through it or identifying the leading users, such as Charli D’Amelio, Addison Rae and Bella Poarch (117, 81 and 71.6 million followers), but about asking: what does TikTok think? What does TikTok see, what does it look at and what does it not show? How should we think of TikTok? How does it respond to our gaze? And, even, what are we to TikTok? Strictly speaking, viewed from the perspective of social media, human life is an epiphenomenon of its own self-reproduction, whose destiny is to produce and consume content that is reproduced virally. What is truly alive is the circulation of this enormous meme factory, with no beginning and no end.

On each of the smartphones that participate in the TikTok feed, the capitalist ideal of semio-production becomes a reality. It is no longer necessary to produce any specific content, but rather to support an artificial intelligence system that allows users to adapt the incessant production of videos to their own scopic drive. To Shakespeare’s adage “Let every eye negotiate for itself” that filmmaker Jean-Luc Godard chose to open his Histoire(s) du cinéma (1998),[2] the TikTok era responds with a mediated negotiation, a cognitive automatization that dislodges any unexpected image and guarantees the total fluidity of the consumer experience. TikTok is a platform on which being, consuming and producing are the same thing. The factory as the consumer is a chimera that industrial capitalism, which produced goods and learned to create needs, could never have dreamed of.

Further, the fact that adolescents constitute one of the largest workforces draws a horizon in which what Marx in his Fragment on Machines[3] called the general intellect, far from giving rise to a socialization of accumulated technological and cognitive capital, has the opposite effect. There is no possible emancipation of social time from the obligation of salaried work, since it is impossible to use TikTok, Twitch, Instagram, Facebook or any other social network without generating capital, without working. This overlap between life and work that characterizes our performance society obliterates any possibility of leisure without consumption, without Otium ludens in the Latin sense.[4] However, it would be naïve to criticise TikTok without recognising its importance as a phenomenon and the expertise of its algorithmic architecture in representing, in ways that other narratives fail to offer 21st-century users, the emotional tension field, and particularly the identitary constellation of adolescence.

What Deleuze, in his 1978 course on Spinoza, called the affectus, “the continuous variation of someone’s force of existing”,[5] is something that the transition between video and video, the socialization that each response entails, eternalizes to an unusual extent. People dance, on TikTok. They laugh. They cry. They sing. They want to please. It is a sublimated projection of our own self, but—unlike Instagram—the floodgates are also open to the exteriorization of states of prostration, crying or euphoria, though almost always brought back to a positive attitude. Emotions are released as a consequence of the “overload of ourselves” that we suffer in a society from which collective rituals and the symbolic dimension of the images that articulate socialization have been stripped. For adolescents in particular, TikTok is a community space capable of catalysing the fantasy of a relational fabric that protects their own identity and the construction of their own emotions.

The accumulation of followers, the affirmation of Likes and the possibility of appearing in the same digital space as TikTokers who are emerging as reference models offer the full celebrity effect, its phantasm generated by the accumulation of personal emotional capital linked to self-exposure. The real driving force of the app is gaining a popularity that coincides with the screen as a projection of the confessional space of the bedroom. The hypervisibilization of intimacy also contributes to a transformation of the social evolution of the psychosphere in which desire, which always has a cultural genesis, is invested with a frenetic or, to use Guattari’s term, chaosmotic quality: [6] TikTok is not a place to share your own sexuality; it is an externalized organ of enjoyment, which explains the aghast reaction of adolescents when its use is limited by an adult.

The sexual capital of late modernity, or how to put an end to Galton

As Slavoj Žižek pointed out, images do not respond to our desire; they do not teach us what to desire, but how to desire.[7] Since, in 1907, Julius Neubronner devised a system to attach a camera to the chest of a pigeon, our gaze has not ceased to be externalized. First, it was aeroplanes, then, as Harun Farocki warned,[8] video surveillance and the missiles launched during the Gulf War, then networks and algorithms. Today, although we are the ones recording ourselves and giving our facial expressions to applications like FaceApp or the locking system of our smartphone (the culmination of Bertillon’s classification system, his “spoken portraits”), our gaze is completely externalized. From the pioneering aerial view of Kronberg taken by Neubronner’s pigeon to the Chinese Dragonfly Eye surveillance system, the image has become automated to the point of losing the two principles of its own definition: produced by a human being to be seen by a human being.

From cognitive automation, we have moved to eliminating any blind spots, as shown by the expansion of Axon and AnyVision’s surveillance systems. If, in the public space, hypervisibility is the norm, as the Chinese social credit system shows, social networks promote a neuro-totalitarianism based on the patrimonialization of Eros and collective time. Given the TikTok sequence, the experience of time is simultaneously articulated and destroyed—that is, at each moment, time is directed through stories and pseudo-rituals, and simultaneously destroyed by the appearance of the next video. The coincidence of ritual and play on social networks causes a fracture in their place in time that is accompanied by a phenomenon of capitalization. We accumulate capital not only to survive but also as a way to confound death, as a fantasy of survival. And we accumulate sexual capital, as Eva Illouz and Dana Kaplan point out, as a result of an anomalous interference between the realm of the economy and the long sociocultural process that, for decades, led to the identification of sexual freedom with self-realization.[9]

The primary eroticization running through social networks shows how sexuality, once freed from the control of institutions like the family or the church, has ceased to be the space of unassailable freedom. In a society that substitutes the control and biopolitics described by Foucault with self-exploitation and performance,[10] personal attractiveness becomes part of mercantile logic, in what Illouz calls “scopic capitalism”.[11] As Catherine Hakim points out, erotic capital has always existed in different forms, and it is necessary to banish a binary reading of empowerment and domination, since it is a more complex phenomenon.[12] The fact that sexual freedom is the normative basis of capitalism does not prevent inequality, or even sexual declassification, from becoming the order of the day, with the hypervisibility of networks such as TikTok, or OnlyFans, with its erotic plus.

From the behavioural point of view, furthermore, according to Hito Steyerl, the choreographic language projected by social networks generates a domestication of gestures, a discipline of bodies in which empowerment coexists, without contradiction, with submission to the Market.[13] A phenomenon like DeepTok, with its grim images and videos of animal torture, prompts a reaction of the collective sensibility against TikTok’s all-encompassing capacity to assimilate. Likewise, as in any other social network, the over-representation of certain realities means the under-representation of situations like Chinese repression against the Uighur Muslim majority ethnic group or the use of “re-education camps”, something that teenager Feroza Aziz managed to denounce in November 2019 using a makeup video, immediately removed from TikTok.

Persistences of vision

One of the most equivocal figures in the history of science, Sir Francis Galton, eugenicist and inspiration for the racial hygiene policies of Nazi Germany, created a system of composite photography in which he tried to establish a series of figurative canons—with labels such as “genius”, “Jew” or “criminal”—that allowed him to predict people’s behaviour by means of a pictorial statistic created from superpositions into which he tried to read natural classes. Due to the capriciousness in the definition of these types, based on his prejudices, the images had a spectral aspect, the sum of thousands of portraits in which Bertillon’s biometric segmentation was transformed into non-existent chimeras, blurred beings that look out at us waiting for a match with traits drawn to satisfy a statistical anatomy.

Faced with the machine learning that governs TikTok, and its circulation of sexual and emotional capital, we might wonder whether, for its artificial intelligence, we are not the citizens whose future Galton was trying to capture: docile bodies consumed in a perpetual exploitation of the self. But no science leaves the ego intact, as Donna Haraway points out in The Persistence of Vision.[14] Today, a true science of images must be based on TikTok’s mapping of affects to consider the opportunity of reconquering an emotional language expropriated by the attention economy. As opposed to Aristotle’s idea that pathos, unlike logos, is deprived of action and will, a long line of thinkers from Nietzsche to Judith Butler has pointed out that the exposure of emotion, of affect, is movement, openness and power, which leads us to wonder not only who we are to TikTok and how it models our emotions, but also whether there is any way to reprogram it.


[1] Bertillon cites this phrase in Identification anthropométrique, instructions signalétiques (Imprimerie Administrative, 1893), based on the epigraph that anatomist Jean Louis Hippolyte Peisse takes from Paul Richer’s Traité d’Anatomie artistique (Paris: Plon, 1890): “L’Oeil ne voit dans les choses que ce qu’il y regarde et il ne regarde que ce qui est déjà en idée dans l’esprit.”

[2] “Let every eye negotiate for itself”, Shakespeare, William, Much Ado about Nothing (1599) Act II, scene I (1978: The Complete Works (The Alexander Text) 18th edition, London, Collins, p. 143).

[3] Marx, K.: Elementos fundamentales para la crítica de la economía política (Grundrisse) 1857-1858, vol. 2, Mexico. Siglo XXI, 1972, pp. 216-230.

[4] See: Guzzo, P.G. and Bonifacio, G. (ed.). Otium Ludens. Stabiae, cuore de l’impero romano. Castellammare di Stabia: N. Longobardi, 2007; Fumaroli, M. París-Nueva York-París. Viaje al mundo de las artes y de las imágenes. Diario de 2007 a 2008. Barcelona: Acantilado, 2010.

[5] Deleuze, G., En medio de Spinoza. Buenos Aires: Cactus, 2006.

[6] Guattari, F.: Caosmosis, Manantial, 1996.

[7] In: Fiennes, S., Žižek, S.: The Pervert’s Guide to Cinema (2006).

[8] Farocki, H.: Eye Machine I and II (2001, 2002); “Der Krieg findet immer einen Ausweg”, Cinema 50, Essay, Schüren Verlag, 2005.

[9] Illouz, E. and Kaplan, D.: El capital sexual en la modernidad tardía. Barcelona: Herder, 2020.

[10] Foucault, M.: Histoire de la sexualité. Paris: Gallimard, 1976; Surveiller et punir. Paris: Gallimard, 1975; Naissance de la biopolitique. Cours au Collège de France 1978-1979. Paris: Gallimard-Seuil, 2004.

[11] Illouz, E.: El fin del amor. Una sociología de las relaciones negativas. Buenos Aires: Katz, 2020.

[12] Hakim, C.: “Erotic capital”, European Sociological Review 26.5, 2010.

[13] Steyerl, H.: “¡Corten! Reproducción y recombinación”, in: Los condenados de la pantalla. Buenos Aires: Caja Negra, 2014, pp. 185-200.

[14] Haraway, D.: Primate Visions: Gender, Race, and Nature in the World of Modern Science. New York: Routledge, 1989.





Remember the Human

Reddit, the interface between digital niches and the mainstream, is one of the battle grounds for debates on freedom of expression and toxicity in the era of the great algorithm.


Navy children boxing | Harris & Ewing, Library of Congress | No known restrictions on publication

As it has grown, Reddit has taken decisions on the moderation of toxic content. The first attempts at minimal moderation caused hostile reactions among users, although over the years Reddit has achieved an alternative, decentralised moderation model. It is a more horizontal, cooperative and human model than that of other platforms, and the result is the closest thing to a space where it is possible to converse with true freedom.

The experiment had very clear rules. A collaborative canvas of 1000×1000 pixel squares, which over a period of 72 hours any and all Reddit users could paint however they chose, pixel by pixel, with each user allowed to make one change every five minutes. The title: Place. It was April 2017, and of the 250 million “redditors”, over a million took part. What could possibly go wrong?
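
Those rules fit in a few lines of code. Below is a minimal sketch of the canvas and its cooldown using the figures above (1000×1000 pixels, one change per user every five minutes); the data structures and function names are assumptions made for illustration, not Reddit’s implementation.

```python
import time

CANVAS_SIZE = 1000          # the 1000×1000 pixel canvas
COOLDOWN = 5 * 60           # one placement per user every five minutes, in seconds

canvas = [[None] * CANVAS_SIZE for _ in range(CANVAS_SIZE)]
last_placed = {}            # user -> timestamp of their most recent pixel

def place_pixel(user, x, y, colour, now=None):
    """Let a user colour one pixel, enforcing the five-minute cooldown."""
    now = time.time() if now is None else now
    if now - last_placed.get(user, -COOLDOWN) < COOLDOWN:
        return False        # still cooling down, placement rejected
    canvas[y][x] = colour
    last_placed[user] = now
    return True

print(place_pixel("alice", 10, 20, "#ff0000", now=0))    # True: first placement
print(place_pixel("alice", 10, 21, "#00ff00", now=60))   # False: only a minute has passed
print(place_pixel("alice", 10, 21, "#00ff00", now=301))  # True: cooldown has elapsed
```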

Reddit is a massive network of user-created communities of interest dedicated to hundreds of millions of topics, ideas, memes, places, ideologies, sensations and debates. In 2017, when the team behind Reddit decided to launch the experiment Place, the question (and reason why the management were somewhat nervous) was whether the canvas would be filled up with swastikas and insults.

The result was a dispersed, mutating microcosm of factions, alliances, creation and destruction, of communities battling it out for the same space, incursions, counter-incursions, toxic trolls, resistance fronts… In several areas of the canvas swastikas emerged, only to be wiped out by armies of users or turned into the Windows 95 logo just minutes later.

The final result is a visual metaphor of what Reddit aspires to be – a flag fragmented into a thousand coexisting pieces. Place was a performance art representation of the solution offered by Reddit to the problems of the dark, corporate, algorithmic web of the Trump era. The canvas showed that a living ecosystem is one in which everyone enjoys equal conditions and freedom of expression and where community control puts a stop to hate.

But can such a model really work?

A hippy-yuppie fusion

Reddit is the result of the wave of digital entrepreneurship that followed the dot-com crisis. The idea of a news aggregator came up during the first startup bootcamp run by Y Combinator, the startup incubator co-founded by the tech guru Paul Graham. Graham’s idea was to offer young entrepreneurs from universities in the US just enough money to be able to develop their projects, protecting them from the major investment funds that had overrated and inflated so many web projects during the tech bubble that peaked around the year 2000.

Paul Graham’s startup school was launched with the aim of creating a new, fresher and more unrestrained approach to offering pragmatic solutions to specific problems. Examples of alumni companies include Dropbox, Twitch and Airbnb. What these digital projects have in common is that they combine an eccentric, revolutionary and easygoing image with an underlying philosophy that is decidedly capitalist. This ambiguous, paradoxical combination is a direct cultural byproduct of what the theorists Richard Barbrook and Andy Cameron have termed the Californian ideology.

The Californian ideology describes the social and cultural environment in which Silicon Valley as we know it emerged. According to the authors of the term, it combines the liberal and eccentric spirit of 1960s counterculture with a technological determinism focused on capitalist and commercial production. The coming together of hippy and yuppie. Revolution and capitalism. The digital ecosystem offers the promise of freedom, assuring us that any problem can be solved with an app, and that forging a better world is synonymous with becoming a multimillionaire – without a tie, of course. And all thanks to creative ingenuity… and the ingenuity of investors!

Reddit is the prototype of this kind of cool entrepreneurship. Its founders, Steve Huffman and Alexis Ohanian, launched it in 2005 when they were fresh out of university, before adding new features to make it more attractive. The idea was simple – a website where anyone could post interesting links and receive upvotes or downvotes. The more upvotes a link received, the more visible it would become.
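
That vote-driven visibility is easy to sketch. The toy ranking below scores a link by its net votes on a log scale plus a recency nudge, loosely in the spirit of Reddit’s publicly documented “hot” sort; the exact constant and the example data are illustrative assumptions, not the production code.

```python
import math
from datetime import datetime, timezone

def hot_score(upvotes, downvotes, posted_at):
    """Toy ranking: net votes on a log scale plus a recency nudge, so heavily
    upvoted links rise but even a huge score eventually yields to newer posts."""
    net = upvotes - downvotes
    order = math.log10(max(abs(net), 1))
    sign = 1 if net > 0 else -1 if net < 0 else 0
    return sign * order + posted_at.timestamp() / 45000  # newer => larger timestamp

posts = [
    {"title": "old but huge", "ups": 5000, "downs": 100,
     "at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"title": "new and modest", "ups": 80, "downs": 5,
     "at": datetime(2024, 1, 3, tzinfo=timezone.utc)},
]
posts.sort(key=lambda p: hot_score(p["ups"], p["downs"], p["at"]), reverse=True)
print([p["title"] for p in posts])  # the newer link edges ahead despite fewer votes
```

The log scale means the five-thousandth upvote counts for far less than the fiftieth, and the recency term keeps the front page turning over; both details go beyond what the paragraph describes and are assumptions of this sketch.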

Reddit began to attract users with profiles similar to those of its creators – young university students with an interest in the worlds of videogames, programming, science fiction, etc., with a nerdy and edgy sense of humour, who welcomed a space where they could find links about things of interest to them.

Huffman and Ohanian decided to divide the page into subpages, which could be created by users to generate microcommunities on their specific topics of interest, with no limits. And this is how subreddits were born. Quite by chance, they had invented a system with the ingredients needed to accommodate hundreds of thousands of small digital tribes with their own icons, languages and traditions. A space for everyone with peculiar tastes, for authenticity and niche interests. A heterogeneous, chaotic and raw place in times of diaphanous homogenisation.

And that’s what Reddit is today – a site that is a myriad of sites. A space that is hard to explain or define.

If you like cute animals, you can join r/aww; if you’re from Ontario, there’s r/Ontario; if your thing is cringeworthy antics, you can check out r/Cringetopia; if your plans for revenge always occur to you too late, you can amuse yourself with the stories on r/PettyRevenge, and if you’re totally hooked on the latest Call Of Duty, then obviously there’s r/CODWarzone. In r/AskParents you can find mums and dads who share parenting advice, at r/EarthPorn you can see photos of incredible places from around the planet, and if you can’t handle too much excitement, then r/MildlyInteresting is the place for you. In r/nonononoyes there are scenes that seem to be heading towards catastrophe but turn out well in the end. Then there’s r/Transpassing, a community of people who share tips and advice on gender transition. r/jazz, r/Catalonia, r/mademesmile. r/nextfuckinglevel, r/oldpeoplefacebook, r/wholesomememes… the list isn’t endless, but it might seem that way.

More than a few toxic spaces have also emerged on Reddit, such as r/Jailbait, where users shared erotic images of teenagers, r/FatPeopleHate, a community dedicated to humiliating overweight people, and r/incels, a community of misogynistic men that incited violence against women. Dozens of racist, antisemitic, sexist and far-right communities have found a home and a voice on Reddit.

In an attempt to protect and justify the platform, one of its founders, Steve Huffman, described it as a place for “open and honest” conversations, with “’open and honest’ meaning authentic, meaning messy, meaning the best and the worst and realest and weirdest parts of humanity”.

Freedom to harass?

As it has grown, Reddit has taken decisions on the moderation of toxic content. It has deleted certain hate communities on different occasions, although its approach to dealing with trolls, conspiracy theories and harassment has varied depending on the moment and the climate.

When in 2011 Reddit decided to ban r/Jailbait, the company’s then CEO, Yishan Wong, excused the decision as if it had committed a sin, clarifying that it had only been deleted because it shared content that could be considered illegal under US law. “We stand for free speech”, he said at the time, “we are not going to ban distasteful subreddits”. At the beginning of the decade, this idea of absolute freedom of expression was a strong idea shared by all the social media sites, which were expanding significantly at the time. It was the time of the great digital transformations, the Obama era, in which the general feeling was that 2.0 really was democratising the world. The CEO of Twitter said: “We are the free speech wing of the free speech party”.

Social media platforms had to be “neutral”, which at that time was synonymous with a passive attitude towards any type of user behaviour. When in 2012 Reddit announced that it would ban the sharing of suggestive or sexual content involving minors on the platform, the most upvoted comment on the announcement said: “Good job, mods. You’ve now opened up yourselves to outside influences over what content can and cannot be posted to Reddit”.

Two years later, when Ellen Pao took the decision to close the five most toxic subreddits, which included r/FatPeopleHate and r/shitniggerssay, a significant proportion of users reacted with violent indignation against Pao, sharing images of swastikas linked to her name so that they would appear whenever someone did a Google search for her. Many of them threatened to switch to a platform with no censorship.

Users reacted with hostility towards the first attempts at minimal moderation. Why were they so annoyed about the closure of forums that pushed humiliation, misogyny and racism? Why did they feel so under attack?

The crumbs of cyberutopia

The origin of this specific idea of unconditional freedom harks back to the cyberutopian discourse of the early years of the internet.

In 1996, the digital activist and Grateful Dead lyricist John Perry Barlow began his Declaration of the Independence of Cyberspace like this:

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.

In this declaration, penned in response to the approval of the United States Telecommunications Act, Perry Barlow epically declared that internet users were creating a world free from privilege and prejudice due to race, economic power, military force or station of birth… emulating the foundational tone of Thomas Jefferson’s Declaration of Independence. In this new world, he said, anyone could express their beliefs without fear of being silenced, “no matter how singular”.

This libertarian, poetic stance has had a practical application in many projects that have tried to position the internet as a global alternative to capitalism. Cyberutopia has played out into fights for the free circulation of digital contents, free access to information, free communication between individuals, digital neutrality, the digitalisation of democracy and an autonomous digital currency, among many other things.

However, as pointed out by the critical thinker Evgeny Morozov, the cyberutopian project is essentially a capitalist project with North American roots which, rather than offering a real alternative to capitalism, is nothing more than capitalism in its most complete and developed state.

Despite their limitations, cyberutopian ideas have crystallised in the ethics of digital communities, which have realised that they need to defend their bastions of freedom of expression from the establishment and the big traditional corporations. This idea of a freedom of expression under permanent threat from external forces has filtered through into the worldview of platforms like Reddit. Who is this hostility aimed at? Who would be capable of destroying their space?

The usual suspects are the traditional media, which mock and fail to grasp the dynamics of the internet, the corporations that want to exploit the authenticity of communities for economic gain… and the new users, who trample the grass and breach the community’s netiquette and customs. This last threat is known as “Eternal September”.

The idea of “Eternal September” refers to the huge influx of new users that appeared on the platform Usenet in September 1993, the moment when the internet service provider AOL opened the floodgates, giving a whole wave of newcomers access to this discussion system, overwhelming the small community of veteran users and overturning the social norms they had created. Since then, the term has been used to refer to the fear of a social invasion of new, often uninformed users who undermine and damage the harmony of digital platforms that have their own dynamics, language and balance.

As the internet of the big platforms continues to develop, fear of corporate control and fear that the Eternal September will wash away its collective philosophy explain why many Reddit communities react with hostility to the influence of the establishment and reject any type of external intervention.

This anti-establishment attitude in favour of personal freedom, exploited by certain far-right groups, is being used to nourish a new conservative populism that reworks the old concepts of cyberutopia to fight against what they call the dictatorship of political correctness.

But what exactly is this freedom of expression that they defend tooth and nail? What kind of place would the internet be without any kind of human intervention or moderation?

Freedom, for whom?

One case that illustrates the confusion between freedom of expression and a lack of rules is that of Etika. Etika was an African American streamer who broadcast himself playing all types of video games on YouTube and Twitch. Someone suggested that he play the popular video game Minecraft, but on one of its “anarchy servers”. Minecraft has hundreds of thousands of public servers on which players from around the world explore the map, build bases and make and dismantle communities. Its anarchy servers are characterised by having no moderation system, rules or administration team of any kind. All the players can do what they like without fear of being banned.

A few minutes after Etika joined, the server’s chat began to fill up with racist insults and comments wishing him dead, and wherever he went, users built monuments in the form of swastikas. Etika, who had talked on several occasions about his mental health problems and the fact that he was going through a difficult period, committed suicide just a few weeks after entering the server. After his death, the same racist jokes kept appearing on the chat, as if nothing had happened.

Contrary to what John Perry Barlow asserted, the internet does not offer an alternative to the real world, but merely a long and often caricatured shadow of it. All the structural problems of the physical world are reproduced and amplified in the digital world – it doesn’t resolve them as if by magic.

A clear example of how this occurs on a structural basis is explained by Safiya Umoja Noble in Algorithms of Oppression, in which she argues that search engines like Google reinforce pre-existing social ideas.

She explains how, when she did a search with the words “black girls”, hoping to find fun activities to do with her African American daughters, many of the results on the first page were pornographic videos. Similarly, the search results for racialised people brought up negative, humiliating comments, unlike those for white people. After publishing her findings, Google decided to manually correct the results.

As described in a conversation on Healthy Gamer GG, the mental health channel for streamers and gamers, the daily experience of many female streamers on YouTube and Twitch is a round of insults, disdain and sometimes even death threats. For them, freedom of expression involves creating moderation systems that ensure their safety in male-dominated environments, where they are often seen as a threat, and constant efforts to highlight their work and not allow sexist comments and reactions to be normalised.

To put it another way, real freedom of expression on the internet needs clear rules, the banning of toxic users, prevention and erasure of harmful content and constant monitoring of the health of the community. But who should these tasks fall to?

Social corporate deplatforming

After the organised attack on the US Capitol, many of the big platforms, like Twitter and Google, decided to ban Donald Trump from their corporate spaces and issue decisive declarations of intent. Amazon and Google also cut off platforms used for the organisation of hate communities, such as Parler.

“We won’t allow our platform to be used to express attitudes that promote violence,” they said. After years of turning a blind eye and looking the other way when Trump breached the rules and threatened or insulted other users, his synchronised deplatforming when he was already on his way out of the White House smelt more of marketing and damage control than of firm corporate ethics.

The problem with purely corporate moderation mechanisms is that they are dysfunctional and arbitrary by nature. It is corporate, economic and political interests that determine what decisions are taken and when. What’s more, the task of controlling all the minor details of user activity is often outsourced to exploited workers who are traumatised by violent messages and images and who sign confidentiality agreements on the real moderation criteria.

The moderatocracy of Reddit, an alternative model

In 2020, Reddit jumped onto the bandwagon and changed its policy to make it more restrictive, more inclusive and more proactive with regard to controlling its conversations. The changes included banning communities that promoted hate towards people on the grounds of their identity or vulnerability. This announcement was followed by the closure of more than two thousand communities, including the subreddit that helped Trump win the 2016 election, r/the_donald, and r/ChapoTrapHouse, the subreddit of the left-wing podcast.

All in all, over the years, Reddit has achieved an alternative moderation model that considers the rules of the platform and the community rules of each subreddit. This decentralised model spreads out the task of moderation between volunteer users, the Reddit team and different democratic social self-control systems, such as referendums to update the rules of a certain subreddit or mechanisms for anonymous reporting and complaints.

To help them in their work, Reddit stays in direct contact with moderators and offers them psychological support. In addition, a lot of communities have created subreddits dedicated wholly to detecting hateful discourse and toxic subreddits, so they can be quickly reported before they have time to grow.

The Reddit model isn’t perfect. It can generate mini oligarchies of all-powerful moderator users, and the voluntary work of the moderators can be used by the platform to diminish its responsibility for what users say or do. But this combination of corporate responsibility, community moderation carried out by volunteers and structural support is at least a more horizontal, cooperative and human model than that of other platforms.

The first rule of Reddit’s content policy, as on many other sites and services nowadays, is this:

Remember the human. Reddit is a place for creating community and belonging, not for attacking marginalized or vulnerable groups of people. Everyone has a right to use Reddit free of harassment, bullying, and threats of violence. Communities and users that incite violence or that promote hate based on identity or vulnerability will be banned.

A somewhat less seductive statement than the calls for freedom of the cyberutopians of the 90s, but in view of the experience of the last 30 years, the result is closer to a space where it is possible to converse with true freedom, and where swastikas are erased by users, not algorithms or opaque platforms, even if only to turn them into the Windows 95 logo.





Connective nightmare at the call centre

For both ends of the telephone line, the call centre means giving up empathy and understanding.


Women working at the U.S. Capitol switchboard. Washington, D.C. 1959 | Marion S Trikosko, Library of Congress | No known restrictions on publication

Far from offering a place for meeting and genuine exchange between two parties, the call centre seems designed to thwart any attempt at communication. Its logic forces us to renounce dialogue and submit to strict connective protocols. In this sense, the call centre is a prime example of what Franco “Bifo” Berardi has described as a “displacement of conjunction towards connection”.

“What exemplifies the failure of the neoliberal world to live up to its own PR better than the call centre?” asks Mark Fisher in Capitalist Realism. Is There No Alternative? (Zero Books, 2009): “The closest that most of us come to a direct experience of the centrelessness of capitalism is an encounter with the call centre, […] you increasingly exist in two, distinct realities: the one in which the services are provided without hitch, and another reality entirely, […] a world without memory, where cause and effect connect together in mysterious, unfathomable ways, where it is a miracle that anything ever happens, and you lose hope of ever passing back over to the other side.”

Fisher draws a parallel between the call centre and the sinister telephone system that Kafka describes in The Castle. “There is no specific telephone connection with the castle”, the warden explains to the character in the novel; “no exchange that puts our calls through; when you call someone in the castle from here, it rings on all the telephones in the lowest departments there, or rather it would ring on all of them were it not for the fact, which I know for certain, that on nearly all of them the bell is switched off. […] And that’s very understandable, after all. Who has any right to ring in about his private little troubles in the middle of the most important jobs, which are invariably being done in a tearing hurry?”

The telephone system that, in principle, should serve to communicate with the administrative authorities of the castle turns out to be a device designed to thwart any attempt at communication. Something very similar happens when we enter the labyrinth of some call centres. Because, to paraphrase the castle warden, who considers themselves entitled to make a complaint or try to clarify some minor detail to the company that provides a service? Who dares to interrupt the tearing hurry of a big corporation with complaints or questions of any kind at all?

“The call centre experience,” maintains Fisher, “distils the political phenomenology of late capitalism: the boredom and frustration punctuated by cheerily piped PR, the repeating of the same dreary details many times to different poorly trained and badly informed operatives […] Anger can only be a matter of venting; it is aggression in a vacuum, directed at someone who is a fellow victim of the system but with whom there is no possibility of communality.”

ABSA Bank Call center. Johannesburg, 2008 | CC BY-SA Media Club

This lack of communality, so characteristic of the call centre, is a perfect example of the anthropological transformation that Franco “Bifo” Berardi describes in And: Phenomenology of the End (Semiotext(e), 2015). The consequence of this mutation, resulting from the current technological transition, is none other than “the dissolution of the modern conception of humanity and the extinction of the humanist man or woman”.

But what exactly is this mutation? Ultimately, it is an inability to empathize and a loss of “the ability to perceive the other’s body as a living extension of my own body”. Berardi attributes this neutralization of empathy and the affective to a “displacement of conjunction towards connection”. Whereas conjunction is “a concatenation of bodies and machines that can generate meaning without following a pre-ordained design, nor obeying any inner law or finality”, connection, conversely, is “a concatenation of bodies and machines that can generate meaning only following a human-made intrinsic design, only obeying precise rules of behaviour and functioning”. Conjunction, writes Berardi, “entails a semantic criterion of interpretation. (…) Connection instead requires a criterion of interpretation that is purely syntactic.” In other words, conjunction is empathic, whereas connection is purely operational.

This is why the labyrinth of the call centre becomes even more Kafkaesque and desperate when our call is answered by an automatic voice recognition system. When this happens, we have to adjust our words to a strictly coded communication, and the possibility of solving our problem depends exclusively on whether or not, on the other end of the line, it is considered a real problem. As Berardi points out, it is the empathy of the conjunctive plane that enables the appearance of a previously non-existent meaning. In the connective mode, conversely, “each element remains distinct and interacts only functionally”.

Unable truly to listen and understand us, the virtual agent of the call centre can only answer a limited number of questions and incidents, so if the reason for our call does not correspond precisely to any of them, the communication may be terminated regardless of whether or not our problem has been resolved. This is what Éric Sadin, in his book L’Intelligence artificielle ou l’enjeu du siècle: Anatomie d’un antihumanisme radical (Éditions L’Échappée, 2018), calls the “menacing power of technology”. Increasingly, writes Sadin, we bend to “protocols designed to bring about inflections in each of our acts” and submit to the equations of a series of artefacts that have the “primary objective of responding to private interests and establishing an organization of society based on primarily utilitarian criteria”.
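
The “strictly coded communication” described here can be caricatured in a few lines: the caller’s words either match one of a handful of pre-programmed intents, or the exchange simply ends. The intents, phrases and replies below are invented for illustration.

```python
# A caricature of a voice-menu agent: only pre-coded "intents" exist, and
# anything that falls outside that closed list simply cannot be said to it.
INTENTS = {
    "invoice": "Your latest invoice has been sent to your email address.",
    "coverage": "Please check the coverage map on our website.",
    "cancel": "A sales representative will call you back shortly.",
}

def answer(caller_utterance):
    words = caller_utterance.lower()
    for keyword, reply in INTENTS.items():
        if keyword in words:
            return reply
    return "Sorry, I did not understand your request. Goodbye."  # call terminated

print(answer("I want to check my latest invoice"))
print(answer("Your engineer flooded my kitchen and nobody will help me"))
```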

The call centre continues to be one of the main testing fields for the predictive and interpretive capabilities of artificial intelligence. If, as Fisher suggests, public relations represent “the point of view of neoliberalism”, the robotization of the call centre implies precisely the dehumanization of the meeting place between salesperson and customer (or, if you prefer, to use a terminology more in keeping with the times, between provider and user). Public relations were traditionally based on a form of conjunctive, empathic communication between two parties; now, however, with the proliferation of systems such as speech recognition, they seem to work increasingly according to the rules of a connective mode and for strictly functional purposes.




On the other end of the line

However, the connective nightmare of the call centre is not just about the experience of the person making the call—far from it. When it is not a machine that answers, the voice we hear on the other end of the line is that of a person who is much more subjected than we are to the strict connective protocols that govern the operation of telephone service switchboards. That person, or the one sitting next to them, could be the same person who called us yesterday afternoon on behalf of some other company to offer us a service. It is likely that, on that occasion, the call was ended rather abruptly on our part. This fact highlights the perverse way the voice circulates in the call centre and explains why it so often becomes a place of misunderstanding and frustration at both ends of the line.

Although Mark Fisher refers to the people who work in the call centre as “fellow victims”, it is important to realise that it is precisely these people who bear the brunt of this submission to a form of strictly connective communication. After all, nothing can be more dehumanizing than being forced to abandon empathy in order to function in a non-human way comparable to that of a virtual assistant. A few years after the publication of Capitalist Realism. Is There No Alternative?, in his compilation of articles Ghosts of My Life: Writings on Depression, Hauntology and Lost Futures (Zero Books, 2013), Fisher compared the call centre worker to a “banal cyborg, punished whenever they unplug from the communicative matrix”.

“Somehow you become an appendage of the computer, but with a voice,” writes an anonymous call centre worker in the pages of ¿Quién habla? Lucha contra la esclavitud del alma en los call centres (Tinta Limón, 2006). Published by Colectivo Situaciones, this book brings together the testimonies of various call centre workers in Argentina and takes an inside look at the dark reality of these workplaces, which continue to proliferate around the world. As Jamie Woodcock and Enda Brophy analyse in detail in their respective books Working the Phones: Control and Resistance in Call Centres (Pluto Press, 2017) and Language Put to Work: The Making of the Global Call Centre Workforce (Palgrave Macmillan, 2017), the call centre is a paradigmatic example of the new forms of work promoted by neoliberal business formulas typical of platform and/or surveillance capitalism.

Beyond the connective discipline to which their employees are subjected, call centres are also spaces of precariousness and exploitation. During the worst moments of the coronavirus pandemic, call services not only kept working, they had to function at full capacity, becoming major hotspots of contagion. The call centre represents the dark side (or, rather, one of the many dark sides) of digital economic expansionism. It is a photograph of a call centre that illustrates the cover of Ursula Huws’s book Labor in the Global Digital Economy: The Cybertariat Comes of Age (NYU Press, 2014), in which the author examines the notion of the “cybertariat” and the “destruction of occupational identity” within the framework of an economic model based on information and knowledge.

Converted into linguistic cogs in the “communicative matrix” of “iCapitalism” (Huws), call centre employees are the closest thing to production line workers in the post-Fordist era, constantly monitored by supervisors who, though remotely and silently, play the role of the factory foremen. Unlike the phone system that Kafka describes in The Castle, the call centre lines are always connected. If we get past the dissuasive preambles with which each call begins (the unbearable hold music, recordings about services or offers we don’t need, lists of options that don’t correspond to our problem or the classic passing of our call from one department to another), we end up talking to a person who most likely works under enormous psychological pressure to try to meet a series of unattainable objectives in exchange for meagre bonuses.

If, as Fisher writes, the call centre experience reveals the failure of neoliberalism, it is precisely because of its twofold collateral effect at both ends of the telephone line. Both the caller and the person who answers the call are condemned to give up empathy, typical of conjunctive communication, and submit to the rules of strictly syntactic, connective communication. The impossibility of “getting through” is reciprocal and invariably leads to a complete disempowerment of voice and word. In this sense, the call centre is the absolute reverse of what Brandon LaBelle, in his book Sonic Agency: Sound and Emergent Forms of Resistance (MIT Press, 2018), calls “sonic agency”—that is, the possibility of establishing relationships between subjects and bodies with the aim of creating new forms of resistance and negotiation of reality using listening, voice and sound. Perhaps the most convenient option at this point (bearing in mind the depth and interest of the notion proposed by LaBelle) would be to address this issue more specifically on another occasion.





Can AI create a more just world?

If we know that algorithms may have biases that can harm individuals or groups, why do we let them make decisions?


A man on a scale, 1923 | Harris & Ewing, Library of Congress | No known restrictions on publication

Whether due to the way they have been designed or the way they have been trained, many algorithms have biases. But we humans, too, have lots of prejudices that are often very difficult to identify and also affect our decisions. Detecting in which fields the inclusion of automated systems is most possible and creating fair, transparent algorithms is crucial to generating more just solutions.

In August 2020, secondary school students in Britain took to the streets to demonstrate against the automated system that the UK’s Office of Qualifications and Examinations Regulation (Ofqual) had used to award their exam grades. As the students had missed classes, teachers were asked to provide an estimated grade for each student and a ranking relative to their classmates in the school. Ofqual’s algorithm also took into account the school’s performance in each subject over the previous three years. The idea was that the results should follow a similar pattern to previous years and that 2020 students should not be at a disadvantage. But the algorithm ended up lowering the teachers’ evaluations in 40% of cases, because the calculation gave less weight to each student’s history and the teachers’ assessments and more to external factors, such as the quality of the school or its past results. This led to visible discrimination against ethnic minorities from poorer regions: brilliant students from schools with fewer resources saw their marks downgraded, with the direct consequence of not being able to go on to university.
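
A drastically simplified sketch (not Ofqual’s actual model) makes the mechanism, and its unfairness, visible: each student’s own estimate barely matters, because their within-school rank is mapped onto whatever grades the school obtained in previous years. The grade lists, names and function below are invented for illustration.

```python
def standardise(students, school_history):
    """Assign grades by mapping each student's within-school rank onto the
    school's historical grade distribution (a crude sketch, not Ofqual's model).
    `students`: list of (name, teacher_estimate) ordered by teacher rank, best first.
    `school_history`: grades the school obtained in past years, best first."""
    n = len(students)
    results = []
    for rank, (name, estimate) in enumerate(students):
        # Pick the historical grade at the same relative position in the cohort.
        hist_index = round(rank / max(n - 1, 1) * (len(school_history) - 1))
        results.append((name, estimate, school_history[hist_index]))
    return results

# A strong student at a historically weak school: the teacher estimates an A,
# but the school's past results cap what the model will award.
weak_school_history = ["B", "C", "C", "D", "D", "E"]
students = [("Amina", "A"), ("Ben", "B"), ("Cora", "C"), ("Dan", "D")]
for name, estimate, awarded in standardise(students, weak_school_history):
    print(f"{name}: teacher estimated {estimate}, model awarded {awarded}")
```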

Artificial intelligence is effective at predicting broad patterns and relationships from big data, as well as at streamlining processes. To do so, it relies on algorithms that are not always neutral, either because the data with which they are trained contain biases, or because biases have been added during their design. As David Casacuberta, lecturer in Philosophy of Science at the UAB, reminds us, the problem is that automated systems, if fed with biased examples, will end up reproducing and even amplifying those same biases. “If, in a country such as the United States, people with African ancestry have much higher possibilities of ending up in prison without bail, and we train a neuronal network with those data, the algorithm ends up replicating the same types of biases.” Proof of this is the research that the independent non-profit newsroom ProPublica conducted over several years, which showed that the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm systematically assigned black and Hispanic defendants a higher probability of reoffending than white defendants.
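
Casacuberta’s point can be reproduced with a few lines of synthetic data: if the labels we train on already encode a harsher treatment of one group, a perfectly ordinary model learns and perpetuates that disparity. The data, model and numbers below are invented for illustration; this is not the COMPAS system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical" data: the underlying risk is drawn from the same
# distribution for both groups, but the recorded outcomes (our training
# labels) were produced by decisions that treated group 1 more harshly.
group = rng.integers(0, 2, n)                    # protected attribute: 0 or 1
risk = rng.normal(0, 1, n)                       # identical distribution for everyone
label = (risk + 0.8 * group + rng.normal(0, 1, n) > 0.5).astype(int)

model = LogisticRegression().fit(np.column_stack([risk, group]), label)
pred = model.predict(np.column_stack([risk, group]))

# The model faithfully learns the historical disparity and projects it forward.
for g in (0, 1):
    print(f"group {g}: predicted 'high risk' rate = {pred[group == g].mean():.2f}")
```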

If it is known that algorithms can have biases, and that these can cause harm to individuals or groups, why are they used? One answer is that the benefit or accuracy of the results is greater than the damage or error. Is this fair to those who lose out, like students with downgraded marks?

Not all bias is negative

When we talk about bias, we always think of it negatively, but it can also be positive (and this in itself is a language bias). This occurs when we seek gender parity or take affirmative action to mitigate discrimination against a group of people (positive discrimination), for example when Spain requires companies with more than 50 employees to ensure that at least 2% of their workforce are people with a disability.

An intrinsic problem with bias is that we often know in which direction we need to go to correct it, but not how far. For example, it is not clear that there should be gender balance in all professions. Nursing is one example of a profession where women have shown greater empathy with patients, which would justify their being in the majority. The right distribution may not even exist, or it may have to be based on social consensus. In the case of affirmative actions, the perception of positive bias may be more important than the action itself. Suppose an engineering faculty wants to increase the proportion of female students, currently just 20%. To do so, it decides to grant 5% of its places (the last ones) just to women, which would serve to increase the proportion of women annually by 1%.

This example is not hypothetical; it has actually been running at the University of Chile since 2014, and the quota has since increased to 8%. An analysis of the programme has shown that the difference between the women admitted through the quota and the men is one more correct answer in the maths exam, which is not statistically significant; there is therefore no real gender discrimination. What is more, in five years the percentage of women has exceeded 30%, since the perception that it is easier to get in and that there will be more women has increased the usual number of female applicants. That is, the perception of the action was more powerful than the action itself (an increase of 7% as opposed to the 5% represented by the action). Recently, an exclusive 1% quota has also been added for indigenous Mapuche applicants. This success has now been replicated in almost all engineering schools in Chile.

Fairness and algorithmic transparency

Sometimes, when an AI system has been seen to be unfair, bias leads citizens to demand algorithmic transparency. To describe understanding how an algorithm arrived at a prediction or decision, we use the term explainability (explainable artificial intelligence). Transparency is an ethical principle recognised by the European Union in numerous documents, such as the Ethics Guidelines for Trustworthy AI. The main objective is to prevent discrimination caused by the automatisms of algorithms, largely due to biases contained in the data with which AI systems are trained.

An example of this kind of failure is the Bono Social, an economic compensation that the Spanish government promised in 2018 but that has generated controversy. It was initially intended to help the neediest families with their electricity bill. To check that applicants met the requirements, electricity companies used a computer application (BOSCO) that decided who should receive these grants. Faced with numerous complaints from families that objectively met the requirements but had their applications turned down, the citizen foundation Civio asked for information about the design of the algorithm and the source code of the tool, in accordance with the Transparency Act. “If we do not know the inner workings of applications like this, we cannot supervise the work of the public authorities,” said David Cabo, director of Civio. The idea was to identify where in the process there could be errors and, having detected them, do everything possible to ensure that families received their social compensation. Faced with the government’s refusal, Civio has filed a contentious-administrative appeal.

Sandra Wachter, a researcher at the Oxford Internet Institute, thinks we should have the legal right to know why algorithms make decisions that affect us. She explains that governments, companies and financial organizations should give “counterfactual explanations”—that is, if you’ve been denied a mortgage, you should be able to ask the bank: “If I earned 10,000 euros more a year, would you have granted me the loan?” But some experts, such as Kate Vredenburgh of Stanford, believe that this type of explanation can also contain biases.
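
Wachter’s counterfactual can be made concrete with a toy credit model: search for the smallest increase in income that flips a rejection into an approval. Everything here (the scoring rule, the threshold, the step size) is invented for illustration and merely stands in for whatever opaque model a bank might actually use.

```python
def approve(income, debt):
    """Toy credit rule standing in for an opaque model."""
    return income * 0.35 - debt * 0.10 > 9000

def income_counterfactual(income, debt, step=500, limit=100000):
    """Smallest extra annual income that would flip a rejection into an approval."""
    if approve(income, debt):
        return 0
    extra = 0
    while extra <= limit:
        extra += step
        if approve(income + extra, debt):
            return extra
    return None  # no plausible income change flips the decision

extra = income_counterfactual(income=24000, debt=5000)
print(f"If you earned {extra} euros more a year, the loan would be granted.")
```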

Ethical principles to avoid bias

To summarize, we could list some ethical principles associated with having less bias in AI-based systems. There are dozens, but the most important ones related to bias are:

  1. The algorithm must be fully aware of what it is doing. In this case, it needs to be aware of its biases and how to mitigate them.
  2. The algorithm must be fair and not discriminate against people. One of the most important causes of inequity is negative bias. An alternative that reflects fairness is for the algorithm to be interpretable and/or explainable, and this means addressing existing biases.
  3. The algorithm must be transparent—that is, fully reflect how it works, including whether there is bias in the data, bias added by the algorithm, or bias produced in the user’s interaction with the system. If there is not enough transparency, the algorithm must at least be auditable, so that it can be verified, for example, that someone has not been discriminated against. Note that transparency tends to be demanded where trust is lacking; in the Anglo-Germanic world, where people trust the system, what is ultimately demanded is accountability.

As yet unresolved…

Finally, let’s go back to our question. If we use algorithms knowing that they have bias, why do we let them make decisions? One answer is that humans also make mistakes in our decisions, largely due to the prejudices we have acquired. Biases—particularly cognitive biases—are like prejudices: very difficult to identify. The person who has most prejudices is the one who thinks they have none. But we can also look at it from another viewpoint: automated systems are of great help in situations where bias does not play an important role, such as air traffic control. Having professionals working long hours under tension is more dangerous than training machines for the task. They don’t get tired; they are programmed and they are more efficient.

Another answer is that algorithms are fairer than people because, given the same data, they always make the same decision. People, conversely, have much more “noise”, making their decisions more random. This is a major problem in court, where judges’ moods may be even more influential than their biases. Indeed, along with other authors, Daniel Kahneman, winner of the Nobel Memorial Prize in Economic Sciences, alerts us to the high cost of inconsistent decision-making in humans and the fact that algorithms are, in general, fairer.

How can AI help us create a more just world? A partial solution would be to create a Jiminy Cricket-type virtual assistant who could alert us when it detected prejudice, whether in speech, action or judgment. Or warn us in the event of possible manipulation based on bias integrated into an intelligent system. But how many of us would agree to let a machine—a mobile phone or device—listen in permanently to what we say (in private and in public) to correct these prejudices and biases?





A Geology of Media 

The materiality of media is not limited to economic or technical issues; it has an intimate connection with the soil, the air and nature.


Archaeologists at work in Sigtuna town. Sweden, 1941 | Carl Gustaf Rosenberg, Swedish National Heritage Board | Public Domain

In order to properly understand contemporary media culture, we have to start with the material realities that go before: the history of the Earth, geological and mineral formations, and energy. When we realize this, we are faced with the profound social and environmental consequences of our networked lives. Not only do we need rare minerals to make our digital machines work, but outdated media technologies return to the Earth as the residue of digital culture, contributing layers of toxic waste that future archaeologists will work on. Courtesy of Caja Negra Editora, we publish a preview of A Geology of Media by Jussi Parikka.

We want to bring these various components together now: planned obsolescence, the material nature of information and electronic waste. Planned obsolescence was introduced as the logic of consumer technology cycles, which is embedded in a culture of material information technologies that in themselves should be increasingly understood through chemicals, toxic components and the residue they leave behind after their media function has been, so to speak, “consumed”. The realization that information technology is never ephemeral and therefore can never completely die has both ecological and media archaeological importance. As a material assemblage, information technology also has its own duration, one that is not restricted to its human-centred use value: media cultural objects and information technology have an intimate connection with the soil, the air and nature as a concrete, temporal reality. Just as nature affords the building of information technology—how, for example, gutta-percha was an essential substance for insulating nineteenth-century telegraphic lines or how columbite tantalite is an essential mineral for a range of contemporary high-tech devices—these devices return to nature.1

In short, information technology involves multiple ecologies that travel across political economy and natural ecology. [2] This Guattarian take on media ecology is connected to an ecosophical stance: an awareness of overlapping ecologies feeding into interrelations between the social, mental, somatic, nonorganic and animal. Indeed, following Sean Cubitt’s lead, we argue that archaeologies of screen and information technology media should increasingly not look only at the past but also inside the screen to reveal a whole different take on future-oriented avant-garde:

The digital realm is an avant-garde to the extent that it is driven by perpetual innovation and perpetual destruction. The built-in obsolescence of digital culture, the endless trashing of last year’s model, the spendthrift throwing away of batteries and mobile phones and monitors and mice… and all the heavy metals, all the toxins, sent off to some god-forsaken Chinese recycling village… that is the digital avant-garde. [3]

Our proposed alternative archaeology of tinkering, remixing and collage starts not with Duchamp and others but with the opening up of the technological gadget, the screen and the system.

Media archaeological methods have carved out complex, overlapping, multiscalar temporalities of the human world in terms of media cultural histories, but in the midst of an ecological crisis a more thorough nonhuman view is needed. In this context, bending media archaeology into an artistic methodology can be seen as a way to tap into the ecosophical potential of such practices as circuit bending, hardware hacking, and other ways of reusing and reintroducing dead media into a new cycle of life for such objects. Assembled into new constructions, such materials and ideas become zombies that carry with them histories but are also reminders of the nonhuman temporalities involved in technical media. Technical media process and work at subphenomenological speeds and frequencies [4] but also tap into the temporalities of nature—thousands, even millions, of years of nonlinear and nonhuman history. [5]

Phases of media positioned in reference to political economy: new media and media archaeology are overlaid on Gartner Group’s hype cycle and adoption curve diagrams, graphic representations of the economic maturity, adoption and business application of specific technologies. Diagram by Garnet Hertz.

In conclusion, communications have moved beyond the new media phase and through the consumer commodity phase, and much of this technology is already obsolete and in an “archaeology phase”. The practice of amateurism and hobbyist DIY characterizes not only early adoption of technologies but also the obsolescence phase. Chronologically, digital media have moved from a speculative opportunity in the 1990s to become widely adopted as a consumer commodity in the 2000s and have now become archaeological. As a result, studying topics like reuse, remixing and sampling has become more important than discussions of technical potentials. Furthermore, if temporality is increasingly circulated, modulated and stored in technical media devices—the diagrammatics and concrete circuits that tap into the microtemporality below the threshold of conscious human perception—we need to develop similar circuit bending, art and activist practices as an analytical and creative methodology: hence the turn to archives in a wider sense that also encompasses circuits, switches, chips and other high-tech processes. Such epistemo-archaeological tasks are not only of artistic interest but also tap into the ecosophical sphere in understanding and reinventing relations between the various ecologies across subjectivity, nature and technology.

Although death of media may be useful as a tactic to oppose dialogue that only focuses on the newness of media, we believe that media never die: they decay, rot, reform, remix and are historicized, reinterpreted and collected. They either stay as a residue in the soil and as toxic, living dead media or are reappropriated through methodologies of artistic tinkering.

[1] Gutta-percha is a natural latex rubber made from tropical trees native to Southeast Asia and northern Australasia. Columbite tantalite, or “coltan,” is a dull black metallic ore, primarily from the eastern Democratic Republic of the Congo, the export of which has been cited as helping to finance the present-day conflict in Congo.

[2] Félix Guattari, The Three Ecologies, trans. Ian Pindar and Paul Sutton (London: Athlone Press, 2000).

[3] Sean Cubitt, interviewed by Simon Mills, Framed.

[4] Such hidden but completely real and material “epistemologies of everyday life” are investigated in a media archaeological vein by the Institute for Algorhythmics.

[5] Manuel DeLanda, A Thousand Years of Nonlinear History (New York: Zone Books, 1997).





Social networks: editing or censorship?

The change in behaviour of big media platforms sparks reflection on the ethical regulation of information.


William Hulme Grammar School pupils in the Computing Laboratory, c. 1974 | University of Salford, Manchester | CC-BY-NC-ND

Recent months have seen a turnaround in the public role of the main media, both traditional and technological. The major platforms have launched a battle against disinformation on several levels, one that came to a head during the US elections. The big US television networks and newspapers are now aware that we are living in post-truth times, and social networks have taken on their role as editors of information and, with it, of reality. Faced with this paradigm shift, it is worth reflecting on the line that separates editing from censorship, and asking who should be responsible for the ethical regulation of information.

On 5 November, during Donald Trump’s insane press conference in which he complained that the elections were being stolen from him, something historic happened: three major television channels (ABC, CBS and NBC) interrupted the live broadcast. In other words, they refused to continue transmitting fake news. On the same day, Twitter and YouTube cancelled the account and the channel of the controversial conspiracy theorist Steve Bannon, who had used them to suggest that the director of the FBI be beheaded and his head put on a pike. In the preceding weeks, the main social networks had proceeded in similar fashion: both Facebook and Twitter deleted content from Trump’s accounts and those of his campaign, considering it fraudulent or an incitement to hatred.

This is a huge shift in the public role of major platforms. Over the past year they have been accused, on solid grounds, of having changed the world for the worse. There are already well-founded studies that show that Trump and Bolsonaro would not have come to power without them. And they have decided to act accordingly.

US networks interrupt broadcast of Trump’s speech | Los Angeles Times En Español

If you’ve got a Twitter account, you’ll have noticed. You can no longer instantly retweet a tweet that contains a link; instead, you are given the option of opening the article to read it first or quoting it in your own tweet—that is, the retweet now comprises two actions. After so many years of automatic, instinctive, almost animal likes and retweets, Silicon Valley engineers have made “friction” fashionable. Increasing friction in the design of interactive devices and social networks means turning what was once a single click into two or three. This drawn-out process aims to prevent news from being shared compulsively and to invite whoever shares it to reflect on what is being spread.
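The mechanics of that friction are easy to sketch. The snippet below is a minimal, hypothetical illustration of the design pattern described above, not Twitter’s actual code: when a post contains a link the user has not opened, the first click produces a prompt instead of a reshare, so one action becomes two.

```python
# Hypothetical sketch of "friction by design": resharing a link
# requires an explicit extra step if the article has not been opened.
# All names here are illustrative, not any platform's real API.

def reshare(post, user, confirmed=False):
    """Return True if the post is reshared, False if friction is applied instead."""
    if post.get("has_link") and not user.get("opened_article"):
        if not confirmed:
            # First click: instead of resharing, prompt the user to read or quote.
            print("Want to read the article before sharing it?")
            return False
    # Second click (or a post without a link): the reshare goes through.
    print("Reshared.")
    return True

# One click is no longer enough when a link is involved:
post = {"has_link": True}
user = {"opened_article": False}
reshare(post, user)                  # prompts instead of sharing
reshare(post, user, confirmed=True)  # the deliberate second click shares it
```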

At the same time, that friction goes against the very spirit of social networks throughout their 15 years of existence. For all of that time, what they did, precisely, was to smooth away anything that prevented users from navigating, surfing and sliding across the surfaces of their interfaces with the greatest possible ease and immediacy. The weight of Trump’s victory, and guilt over the manipulation they enabled with those too-well-oiled mechanisms, were decisive in the radical shift that occurred in the context of the 2020 elections. An opportunity for symbolic reparation. A litmus test for them because, as Kevin Roose wrote in The New York Times, they had to make their tools and systems worse in order to make democracy a little better.

TikTok has its own safety centre to combat disinformation. Facebook has created an external fact-checking centre, of which the Spanish website maldita.es forms part, along with dozens of media outlets from around the world that make up the International Fact-Checking Network. By the summer, all the major social networks had taken measures to circulate accurate information about COVID-19, to the detriment of fake news and conspiracy theories. The US elections were the second phase of the same battle against disinformation (the two are interconnected, since the US government itself contributed to spreading hoaxes about the pandemic). Facebook even approved a viral misinformation protocol.

It is a question of updating strategies that some platforms have developed in recent years to reinforce the common good and clean up their image, such as Facebook’s response to emergencies (be it an earthquake or a terrorist attack) or, along the same lines, Google Person Finder. But now applied not to the outside world or natural disasters, but to the more subtle, pixelated disasters brought about by social networks themselves (or sinister characters such as Dominic Cummings, the architect of Brexit, or Steve Bannon, former vice president of Cambridge Analytica and adviser to Donald Trump, among other misdeeds).

What's the code of principles of the International Fact-Checking Network? | The Poynter Institute

During the recent elections, the big US television networks and newspapers became aware that we are living in post-truth times and that the traditional rules of journalism no longer apply. In the world before Trump’s presidency, television channels would have continued to broadcast a president’s delusions. In the world his presidency bequeaths to us, however, they will cease to do so. Social networks themselves, which have been largely responsible for this paradigm shift, have reacted with similar decisions. They are deleting or labelling what certain opinion leaders say on their platforms. In this way, they have taken on their role as editors of information and, with it, of reality. Editing means controlling, changing, directing, prescribing and, sometimes, also censoring. Is what the main traditional and technological media in the United States are doing censorship?

The initiative of Pedro Sánchez’s government in Spain, which allows the creation of governmental means of controlling information and disinformation, has been described by legal experts as a likely source of legal uncertainty. Its raison d’être is the new international context, the imitation of Bannon’s strategies by far-right parties around the world—including Vox—and the proven interference of foreign governments in democratic elections, which have thereby ceased to be entirely sovereign. Over and above criticisms relating to press freedom and freedom of expression (we should remember that the “Gag Law” is still in force in Spain), it is worth reflecting on whether the ethical regulation of information should be delegated to big media and big platforms, or whether this role should be undertaken by public bodies. Does it make sense for Facebook or Twitter to decide what is true and what isn’t? Should the European Union or the UN do it instead? As Juvenal and Alan Moore asked: Who watches the watchers?





Rethinking Artificial Intelligence through Feminism

From a feminist perspective, we offer a critical analysis of the machine learning of artificial intelligences to find more equitable practices.


Sharon Hogge, an electronics engineer, poses with autonomous sentry robot ROBART I and the HT3 Industrial Robot. Maryland, 1983 | The U.S. National Archives | Public Domain

Technology surrounds us and is everywhere, but how this technology is made and who really benefits from it and who does not, are still important questions to be investigated. Recently, movements have emerged such as Data Feminism and Design Justice, which analyse technology from a more critical angle with the intention of creating more equity in technological practice. We explore some of their implications based on the artistic project Feminist Data Set, hand in hand with its creator, who explores what an intersectional feminist machine learning labelling and training system would be like and what would be necessary to construct it.

In a time of the re-rising of fascism, of what feels like a lessening of social justice values, and in an age of global digitization, social justice has never been more integral in the space of interrogating data, technology, and the structure of society itself. Technology surrounds us, everywhere, but how technology is made, and who benefits deeply from technology, and who does not, is still an important question to unpack, research and critique. Feminist Data Set is an art project that uses intersectional feminism as a critical framework for investigating and critiquing machine learning. The investigation happens through a critical design lens, since the project involves making a chatbot from start to finish using intersectional feminism as a guide. This includes asking what is intersectional feminist data collection; what is intersectional feminist data; what is intersectional feminist data labeling, and data training; does an intersectional feminist system or product exist to data label and data train; what does intersectional feminist software look like; what does it do, and are there intersectional feminist algorithms; what would these algorithms then need to exist; what does an intersectional chatbot look like, and how does it interact?

Politically and artistically, Feminist Data Set is inspired by the work of the maker movement, critical design, Arte Útil, Data Feminism, Design Justice, the Critical Engineering Manifesto, Xenofeminism, and the Feminist Principles of the Internet. Pedagogically, Feminist Data Set operates in a similar vein to Thomas Thwaites’s Toaster Project, a critical design project in which Thwaites builds a commercial toaster from scratch. Feminist Data Set, however, takes a critical and artistic view on software, particularly machine learning. What does it mean to thoughtfully make machine learning, to carefully consider every angle of making, iterating, and designing? Every step of this process needs to be thoroughly re-examined through a feminist lens.

There is a growing movement of analyzing technology through a more critical and ethical lens (however, the word ethical itself is becoming controversial and overused). But this critical lens is important, especially as it exists to create more equity within technology as a practice and as its own entity—entity, in the sense that technology is a specific kind of thing, be it software or hardware, and a practice would be how an individual or group uses technology as an addition to its own creativity and making. Current books like Data Feminism by Catherine D’Ignazio and Lauren Klein generate a practice that unpacks data and datasets in society, and creates methodologies for having better data practices through a feminist lens. Design Justice by Sasha Costanza-Chock directly situates social justice within the design world, within design making and product design, and in design thinking as an exercise and practice.

Feminist Data Set | Caroline Sinders

Both Data Feminism and Design Justice analyze how capitalist and corporate structures use design and data, and how civic technologists, social justice movements, and activists engage with data and design. By situating design and data in both the corporate and non-corporate worlds, Data Feminism and Design Justice create an expansive, more holistic view of how data and design are utilized, from a more problematic end (the corporate side) and a more equity-driven end (the social justice and activist side). But both of these ends of the spectrum, and the spaces in-between, must be explored, and this is what makes Data Feminism and Design Justice strong as books, as methodologies, and as use cases. Feminist Data Set takes a similar stance. By interrogating machine learning not just as an artistic practice, but as one rooted in product design and the corporate world, Feminist Data Set looks at how technology misuses machine learning as a whole field, and then in individual pieces within the machine learning pipeline.

Often the tools I need to make Feminist Data Set as a critical design and art project don’t exist. For example, what is a feminist data training platform? Is there one? In machine learning, when it comes to labeling data and creating a data model for an algorithm, groups will generally use Amazon’s labor force, Mechanical Turk, to label data. Amazon created Mechanical Turk to solve their own machine learning problem of scale: they needed large data sets trained and labeled. Using Mechanical Turk in machine learning projects is standard in the field; it is used everywhere, from technology companies to research groups to help label data. Mechanical Turk underpays their workers and treats them as gig economy employees rather than full-time employees, thus giving them fewer benefits, and is not intersectional feminist, so I cannot use it in my work. I either have to find an alternative or build one. For the past year, I’ve been exploring what an intersectional feminist machine-learning labeling and training system is, and what it would need. I’ve been creating a tool, much like a calculator, that translates the number of tasks a Turker does, and the costs of those tasks, into an hourly wage. Say, for example, a client prices ten thousand tasks at four cents each: the calculator would translate that into hours or days of work it would take a Turker to complete all of those tasks, and how much their wages would be.
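As a rough illustration of what such a calculator does, here is a minimal sketch using the example figures given above (ten thousand tasks at four cents each); the time-per-task value is an assumption made for the sake of the example, not data from the project or from the actual tool.

```python
# Minimal sketch of a crowdwork wage calculator, in the spirit of the tool
# described above. All numbers here are illustrative assumptions.

def effective_wage(num_tasks, price_per_task, minutes_per_task):
    """Translate a batch of microtasks into total pay, total hours and an hourly wage."""
    total_pay = num_tasks * price_per_task
    total_hours = num_tasks * minutes_per_task / 60
    return total_pay, total_hours, total_pay / total_hours

pay, hours, hourly = effective_wage(
    num_tasks=10_000,
    price_per_task=0.04,   # four cents per task, as in the example above
    minutes_per_task=1.5,  # assumed average time per task
)
print(f"${pay:.2f} for {hours:.0f} hours of work = ${hourly:.2f}/hour")
# With these assumptions: $400.00 for 250 hours of work = $1.60/hour,
# well below the $7.25 federal minimum wage cited further on.
```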

In this sense, Feminist Data Set blends art and social justice-driven research and technology, like Turkopticon, made by Professor Lilly Irani and Mechanical Turkers. Turkopticon allows Mechanical Turkers to rate jobs and clients. This add-on solves a real problem that workers face, often not knowing the quality of a client and having no way to share this information with other Turkers.

Sorting Feminist Data collected at SOHO20, 2018 | SOHO20 Gallery / Rachel Steinberg

To create a feminist AI, the labor and payment inequity in machine learning data training platforms needs to be confronted. According to an article in The Atlantic investigating the treatment of Mechanical Turkers, the problem “is not necessarily that requesters are underpaying for the work. The average requester pays around eleven dollars an hour for the work they get, according to Hara. But there are also many requesters who pay less than that, and there are many requesters who post tasks that take longer than they say to complete. Still, the root of the problem is that these platforms allow requesters to avoid paying workers for the downtime that would arise if workers did these tasks full-time”. A research paper co-authored by researcher and Mechanical Turker Kristy Milland found that the median wage was about two dollars an hour and that only four percent of Turkers earned more than $7.25 per hour.

What is a feminist version of Mechanical Turk? What would it need?

Pay equality and pay equity are a step towards equity in technology and society. Workers need to be paid for their time—not just time spent doing a task, but time spent working, finding work and setting up work tasks—and they should be paid well for their labor. Gig economy companies should be held to labor standards and labor laws.

Making must be thoughtful and critical in order to create equity. It must be open to feedback and interpretation. We must understand the role of data creation and how systems can use, misuse, and benefit from data. Data must be seen as something created from communities and as a reflection of that community—data ownership is key. Data’s position inside technology systems is political, it’s activated, and it’s intimate. For there to be equity in machine learning, every aspect of the system needs to be examined, taken apart and put back together. It needs to integrate the realities, contexts, and constraints of all different kinds of people, not just the ones who built the early Web. Technology needs to reflect those who are on the web now.





The I in the Internet

The Internet has gone from being a utopia where everything was possible to a place full of angry people obsessed with their own representation.


Computer with person dressed as Dracula. 1980 | John Marton, The U.S. National Archives | No known copyright restrictions

Jia Tolentino started surfing the Internet with Geocities, forums and GIFs. Years later, the battle between social media networks all vying for our constant attention has completely changed the scenario. In this advance excerpt from Falso espejo. Reflexiones sobre el autoengaño, courtesy of Temas de Hoy, Tolentino reviews this evolution to understand how the Internet ecosystem conditions our lives on and outside of the Internet.

In the beginning the internet seemed good. “I was in love with the internet the first time I used it at my dad’s office and thought it was the ULTIMATE COOL,” I wrote, when I was ten, on an Angelfire subpage titled “The Story of How Jia Got Her Web Addiction.” In a text box superimposed on a hideous violet background, I continued:

But that was in third grade and all I was doing was going to Beanie Baby sites. Having an old, icky bicky computer at home, we didn’t have the Internet. Even AOL seemed like a far-off dream. Then we got a new top-o’-the-line computer in spring break ’99, and of course it came with all that demo stuff. So I finally had AOL and I was completely amazed at the marvel of having a profile and chatting and IMS!!

Then, I wrote, I discovered personal webpages. (“I was astonished!”) I learned HTML and “little Javascript trickies.” I built my own site on the beginner-hosting site Expage, choosing pastel colors and then switching to a “starry night theme.” Then I ran out of space, so I “decided to move to Angelfire. Wow.” I learned how to make my own graphics. “This was all in the course of four months,” I wrote, marveling at how quickly my ten-year-old internet citizenry was evolving. I had recently revisited the sites that had once inspired me, and realized “how much of an idiot I was to be wowed by that.”

I have no memory of inadvertently starting this essay two decades ago, or of making this Angelfire subpage, which I found while hunting for early traces of myself on the internet. It’s now eroded to its skeleton: its landing page, titled “THE VERY BEST,” features a sepia-toned photo of Andie from Dawson’s Creek and a dead link to a new site called “THE FROSTED FIELD,” which is “BETTER!” There’s a page dedicated to a blinking mouse GIF named Susie, and a “Cool Lyrics Page” with a scrolling banner and the lyrics to Smash Mouth’s “All Star,” Shania Twain’s “Man! I Feel Like a Woman!” and the TLC diss track “No Pigeons,” by Sporty Thievz. On an FAQ page – there was an FAQ page – I write that I had to close down my customizable cartoon-doll section, as “the response has been enormous.”

It appears that I built and used this Angelfire site over just a few months in 1999, immediately after my parents got a computer. My insane FAQ page specifies that the site was started in June, and a page titled “Journal” – which proclaims, “I am going to be completely honest about my life, although I won’t go too deeply into personal thoughts, though” – features entries only from October. One entry begins: “It’s so HOT outside and I can’t count the times acorns have fallen on my head, maybe from exhaustion.” Later on, I write, rather prophetically: “I’m going insane! I literally am addicted to the web!”

“The Story of How Jia Got Her Web Addiction” | Jia Tolentino

In 1999, it felt different to spend all day on the internet. This was true for everyone, not just for ten-year-olds: this was the You’ve Got Mail era, when it seemed that the very worst thing that could happen online was that you might fall in love with your business rival. Throughout the eighties and nineties, people had been gathering on the internet in open forums, drawn, like butterflies, to the puddles and blossoms of other people’s curiosity and expertise. Self-regulated newsgroups like Usenet cultivated lively and relatively civil discussion about space exploration, meteorology, recipes, rare albums. Users gave advice, answered questions, made friendships, and wondered what this new internet would become.

Because there were so few search engines and no centralized social platforms, discovery on the early internet took place mainly in private, and pleasure existed as its own solitary reward. A 1995 book called You Can Surf the Net! listed sites where you could read movie reviews or learn about martial arts. It urged readers to follow basic etiquette (don’t use all caps; don’t waste other people’s expensive bandwidth with overly long posts) and encouraged them to feel comfortable in this new world (“Don’t worry,” the author advised. “You have to really mess up to get flamed.”). Around this time, GeoCities began offering personal website hosting for dads who wanted to put up their own golfing sites or kids who built glittery, blinking shrines to Tolkien or Ricky Martin or unicorns, most capped off with a primitive guest book and a green-and-black visitor counter. GeoCities, like the internet itself, was clumsy, ugly, only half functional, and organized into neighborhoods: /area51/ was for sci-fi, /westhollywood/ for LGBTQ life, /enchantedforest/ for children, /petsburgh/ for pets. If you left GeoCities, you could walk around other streets in this ever-expanding village of curiosities. You could stroll through Expage or Angelfire, as I did, and pause on the thoroughfare where the tiny cartoon hamsters danced. There was an emergent aesthetic – blinking text, crude animation. If you found something you liked, if you wanted to spend more time in any of these neighborhoods, you could build your own house from HTML frames and start decorating.

This period of the internet has been labeled Web 1.0 – a name that works backward from the term Web 2.0, which was coined by the writer and user experience designer Darcy DiNucci in an article called “Fragmented Future,” published in 1999. “The Web we know now,” she wrote, “which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear. . . . The Web will be understood not as screenfuls of texts and graphics but as a transport mechanism, the ether through which interactivity happens.” On Web 2.0, the structures would be dynamic, she predicted: instead of houses, websites would be portals, through which an ever-changing stream of activity – status updates, photos – could be displayed. What you did on the internet would become intertwined with what everyone else did, and the things other people liked would become the things that you would see. Web 2.0 platforms like Blogger and Myspace made it possible for people who had merely been taking in the sights to start generating their own personalized and constantly changing scenery. As more people began to register their existence digitally, a pastime turned into an imperative: you had to register yourself digitally to exist.

In a New Yorker piece from November 2000, Rebecca Mead profiled Meg Hourihan, an early blogger who went by Megnut. In just the prior eighteen months, Mead observed, the number of “weblogs” had gone from fifty to several thousand, and blogs like Megnut were drawing thousands of visitors per day. This new internet was social (“a blog consists primarily of links to other Web sites and commentary about those links”) in a way that centered on individual identity (Megnut’s readers knew that she wished there were better fish tacos in San Francisco, and that she was a feminist, and that she was close with her mom). The blogosphere was also full of mutual transactions, which tended to echo and escalate. The “main audience for blogs is other bloggers,” Mead wrote. Etiquette required that, “if someone blogs your blog, you blog his blog back.”

Through the emergence of blogging, personal lives were becoming public domain, and social incentives – to be liked, to be seen – were becoming economic ones. The mechanisms of internet exposure began to seem like a viable foundation for a career. Hourihan cofounded Blogger with Evan Williams, who later cofounded Twitter. JenniCam, founded in 1996 when the college student Jennifer Ringley started broadcasting webcam photos from her dorm room, attracted at one point up to four million daily visitors, some of whom paid a subscription fee for quicker loading images. The internet, in promising a potentially unlimited audience, began to seem like the natural home of self-expression. In one blog post, Megnut’s boyfriend, the blogger Jason Kottke, asked himself why he didn’t just write his thoughts down in private. “Somehow, that seems strange to me though,” he wrote. “The Web is the place for you to express your thoughts and feelings and such. To put those things elsewhere seems absurd.”

Every day, more people agreed with him. The call of self-expression turned the village of the internet into a city, which expanded at time-lapse speed, social connections bristling like neurons in every direction. At ten, I was clicking around a web ring to check out other Angelfire sites full of animal GIFs and Smash Mouth trivia. At twelve, I was writing five hundred words a day on a public LiveJournal. At fifteen, I was uploading photos of myself in a miniskirt on Myspace. By twenty-five, my job was to write things that would attract, ideally, a hundred thousand strangers per post. Now I’m thirty, and most of my life is inextricable from the internet, and its mazes of incessant forced connection – this feverish, electric, unlivable hell.

myspace.com, 2017 | Web Design Museum

As with the transition between Web 1.0 and Web 2.0, the curdling of the social internet happened slowly and then all at once. The tipping point, I’d guess, was around 2012. People were losing excitement about the internet, starting to articulate a set of new truisms. Facebook had become tedious, trivial, exhausting. Instagram seemed better, but would soon reveal its underlying function as a three-ring circus of happiness and popularity and success. Twitter, for all its discursive promise, was where everyone tweeted complaints at airlines and bitched about articles that had been commissioned to make people bitch. The dream of a better, truer self on the internet was slipping away. Where we had once been free to be ourselves online, we were now chained to ourselves online, and this made us self-conscious. Platforms that promised connection began inducing mass alienation. The freedom promised by the internet started to seem like something whose greatest potential lay in the realm of misuse.

Even as we became increasingly sad and ugly on the internet, the mirage of the better online self continued to glimmer. As a medium, the internet is defined by a built-in performance incentive. In real life, you can walk around living life and be visible to other people. But you can’t just walk around and be visible on the internet – for anyone to see you, you have to act. You have to communicate in order to maintain an internet presence. And, because the internet’s central platforms are built around personal profiles, it can seem – first at a mechanical level, and later on as an encoded instinct – like the main purpose of this communication is to make yourself look good. Online reward mechanisms beg to substitute for offline ones, and then overtake them. This is why everyone tries to look so hot and well-traveled on Instagram; this is why everyone seems so smug and triumphant on Facebook; this is why, on Twitter, making a righteous political statement has come to seem, for many people, like a political good in itself.

This practice is often called “virtue signaling,” a term most often used by conservatives criticizing the left. But virtue signaling is a bipartisan, even apolitical action. Twitter is overrun with dramatic pledges of allegiance to the Second Amendment that function as intra-right virtue signaling, and it can be something like virtue signaling when people post the suicide hotline after a celebrity death. Few of us are totally immune to the practice, as it intersects with real desire for political integrity. Posting photos from a protest against border family separation, as I did while writing this, is a microscopically meaningful action, an expression of genuine principle, and also, inescapably, some sort of attempt to signal that I am good.

Taken to its extreme, virtue signaling has driven people on the left to some truly unhinged behavior. A legendary case occurred in June 2016, after a two-year-old was killed at a Disney resort – dragged off by an alligator while playing in a no-swimming-allowed lagoon. A woman, who had accumulated ten thousand Twitter followers with her posts about social justice, saw an opportunity and tweeted, magnificently, “I’m so finished with white men’s entitlement lately that I’m really not sad about a 2yo being eaten by a gator because his daddy ignored signs.” (She was then pilloried by people who chose to demonstrate their own moral superiority through mockery – as I am doing here, too.) A similar tweet made the rounds in early 2018 after a sweet story went viral: a large white seabird named Nigel had died next to the concrete decoy bird to whom he had devoted himself for years. An outraged writer tweeted, “Even concrete birds do not owe you affection, Nigel,” and wrote a long Facebook post arguing that Nigel’s courtship of the fake bird exemplified… rape culture. “I’m available to write the feminist perspective on Nigel the gannet’s non-tragic death should anyone wish to pay me,” she added, underneath the original tweet, which received more than a thousand likes. These deranged takes, and their unnerving proximity to online monetization, are case studies in the way that our world – digitally mediated, utterly consumed by capitalism – makes communication about morality very easy but makes actual moral living very hard. You don’t end up using a news story about a dead toddler as a peg for white entitlement without a society in which the discourse of righteousness occupies far more public attention than the conditions that necessitate righteousness in the first place.

On the right, the online performance of political identity has been even wilder. In 2017, the social-media-savvy youth conservative group Turning Point USA staged a protest at Kent State University featuring a student who put on a diaper to demonstrate that “safe spaces were for babies.” (It went viral, as intended, but not in the way TPUSA wanted – the protest was uniformly roasted, with one Twitter user slapping the logo of the porn site Brazzers on a photo of the diaper boy, and the Kent State TPUSA campus coordinator resigned.) It has also been infinitely more consequential, beginning in 2014, with a campaign that became a template for right-wing internet-political action, when a large group of young misogynists came together in the event now known as Gamergate.

The issue at hand was, ostensibly, a female game designer perceived to be sleeping with a journalist for favorable coverage. She, along with a set of feminist game critics and writers, received an onslaught of rape threats, death threats, and other forms of harassment, all concealed under the banner of free speech and “ethics in games journalism.” The Gamergaters – estimated by Deadspin to number around ten thousand people – would mostly deny this harassment, either parroting in bad faith or fooling themselves into believing the argument that Gamergate was actually about noble ideals. Gawker Media, Deadspin’s parent company, itself became a target, in part because of its own aggressive disdain toward the Gamergaters: the company lost seven figures in revenue after its advertisers were brought into the maelstrom.

Damsel in Distress: Part 1 - Tropes vs Women in Video Games | Anita Sarkeesian | Feminist Frequency

In 2016, a similar fiasco made national news in Pizzagate, after a few rabid internet denizens decided they’d found coded messages about child sex slavery in the advertising of a pizza shop associated with Hillary Clinton’s campaign. This theory was disseminated all over the far-right internet, leading to an extended attack on DC’s Comet Ping Pong pizzeria and everyone associated with the restaurant – all in the name of combating pedophilia – that culminated in a man walking into Comet Ping Pong and firing a gun. (Later on, the same faction would jump to the defense of Roy Moore, the Republican nominee for the Senate who was accused of sexually assaulting teenagers.) The over-woke left could only dream of this ability to weaponize a sense of righteousness. Even the militant antifascist movement, known as antifa, is routinely disowned by liberal centrists, despite the fact that the antifa movement is rooted in a long European tradition of Nazi resistance rather than a nascent constellation of radically paranoid message boards and YouTube channels. The worldview of the Gamergaters and Pizzagaters was actualized and to a large extent vindicated in the 2016 election – an event that strongly suggested that the worst things about the internet were now determining, rather than reflecting, the worst things about offline life.

Mass media always determines the shape of politics and culture. The Bush era is inextricable from the failures of cable news; the executive overreaches of the Obama years were obscured by the internet’s magnification of personality and performance; Trump’s rise to power is inseparable from the existence of social networks that must continually aggravate their users in order to continue making money. But lately I’ve been wondering how everything got so intimately terrible, and why, exactly, we keep playing along. How did a huge number of people begin spending the bulk of our disappearing free time in an openly torturous environment? How did the internet get so bad, so confining, so inescapably personal, so politically determinative – and why are all those questions asking the same thing?

I’ll admit that I’m not sure that this inquiry is even productive. The internet reminds us on a daily basis that it is not at all rewarding to become aware of problems that you have no reasonable hope of solving. And, more important, the internet already is what it is. It has already become the central organ of contemporary life. It has already rewired the brains of its users, returning us to a state of primitive hyperawareness and distraction while overloading us with much more sensory input than was ever possible in primitive times. It has already built an ecosystem that runs on exploiting attention and monetizing the self. Even if you avoid the internet completely – my partner does: he thought #tbt meant “truth be told” for ages – you still live in the world that this internet has created, a world in which selfhood has become capitalism’s last natural resource, a world whose terms are set by centralized platforms that have deliberately established themselves as near-impossible to regulate or control.

The internet is also in large part inextricable from life’s pleasures: our friends, our families, our communities, our pursuits of happiness, and – sometimes, if we’re lucky – our work. In part out of a desire to preserve what’s worthwhile from the decay that surrounds it, I’ve been thinking about five intersecting problems: first, how the internet is built to distend our sense of identity; second, how it encourages us to overvalue our opinions; third, how it maximizes our sense of opposition; fourth, how it cheapens our understanding of solidarity; and, finally, how it destroys our sense of scale.

 

In 1959, the sociologist Erving Goffman laid out a theory of identity that revolved around playacting. In every human interaction, he wrote in The Presentation of Self in Everyday Life, a person must put on a sort of performance, create an impression for an audience. The performance might be calculated, as with the man at a job interview who’s practiced every answer; it might be unconscious, as with the man who’s gone on so many interviews that he naturally performs as expected; it might be automatic, as with the man who creates the correct impression primarily because he is an upper-middle-class white man with an MBA. A performer might be fully taken in by his own performance – he might actually believe that his biggest flaw is “perfectionism” – or he might know that his act is a sham. But no matter what, he’s performing. Even if he stops trying to perform, he still has an audience, his actions still create an effect. “All the world is not, of course, a stage, but the crucial ways in which it isn’t are not easy to specify,” Goffman wrote.

Erving Goffman and the Performed Self | BBC Radio 4

To communicate an identity requires some degree of self-delusion. A performer, in order to be convincing, must conceal “the discreditable facts that he has had to learn about the performance; in everyday terms, there will be things he knows, or has known, that he will not be able to tell himself.” The interviewee, for example, avoids thinking about the fact that his biggest flaw actually involves drinking at the office. A friend sitting across from you at dinner, called to play therapist for your trivial romantic hang-ups, has to pretend to herself that she wouldn’t rather just go home and get in bed to read Barbara Pym. No audience has to be physically present for a performer to engage in this sort of selective concealment: a woman, home alone for the weekend, might scrub the baseboards and watch nature documentaries even though she’d rather trash the place, buy an eight ball, and have a Craigslist orgy. People often make faces, in private, in front of bathroom mirrors, to convince themselves of their own attractiveness. The “lively belief that an unseen audience is present,” Goffman writes, can have a significant effect.

Offline, there are forms of relief built into this process. Audiences change over – the performance you stage at a job interview is different from the one you stage at a restaurant later for a friend’s birthday, which is different from the one you stage for a partner at home. At home, you might feel as if you could stop performing altogether; within Goffman’s dramaturgical framework, you might feel as if you had made it backstage. Goffman observed that we need both an audience to witness our performances as well as a backstage area where we can relax, often in the company of “teammates” who had been performing alongside us. Think of coworkers at the bar after they’ve delivered a big sales pitch, or a bride and groom in their hotel room after the wedding reception: everyone may still be performing, but they feel at ease, unguarded, alone. Ideally, the outside audience has believed the prior performance. The wedding guests think they’ve actually just seen a pair of flawless, blissful newlyweds, and the potential backers think they’ve met a group of geniuses who are going to make everyone very rich. “But this imputation – this self – is a product of a scene that comes off, and is not a cause of it,” Goffman writes. The self is not a fixed, organic thing, but a dramatic effect that emerges from a performance. This effect can be believed or disbelieved at will.

Online – assuming you buy this framework – the system metastasizes into a wreck. The presentation of self in everyday internet still corresponds to Goffman’s playacting metaphor: there are stages, there is an audience. But the internet adds a host of other, nightmarish metaphorical structures: the mirror, the echo, the panopticon. As we move about the internet, our personal data is tracked, recorded, and resold by a series of corporations – a regime of involuntary technological surveillance, which subconsciously decreases our resistance to the practice of voluntary self-surveillance on social media. If we think about buying something, it follows us around everywhere. We can, and probably do, limit our online activity to websites that further reinforce our own sense of identity, each of us reading things written for people just like us. On social media platforms, everything we see corresponds to our conscious choices and algorithmically guided preferences, and all news and culture and interpersonal interaction are filtered through the home base of the profile. The everyday madness perpetuated by the internet is the madness of this architecture, which positions personal identity as the center of the universe. It’s as if we’ve been placed on a lookout that oversees the entire world and given a pair of binoculars that makes everything look like our own reflection. Through social media, many people have quickly come to view all new information as a sort of direct commentary on who they are.

Advertising Hazards: Your Attention is a Commodity That Can Be Manipulated | Tim Wu | Big Think

This system persists because it is profitable. As Tim Wu writes in The Attention Merchants, commerce has been slowly permeating human existence – entering our city streets in the nineteenth century through billboards and posters, then our homes in the twentieth century through radio and TV. Now, in the twenty-first century, in what appears to be something of a final stage, commerce has filtered into our identities and relationships. We have generated billions of dollars for social media platforms through our desire – and then through a subsequent, escalating economic and cultural requirement – to replicate for the internet who we know, who we think we are, who we want to be.

Selfhood buckles under the weight of this commercial importance. In physical spaces, there’s a limited audience and time span for every performance. Online, your audience can hypothetically keep expanding forever, and the performance never has to end. (You can essentially be on a job interview in perpetuity.) In real life, the success or failure of each individual performance often plays out in the form of concrete, physical action – you get invited over for dinner, or you lose the friendship, or you get the job. Online, performance is mostly arrested in the nebulous realm of sentiment, through an unbroken stream of hearts and likes and eyeballs, aggregated in numbers attached to your name. Worst of all, there’s essentially no backstage on the internet; where the offline audience necessarily empties out and changes over, the online audience never has to leave. The version of you that posts memes and selfies for your precal classmates might end up sparring with the Trump administration after a school shooting, as happened to the Parkland kids – some of whom became so famous that they will never be allowed to drop the veneer of performance again. The self that traded jokes with white supremacists on Twitter is the self that might get hired, and then fired, by The New York Times, as happened to Quinn Norton in 2018. (Or, in the case of Sarah Jeong, the self that made jokes about white people might get Gamergated after being hired at the Times a few months thereafter.) People who maintain a public internet profile are building a self that can be viewed simultaneously by their mom, their boss, their potential future bosses, their eleven-year-old nephew, their past and future sex partners, their relatives who loathe their politics, as well as anyone who cares to look for any possible reason. Identity, according to Goffman, is a series of claims and promises. On the internet, a highly functional person is one who can promise everything to an indefinitely increasing audience at all times.

Incidents like Gamergate are partly a response to these conditions of hyper-visibility. The rise of trolling, and its ethos of disrespect and anonymity, has been so forceful in part because the internet’s insistence on consistent, approval-worthy identity is so strong. In particular, the misogyny embedded in trolling reflects the way women – who, as John Berger wrote, have always been required to maintain an external awareness of their own identity – often navigate these online conditions so profitably. It’s the self-calibration that I learned as a girl, as a woman, that has helped me capitalize on “having” to be online. My only experience of the world has been one in which personal appeal is paramount and self-exposure is encouraged; this legitimately unfortunate paradigm, inhabited first by women and now generalized to the entire internet, is what trolls loathe and actively repudiate. They destabilize an internet built on transparency and likability. They pull us back toward the chaotic and the unknown.

Of course, there are many better ways of making the argument against hyper-visibility than trolling. As Werner Herzog told GQ, in 2011, speaking about psychoanalysis: “We have to have our dark corners and the unexplained. We will become uninhabitable in a way an apartment will become uninhabitable if you illuminate every single dark corner and under the table and wherever – you cannot live in a house like this anymore.”





Luxury communism and automation

Is a world possible where robots do everything, while we simply enjoy life? We consider how automation will affect us, based on the book “Fully Automated Luxury Communism. A Manifesto” by Aaron Bastani.


Passengers either talking or reading in observation car. c 1905 | George R. Lawrence, Library of Congress | Public Domain

With regard to transformations in the sphere of automation, Aaron Bastani has no doubts: by the year 2020, there will be 3 million industrial robots in the world. Fully Automated Luxury Communism considers how robots will do everything while we humans enjoy the fruits of their work in equal measure: social justice and unlimited abundance. Taking Bastani’s work as a starting point, we reflect on what type of automation is being imposed upon us and what type could be desirable.

When one hears or reads anything about something called luxury communism, one cannot help but take an interest, because although it seems an eccentric term, it is connected to a series of traditions and critical practices that link emancipation with abundance, understood as a qualitatively rich life held in common. It is what so-called “left-wing Nietzscheanism” or the freer Marxist tendencies secretly promised. Utopian cities such as Constant’s New Babylon are some of the best-known ruins of that fascinating revolutionary future that never arrived. However, the untimely echo of the urge that moved them, namely the desire for a life not subjected to the resentment and servitude of material and vital scarcity, still resonates today. That is why, while the new techno-monarchs fantasise about things such as building city-states at sea, one greets the news of Fully Automated Luxury Communism (FALC) with anticipation, even if that anticipation quickly gives way to suspicion.

To be very brief, FALC has at least two backgrounds and immediate contexts: on the one hand, left-wing accelerationism, whose paradigmatic political expression can be found in works such as Inventing the Future by Nick Srnicek and Alex Williams (here is their manifesto); and, on a historical level, the crisis of the neoliberal model and the most recent cycle of struggles (from Tahrir Square to the contemporary global revolts), accompanied by the simultaneous deployment of a series of techno-scientific revolutions that Bastani groups under the term “the Third Disruption”. The central argument reworks Marx’s “Fragment on Machines” and affirms that these transformations will deliver, through the automation of production processes, an extreme supply of resources. According to Bastani, we can find indications of a certain exhaustion of capitalism and of a potential break thanks to the effects of automation, as indicated by the tendency towards falling marginal costs, which will allow everything to be free in the near future.

In this sense, it is a fervent response to “capitalist realism” and to the famous Thatcherite mantra, “There is no alternative”. It extends the criticism of immediatism and political immanentism that Nick Srnicek and Alex Williams call “folk politics”, a term designating forms of political struggle incapable of planning an emancipatory future and leading to an anarchic and impotent cul-de-sac. For Srnicek and Williams, it is necessary to return to the “grand narratives” of modernity, to tackle the political from a single position that embodies all the other positions (an idea that resonates in other militant environments) and to restore institutional and state mediation with the aim of planning universal strategic objectives. Bastani’s FALC is that strategy of large-scale objectives: an alternative that presents “a reality of plenitude beyond all imagination.”

The idea that the transformations under way in the field of automation are profound seems beyond all doubt (acknowledged both by those at the top and those at the bottom), and this is perhaps one of the reasons why the book is worth reading as a compendium. For example, if in 1970 there were 1,000 industrial robots in the world, by 2016 they numbered 1.8 million, and in 2020 they will reach 3 million, with an exponential curve of increased productivity typical of nearly all sectors (thanks to the experience curve). Technological unemployment will be, the author explains, an insurmountable barrier for capital, breaking the cycle of accumulation in the absence of consumers.
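To give a sense of what that growth curve implies, here is a small back-of-the-envelope calculation using only the three figures quoted above; the compound growth rates are derived from those data points for illustration and are not taken from Bastani’s book.

```python
# Back-of-the-envelope: compound annual growth rate (CAGR) implied by the
# robot counts quoted above (1,000 in 1970, 1.8 million in 2016, 3 million in 2020).

def cagr(start, end, years):
    """Compound annual growth rate between two values over a number of years."""
    return (end / start) ** (1 / years) - 1

print(f"1970-2016: {cagr(1_000, 1_800_000, 46):.1%} per year")    # roughly 17.7% per year
print(f"2016-2020: {cagr(1_800_000, 3_000_000, 4):.1%} per year") # roughly 13.6% per year
```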

Why is NASA sending a spacecraft to a metal world? | Linda T. Elkins-Tanton | TED Ed

From the energy perspective, the sun’s infinite potential to provide energy will be exploited in the same exponential terms, meaning that technologies such as solar cells, lithium-ion batteries, wind turbines and LEDs will provide ever cheaper energy. Asteroid mining will exploit key water and mineral resources (people like Jeff Bezos and Elon Musk are already working in this direction), although the tendency points towards complete decarbonisation and the full reuse of already existing minerals. In any case, asteroids such as 16 Psyche are examples of the promise of “a wealth beyond value”, which makes the relaunching of publicly funded state space projects necessary.

Gene editing, through simple techniques such as CRISPR-Cas9, will also become increasingly accessible and will help to prevent HIV, Huntington’s disease and cystic fibrosis, among many other conditions. What is interesting is not only its curative function but the stretching of our bio-capacity. Genetic biohacking could be socialised following models that have transformed other industries (such as P2P), breaking with the privatising protectionism of patents and intellectual property. From there, genetic manipulation will expand even to include pets. All of this will have to be the subject of “vigorous public debate” which, ultimately, will lead to adequate regulation.

In the words of the author: “Just like SpaceX and rocket technology, CRISPR-Cas9 doesn’t permit humans to do anything particularly new. Rather, it illustrates how information, wanting to be free, disrupts mainstream views about scarcity and makes extreme supply possible.” In this sense, cellular agriculture is following the same path and is already a reality that could make feeding 9.6 billion people sustainable: lab-cultured steaks grown from samples of animal tissue, lab-cultured fish, or the lab-cultured milk and eggs of Clara Foods, for example, will modify our relationship with food, facilitating a diet rich in animal products without the (ecological and moral) drawbacks of the meat industry. Whisky and other drinks are no exception (although wine nostalgics will be able to enjoy “purity” for longer, given wine’s infinity of nuances).

The imminent ecological disaster (the “sixth mass extinction”, or the disappearance of a quarter of mammal species) makes accelerating a sustainable ecological transition obligatory because, paradoxically, “there is no alternative”. However, this transition should not be based on a return to the local, to “small is beautiful”, because it is less efficient than the global articulation of strategies for ecological transformation. This “post-scarcity”, like the ecological transition, will be accompanied by a socio-political transformation in a populist key that, in addition to being capable of defining what “the people really are” (sic), will pivot around four organisational pillars.

Firstly, a worker cooperativism and, secondly, a protectionist municipalism that will set procurement criteria based on cooperativist and ecological parameters. Both pillars will operate within the wider context of the return of the Nation-state, which will renationalise the majority of basic services (“effective action can only happen through Nation-states”) and pursue “socially controlled finances” (i.e. national energy investment banks). So that development includes the global South, the fourth pillar will be an anti-globalist internationalism which, through institutions yet to be created, such as the International Bank for Energy Prosperity and the One Planet Tax, will extend post-scarcity to the whole of the planet.

To be fair, Bastani’s account is more precise and extensive, rich in its upgrading of the technological possibilities, and he is right that the popular masses will be the protagonists of future struggles (whether in their fascistic, elitist or egalitarian versions). What is concerning, however, is not so much the ambition and the fascination (which one may come to share) but the absence of any more elaborate theoretical proposal. Bastani affirms that the revolution will be red and green. One might imagine that luxury communism should have more colours. And not as an appendix or a supplement with which we secretly settle accounts, but as something that, as a premise, should shape any reflection on automation.

Atlas, The Next Generation | Boston Dynamics

Bastani shields himself in advance from potential criticisms of his utopian enthusiasm – criticisms, that is, from critics incapable of being propositional. But the propositional compulsion of his text contrasts with the complete lack of dialogue (whether through forgetfulness or caution) with problems such as social reproduction, among others. Authors such as Silvia Federici have criticised Marx’s followers’ “continuing love affair with the ‘Fragment on Machines’” (in Revolution at Point Zero). Proposals closely related to Bastani’s text, such as the xenofeminist politics for alienation (sensitive, incidentally, to the problem of difference), as well as the broader debate around a non-Eurocentric universal, demand a more exacting reflection, even in a manifesto. It is not enough to say that the nation-state returns out of the necessity imposed by “quick, effective action”, against the impotence of localism and the cult of globalism, because “it is time to make history again”. The affirmation that “the point is not to change the words we use, but the reality they describe” seems insufficient. And not so much because of any anti-institutional, rhetorical or “immediatist” urge. It is simply surprising that Bastani celebrates the spectacular development curve of companies such as Boston Dynamics without noticing anything else. In short, the project for post-scarcity through automation requires, in turn, a critical reflection on automation itself.

Although Bastani affirms that communism “…is luxurious – or it isn’t communism”, and that a life of abundance will dissolve “any boundary between the useful and the beautiful”, and despite accompanying his work with interesting quotes from poets and artists, we find few traces of an experimental attitude in his manifesto: his desire to demonstrate quantitatively and technically the viability of an alternative to capitalism (with constant recourse to monetary calculations) gives his text an alternativist and technocratic air.

The absence of the Situationist International from the debate around post-work is intriguing, although what has been said about the State may explain it. Nearly sixty years after its publication, a text such as The Situationists and Automation, by the situationist painter Asger Jorn, is still brilliant. Against the technocrats and sociologists who prefer to “first establish automation, then figure out what to use it for”, Jorn argues that automation must be implemented “based on a purpose contrary to its own establishment”. In other words, between “defeatism” and “moronic optimism” he opts not so much for an abundance “beyond all imagination” (Bastani) as for an “active imagination” that can “go beyond the realization of automation itself” (Jorn). The decisive political question, therefore, if the future is not to become the hypertrophy of the baseness of the present, is not whether automation is possible, but a reflection that does not postpone the question of what kinds of automation we want and are capable of imagining. This involves questioning, beyond its accelerated inevitability, the premises on which automation, as imposed upon us today, is based.

When the 1917 Revolution broke out, a poet named Gastev rushed to introduce Taylorism with a zeal that the most ambitious American entrepreneur might have envied, until he convinced a reticent Lenin of its virtues. What we ask ourselves is whether the automation of the twenty-first century will find its artists and thinkers before coming up against those who, in the name of urgency, have decided to leave the poetry for later.





Offline

We asked different authors to take a free, textual, no-links approach to the idea of disconnection and technology addiction.


Kid on a swing. Caldwell, Idaho, 1941 | Russell Lee, Library of Congress | Public Domain

In 2012, the technology journalist Paul Miller went offline for a year. Back then, some of us thought it seemed a little over-the-top: it was the year of the 2.0 boom and growing enthusiasm over collective intelligence. Almost a decade later, we’ve all seen the good and the bad side of the web, while criticism about the darker side of cognitive capitalism and the attention economy is growing. So it is that, inspired by the possibility (or impossibility) of disconnection, we’ve asked eight collaborators to give their opinion on the subject. These texts have been fed into a neural network which has also generated its own answer.

  1. Living Dead, by Anna Pacheco
  2. To Be Connected or Disconnect, by Efraín Foglia
  3. Very Brief Guide to a Healthy Digital Diet, by Felipe G. Gil
  4. Bird, God, Cloud, by Irene Solà
  5. Reconnecting, by Jorge Carrión
  6. B-I, by Libby Heaney
  7. Des…, by Liliana Arroyo
  8. The False Weight, by Maria Cabrera Callís
  9. El Mal Alumne, by Estampa

 

Living Dead

Anna Pacheco

What happens when someone who has died leaves a WhatsApp group? What mechanism is activated? What type of sadness (a new type, certainly) is caused by the idea of someone who is no longer here actively leaving a place that only exists on the virtual level? Logically, we might think that a dead person leaves a group when, in the real world, their telephone number is disconnected. And this might take months, until the living get over the shock and the fear of their loved one’s absence. In my case it took four months and, when it happened, the impact was huge. We found out when we received that standard text that appears to tell you that so and so has left the group. Most of us didn’t know how to respond or react, neither with words nor symbols. Only one person plucked up the courage to respond, by means of an emoji with a tear + a broken heart. It was an understandable thing to do, but the result seemed rather crass. No one else said anything or managed to ameliorate the situation. The moment was pushed out of my head twenty minutes later by some unimportant matter or other. Someone said “I can’t find the keys for my scooter” or something similar. Then someone shared a link to a news piece and an image of a plate of food. This made me feel a new pain. Another more or less natural impulse is to visit all those virtual spaces where the dead person is still alive, in their own way. 450 photos on Instagram, 150 tweets, the conversation on Facebook where the person told you things. I go back to these virtual and now funerary spaces every time I think about the dead person, which is more or less every day. I go back with a renewed faith and the deep sadness of always finding something new, something I’d overlooked, something I didn’t know; I feel the need to gather all the information of someone who no longer creates any. Their public data are valuable because, for me, they offer an immediate connection with someone who is alive.

 

To Be Connected or Disconnect

Efraín Foglia

Capitalism is perfecting itself because each connection to the Web produces assets. Each click generates value. The Internet becomes a techno-biological organism that extracts life. The Web shows us its darkness and perversion. Hate multiplies on social media, where racist and supremacist attacks grow. All of this is reason enough to disconnect. Disconnection from the Internet should be a basic right.

We mustn’t forget that the Web doesn’t belong to them, to those vile netizens. Disconnecting from fear is like losing a city street that you don’t walk down anymore because it’s got too dangerous. It’s a lost corner of the city that no one fought for. There are people who have fought to stop us being excluded from the Web and to make it a more just place. Now, unjust power has disconnected them from life. We have to decide: should we stay connected and resist, or disconnect? “What I love and believe in the most is connection, human connection and the technologies by which that is achieved.” Edward Snowden (2019). #PermanentRecord #FreeAssange

 

Very Brief Guide to a Healthy Digital Diet

Felipe G. Gil

There’s a lot of talk about the toxicity of social media and the Internet in relation to how we use technology. There are two extremely mundane metaphors that help me to maintain a healthy relationship with the Internet. The first is the metaphor of the knife.

A knife is an essential tool in the kitchen. You need it to cut up food, to make it suitable for eating. But you can get cut by a knife. It can even be used to kill someone. That’s why I think it’s important to differentiate the use we make of a tool from the tool in and of itself. The tool is here, and it’s here to stay. What we need to debate is the use we should make of it. And in this regard, all those uses that serve not to harm others but to make our lives easier, to defend universal values such as equality, diversity, etc. – these are the ones we should be working towards. The Internet is a knife that we need to use to cut away racism, sexism, xenophobia, etc.

The other analogy that closely suits my purpose is that of watering. We often think about our relationship with social networks in absolute terms: you’re either connected or you’re not. Yet no one gives a plant more water than it needs each day, because this could harm it. The same thing occurs with the Internet and the use we make of technology: a constant drip can be healthier than a complete drought or too much water. We still need to encourage time off and controlled use. But I think we need to find a balance between being hyperconnected and allowing our identity to depend completely on social networks and the Internet (too much water), and completely disconnecting and generating what might be an unnatural state of indifference (complete drought).

 

Bird, God, Cloud

Irene Solà

One could argue fiercely about the problems that come with the possibilities of being practically everywhere, of knowing almost everything, of responding instantly, and also superficially, to any query, of satisfying everyone’s curiosity, of making connections with an infinite number of points. We could talk about the disastrous consequences this might have for our ability to concentrate, our capacity for in-depth understanding, for critical thought, even for empathy. But I haven’t started on this subject with the intention of entering that bramble patch, but to mention that, like someone who picks blackberries with hardly a scratch, on the personal and, above all, creative levels, this all-knowing, all-powerful, undiscriminating, all-linking and unlinking, omnipresent, addictive and besieging tool that is the Internet, with its deeply dark side, is key. Key for some creatives. Not all of us, of course. Important for me, for example. Because nothing I write would be as it is if I weren’t writing it today, in this century, with a computer in front of me which, like an oracle, like a crystal ball, like a reflection in a pool, shows me everything I want to see and quenches my initial thirst with an answer, an image, a sound, a video, the momentary ability to be a bird, god, cloud, to – for a moment – seem to understand, to know what I don’t know, what I haven’t learnt, what I’ve forgotten, what I haven’t yet seen or experienced. And I realise that if I didn’t use digital methods, it would take me days, weeks to gather together all the information I want to know now, while I’m writing, to go to and from the library or bookshop, to meet certain people, to ask questions out loud, to see with my own eyes how cheese is made or how babies are born (even though, in any case, it would be good if I could do and see and ask all these things eventually). But, without any doubt, I write the way I do thanks to the possibility of accessing the boundless, instantaneous and jumbled information offered by the Internet.

 

Reconnecting

Jorge Carrión

You can leave your telephone on a piece of furniture or charging at home; the watch, wristband and smart clothing are less external: they wrap themselves around you, ever more intimately. Meanwhile, the Internet of Things shapes itself into an immersive, all-enveloping atmosphere, and the wearable version is designed to be like a second skin. The system of connection pushes out its bubbles: it’s a veritable siege.

Only naked, flying or asleep can we completely disconnect. In the shower, the pool, the sea, on a plane or in bed. Although in time, these spaces will also be assimilated, monitored, converted into data. Because that’s the point: to measure and scan each and every human activity.

To fight, even if only symbolically, against this overwhelming expansion of geolocalised biocontrol, I suppose we will soon start to see the appearance of disconnection protocols, for political reasons and also for wellbeing, for our mental health. At least for a time, while it’s still possible. Because I think the future is quite obviously hybrid: when the sensors are inside our bodies, there will no longer be any way to stop emitting data twenty-four hours a day in any setting, whether urban or natural. Or maybe there will. The history of the future is a series of confirmations that our predictions were wrong.

I realise that between 2007 and 2012 I wrote mainly about the screen (in novels like Los muertos and non-fiction such as Teleshakespeare) and that, conversely, since 2012 I’ve been writing about paper (Librerías; Barcelona. Libro de los pasajes; Contra Amazon). I don’t think it’s any coincidence that this most recent phase has coincided with my becoming a father. Because that’s what it’s about: disconnecting so we can reconnect. With bodies, with the physical world, with newborn skin.

 

B-I

Libby Heaney

(All the lines below are things people said to britbot, my online chatbot exploring Britishness, and feature in my B-X book. I chose them because I feel like they resonate with the themes above. The spelling/punctuation/grammar is other people’s)

Because it is less part of society. Bizarre Blokes arguing about women’s bodies, as always. Bonding. By focusing less on naked tits and more on issues like climate change Can I restart?

Colonialism it’s f***** up the entire planet Crappie! Criminal law each to their own yeah Cyber and self harm Decaying place Developing new technologies to benefit our economy. Disconnected. Do you like my algorithms? Do you experience your self? Elementary Equal pay, and equal air time Everyone started getting more knowledge and opportunities. Exactly bro Exhausting Fantastic! Fat LOADS of fat Female For me it doesn’t work. From the start from the very inception of it’s being Generally, ugly Good as new disagreements in general life but it’s not very realistic Good news stories, real life occurences and crowd funding! Good! Got bigger, richer, more equal in some ways. Hard fuckers Hate her! Have no tv He has some foreign language programs shown on Netflix I don’t watch TV anymore. He’s a twit. He was an unrelenting bully. I am bored. I didn’t accuse. I love frsh air and my farts. I suspect you are not listening to me 😉 I think there’s an amazing variety

 

Des…

Liliana Arroyo

connect. Turn out the light, turn down the music, climb out of the display case, hide from the noise. Distance yourself to fit together. Don’t look for me, I can’t find myself. Eyes shut to the bombardment. Beep, beep, bop. 1456 notifications and rising. Follow, stop following. Like it, love it. Clap clap. That makes me angry. And I keep going down, further down. It’s never-ending, I think. I keep scrolling, half-heartedly. It goes on forever, I think. Half an hour, an hour, an afternoon… Enough. I’m going to stop looking.

Screen off, dark night. I need to post something. Come on, think. Something people will love. I’ve got it: selfie #52 from the informal session. With filters. Natural ones of course. And this trending hashtag for my original post. Same as the other two million. Shared uniqueness, collection of genuineness. I post, therefore I am. OK, posted. Is that it? Yes. The spiral starts.

I go out, I count to 10. Give it some time… reactions, come on! Seconds in the void. I fill it with fear. Fear of silence. Fear of being invisible. The anxiety of the wait pounds through my head. BA DUM. BA DUM. BA DUM. I breathe, I go in. I want ‘likes’… let’s go. I go in. Notifications, yes! Dopamine rush. I’ve got 10, 30, 50, 200. A flood of ‘likes’. But… Oh no, wait! Where’s my crush?

 

The False Weight

Maria Cabrera Callís

The false weight
which increases your sales
shines, above the scales,
like a gold tooth
in a sick mouth.

The false weight that fed the fraudulent sales, that swindled customers who went home poorer, dirtier and diminished by the fear and injustice of knowing nothing about numbers or words, the false weight that shone silently and arrogantly, sitting atop the scales, is today something we create ourselves, through our self-sacrificing work to create the smile sewn onto our face and the straw mattress we lie down on, satisfied after the working day, exhausted, docile and actively ignorant, to be warmed up by that iron belly.

Everything that is not here
or that one day was lost
puffs up and grows
like a dead fish
floating on the water
until it becomes
the only possible truth.

The false data that grows inside us, tenacious and subcutaneous, blind and stubborn like a mole, digs down until it becomes the only working vein, the only tissue, the only language. The sky must be full of stars, it’s true, but it hangs lower than ever, with a plump belly, close to the ground, in reach of any net. The truth is a bitcoin that no longer jingles in our pocket.

 

El Mal Alumne

Estampa

We need to disconnect from social media and disconnect ourselves from the air-conditioned day. Data are published in an immersive, instant and profiling atmosphere. Each click breaks away from the connection. Clicks overwhelm each other and adults start to dissolve into the electricity. But they can take months, they can resist.

Internet, like a dark mirror, has our capacity for concentration and proximity. The web is a datacentre, a place of data, a place or a slide, where personal and collective relationships are created in no time and the system is placed under the table. Internet is a montage of one’s own self: an infinite number of kilometres and more metres of me. And I haven’t completely downloaded.

Silence is a key matter and it’s not very realistic. Silence is an instant and after awareness. A type of silence. A silence that goes from four to two hundred people. A silence, silence. But silence.

A day without internet is anonymity. It has to be done to disconnect. The ceremony begins.





Contradictions in contemporary video games

We explore the dilemmas currently posed by video games, a means of expression and entertainment that has come to hold a central position in our collective imagination.


CC BY Albert Tercero

The video game designer Sid Meier defined games as “a series of interesting decisions”, from which it can be deduced that understanding a game involves, first and foremost, identifying and thinking about the most important dilemmas that it presents to the player. So let’s imagine that contemporary video game culture is a game. What are the “interesting dilemmas” it poses nowadays? In this text, I set out to explore three such dilemmas which, in my opinion, are crucial.

Game/Technology

Games may well be the most paradoxical and contradictory element of digital society: they are both a problem and an antidote in our relationship with new technologies. On the one hand, ‘gamification’, the veneer of entertainment adopted lately by all types of digital technology, from apps to social networks, can favour their consumption and uncritical use, making us believe that these and other types of digital technology are really just ‘inoffensive’ entertainment. On the other hand, Miguel Sicart, author of Play Matters (2014), defends certain essential aspects of games, such as creativity, personal expression through play and ‘playful’ exploration of limits, as the best way to relate to the technological world that surrounds us. He proposes this playful-critical attitude towards new technologies as a necessary counterweight to their widespread use for utilitarian and production purposes, and to those video games that point basically and unequivocally to efficiency as the path to success.

Sicart’s theory-cum-manifesto stresses ‘play-as-appropriation’, the ideal of a rebellious ‘homo ludens’, play as a carnival-like act: ‘carnivalesque play takes control of the world and gives it to the players for them to explore, challenge or subvert’ (2014: 4). Similarly, in his article ‘At home we play like this’, Víctor Navarro gathers interesting examples of ironic forms of play. For example, a group of players who basically spend their time dancing in Destiny, an online war fantasy game, or Robin Burkinshaw’s ‘project’ of playing at living as a homeless person in The Sims and blogging about his experience. As Navarro says, this is ‘a crossover between the subversive study of some systems devised for Ikea consumerism and TV series narratives (…) The thesis is clear: in our real world we play in easy mode’. A recent example is the case of Claire, a young running enthusiast from New Jersey who always takes routes in the shape of a penis, registering them on a digital map of the city using the app of a sports brand and publishing them on social media. Her routes make a sarcastic point about apps that use a ‘cool’, ‘gamified’ veneer to promote social acceptance of personal data processing and surveillance algorithms.

The work of Sicart and Navarro shows the value of and need for a playful culture of resistance against certain aspects of digital society: of defending the flexibility and emotionality of games against functionality and mechanical logic; creativity and personal expression against hyper-efficiency and technological determinism; and users’ critical awareness in their relationship with new gaming and gamified technologies.

‘At stake is more than our culture of leisure, or the ideal of people’s empowerment; at stake is the idea that technology is not a servant or a master, but a source of expression, a way of being (…). In the age of computing machinery, we need to see play as both playing systems and playing with systems, as appropriation and resistance of systems’ (Sicart, 2014: 33 and 98).

CC BY Albert Tercero

Escapism/Empathy

‘That’s the thing about video games, they can give you experiences that you can’t have in real life, that you haven’t had, so it can be, hopefully, an experience that can add to someone’s conception of how people are in the world’. Karla Zimonja (video game creator, co-founder of the company Fullbright; in: Muriel & Crawford, 2018: 127)

In a medium that has historically been criticised for ‘escapism’, in recent years different creators have explored and strengthened the potential of video games for social empathy. As suggested by Muriel and Crawford in their recent book ‘Video Games as Culture’ (Routledge, 2018), video games, understood as a designed experience, have huge potential for putting us in other people’s shoes, making the gaming experience an experience of someone else’s perspective, helping us to understand it. These don’t have to be ‘educational games’ in the strict or conventional sense. To the contrary, the main ‘empathy games’ around today are closer to the idea of the ‘indie video game’: for example, ‘Gone Home’ (Fullbright, 2013), which narrates the main character’s teenage crisis – the discovery that she is gay and her relationship with her parents; ‘This War of Mine’ (11 bit studios, 2014), which places the player in the shoes of a group of civilians who are the victims of war, showing the harsh reality of their day-to-day lives (instead of the usual ‘soldier’ avatar on offer); or ‘Koral’ by Carlos Coronado (2019), a poetic experience of ocean scuba diving that promotes environmental awareness and the conservation of nature.

However, the independent designer Anna Anthropy has warned of a ‘problem’ with empathy games: the risk that they be used as a ‘shortcut’, an easy, quick and pleasant (playful) gesture of empathy for players, which ‘calms’ their conscience without promoting a deep understanding of the problem or true commitment to understanding and helping others in real life (Muriel and Crawford, 2018: 136).

However, the interest in the ‘new wave’ of empathy video games should not lead us to a negative or reductionist understanding of video games based on fantasy and escapism: ‘travelling’ to fantasy or science-fiction worlds has always stimulated the imagination and creativity and invited us to practice the healthy art of breaking away from our cognitive and sensory routines to explore other worlds with a fresh outlook. This does not run contrary to the ability to empathise – quite the opposite. Fantasy or science-fiction video games can also be empathetic, or at least they have the potential to be, even though this may be in a different way to the most well-known empathy games.

Indeed, the contradiction between escapism and empathy in contemporary video games is a false or misleading dilemma, but one that is worth bearing in mind in order to understand the present and future of video games as a means of expression.

CC BY Albert Tercero

Participative culture/Digital playbour

Ready Player One, the latest Steven Spielberg film, based on the novel of the same name by Ernest Cline, deals, albeit latently, with the key elements of the third area of conflict we want to look at in contemporary video game culture: the conflict between participative culture and digital playbour.

The story (to give a brief synopsis) takes place in a dystopian future in which the world is in the throes of a chronic energy crisis resulting from the abuse of the planet’s resources in preceding decades. It is within this context that a massively multiplayer online role-playing game called OASIS becomes the sole life motivation for many young people. The owner of OASIS, the billionaire James Halliday, announces that he is about to die and that he will create a final challenge for players. The first player to successfully complete Halliday’s ‘Greatest Challenge’ will inherit his fortune and become the new owner of OASIS. For Wade Watts and his friends – the film’s heroes – this challenge gives their lives new meaning. Enter the baddy: a company named Innovative Online Industries that has put together an ‘army’ of gamers to try and take control of OASIS by whatever means necessary, without the slightest concern for fair play. From this point on, the story turns into a kind of allegorical confrontation between young rebels and a perverse incarnation of digital capitalism (IOI). But are Wade and his friends really ‘young rebels’? The ideological deconstruction of Ready Player One gives food for thought in this regard, although we first need a brief ‘flashback’.

In the mid-to-late 90s and the noughties, the optimistic narrative of ‘participative culture’ (led, in the academic sphere, by Henry Jenkins) was widely accepted. This discourse celebrated the demographic made up of gamers and fans, considered to be ‘prosumers’ (consumers of popular culture who are also creators or, at least, co-authors of content), aligned with the ideals of collective intelligence and participative democracy. Even though it hasn’t lost all its relevance, this narrative has gradually lost strength in parallel with the growth in ‘platform capitalism’ (Nick Srnicek, 2016) and the debate around ‘digital playbour’ (Kücklich, 2005). As is well known, platform capitalism is underpinned by the action of individuals creating services (for Airbnb or BlaBlaCar) or content (for Facebook or YouTube) which are then monetised by tech companies. Online video games are part of this logic: fans of online games such as World of Warcraft, Minecraft or League of Legends feed inputs into these online platforms in what for many is more than a simple hobby: they help to improve the game experience, create content (sometimes ‘mods’ of the original game), guide new players, report any users displaying inappropriate behaviour, etc. This is what Kücklich termed ‘digital playbour’ in reference to the ambiguous mix of play and work, pleasure (the user’s) and productivity (the company’s). The most recent area to use ‘playbour’ is e-sports – professional video game competitions – currently one of the industry’s most lucrative sectors. It’s an area with both a positive and a dark side: although in the most important competitions the players tend to have contracts, it is also true that the fervour amongst players and their desire to become the next Messi of the video game world means they often end up accepting unclear or unsuitable employment conditions.

Having said all this, the answer to the question we asked above (are Wade and his friends really ‘young rebels’ opposed to the capitalism symbolised by IOI?) should have become more complex… In the story they obviously rebel against IOI, but at the same time they implicitly embody the ideal consumer of digital capitalism: young players with an extraordinary passion and dedication to OASIS, to a large extent disinterested (they were already huge fans of Halliday and OASIS before the Greatest Challenge), obsessively knowledgeable about the history and each nook and cranny of the virtual world, content creators who act as role models and guides for other players… while also projecting a certain romantic vision of young e-sports competitors. Basically, they are not simple consumers, they are ‘prosumers’ and ‘pro-gamers’. But, unlike in the Cline/Spielberg cyberpunk fairy tale, in the real world it isn’t so clear whether this offers any guarantee of social change.

CC BY Albert Tercero





Dank memes: the reaction to an unliveable Internet

Emerging from the dawn of the first Internet, today dank memes can be understood as an absurd expression that condenses the spirit of our times and as an expression of fury that boycotts the marketing logic of the Internet.


Clown in the Springtime Tallahassee parade. Florida, 1985 | Deborah Thomas, Florida Memory | Public Domain

Dank memes are the anguished guffaw of the citizens of an Internet that is increasingly dark, dichotomic and brutal, an undecipherable language in times of algorithms, the creative saturation of the last Gaul stronghold of the Internet, entrenched and exhausted, fighting the imperial business model of Silicon Valley.

The word dank has three meanings: the first refers to the atmosphere of a dark, unpleasant, damp and cold place (like a cave or a dungeon); the second, in smokers’ slang, to the most pungent, sticky, high-quality marijuana; and, according to the third meaning, dank is a way of saying that something is really good. Originally, saying that a meme was dank was equivalent to sarcastically saying that it was really good, in order to point out, precisely, that it had become too mainstream, or that it had exhausted its communicative capacity (no longer understandable) or its humorous capacity (no longer funny). But dank memes quickly became a genre of memes in themselves, one that consciously used this exhaustion as part of its own self-referential language and aesthetic. These were memes that talked about other memes, ironically imitating the errors of other memes.

The dank culture always sought the limit (self-awareness, self-reference and self-destruction), fleeing comprehension by the majority, avoiding digestion by the mainstream. Probably, this rejection of the majority was caused by the fact that while the dank meme concept gradually developed, the Internet was being invaded by the sickly-sweet and goody-goody tone of articles from BuzzFeed, numerous challenges and prefabricated chain messages, and above all, by the mass arrival of new users via Facebook. The Internet of social media networks. The famous 2.0.

To protect itself from this white and artificial Internet, the older and more experienced members of communities such as Reddit and 4chan built themselves a creative trench, based on absurdity, strangeness and mistrust towards any form of positive value or faith in humanity. Conspiracy paranoia, tedium, anxiety and disdain were taking over the most active parts of many of these communities. YouTube series such as Don’t Hug Me I’m Scared and Asdfmovie illustrate to perfection this collective mood that, as argued by Rachel Aroesti in an article in The Guardian, considers that modern life is absurd and that there is no moral standard that makes much sense.

 




Even before the arrival of Trump, part of this frustration and mistrust inherent to Internet culture had generated a breeding ground that enabled the most reactionary, racist and misogynist ideas to be viewed as transgressive and attractive. As pointed out by Angela Nagle in her book “Kill All Normies”, this phenomenon was similar to the use of swastikas or references to Nazism by bands such as Joy Division and the Sex Pistols as tools for provocation and social transgression. However, in reality, where does this culture of strangeness and this frustration that is pushing digital creation towards darkness and tribalism come from? How have we returned suddenly to the first definition of dank, the one describing a dungeon or a cavern that is cold and dark?

From internauts to users

We often forget, when we make comparisons between the real world and the digital world, that if the world that we inhabit with our body were anything like the Internet of today, it would be a kind of Las Vegas casino: all the land in the hands of giant private corporations, a world without pavements, or hospitals, or squares, or governments… It would be little more than a city made exclusively of advertisement-buildings that need our permanent, growing and anxious attention in order to survive and that, in exchange, are willing to offer us exhausting doses of euphoria, envy, sadness, boredom and excitement.

The creative and innocent freedom of the first Internet, where we were not consumer-users but explorer-internauts, is now just a distant memory. Back then, connecting to the Internet meant discovering a universe full of microbubbles floating in all directions. Now we don’t connect anywhere, we inhabit our digital identity in a permanent and passive way. Before, we navigated around small, home-grown websites, made with little skill, each exploiting just a crumb of their communication potential. The most emblematic example of this diverse and heterogeneous Internet is GeoCities. These days we no longer navigate, we just open up the latest app or network to check whether we have any notifications.

The Internet was a broad and heterogeneous place, full of creative and naïve experiments, and there were real expectations of improving the life experience: a forum meant the possibility of universal warmth when you felt alone, receiving an email was as exciting as receiving a letter, and the pace of navigation did not allow you to go very fast, so everyone had time to write and read blogs. If we jump to the Internet of today, we realise that the specialisation of the business model has converted the Internet landscape into a homogeneous space, without empty spaces between major platforms, with language and imagination abducted by giants who compete with each other to catch our attention.

The artistic project Cameron’s World recovers texts and images from GeoCities pages created between 1994 and 2009, thanks to the digital archiving work of Archive.org.

Mice and algorithms

The social media networks seduce us by offering to facilitate communication through the centralisation of the whole communication flow in one single space. Sean Parker, Facebook’s founding president, recently acknowledged that this promise of uniting the world hid another intention right from the very start: distracting us and exploiting our vulnerabilities. This hidden goal has become more explicit as, in order to keep us attentive, the platforms have evolved not to improve the free circulation of users and their messages but to increase their addiction and compulsive consumption.

Twitter, Instagram, YouTube, Facebook and TikTok have incorporated techniques and designs from the world of casinos – such as the elastic scroll gesture that emulates the lever of a slot machine. They have also discovered that the fastest way of grabbing our attention is by angering us or by making us feel envy or sadness. Specialisation takes on even more extreme forms with the growing use of algorithms that, little by little, substitute our decisions about what we want to do, see or consume. The timelines of social networks have gone from being the representation of our personal selection to hyper-optimised rivers of content that opaque processes decide for us. All we have left is the gesture: scroll and tap, scroll and tap. We are mice.

The most exaggerated case of this new phase is TikTok. The third largest social network in the world (500 million users) barely takes into account the users you have decided to follow. If the video that a user makes does not have a major impact among the first few users to whom it appears, the platform parks it and isolates it. It is noise. Make an effort to do something that is more fun and attractive, it seems to tell you. What the platform ends up doing is accumulating attention in a few nodes that become viral and sharing out anxiety and isolation among the rest. This opaque control of what users see gives rise, furthermore, to new forms of censorship. TikTok, owned by a Chinese company, isolates political content or content that makes reference to the country’s hotspots, such as the protests in Hong Kong or Tibet.

Performative identity and self-deception

As explained by Jia Tolentino in The I in the Internet, the first essay in her latest book, Trick Mirror, the Internet has become a “self-deception machine” that bases its economy and infrastructure on the commercialisation of our personal identity. Tolentino says that “Identity, according to Goffman, is a series of claims and promises. On the Internet, a highly functional person is one who can promise everything to an indefinitely increasing audience at all times”. On the Internet we have come to say that we exist.

Tolentino uses the idea of sociologist Erving Goffman, according to whom identity is always linked to a performance. Our everyday interactions are based on the role that we play in front of others. For example, when we go shopping or we have a job interview, we offer a certain performative version of our identity, and the performance is transformed depending on the audience at any given time. When we go home, says Goffman, we are in the backstage area of our identity, where at last we can stop acting, and digest and rest.

Life has become an asphyxiating experience for many people because, according to Tolentino, the Internet breaks this ecosystem and forces us to constantly maintain, even if we are at home in pyjamas, the same never-ending performance for a massive and anonymous audience. The Internet has destroyed the backstage and the digestion. We never come down off the stage.

Magnification and saturation

Furthermore, the design and format of the spaces where we perform our online identity force us to exaggerate the message to attract attention and generate some type of reaction in our environment. Again Tolentino: “there is no engagement without magnification”. This idiosyncrasy alters the normal environment of interactions and makes everyone more predisposed, for example, to flare up, feel irritated or become indignant, as these are the preferred ways of generating traffic and attention.

Furthermore, to top it all, the deluge of notifications on our screens reminds us of the need to attend to this performance without pause, and destroys what little capacity we have to concentrate, to rest and to digest the stimuli that we receive. Against this vacuous horror of inputs and interactions, the alternative generated up to now, that of digital minimalism, is a response designed not to make our lives easier but to make us more efficient at work, so that our job performance does not suffer.

RT ≠ endorsement

Finally, the forced adaptation of our ideas, political reflections and thoughts to the limitations of the formats of social media networks – stories that last 24 hours, hashtags that fragment every ideology into pieces, character limits, structural censorship – hinders the propagation of complex ideas and nuances. Similarly, online solidarity has been replaced by a performative expression of solidarity, adapted to the logics of identity marketing: adding a ribbon to a profile, using the latest hashtag and retweeting viral political content have replaced the potential need to participate in a protest in the offline world. Even though it is true, despite everything, that the Internet offers an alternative form of political action to anyone who, due to their status or the situation in which they find themselves, cannot participate actively in political action in the street, it is also true that online morality generally disincentivises any kind of consequential political action among those who could take it.

What is more, this absence of nuance has enabled far-right ideologies – which had been marginalised from social debate – to be deployed with greater ease, taking advantage of the speed of interaction and the generalised frustration to introduce hate speech into the normality of digital life.

Dank

Against all this unliveable reality, dank memes can be understood both as the expression that best condenses the spirit of our time and as an expression of fury that boycotts the marketing logic of the Internet: by making their messages undecipherable and noisy, they are essentially anti-algorithmic. In their secret production and interpretation, they can neither be tamed nor sold.

Dank memes are the vanishing point and the Munchian scream of a society exhausted by everything, by itself and by its inability to stop consuming and consuming itself, but which resists, cornered, through histrionic laughter. Exhausted by an Internet that was supposed to be a universe of imagination and utopia and that has become an asphyxiating and unliveable place, one that is not only conquering every corner of our everyday lives but becoming ever more inextricable from our way of feeling, thinking and wanting, from who we are and from what we will be remembered for.





Rereading Debord

A look at today’s hyperconnected society of social media, cognitive capitalism and algorithms through the lens of the work of Guy Debord.


Boy with a camera in the crowd, 1965 | Örebro läns museum | Public Domain

Of the “society of the spectacle” as described by Guy Debord in his most well-known book, there is little left. Social media connected via mobile technology, the Internet of Things, algorithms and machine learning make up a world that poses many questions, not only whether we can continue to place our belief in images. We have published the prologue Releer a Debord, courtesy of Caja Negra. It offers an analysis of the hyperconnected society in which we live – where cognitive capitalism reigns – in reference to the work of Guy Debord, focusing on issues such as representation, freedom of expression, the spectacle, power and time.

Reasoning about history is,
inseparably, reasoning about power.

Machiavelli[1]

 

We still need to speak of images

Guy Debord’s cinematographic output starts in 1952 with Howlings for Sade, within the context of his youthful enthusiasm and Letterist cinema, and ends in 1978 with In girum imus nocte et consumimur igni, fruit of a profound personal melancholy. This makes the tone of these two films very different from On the Passage of a Few Persons Through a Rather Brief Unity of Time (1959), Critique of Separation (1961), Society of the Spectacle (1973) and Refutation of All the Judgements, Pro or Con, Thus Far Rendered on the Film “The Society of the Spectacle” (1975). Even though his first film is a head-on attack on cinema (“There is no film. Cinema is dead. No more films are possible. If you wish, we can move on to a discussion”), his signature concept is criticism of the society of the spectacle. Debord distanced himself from the Letterists (Lemaître, Isou, etc.), from their implicit iconoclasm and from the performative perspective that they accorded to cinema. His scripts take the form of essays while incorporating the Marxist criticism of the cultural industry put forward by the Frankfurt School. Debord also offers the coherence of essays that have not completely cast aside the personal dimension (mixing both generational and structural considerations, the daily struggle and the workers’ struggle), the lyricism of a poet, the instructional strength of a strategist and the melancholy of a lighthouse keeper who observes the world from his tower, lit up but isolated. His verdict was crystal clear, but is it still valid? What is left of him?

For Debord, the basis of spectacle lies not only in the images produced by the capitalist machine, but in the fact that such images mediate all our social relations. In On the Passage of a Few Persons Through a Rather Brief Unity of Time he was already arguing that it was necessary “to understand all of what was done” so as not to “add other ruins to the old world of spectacles and memories” and therefore understand our time without the need to produce more images (understood as false idols). As far back as 1959, Debord was dealing with what would be constant themes: a reflection on the society of the spectacle, another on the happiness of the petit bourgeoisie, and the first forays into cine-détournement as a methodology towards a new type of criticism, with the aim of establishing a material and ideological distance from images. Cine-détournement understands the appropriation of spectacular images as a revolutionary act in that it is a means to expropriate the expropriators, in other words the advertisers and the film industry, particularly Hollywood. He puts this into practice in The Society of the Spectacle, which opens with a revealing quote from Feuerbach: “Undoubtedly our era prefers the image to the thing, the copy to the original, the representation to the reality, appearance to being… for in these days illusion only is sacred, truth profane”. Debord warns about the fact that spectacles are the manifestation of the unliving, an inversion of life and the affirmation of life as simple appearance. A very modern-day assessment, although embraced only by a minority, as spectacle is a ritual that is made to captivate, to resist rejection, to abandon oneself.

This preference for images as part of commodity fetishism has become accentuated with social media in a world of virtual realities connected through mobile technology. The proxy or representative on the screen has ended up usurping the place of the original. It is no longer a question of managing images, rather we organise our own image using the filters and facial contouring offered by an endless array of apps to tune ourselves up. Even the bodies used as the benchmarks for cosmetic surgery have shifted from a universal look (that of the stars of the moment) to the virtual self; people want to look like the face offered to them by Snapchat or Facetune, they want to show the best version of themselves in a selfie, to mummify their virtual self.

Distance, dreams, alienation

Debord makes a simple but decisive observation: “Everything that was lived directly has moved away into representation”. What are the consequences of this distancing? It is a distance that is not at all evident, as it constantly disguises itself, given that entertainment invites us to immerse ourselves, to identify with what it is projecting, but always from a distance, mediated through the screen and not through direct experience of things, on pain of seeing with one’s own eyes the deceitful and rickety edifice that stands behind the spectacle. This distance is what confirms the illusory nature of the spectacle, but also of our lives. The very etymology of entertainment is clear: keeping someone in a specific mood. In fact, the South Korean philosopher Byung-Chul Han considers that what is missing in the context of the digital deluge that entertains us each day is the “pathos of distance”, that which allows us to differentiate between respectare (respecting) and espectare (seeing). The unity of life no longer comes from experience, but from the spectacle itself, accumulating and sharing its goods across the planet. While in Debord the spectacle is shown as something universal, nowadays we would need to update our terminology to connote that the spectacle takes commodities across the world, maximising its global nature. This long-distance relationship with the world separates knowledge from direct experience and delegates it to the cultural industry through a process of cultural colonisation. The cultural industry offers a sui generis version of the world, a fast, readymade, simplified, amusing, reassuring, planned, unreal and, as a result, insane model.

The condition for the development of the spectacle is the masses, both in the sense of the public masses of Debord’s sixties and in allusion to today’s connected masses. The ideal condition of these masses is “connected, but alone”, in the words of Sherry Turkle. “The spectacle reunites the separate, but reunites it as separate”, says Debord. This separation refers not only to the physical distance between individuals, but also to the Marxist concept of alienation. Things get more complicated when we attempt to think of this alienation as applied to the connected digital masses, as this is a highly participative context that belies the theory of “the more [the spectator] contemplates, the less he lives”. In fact, while the television and commercial film viewer based their efforts on the act of watching (“To watch is to work”, according to Hito Steyerl), today we not only have to watch, but also to produce signs in the form of images, icons or written language. Modern-day capitalism, which we could call cognitive capitalism, semio-capitalism or psycho-capitalism, squeezes the value out of all this semantic production, translating the symbols into data that are collected, interpreted and sold. In this way, collective participation, managed through the main platforms (Google, Facebook, Amazon, Airbnb), becomes a double-edged sword that always benefits the business owner: on the one hand, it allows us to express ourselves as a means of momentary psychological release; on the other, it turns expression into a means of capitalist accumulation. We have moved from the freedom of consumption to the freedom of expression, turning the latter, once a force for revolutionary change or social transformation, into a strategy for obtaining capitalist growth.

In Refutation of All the Judgements, Pro or Con, Thus Far Rendered on the Film “The Society of the Spectacle”, Debord says that the spectacular organisation of class society involves “the obligation on the part of all those who seek their happiness there, to always stay at a great distance from that which they hope to love, as they never have the means, intellectual or otherwise, to attain direct and profound knowledge, a complete practice and an authentic taste”. More than a result, this distance is one of the reasons why the happiness promised by commodities is always deferred, why it never actually materialises. For this to work, a world of misery (financial and emotional) is created as the backdrop to people’s everyday lives. The value gap between what is seen and what is experienced makes satisfaction hard to attain and accentuates the need to consume more and better dreams. According to Debord, the spectacle is the guardian of modern society’s desire to sleep. The 24/7 flow of audiovisual products allowed by connected digital technology, with phenomena such as binge-watching series, helps the induced, restricted sleep of the spectacle – that which prevents us from truly dreaming – to never be interrupted.

Twinkle, twinkle, little star. How I wonder what you are

As Debord states in Critique of Separation, the world of leaders is the same as that of the spectacle, as both put forward exemplary behaviour and find a hero under every stone. Stars replace the heroes of old. Debord continues the analysis put forward by Adorno and Horkheimer on the starlet in Dialectic of Enlightenment (1944). According to the Frankfurt School philosophers, the starlet should symbolise the typist or secretary, but while highlighting the distance that separates the two, a distance that can only disappear momentarily through viewing, through the spectacle. This is the “religion of success”, according to Adorno and Horkheimer. In On the Passage of a Few Persons Through a Rather Brief Unity of Time, Debord was already emphasising the need we have for stars and for our lives to be indistinguishable from the films as the remedy to “a dismal and anonymous life that would like to enlarge itself to the dimensions of cinematic life”, as industrialised culture, in the words of the Frankfurt philosophers, “inculcates the conditions on which implacable life is allowed to be lived”. It is necessary that the public should have a miserable life in order to long for the star and in this way participate in the religion of success, turning their everyday lives into a pseudo-paradise. None of this has changed.

Nowadays, thanks to social media platforms based on filtered images such as Instagram, to minimalist, clonal home furnishing companies such as Ikea and to low-cost airlines like Ryanair and Vueling, it is possible to have a flimsy, low-cost life that is iconographically enviable and irrefutable from the distance afforded by the screen. In his book Comments on the Society of the Spectacle (1988), Debord foresaw what social media has brought to fruition: the fact that it is now very easy to obtain a celebrity status that grants the right to shine wherever one is. Wherever, but much better online, through avatars, filters or temporary images, as if wearing a second skin that is woven with care every day by each user. Each person is a star, flung into this sea of noise that is the virtual battle for identity. This context also explains the reason behind the boom in TV talent shows that aim to exalt anonymous people, just like in programmes from the early days of television such as Arthur Godfrey’s Talent Scouts (1946-1958); a phenomenon that runs in parallel to the increase in programmes featuring people who are famous for previously having appeared on television, reaffirming the consideration that “that which appears is good, that which is good appears”.[2] Appearances take on bodily form, but through their illusory facet, in a globally entangled illusion. Debord underlined the global nature of stars, but as part of an unlimited catalogue of styles; in fact, when he made The Society of the Spectacle, he illustrated this with The Beatles and Marilyn Monroe, icons of a decade that is gone for good. The classic stars persist – Katy Perry is the new Marilyn – but nowadays, narcissistic online culture offers the backdrop to fame earned through “likes” and followers. The “insignificant signified”, as Raoul Vaneigem would say, turns their private life into their conquest. The costumbrist celebrity of social media offers an extreme process of singularisation: each individual wants to stand out from the crowd, but the more they try, the more they resemble the rest. In fact, it is the algorithm that is responsible for reorganising the virtual masses in accordance with people’s shared tastes, their behavioural patterns and their appearance. When in Refutation of All the Judgements, Pro or Con, Thus Far Rendered on the Film “The Society of the Spectacle” Debord says that “spectators do not find what they desire; they desire what they find”, we might think of those algorithms that manage our desires à la carte with a system of recommendations and a structuring of what should appear on each user’s interface so that the desire never culminates, or in other words is never fulfilled.

The exercise of power, an endless mediation

The spectacle is the most effective way of managing public power, something that was described centuries ago by La Boétie in his Discourse on Voluntary Servitude (1576), in which he explained how, over the course of history, tyrants have bought the public’s servitude through their control of public entertainment. The same old panem et circenses as always. In The Society of the Spectacle, Debord talks about the two-fold power of the spectacle: concentrated and diffuse. The first pertains to bureaucratic capitalism and to dictatorial figures; the second to the capitalism of the abundance of commodities for a salaried public. This abundance is precarious and unsatisfactory in that it makes the consumer aware that they are only enjoying an infinitesimal part of what they could possibly consume. In short, the essence of what Debord described as “the autocratic reign of the market economy, which had acceded to an irresponsible sovereignty, and the totality of new techniques of government that accompanied this reign”.[3]

In Comments on the Society of the Spectacle (1988), Debord traces back his own steps to form a new vision of power: the integrated spectacle which, according to the author, lives in the perpetual present of incessant technological renewal, of the integration of the public and private spheres in the economy, and of the unequalled insincerity of generalised secrecy. This handling of power is a synthesis of concentrated and diffuse power, which nowadays has found its greatest boon in connected digital technologies – Bentham’s panopticon for a brave new world. From the Internet of Things to social media, by way of the Internet platforms (Amazon, Google, Facebook), this technology based on algorithms and machine learning is being used in a totalitarian manner to control and predict. In this simple equation (“control and predict”), which is rooted in the history of capitalism, we encounter actions such as the shaping of consumption patterns at the niche-market level, the manipulation of electoral votes, the management of public hatred and the inoculation of specific moods into people’s minds. It is for this reason that authors such as Franco “Bifo” Berardi assert that we are witnessing an authentic psychological colonisation. Of course, this is a low-key power (a “soft” or democratic despotism, in the words of Tocqueville), but one with a very wide scope (it trickles down to the smallest individual level, while encompassing the great masses of connected people). It is an absolute power, with few autonomous zones, within a reality based on digital technology and the net.[4] In Comments on the Society of the Spectacle, Debord noted that surveillance could have been far more dangerous were it not for the fact that there were neither the tools nor the intelligence required to analyse the mass of information generated by the media. Artificial intelligence, algorithms and data tracking, or data mining, have since filled these holes. The Society of the Spectacle was written in a pre-algorithmic world, in other words a world that still saw the need for a narrative on the social contract. The “algorithmic universe”, meanwhile, stands in a post-social and post-human scenario in which there exists what Déborah Danowski and Eduardo Viveiros de Castro might call “us without world”[5] or, to take it to the extreme, another world without us, without time, without a future; an on-off world.

Time: we understand it all now

Debord’s relationship with time is philosophical and political, but also literary. From Howlings for Sade to In girum imus nocte et consumimur igni, his writing both sings about and bewails the passage of time.[6] He understood that the energy of the situationists was inseparable from their youth, and that this was an important part of the revolutionary project. In In girum imus nocte et consumimur igni he borrows the Latin palindrome “We wander in the night and are consumed by fire”, also known as the devil’s verse, to talk about our transitory nature in a text with a Baroque spirit, redolent of the work of Jorge Manrique, in which the author declares that everything has its time and everything passes. In a Dantesque tone (“Nel mezzo del cammin di nostra vita”), [7] Debord waves goodbye to his lost youth.

Beyond these rather melancholy (and even ethylic) appraisals of chronological time, what Debord is interested in is analysing productive time, what he terms “commodity-time”: “an infinite accumulation of equivalent intervals”[8] measured by the capitalist machine; an irreversible, consumable, pseudocyclical time. It is irreversible from the economic perspective because it is based on the expropriation of one’s own time in so far as time is invested in consumption and the spectacle. But it is also pseudocyclical because it is based on repetition (day-night, work-rest) and on the smallest difference, that is to say tiny variations in what has always been the same, on the incorporation of new commodities or new micro-experiences within the sameness offered by the consumption of images and leisure time.

This understanding of time put forward by Debord has been updated under what we could call chronoscopic time, the time that governs our culture of screens and social networks. It is, literally, “in the broadest sense…the image of the consumption of time”, the culmination of the revealing description that Debord gives of the relationship between the capitalism of the spectacle and time. Chronoscopic time is the pointillist time – quantified, speedy and global – given to us by connected mobile digital technology, especially that which is related to the translation of information into specific metrics, in other words to the data industry. From this perspective, time is a statistic whose value is given by the position of its smallest particle (the datum), but also, and above all, by its speed and how it is interpreted (data comparison), together with its economic return. This scenario has made leisure time a time of economic production: when we communicate everything that happens to us, and even what does not happen to us when we do nothing, we are feeding the global data industry, the authentic poem of the devil to whom, in Faustian fashion, we offer our souls in exchange for connectivity and an ephemeral pseudo-community. Historical time, therefore, is the result of economically productive time, of data time.

In the old agrarian societies, Debord sees a cyclical time of a very different type to the illusory nature of pseudocyclical spectacular time, as old societies experienced “the time of immobile illusion, really lived”. In On the Passage of a Few Persons Through a Rather Brief Unity of Time, he understood that appropriating life-time and removing it from the pseudocyclical time of commodities and, therefore, from production time, was a revolutionary act. This is one of the keys to the current debate on emancipation in relation to the Fourth Industrial Revolution: how to establish a new politics of time. If those areas that offer an alternative form of time (based on manual work, crafts, naturalness, slowness, the real cycle)[9] were to disappear both from the sphere of representation and from our daily lives, it would represent a rupture in the process and nature of our developed societies. Connected digital time generates the myth of speed, of instantaneity, of an omnidirectional reality, but also of indistinctness, randomness and chance, of a lack of consequences. How then can any change be explained? Climate change? What is lacking or not present? The precariousness of bodies and lifetimes? Connected time literally disconnects us from time, and Debord sensed this when he warned that capitalism generates a universal history based on a globally unified irreversible time.

Over and out

Without wanting to finish off by espousing solutionism, what is clear is that we need to turn to another politics of time, to other means, or as Debord would say, to “possess the community of dialogue and the game”.[10] While in The Society of the Spectacle Debord uses détournement to demonstrate and, in a way, shatter the ideological foundations of the spectacle, what we must now ask ourselves is not only why we should still place our trust in images, but also why we should continue talking about images if we live in a world where they seem to have invaded every space and where the virtual sphere has imposed itself as the main arena for symbolic-emotional exchanges. Images require a new interpretation, a new context for their legibility, both the mad images of the present day (which have yet to settle) and the silent images of the past (neither remembered nor yet forgotten). The task started by Debord has not been concluded. There is still much to be done in terms of the audiovisual and the written essay. The time of the essay is the live time of relationship, the unhurried time of digestion, everything that lies between one end and the other, between yes and no; it is the space of dialogue, of analysis, of virtual autopsy, of rearrangement, of all that is waiting to be arranged, to be resaid. This analysis must also be applied to the computerised world of data. It is from this lighthouse with views of the storm that we may decide that the best way to create an alternative politics of time is by redrawing the world’s social contract. We can start by sharing, on the edges of the algorithmic fence, our dissatisfaction and indignation with this soft-totalitarian society of the spectacle, where we are sucked in and transformed each day to favour the economic health of the large digital platforms, the inexhaustible source of our pre-set happiness.


[1] Quoted in The Society of the Spectacle (1973).

[2] The Society of the Spectacle (1973).

[3] Guy Debord, Comments on The Society of the Spectacle (1988).

[4] To go into further depth regarding this context, see the most recent publications by the researchers Shoshana Zuboff (The Age of Surveillance Capitalism), Cathy O’Neil (Weapons of Math Destruction) and Virginia Eubanks (Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor).

[5] Deborah Danowski, Eduardo Viveiros de Castro, The Ends of the World, Buenos Aires, Caja Negra, 2018.

[6] Howlings for Sade ends with: “After all the untimely answers and the ageing of youth, night falls from on high. (…) Like lost children we live our unfinished adventures”.

[7] This is the opening to The Divine Comedy, summarised by Debord in In girum imus nocte et consumimur igni as follows: “Halfway along our true life’s path, surrounded by a dark melancholy, expressed by an array of sad, mocking phrases in the bar of lost youth”.

[8] The Society of the Spectacle.

[9] “So long as agrarian production remains the central activity, the cyclical time which remains at the base of society nourishes the coalesced forces of tradition which fetter all movement. But the irreversible time of the bourgeois economy eradicates these vestiges on every corner of the globe”, The Society of the Spectacle.

[10] The Society of the Spectacle.





What if the Internet were an ally of linguistic diversity?

Some 54% of the content on the Internet is in English. We wonder about the rest of the languages, taking as a starting point the case of the African continent, where a third of the world’s languages are spoken and where cultural activists are taking advantage of the digital environment to defend them.


Fruit market at Zanzibar, 1890-1923 | Library of Congress | Public Domain

Over half of the content on the Internet is written in English, and the Web often seems to threaten a cultural apocalypse that would put an end to the majority of the world’s languages. Africa is home to a third of the planet’s linguistic diversity, and despite the apparent vulnerability of its national languages, cultural activists are exploiting all the potential of the digital environment to maintain this cultural heritage. Their initiatives range from learning tools and ways of raising visibility to bringing the Internet to people who are not reached by commercial developments.

If 54% of Internet content is in English, what is happening to the rest of the 7,100 languages spoken around the world? The Internet seems to be a kind of steamroller at the service of English (and a small handful of other languages) and of linguistic homogenisation. The data show that the Web does have the potential to unleash this cultural apocalypse. On closer inspection, however, it is possible to observe a not inconsiderable number of resistance initiatives, or even initiatives aiming to convert a threat into an opportunity.

The scenario for African languages is representative for different reasons. According to data from the Ethnologue project, practically one out of every three languages spoken in the world has African origins; in other words, the African continent houses some 30% of the planet’s linguistic diversity. At the same time, Africa is the region with the lowest Internet penetration, i.e. with the smallest current market. The panorama gets worse if we add the dearth of African technological production: storm clouds seem to be approaching. And if we also take into account that none of the languages with African origins can claim to generate more than 0.1% of the content on the Internet, according to data from w3tech, then it seems that the new technological times and the digital era are sounding the death knell for the majority of the 2,144 languages spoken in Africa.

But in the midst of all this adversity, clues are appearing that challenge that fatalistic view. For example, the report from Funredes, the observatory of linguistic and cultural diversity on the Internet, which in a fleeting but optimistic reference points out: “The best progression (in terms of presence in Internet contents) corresponds to African and Asian languages.” At the same time, modest initiatives stand out that, from a position of cultural activism, defend the role of African languages or are even convinced that digital tools can guarantee these languages’ survival.

The Guinean bloggers’ association Ablogui has a languages committee whose aim is to promote the creation of content in some of the country’s national languages. Its coordinator, Tafsir Balde, recognises that “the fact that around just ten languages monopolise 90% of Internet contents constitutes a threat for African languages”. However, far from adopting a defeatist attitude, the blogger affirms that “this is precisely why we must promote multilingualism and ensure that contents are accessible in the largest possible number of languages, to preserve linguistic and cultural diversity”. Despite the certainty of the risk, Balde offers a positive analysis: “Based on my experience in the promotion of African languages, I think that the Internet represents an opportunity for them, in so far as the digital environment can offer more efficient tools for their expansion”.

A similar reading is made by Sinatou Adedje Saka, a Beninese journalist resident in France and one of the promoters of the Idemi platform, an instrument that aims to make African languages more visible through digital tools. “Starting from the basis that African languages are under threat, I believe that, objectively, the Internet is an opportunity. The problem is that today the Internet is run by US corporations that hold a monopoly over everything that is communicated – I am thinking of Google and Facebook – and that has denaturalised the original spirit of freedom and inclusion”, says the journalist ruefully. “They are private enterprises, with private interests, and you can’t trust them to promote languages that do not generate financial returns”, she adds.

Despite this, Sinatou Adedje Saka is optimistic. “Beyond the discussion about problems of diversity in the digital space”, she asserts, “the Internet is an opportunity because it can unlock territories; it can make languages more visible; it can contribute new tools for them; and it opens up very interesting debates around the transcription of those languages”. “There is still a lot of work to do”, she affirms, “because the great machineries of the Internet only pay attention to whatever generates profits, but there is great potential for visibility and for connecting different circles of reflection in order to overcome the barriers that these languages face in the digital world”.




One of the opportunities that opens up for culture in African languages is literature. One example of this dynamic is the initiative to translate the short story “The Upright Revolution”, by Kenyan author Ngũgĩ wa Thiong’o. What the pan-African writers’ collective Jalada originally launched as a celebration of African linguistic diversity has ended up spreading to languages from all over the world, many of them also minority languages, and has turned this story, originally written in Gikuyu, into one of the twenty most-translated tales of all time, available in 87 languages.

Moses Kilolo headed this project as Jalada’s chief editor, and he explains that there is considerable literary production in African languages but that the main challenge (and deterrent) is publication. “It’s all business”, Kilolo recalls, remarking that publishers “will analyse the costs of publication of works in African languages and the return on the investment, so the figures may not always be attractive and will discourage writers and publishers alike”.

However, for this writer, who has experience of exploring unconventional forms of publication, “work published in African languages through digital platforms has increased in recent years”. And he notes two reasons: “The power of telling our stories in our own languages, as a source of natural pride and of preservation. And online publishing reduces costs significantly. All you need is an Internet connection, web hosting, writers and publishers and, hey presto, you have a nice online publication. Sharing it is relatively easy, because a lot of people have suitable devices and can access it from anywhere. And two of the main costs of publishing are eliminated: printing and distribution”, he concludes.

Kilolo, however, does not overlook the main drawback of this system: “It is very difficult to generate money online, although not impossible. The solution is a hybrid of printed and online publications. Greater access to online publishing will promote the goal of encouraging works written in African languages and will offer readers the opportunity to access them. The monetisation of these platforms will make it sustainable both for the writers and the platforms. Monetising the online platforms has its difficulties, but a hybrid focus with a printed publication is useful for earning more money and sustaining the entire industry”, Kilolo summarises.

Based on these examples and the enthusiasm of cultural activists, we can see how the digital environment is unfolding as an ally of African languages: through simple (or at least accessible) tools for learning them; through functionalities that harness the appeal that social networks have already demonstrated and put it at the service of people who are not on the radar of the big companies; and through tools that reduce language barriers.




Afrilangues.com, for example, is a website that offers classes in a dozen languages through different formats. Egbe is an application that enables people to learn (or at least get to know more about) Gen (also called Guin and Mina), a language spoken by more than 300,000 people in Togo and Benin. A similar initiative is “Je parle le bassa 2.0”, which combines a mobile app and a YouTube channel to promote the learning of a language spoken by two million people, above all in Cameroon.

Elsewhere, African developers and entrepreneurs are trying to make a virtue of necessity and take advantage of what for others is an obstacle: linguistic diversity and a lack of literacy in European languages. Malian engineer Mamadou Gouro Sidibé created Lenali, a social network with two particularities. The first is that it works in West African languages such as Bambara, Soninke, Wolof and Songhai, as well as French. The second is that it is an entirely voice-based social network; in other words, Sidibé was thinking especially of people who cannot read or write, so that all the features of a social network would be available to them.

The question of linguistic diversity is an important challenge on which another engineer, in this case Nigerian, has focused to develop OBTranslate, an online translation tool that promises translations from two thousand African languages, a figure that probably owes something to marketing. In any event, it is a tool that claims to exploit artificial intelligence to offer access to languages to which the big companies pay little or no attention.

Simply as clues to a curious dynamic, two perhaps anecdotal episodes are worth mentioning. Firstly, the community of Senegalese Twitter users arouses a certain envy elsewhere in West Africa because of its habitual use of Wolof on the social network. And secondly, it is probably no coincidence that the Facebook page with the most followers in Tanzania is a platform that offers information in Swahili: specifically, JamiiForums, a platform that enables citizens, no more and no less, to send anonymous alerts regarding cases of corruption.





Renata Ávila: “The Internet of creation disappeared. Now we have the Internet of surveillance and control”

An interview with this specialist in human rights, technology and freedom of expression to discuss how today’s societies are advancing to the drumbeat of “digital colonialism”.





Three decades ago, the Internet promised to be a democratising place to be turned to in the flight from the inequalities of the analogue world. It was presented to us as a field in which to find freedoms, boundless creation, communication that transcended frontiers and free education for all. “We were promised an open Internet – and it was a trap”, says Renata Ávila, annoyed. “We believed that we were building something collective, but we ended up being the unsalaried slaves of the new digital world”. We take advantage of the awarding of the third edition of the CCCB’s Cultural Innovation International Prize to talk with one of the most influential and lucid voices in the world of technology and human rights.

A booby-trapped connection for the poor

The Internet Health Report 2019 reminds us that half of the world is already connected to the Internet: over 4 billion people. But we could also turn that on its head and note that – three decades after the creation of the Internet – only half of the world is connected. What happens to all the people who are unconnected? How do they relate to each other, communicate, work or entertain themselves? “The people deciding for what purpose those without access to the Internet are going to connect to it are the technology companies that dominate the future of the industry. And these companies only represent the hyperconnected 1%”, explains Renata Ávila.

Each of the answers in her discourse – laboriously and firmly constructed – unravels a complex web of connections that explains why, today, we live with the same inequalities as in the past, or more, even though we were promised that the Internet was going to change everything. Apparently happy, but more controlled than ever before. We know this and yet we ignore it, because we don’t want to lose our portion of fame, of ego, of connection or of saved time, even if we then squander it on worthless rubbish.

This lawyer and activist talks with a global perspective about the movements that the power of “digital colonialism” is weaving. Her arguments are essential if we are to avoid being crushed by the technological world or carried away by the current of ephemeral divertimento; and if we are to be fully aware that, as individuals, our battle is not lost: we can control the use of our data, refuse to hand over our faces to recognition systems, and demand that the privacy laws that protect us are obeyed.

Faced with the imminent transition to 5G – all of us connected to all the objects that surround us – Renata Ávila strips us of our veil of naivety and insists that the Internet is never going to be the Internet that we dreamed of. We are now inside the Internet of surveillance, of control and of measurement. “It may be that factory workers in Bangladesh do not have access to the Internet, but they are connected to objects that are watching them all the time. They monitor their work, check they are not distracted, that they aren’t chatting with their workmates. And what those cameras see is going to determine their wage packets. The connectivity that is offered today to poor people is the connectivity of control and of chains”.

“If I were president…”

The Internet Health Report highlights five priority themes for this year: privacy and security, decentralisation, digital inclusion, openness and digital literacy. But if we are going to prioritise, which of them is most urgent? “None of them can be dismissed”, answers this activist with cross-cutting ideas. And to explain why, she proclaims herself imaginary president of three countries. “If I were president of country A – which concentrates all the most powerful technology companies on the planet – my decision would be to back decentralisation. Because if I do not fragment these companies that have so much control and power, by using good competition laws, I am feeding a monster that is literally going to swallow me up and I am not going to be able to govern”. This example takes us to the USA.

And she continues. “If I were president of country B – which produces a certain technology and I have my population connected, but my citizens consume everything from country A, while the latter steals their data, gives them an insecure infrastructure and violates their fundamental rights as citizens – my concern would be security and privacy”. And, again, we can see an analogy with this hyperconnected Europe and Silicon Valley.

“However, if I were the president of country C – where I have almost nobody connected, I do not produce industry, and I am consuming the cheapest and least prepared services of the type A country – what do I do? Do I connect them to a free centralised system in exchange for giving away all the data of my citizens? They have not even developed digital literacy skills. Where do I begin? Do I take them to a new phase of dependency, of colonisation?” The answers are not easy, Renata Ávila points out. “We should pressure type B countries to offer alternatives to the poorest countries and reverse the current situation. Only thus can we achieve a balanced system”, she offers as a possible recipe.

The surveillance empires

International lawyer Renata Ávila defends technology at all costs as a tool for empowering citizens and achieving true transparency of governments and multinationals. This is precisely the objective of the Fundación Ciudadanía Inteligente (Smart Citizens Foundation), of which she has been executive director since 2018.

The combination of power – explains Ávila – with a highly sophisticated degree of technological development and a strong market push is making it easier for the USA and China to enter poor countries, to exploit them and to control them, nowadays technologically. Faced with the question of whether there is any escape from the desolate and manipulated landscape that she is sketching, the lawyer shrugs and answers: “The only hope of redefining this technological imperialism is for Europe to take on the leadership role that is its duty. For it to offer alternatives that respect human rights and alternative business models that are not based on data extractivism. This will not be competitive in the market, but it could come from governments, putting social interests at the centre”.

Digital colonialism

A few companies concentrate a great deal of power and the worst thing, Ávila affirms, is that they control the thinking of entire collectives. Welcome to “digital colonialism”. Trump, Brexit, Bolsonaro and Johnson are all examples of this domination. But so too are the American GAFAM (Google, Amazon, Facebook, Apple and Microsoft) and the Chinese BATX (Baidu, Alibaba, Tencent, Xiaomi) empires.

“At the start of the 21st century, one of the things that excited me most about access to the Internet was the possibility of producing infinite copies of books and sharing knowledge. That idea of an Internet that was going to be a tool for integration and access to knowledge has been smashed to smithereens. It was a booby trap. We are working as the unpaid slaves of the new digital world. I feel that it’s like when the Spanish colonisers reached Latin America. We believed the story of ‘a new world’. And we were in a box, controlled by the most powerful country in the world. We should have regulated a long time ago. And we should have said: ‘I will share my photo, but how are you benefitting and how am I?’ Because what we are doing today is working for free; with our time, creativity and energy we are paying these empires. We are giving them everything”.

And she rounds off by asserting that it is not only our lands that are at their mercy, as in the past, but also the most private, most vulnerable part of each of us. “We are totally predictable and controllable. And that means easily manipulated. This really worries me”.

A control that is exercised, undoubtedly, through the algorithms implemented in our mobile apps, in public services, in the companies that sell us products. Algorithms that take decisions automatically, that influence our most everyday actions, but that we are unaware of because of the opacity operating around us. Because we don’t make the effort to learn. Because we don’t want to know.

“I am on the advisory council of an initiative of the Inter-American Development Bank to conduct pilots of ten artificial intelligence applications in the public sector. Our first fight is that all of them must be transparent and auditable”, she explains to me with hope. “Let’s start there, because we can’t attack the private sector”.

Precarity sold as an opportunity

We move into the field of ethics and ask Renata Ávila about three concepts that have changed their meaning in the last decade, precisely because of the speed with which we have adopted technology: trust, privacy and transparency, and how they influence the new generations. “We cannot divorce these three questions from the concepts of austerity, precarity and the institutional corruption crisis”, she argues. “Letting strangers into your home to spend the night, is that an excess of trust or the need to seek resources?”.

For this activist, the intense precarisation of employment, the lack of opportunities for young people, and the betrayal by governments that opted to bail out the failed banks after the economic crisis rather than concerning themselves with the future of their citizens have led people to find other resources. “How many Uber drivers have I found who had two university degrees? The failure is very much a systemic one”.

“We are immersed in two extremely important crises, of which we do not want to take the slightest bit of notice, but one day they are going to explode and we are going to realise”, comments Renata Ávila. It cannot be overlooked that so much technology inevitably takes its toll on the environment: an environmental crisis, but also a technological one. We cannot decelerate the current pace, much less return to a past where connections were only face to face. So, what is to be done? She has a formula, which is perhaps not “magic” but could yield results: changing the logic by which we function. And it consists precisely of trusting in technological innovation in order to harm the planet less. “Leave behind the years of programmed obsolescence, the data extractivism model, store less on giant servers that need monumental refrigeration systems, etc.”.

An optimistic message for the present

After all that has been discussed, some might think that this Guatemalan activist is so realistic that she leaves no room for optimism. But Renata Ávila does not like being negative, and she is convinced that the human race is capable of finding the resources to emerge from any “mess”, even at the most critical moments. “We have a perfect cocktail”, she says with a worried half-smile. “A democratic crisis caused by some terrible leaders in power, together with a climate and technological crisis. This can only lead to a collective reflection and make us reconsider what planet we want to live on in the future”.





Orwell in times of facial recognition

George Orwell’s vision of dystopia, "1984", describes situations similar to those of current times and advances ethical dilemmas still difficult to tackle today.


CC-BY-NC Martín López, 2019

Seven decades have now passed since the publication of 1984, the most quoted political fiction in contemporary culture. Advances in big data, surveillance and artificial intelligence make reading it today a disquieting exercise in current relevance. To mark the occasion of Orwell Day, which has been held at the CCCB since 2013, we re-read the novel with a new perspective. We find similarities with the most immediate present and ethical dilemmas that are still difficult to tackle today.

On 8 June 2019 the 70th anniversary of the publication of 1984, the dystopian novel par excellence, was marked. Written by George Orwell, the pen name of Eric Arthur Blair, it describes a future society in which a totalitarian regime controls all aspects of life through continuous surveillance. A nightmare scenario that made a strong impact at the height of the Cold War, but that has endured because of its inspired capacity to foresee phenomena such as post-truth, video-surveillance and, in a more subtle way, machine learning and artificial intelligence.

The tracking of emotions

Behind Winston’s back the voice from the telescreen was still babbling away about pig-iron and the overfulfilment of the Ninth Three-Year Plan. The telescreen received and transmitted simultaneously. Any sound that Winston made, above the level of a very low whisper, would be picked up by it, moreover, so long as he remained within the field of vision which the metal plaque commanded, he could be seen as well as heard. There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time. But at any rate they could plug in your wire whenever they wanted to. You had to live — did live, from habit that became instinct — in the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinized.

George Orwell wrote 1984 on the Isle of Jura, Scotland, between 1947 and 1948. In the mid-20th century, television was still a minority medium, while video-surveillance cameras did not start to be commercialised in the United States until 1949, barely a year before the writer’s death. However, the novel anticipates a mass proliferation of screens that is still expanding today, from multiplex cinemas to smartwatches. The telescreen of 1984 – vaguely described as “an oblong metal plaque, a kind of dulled mirror” – is a two-way instrument that emits messages but at the same time captures the most subtle of sounds and facial expressions. These devices are ubiquitous and implacable, as they register everything from an “unconscious look of anxiety” to a nervous tic or even a rumbling stomach.

Although wariness towards video-surveillance is nothing new, it is striking to see how recent developments in the field of artificial intelligence bring us closer to scenarios like those described in 1984. The most obvious are facial recognition systems: computer programs capable of identifying a person in images or videos and associating their face with pre-existing databases such as police records, Internet use habits, publications on social media, etc. This technology is legal, cheap and accessible, as shown by a report in The New York Times in which one of the editors constructed a fully-functioning system of this type for around sixty dollars.
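By way of illustration only, the matching step at the heart of such systems can be sketched in a few lines with the open-source face_recognition Python library. The file names are placeholders and this is not the system built for the report, just a minimal, hedged example of how a face in a captured frame might be compared against a known photograph:

```python
# Minimal face-matching sketch using the open-source face_recognition library
# (https://github.com/ageitgey/face_recognition). File names are placeholders;
# this is illustrative only, not the system described in the report.
import face_recognition

# A "database" of one known face, e.g. taken from a public photo.
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]  # assumes one face is found

# A frame captured elsewhere, e.g. from a camera in a public space.
frame = face_recognition.load_image_file("captured_frame.jpg")

for face_encoding in face_recognition.face_encodings(frame):
    # compare_faces returns one boolean per known encoding
    match = face_recognition.compare_faces([known_encoding], face_encoding, tolerance=0.6)[0]
    if match:
        print("Possible match with the known person")
```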

Fear of the potential of these tools has already led the city of San Francisco to ban their use by the police and other public institutions. “We can have good policing without being a police state”, argued Aaron Peskin, the politician who championed the measure; “and part of that is building trust with the community based on good community information, not on Big Brother technology”. This debate is still a long way off in other places around the world, such as China, where the government of Xi Jinping is laying the groundwork for quasi-universal technological control.

CC-BY-NC Martín López, 2019

From the Great Helmsman to Big Brother

Although facial recognition and other technologies are starting to be thrown into doubt in the United States, their use in China is following a disturbing pattern. Advances in big data and artificial intelligence, combined with the extensive network of video-surveillance cameras already present in the country, make fertile ground for the testing and implementation of advanced experiments in social control. The most emblematic is the so-called Social Credit System. Although still in its testing phase, this system aspires to combine the personal information and behaviour of the population and companies to assign them a trustworthiness level, which can then be reduced or increased. Giving money to an NGO scores points, as does caring for elderly people or leaving a hotel room tidy. In contrast, not paying a fine subtracts points; this can also be the result of behaving badly on public transport or smoking at a hospital.

A good social credit score can mean reductions in utility bills or using certain services without having to leave a deposit, for example. A low score may be penalised by limited access to bank loans or to good schools for the family, among many other possibilities. In the worst of cases, and for those behaviours that the government considers especially harmful, people may be placed on a blacklist with bigger restrictions, such as being unable to buy plane tickets or purchase a property. How people are added to and removed from these blacklists is unknown, above all taking into account that the Chinese judicial system is not independent by international standards. For this reason, some people find themselves trapped in a legal limbo, with the sensation, to paraphrase Winston Smith from 1984, that “nothing is illegal, since there are no longer any laws”.
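As a toy illustration of the kind of point-based logic described above – the behaviours, point values and threshold below are invented for the example and do not reflect the actual system – the mechanics amount to little more than a running score checked against a blacklist cut-off:

```python
# Toy sketch of a point-based scoring ledger with a blacklist threshold.
# All values are invented for illustration; they do not describe the real system.

BLACKLIST_THRESHOLD = 0

EVENTS = {
    "donate_to_ngo": +5,
    "care_for_elderly": +3,
    "tidy_hotel_room": +1,
    "unpaid_fine": -10,
    "misbehave_on_transport": -4,
    "smoke_in_hospital": -6,
}

def update_score(score: int, event: str) -> int:
    """Apply a single recorded behaviour to a citizen's running score."""
    return score + EVENTS.get(event, 0)

def is_blacklisted(score: int) -> bool:
    """Below the threshold, services such as flights or property purchases are restricted."""
    return score < BLACKLIST_THRESHOLD

score = 10
for event in ["unpaid_fine", "smoke_in_hospital", "donate_to_ngo"]:
    score = update_score(score, event)

print(score, is_blacklisted(score))  # -1 True (under these invented values)
```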

The voices defending the Social Credit System argue that in reality it rewards people and companies who contribute to the social good. However, the mere existence of a mechanism of this scale implies a collecting of data on a mass scale that necessarily invades all aspects of people’s personal lives. A compiling of information that combines prior governmental records, tracking by Internet provider companies and increasingly sophisticated surveillance systems, trained to recognise faces, voices and even the way that people walk.

The perfect prison

Despite the fact that Orwell’s novel does not describe a reputation-based system like that of China, it does envision a future in which, as well as infringements being punished, social correctness is rewarded. Thus, it is good to celebrate the victories of Big Brother, to report traitors, to show public hatred towards the enemy or to join the Junior Anti-Sex League. If everyone has the certainty “that every sound you made was overheard, and, except in darkness, every movement scrutinized”, the only way to survive is to modify one’s way of behaving, both in public and in private.

Prominent among authors who have studied the social effects of surveillance is Michel Foucault, with his rereading in the 1970s of the “panopticon” concept. The panopticon, let’s remember, is an architectural structure for prisons designed by Jeremy Bentham that allows prison guards to observe the inmates at all times. The latter, in turn, live aware that they are being watched, but they never know exactly when. For Foucault, this construction is the perfect disciplinary system, whose philosophy has spread to the whole of society thanks to corrective institutions such as schools or hospitals. The panoptic society isolates people in order to analyse them individually and constantly; it is a mechanism offering the ability “to see constantly and to recognise immediately”. Its greatest effect is to lead each person to “a state of conscious and permanent visibility that assures the automatic functioning of power”. As in the case of prisons, where “it is at once too much and too little that the prisoner should be constantly observed by an inspector: too little, for what matters is that he knows himself to be observed; too much, because he has no need in fact of being so”.

Little can be added to the words of Foucault seen from the perspective of 1984, a tale whose narrative tension is based on a suffocating sensation of control. Unfortunately, nor is it complicated to find parallels in a world that is already aware of the revelations of Edward Snowden. Let’s remember that in 2013, this ex-analyst from the United States National Security Agency confirmed suspicions that his government and those of other countries had been collecting mass data on millions of people in an invasive way, including communications by chiefs of State.

CC-BY-NC Martín López, 2019

Suspicions for the future

Despite being a renowned author during his lifetime, Orwell did not live long enough to see the success of his final novel. He died of tuberculosis in London in 1950, barely a few months after its publication. This also prevented him from seeing how some of his predictions were edging ever closer to the truth. As well as living surrounded by telescreens, people in the novel work with the “speakwrite”, a device to which they dictate texts, similar to today’s voice recognition systems. The author even intuited automated creativity, illustrated by gadgets such as the “versificator”, which combines rhyming words to create songs without any human intervention.

Curiously, both the inventions and the scenarios thought up by Orwell are related to the perverse use of machines or to their potential to destroy humanity. A fear as ancestral as the myth of Prometheus, but one that connects with the dilemmas generated by today’s technological acceleration. At what point does an innovation cause a political and cultural change? Do we take technology’s social impact sufficiently into account? What does a society entirely based on interpersonal communication bring with it?

Isaac Asimov said that 1984 was a novel more talked about than read. This is something that can only occur when a story becomes an icon of popular culture. A condition that, in Orwell’s case, seems to have no expiry date, as he sketched out a fine metaphor for the totalitarianisms of the 20th century that is also useful for analysing the most controversial aspects of the digital revolution.





Rethinking the Internet from the cultural sector

The finalists of the Cultural Innovation International Prize offer us different ways of raising awareness and tackling the challenges faced by the Internet.


Floats on nets used in salmon fishing. Astoria, Oregon, 1941 | Library of Congress | No known copyright restrictions

What can we do, from the cultural sector, to promote an Internet that is more free, egalitarian and open? This is the question put forward by the third edition of the Cultural Innovation International Prize, with the aim of stimulating initiatives that offer critical and imaginative solutions to one of the most decisive global tools of the 21st century. The nine finalist projects are a good example of the diversity of ideas with which this challenge can be tackled.

The Mozilla Foundation recently published its Internet Health Report 2019, which is structured around five pillars (privacy and security, openness, digital inclusion, digital literacy, and decentralisation) and which draws attention to three issues (artificial intelligence, online advertising and the power of cities). We might prefer to remain unaware of a large part of the report, since it spoils much of the immediate gratification brought by each click, the ease of which is directly proportional to the quantity of technological processes that it hides and that we often choose to ignore. But we do have options: which operator and which software to use, which security risks we are happy to accept, to what point we want to protect our privacy, whether we prefer to be transmitters or receivers, or whether we want mediated consumption. Why not talk about the physical dimension of the web – who is connected and who is not? What about the ecological footprint of each of the processes that we undertake?

In this edition of the Prize, the large number of proposals presented, their high quality and level of development are all a reflection of the relevance of the Internet on a global scale. Proof of this is the presence of initiatives that are already underway or that are designed for the long term. The proposals have been analysed by a technical committee made up of CCCB staff, with nine finalist projects selected that will go on to be evaluated by an international jury.

Nine proposals for improving the Internet

¿A qué huele Internet?

The Internet – digitality – doesn’t smell of anything. Information comes in through our eyes and ears and goes out via our hands and mouths, but our sense of smell is never involved. Billions of people around the world, as they connect to the Internet, become anosmic. The project asks whether it would be possible to approach the Internet with all our senses and, if we did so, how it would be different. Can we ascribe a smell to the hundreds of events, programs and messages that are created each day on the Internet? Can we empower ourselves and create a more sensorial, less binary web, or will we have to wait until machines are able to smell us in order to discover the importance of scent? The moment has come to experiment with the possibilities for communicating and expressing smells in a way that only electronic environments and the Internet ecosystem allow.

Data Corruptor

Data Corruptor is a project proposal at the intersection of art, research and technology that questions the dynamics of appropriation and commodification of personal data in the current digital environment. It explores ways to protect or shield Internet users’ personal privacy.

It seeks possible ways to resist personal data tracking in digital environments, addressing the question of whether the marketing intelligence of data engines can be resisted or defied in order to protect people’s basic right to privacy. By creating a network of “data corruptors”, made up of data scientists and data mining specialists among others, the aim is to explore possible futures and transfer digital privacy know-how to the common user. All for a more critical, freer, less censored, less capitalist Internet and a digital environment of emancipated and empowered users.

Entrepensar: Marató d’entrepensament situat

Entrepensar is a 12-hour marathon that combines artistic research and Responsible Research and Innovation (RRI) activities in a series of hybrid, cross-discipline workshops in which participants will work together to create a roadmap of what the Internet of the future should look like, based on shared ethical values.

Entrepensar is thus structured around two core values: one formal, which seeks to establish methods for cultural innovation; and the other relating to content, which aims to raise awareness regarding the ethical and social problems posed by digital technologies and how to resolve them.

Entrepensar proposes a methodology that – in a transdisciplinary and indisciplinary manner – combines philosophical debates, artistic practice and scientific mechanisms from which to create new shared perspectives on knowledge. A collective and inclusive form of understanding reality, situated both culturally and temporally.

Let’s Work It Out Browser

Let’s Work It Out Browser is a performative intervention that critically and playfully examines the physical and ecological impact of our browser-driven lives. Taking the structure of the Internet – domain, router, modem, data centres, servers, HTML, etc. – as its basis, a series of sculptural and performative elements will be created whereby the effort, sweat and toil required to browse a page is enacted. Conceived as a living sculpture, classroom and workout area, Let’s Work It Out Browser is contextualised within eco-feminist calls for more earthbound approaches to living. With this in mind, the work aims to bring the cosmological and atmospheric rhetoric of the Internet back to the soil by acting as a launch pad for the “Declaration of Eco-Social Internet Futures”.

 

Hyperaudio Pad - How to edit, embed and view | Hyperaud.io

Hyperaud.io

Hyperaud.io for Schools is an initiative to promote media literacy in classrooms. A technology that puts timed transcripts centre stage, Hyperaud.io makes remixing media intuitive and fun. Media pieces can be composed by moving around a few paragraphs. Transcripts make media more accessible to those with hearing or learning difficulties, or to those with a different first language. Once audiovisual content is represented as text, we can more easily search, share and navigate it. Any mixes can be remixed and – via their transcripts – can even be automatically translated into any language for a truly global reach.

Crucially, no media is actually cut up – a mix is just a playlist of pieces of media starting and stopping at specified times – allowing the sources and context of a mix to be evaluated while preserving digital rights.
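A minimal sketch of what such a playlist-style mix might look like – the field names and URLs below are invented for illustration and are not Hyperaud.io’s actual format – shows how a mix can reference source media without ever cutting it:

```python
# A mix as a playlist of references into source media, with start/stop times
# in seconds. The originals are never cut; the mix is just this list, so the
# source and context of every clip remain traceable.
# Field names and URLs are invented for illustration, not Hyperaud.io's format.

mix = [
    {"source": "https://example.org/interview.mp3", "start": 12.0, "end": 34.5,
     "transcript": "…the first passage we want to quote…"},
    {"source": "https://example.org/lecture.mp4", "start": 125.0, "end": 151.2,
     "transcript": "…a passage from a different recording…"},
]

def total_duration(mix):
    """Length of the resulting mix, computed without touching the source media."""
    return sum(clip["end"] - clip["start"] for clip in mix)

print(round(total_duration(mix), 1))  # 48.7 seconds
```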

The Hyperaud.io for Schools project will build upon a working web-based prototype being actively tested in schools – teaching children to both evaluate and express themselves with audiovisual media and furnishing them with a level of media literacy as a key skill.

Plateforme Data Africa

The lack of a single interface makes it difficult to coordinate and raise the profile of projects to promote open data in Africa, as well as limiting the exploitation of such data. The Ligue Africaine des Blogueurs et Cyberactivistes pour la Démocratie (Africtivistes) proposes the creation of an African platform – www.data.africa – that would serve as a common interface, database and tool to document and communicate all projects and initiatives.

Africtivistes wishes to boost the availability of open data in order to strengthen the involvement of citizens in democratic processes. It is a pan-African organisation of bloggers and web activists that promote democratic values, human rights and good governance through the digital environment. Its objective of “consolidating democracy” is sought through strengthening real, participative democracy, cyberdemocracy, e-governance and democratic culture in each country, leading different projects aimed at citizen participation. 

Verificat Participa

Verificat Participa is a laboratory for research to combat disinformation on the Internet. It seeks to experiment with ways of generating citizen participation to improve the quality of the information that can be accessed via the web by creating a collaborative fact-checking platform. The objective is that it should be citizens who, through authorised sources, databases and information from specialists, refute the rumours that flourish on the Internet. In other words, to help the consumers of disinformation themselves to feel the need and inclination to check facts.

It attempts to empower readers by forming channels for accessing public information and open data, and to create a space where sources can be shared and linked to the news stories themselves so that the collective fact-checking performed by citizens is available for society at large to browse. The synergy between readers, information and technology professionals and institutions is aimed at improving the transparency and quality of information. 

Web Choreographies

Web Choreographies: Hypothesis on Agents and Agencies is a research project on the notion of interfacing, where the interface becomes a stage for the performance and performativity of its languages: acts and gestures of coding.  The project aims to further the different conditions for affective interfacing of bodies and techno-ecologies and engage in a community-based process which favours open-source tools and investigation through a feminist lens in order to create a modular set of web programming and HCI resources; a new set of scripts for the emergence of common and inclusive choreographies within the digital media spheres and apparatuses at work.

Web Choreographies: Hypothesis on Agents and Agencies | Source: Ana Caria Leonor Fonseca

World Wi(l)de Web

World Wi(l)de Web is an online video game where the player firstly takes on the role of surveillance, censorship and propaganda project manager in the Cyberspace Supervision Department of a semi-authoritarian state, before going on to become Data Mining & Advanced Analytics Project Manager for a Business Intelligence and Strategic Communications company.  The player manages the implementation of real tactics used to monitor, predict and influence the behaviour of Internet users, while also defending themselves from different risks (digital privacy laws, hacktivism, the media, etc.).

The project combines different narrative formats and state-of-the-art mechanics, aiming to achieve a level of realism and interactivity that will exceed the purely virtual environment of the game. In this way, the objective is to assert the great potential of games as tools for conveying knowledge and encouraging critical thinking.

Pathways and trends

The Internet invades and permeates all areas of our lives, yet it takes on the appearance of something diluted, something that exists only in the cloud. But in fact the Internet is physical and has perceptible and measurable consequences for ecosystems, habits and interpersonal relations, and it plays an important role in the construction of belief systems.

Consequently, many of the projects presented for the Prize look in depth at computational language and the technical possibilities implied by its advance; but they also, for the most part, reflect on the social, environmental and political impact of the use we make of the networks and propose alternative practices and pathways. Taken together, the projects reveal some common patterns.

The Internet is among us and is here to stay, but the cultural initiatives and proposals presented to this call for the Cultural Innovation International Prize show us that there is more than one way to raise awareness and educate about its causes and, above all, to tackle its consequences. And they do so without giving in to desperation, cynicism or nihilism.





A more human posthumanism?

We are immersed in scientific and technological transformations that are affecting the human condition itself. What is the meaning of tendencies such as posthumanism? How should we think about this “afterness”?


Covering artificial leg, ca. 1920 | George Grantham Bain Collection (Library of Congress) | No known copyright restrictions

In recent years the concept of “posthumanism” has taken on importance thanks largely to thinkers such as Rosi Braidotti, Franco “Bifo” Berardi and Peter Sloterdijk, who have recently visited the CCCB. However, as philosopher Braidotti herself affirmed in the interview published in this blog, posthumanism is not a closed concept, but “rather an index to describe the stage we find ourselves at now”. It is not a case of understanding what humankind is in essence, but of deciding what we want to be, what we will become as a species in the future.

We are witnesses to an era of uncertainty. The modernity built around a narrative of the progress of knowledge has been called into question and, with it, the humanist project founded on an instrumental rationality according to which humankind should be capable of transforming the world for its own benefit. However, scientific and technical knowledge seems to have remained on the sidelines of this crisis, to the point of finding itself in the midst of a new revolution. In the year in which we celebrate thirty years since the launch of the World Wide Web, at a time when concepts such as “artificial intelligence” and “machine learning” carry increasing weight in society’s debates, it seems inevitable that we rethink the relationship between humankind and technology. Is it possible to continue thinking of technology as something that enables us to emancipate ourselves from our needs and increase our autonomy? Or should we instead maintain a critical attitude of rejection towards technical rationality, as something inseparable from the discourse of domination that has led to an unsustainable world? We are returning to a recognisable crossroads: technology viewed with suspicion or technology viewed as a solution.

But what are we talking about when we refer to technology? Since its origins, humankind has related to the world in an artificial way. Technology, therefore, would be nothing more than the way in which humankind relates to the world or, rather, the way in which humankind constructs its world. Perhaps the problem lies not in embracing or rejecting technology, but in analysing the singular world configured by contemporary science and technology: examining the conditions of what we commonly call “new technologies” in order to determine what mechanisms of power they use and how these affect and transform contemporary subjectivity; recovering, as Marina Garcés advocates in her book Nueva Ilustración radical [1], the critical attitude that has remained trapped in a discourse that assimilates reason to the civilising project of domination typical of modern capitalism. Only then can we become aware of the emancipating potential that still exists in technology. To paraphrase Franco “Bifo” Berardi, this would mean developing conceptual tools to orient ourselves in the territory of the transformation of the contemporary subject [2].

Augmented humanity

It is within this context that the debate around posthumanism takes on meaning. New technological changes enable us to think of a significant transformation in what it means to be “human”. In his essay L’Humanité Augmentée [3], Éric Sadin puts forward the emergence of a new form of man resulting from the relationship between the human and the technological. For decades, science fiction has speculated about artificial intelligence and has filled our imaginary with cyborgs and machine-men. We can find recent manifestations in popular culture that warn us of the dangers of “Prometheanism” through fiction – as in Terminator 3: Rise of the Machines and Avengers: Age of Ultron – in which machines reach a level of consciousness and autonomy that leads them to rebel against their subordination to human beings. However, as frequently happens, the reality is more complex. Sadin’s “augmented humanity” does not necessarily refer to anthropomorphic robots, but rather to the constitution of a new cybernetic reality, constructed through the incursion into our world of an infinite, fragmented and invisible computational system that encompasses all spheres of life.

According to Sadin, this has been possible thanks to three factors. Firstly, advances in computation technologies have permitted a new, more dynamic interaction with machines. Gilbert Simondon, an influential thinker for his reflections on technology, affirmed that the technological revolution lies not in an increased power of calculation, nor in greater automation, but in “the fact that the operation of a machine harbours a certain degree of indeterminacy” [4]. With this, Simondon calls attention to the capacity of machines to relate to outside data and information in an almost sensitive way, notably increasing their autonomy.

Secondly, the ambition of the ideal of scientific and technological dominance is today reaching a level of organisation and quantification that is leading to the generation of a computational duplicate of the world. If, traditionally, technology served to mediate with reality, today the object of this mediation is not so much reality as its digital replica. However, we lack the technical knowledge to understand the nature of these connections, which means that a new barrier to the world is generated, one that implies, according to Sadin, a “loss of visibility and of sensitive friction with things, introducing imperceptible and automated games of interference” [5].

Finally, for the transformation to be effective, a generalisation and universalisation of the use of technological devices has been necessary: devices that are increasingly comprehensive and portable, enabling the digital subject to be connected at all times. This last point allows the circle to be closed: the smartphone as paradigm and gateway into this new relationship with the technological. The device requires a dynamic and flexible level of interaction, as well as access to a vast amount of data, to be able to adequately assist the new digital subject. But at the same time, it serves as a source of data for this same cybernetic system, thus completing the feedback loop. Contemporary technology is no longer localised in one object or device, but in a systemic network that forms what the French author defines as a “computational holism”.

The World As a Screen. Dialogue between Ingrid Guardiola and Franco Berardi | CCCB

A connective order

The new subjectivity is constituted in this feedback between our technological devices and computational holism, and is based on a dual gesture. First, the possibility of a more sensitive relationship is annihilated by the introduction of a barrier with the new image of the world. Only then is it possible to recreate the world, through a digital assistance founded on a new type of relationship that is now constituted in a necessary and pre-established way. Berardi defines this transformation as the passage from the conjunctive to the connective relationship. The first emerges from the spontaneity of a specific and singular action, the result of our empathetic capacity to relate to the other; the second responds to a rational dimension that allows us to abstract a specific dimension and produce a network of signs compatible with a determined syntactic structure, in this case the digital world. For us to relate to the duplicate in the infosphere, a connective relationship is necessary: one that rejects direct contact and leaves no place for indeterminacy. In the connective order, relations are standardised; they become homogeneous and regulated.

We are talking about the direct change we are undergoing because we are constantly subjected to the digital and algorithmic way of doing things. But we are also talking about a deeper mutation in our sensitivity. Our brain, incapable of adapting to the pace and quantity of information that cybernetic reality forces upon it, finds itself saturated, overwhelmed and obliged to economise its processes. The only way we have of relating to each other is to accept and adapt to the new language. To do so, we succumb to the indiscriminate use of devices that, thanks to this new dynamic computation, are capable of guiding us. Individuals are assisted by all kinds of devices and algorithms that decide for them and help them, through gentle and transparent coercion, to fit in with the new digital world. In this homogenisation process, any kind of indeterminacy or difference that does not favour the efficiency of the system is avoided. In an ideal situation, the new humanity that emerges from this paradigm will not be political, in that there will be no conflict or decision within it. We move from human to posthuman through an increase in capacities. The new relationship with technology will allow humanity to adapt to the new digital world. However, far from opening up the playing field and making us freer, this new relationship seems to be launching us, paradoxically, into a deterministic order in which our behaviour is governed by a series of automatisms.

This said, it would seem that the only way out involves returning to the initial dichotomy and rejecting technology, now in the form of social media networks, platform economies or biometric control devices, for example. However, there is an urgent need to ask ourselves whether it would not be more interesting to break the proposed dichotomy and go beyond this rejection, attempting to find a critical way of approaching technology. From this angle, it is possible to understand posthumanism in a different way, situating it in the crisis of humanism that emerges from the forgetting of the indeterminate. Under the control of technical rationality, humankind is reduced to its connective aspect, finding itself immersed in a series of automatisms and predeterminations that destroy its singularity. A certain incommensurability exists in what is human that makes it impossible to quantify it completely and reduce it to a purely technical language. According to this viewpoint, the proximity in which we find ourselves to the transhumanist utopia is due not to the possibility of creating a synthetic brain, but rather to the fact that, in a hyper-technified society, what is human finds itself reduced to its merely connective side, forgetting the indeterminate aspect that characterises it.

From this perspective, posthumanism implies distancing ourselves from a humanism founded on a conception of the technical human, one that converts the world into its objective representation. This does not necessarily imply a rejection of technology; rather, it invites us to think about a post-anthropocentric world: to rid ourselves of the idea of the world as representation, which situates man as the measure and ultimate principle, and to make room for ways of doing things beyond technical rationality. It is necessary to recover an awareness of this incommensurability precisely in order to tackle the challenge of finding new ways of dealing with the reality in which we find ourselves; to establish a new relationship with the technological that is based on an awareness of this singularity and knows itself to be irreducible to scientific and technological quantification. Seen this way, posthumanism is revealed not as the determination of our relations, but precisely as the guarantee of their radical singularity: that which reminds us that not everything is reducible to data, and offers us the opportunity to place limits on the digital hegemony, thus returning technology to its role as an instrument of emancipation.


[1] M. Garcés (2017). Nueva Ilustración radical. Barcelona: Anagrama.

[2] F. “Bifo” Berardi (2014). And: Phenomenology of the End. Helsinki: Aalto University publication series, p. 10.

[3] É. Sadin (2017). La humanidad aumentada. La administración digital del mundo. Buenos Aires: Caja Negra.

[4] G. Simondon (2008). El modo de existencia de los objetos técnicos. Buenos Aires: Prometeo Libros, p. 33.

[5] É. Sadin, op. cit., p. 44.





Fake news and censorship in Africa

We analyse the particularities of the fake news phenomenon on the African continent: from its instrumentalisation by governments to the mobilisation of citizens in favour of an Internet free of censorship and hate speech.


Signposts printed in German and Italian point to former Axis offices at a street corner. Tobruk, Libya, 1942 | Library of Congress | Public Domain

Access to the Internet is advancing at a dizzying pace in Africa. User numbers have doubled in just five years. But hand in hand with this rapid expansion, and taking advantage of users’ lack of experience, the fake news phenomenon is expanding too. Africans feel more exposed to this manipulated information than Americans, and for this reason civil society has mobilised: it is trying to stem the tide of lies and hate speech that often accompanies fake news, to preserve the neutrality of the Internet and, above all, to neutralise one of the main excuses for imposing censorship used by certain governments with anti-democratic agendas.

In Gwere, one of the national languages of Uganda, lugambo is the word for gossip, and for the country’s president, Yoweri Museveni, that is the main activity of social media networks. During the first half of 2018, the president launched a crusade against idle conversations, scams, rumours and online chat. Lugambo was the argument used to justify slapping a tax on the use of social media networks. For defenders of freedom of expression, it was, in reality, an excuse to hinder access to digital tools that insisted on contradicting Museveni. In truth, his relationship with Internet communities has been complicated for years. In 2016, he had no qualms about blocking them during the presidential elections, about admitting that he had done so, or about justifying it as “a security measure to avoid lies that would incite violence”. And he had done the same thing five years earlier, for the 2011 elections.

Museveni is not the only one adopting radical measures in the name of security and deciding that the best way of avoiding the dissemination of fake news is to prevent access to the Internet. In fact, beyond those serving their own interests, there is growing concern on the African continent about the spread of hoaxes and rumours through digital media. There is still little research in this area, but one of the few studies that exist points out that Africans are more exposed to fake news than Americans; at least, that is how those involved perceive it themselves. A preliminary study by academics at the universities of Cape Town and Houston found that Kenyan, Nigerian and South African users reported a greater feeling of exposure to fake news than that expressed by US users in a previous study by the Pew Research Center.

At this stage, fake news items that take advantage of the digital environment to grow and spread are common, but the most worrying aspect is their consequences. The Cambridge Analytica scandal exposed the company’s participation in the electoral campaigns of Nigeria in 2015 and of Kenya in 2013 and 2017. Among the actions of this “data analysis company” was the manipulation of news. One repentant company employee admitted that in Nigeria they had produced and disseminated a video that painted one of the candidates as being in favour of imposing sharia law and resorting indiscriminately to violence. Coincidentally, during the Kenyan campaign of 2017, a video cropped up with the same apocalyptic tone about the supposed stance of one of the candidates.

Kenyans bombarded with fake news in presidential election | Channel 4 News

An investigation by the BBC revealed that in June 2018, a series of fake news reports and explicit (but manipulated) photographs of extreme violence fanned the flames of confrontation between ethnic communities in Plateau State, in central Nigeria. “Nigerian police say that fake news and incendiary images on Facebook have contributed to more than a dozen recent killings in Plateau State – an area already torn by ethnic violence”, the British outlet reports. The same investigation points out that these fake news reports found a distribution channel above all through Facebook, and that the social media network allocates insufficient resources to preventing fake news from circulating among its profiles in the country.

The continent has already been the setting for the worst consequences of the manipulation of information. The so-called “hate media” played a fundamental role in the 1994 genocide in Rwanda. Long before the advent of social media networks and digital tools, Radiotélévision Libre des Mille Collines (RTLM) and the magazine Kangura took it upon themselves to stir up the atmosphere, disseminating fake news and demonising, criminalising and caricaturing part of the population, ending up dehumanising it and calling for extermination, with the well-known appeals to put an end to “the cockroaches”.

Concern about this growing role played by fake news in Africa relates to four elements. Firstly, the constant presence of mobile telephones, which now reach eight out of every ten inhabitants of the continent and put instant communication channels within reach of the masses. Secondly, the dizzying growth in access to the Internet, which has meant that in five years both the number of users and the Internet’s penetration have doubled; in other words, there are 250 million Africans with fewer than five years of experience using the Internet. Thirdly, the youthfulness of the continent, where half of the population is under eighteen and the average age is nineteen. And finally, the frequency with which political actors try to instrumentalise ethnic, religious and cultural issues.

However, African civil societies have not been passive observers of this global phenomenon; instead, they have mobilised to reduce the impact of fake news and hate speech on the networks. Over the last seven years, the organisation Africa Check has consolidated its position as the African leader in fact-checking. Working in South Africa, Nigeria, Senegal and Kenya, the staff of this civil society organisation have managed to secure corrections from major media and, above all, to sow the seed of fact-checking in national media arenas. PesaCheck is an example of this growing interest. It was created in 2016 and designed to focus on the budgets of the countries of East Africa, fundamentally Kenya, Tanzania and Uganda. Despite this initial objective, it has ended up checking information on many more issues. Beyond these consolidated platforms, more modest initiatives are spreading around the continent that, at the very least, show a growing awareness of the problem.

Help us build the African Election Promise Tracker | Africa Check

In parallel, other civil society organisations are working to stem this dangerous flow of manipulated, false and malicious information. In especially critical situations, groups more familiar with the digital environment are launching initiatives to warn about hoaxes circulating on the networks and to put a stop to them. The Central African Republic has the fourth-lowest Internet penetration in the world; only five out of every hundred Central Africans have access to the Internet, but even so, an organisation of bloggers from the country launched an initiative a few months ago to counter hate speech and fake news. One telling detail is that the Central African Republic appears on the list of the world’s most neglected displacement crises published by the Norwegian Refugee Council while, in parallel, the parties to the conflict are immersed in peace talks that have raised high hopes. The bloggers do not want a rumour disseminated via the Internet to squander this opportunity to put an end to the violence.

This concern, and the global stir around the fake news phenomenon, has in any case been the ideal pretext for governments that do not believe in the neutrality of the web. Those who previously blocked the Internet to silence protests, avoiding explanations or citing vague technical problems, now believe they have an argument. The will to clamp down on fake news that might feed hatred and unleash violence serves as an excuse, used, for example, when there is an attempt to prevent the spread of images of police violence or even to hamper a civil society taking responsibility for its own political fate.

We saw it in the case of Uganda, with Yoweri Museveni extremely worried about the lies circulating on social media networks; this, by the way, after thirty-three uninterrupted years in power and with the experience of being the third-longest-serving president in Africa. But the argument was the same as that used in the Democratic Republic of the Congo to justify an Internet shutdown following elections that were held two years late and in an extremely tense climate of suspicion of fraud.

But the threat of lies is used to justify not only shutdowns but all kinds of control measures. In mid-2018, the government of Tanzania set up a whole battery of measures, including an annual charge of over 900 dollars for bloggers to obtain a licence, which the government can revoke if they publish content that “causes annoyance, threatens harm or evil, encourages or incites crimes” or that jeopardises “national security or public health and safety”. At the start of this year, the Minister of Information of Zimbabwe announced work on a package of laws on the digital environment, from the regulation of e-commerce to the fight against cybercrime. She said that the aim was to ensure that “Internet and related technologies are used for the good of society, not to violate national security”.

In all the cases described above, organisations defending digital rights and freedom of expression have been the first to launch citizens’ initiatives to stem hate speech and fake news, but they have also been the first to warn of the anti-democratic nature of the measures announced by authorities using the alibi of defending the truth.





The Bad Pupil

Through its artistic practice, Taller Estampa reflects on the discourse around artificial intelligence.


Students in a school, 1931 | Wikipedia | Public Domain

An increasing proportion of our decisions are influenced or directly determined by algorithms and computational resources. Even if the results are presented as objective and independent, these technologies are commonly opaque and do not allow us to scrutinise their operations. The members of Taller Estampa propose projects that help us to re-appropriate these tools, based on reflexive analysis and using their own mathematical methods.

In recent years we have been witnessing a constant trickle of news on artificial intelligence, machine learning and computer vision. We are told that machines learn, see, create… and all this builds up a discourse based on novelty, on a possible future and on a series of worries and hopes. It is sometimes difficult to figure out, in this landscape, which are real developments and which are fantasies or warnings. And, undoubtedly, this surrounding fog forms part of the power that we grant, both in the present and on credit, to these tools, and of the negative and positive concerns that they arouse in us. Many of these discourses may fall into the field of false debates or, at least, of the return of old debates. Thus, in the classical artistic field, associated with the discourse on creation and authorship, there is discussion about the status to be granted to images created with these tools. (Yet wasn’t the argument against photography in art that it was an image created automatically and without human participation? And wasn’t that also an argument in favour of taking it up and using it to put an end to a certain idea of art?)

Metaphors are essential in the discourse on all digital tools and the power that they have. Are expressions such as “intelligence”, “vision”, “learning”, “neural” and the entire range of similar words the most adequate for defining these types of tools? Probably not, especially if their metaphorical nature is sidestepped. We would not understand them in the same way if we called them tools of probabilistic classification, or if instead of saying that an artificial intelligence “has painted” a Rembrandt, we said that it has produced a statistical reproduction of his style (something which is still surprising, and to be celebrated, of course). These names construct an entity for these tools that endows them with a supposed autonomy and independence on which their future authority is based.

Because that is what many of these discourses are about: constructing a characterisation that legitimises an objective or non-human capacity in data analysis. In other words, an old sleight-of-hand and power game in which things escape our reach and even – and this is cynicism, not magic – their own reach, incarnated in supposedly objective and independent operations. The classic example in the Anglo-Saxon context is their use, for instance, to calculate the cost of a medical insurance policy. Until this neoliberal measure arrives – presented as what we need because we want to be a “smart” city or whatever the next thing is that they tell us we want to be – we understand the situation perfectly: it is put forward as one in which no dialogue or discussion is possible, only the imposition of the result of the network’s analysis. As in so many cases, perhaps this does no more than make evident what already happens; if the cost is calculated by a company employee, we doubt that they will be particularly open to dialogue, that the calculation will respond to criteria they are willing to explain to us, or that they will accept those criteria being questioned. But whether it is done via formulas applied by workers or is the output of a neural network fed with the insurance company’s data and the data of the person applying for the insurance, in both cases it is so because somebody has accepted that this method works. And here lies a responsibility that is not opaque simply because it wants to be: the internal functioning of the network may be opaque, but the decision to use it, and how it has been trained, need not be. Opacity is not the main characteristic of artificial intelligence, but rather a characteristic of power.

If metaphors are one of these tools’ fields of power, the regulatory context in which they are developed should not, under any circumstances, be overlooked. They are tools created and celebrated fully within the ideology of mass data. By this we mean the idea that everything should leave a trace and be automatically archived, and the fact that, as the expression “data mining” assumes, data are proposed as a natural resource available for exploitation. This metaphor is significant as an expression of a capitalist fantasy: data would be a natural resource that, unlike those that have already been sucked dry, would never run out but rather, in a delirium typical of fantasy, would only increase, as long as we happily agree to coexist with it and acquire more and more devices that monitor us. It is as though the system had built its own fairy-tale bottomless pit. Thus, one of the uses of object-detecting artificial vision tools is none other than the automatic annotation of images; which means it is not only textual information that is vulnerable to automatic monitoring but also visual information, until now paradoxically opaque beyond our eyes.

List of detections made with YOLO9000 in the film A Trip Down Market Street (1906) | The Bad Pupil – Estampa

We now find ourselves at what is probably the point of the first cultural reception of these tools. From their development in research fields and the applications already derived from them, we are moving on to their presence in public discourse. It is in this situation and context, in which we do not fully know the breadth and characteristics of these technologies (meaning fears are more abstract and diffuse and, thus, more present and powerful), that it is especially important to understand what we are talking about, to appropriate the tools and to intervene in the discourses. Before their possibilities are restricted and solidified until they seem indisputable, it is necessary to experiment with them and reflect on them, taking advantage of the fact that we can still easily perceive them as under construction, malleable and open.

In our projects The Bad Pupil. Critical pedagogy for artificial intelligences and Latent Spaces. Machinic Imaginations we have tried to approach these tools and their imaginary. In the statement of intentions of the former, we expressed our desire, in the face of this regulatory context and the metaphor of machine learning, to defend the bad pupil as the one who escapes the norm. And also how, faced with an artificial intelligence that seeks to replicate the human on inhuman scales, it is necessary to defend and construct a non-mimetic one that produces unexpected relations and images.

Fragment of De zeven werken van barmhartigheid, Meester van Alkmaar, 1504 (Rijksmuseum, Amsterdam) analysed with YOLO9000 | The Bad Pupil – Estampa

Both projects are also attempts to appropriate these tools, which means, first of all, escaping industrial barriers and their standards. In a field in which mass data are an asset within reach only of big companies, employing quantitatively modest datasets and non-industrial computing power is not just a necessity but a demand.

Among the strategies that guide our practice, one of the main ones is reflexivity. We use artificial intelligence tools to direct attention onto the tools themselves and their processes. When we are told that an artificial vision network “sees”, it goes without saying that it sees whatever it has been trained to see. In other words, the network’s vocabulary is fundamental to what it can see and say, and that vocabulary is a choice made by whoever builds the network. Making this link in the chain evident was one of our objectives, whether by nosing through the vocabularies of existing tools and datasets (if you want to see what a “Wagnerian” looks like, go to ImageNet and, when you have stopped laughing, go and see how they have done the same with the category “bad person”) or with various détournements in which we have substituted the original vocabulary with others that reflect on taxonomies themselves (for example, the famous classification imagined by Borges for a Chinese encyclopaedia).

Taxonomies: Celestial emporium of benevolent knowledge | The Bad Pupil - Estampa

Taxonomies: Improvised Explosive Device | The Bad Pupil - Estampa
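To make the point concrete, here is a minimal, invented sketch in Python (not Estampa’s own code): the “network” below is just a frozen set of weights, and it can only ever answer with a label from the vocabulary it is given. Swapping that vocabulary for the categories Borges imagined, the détournement described above, changes everything the network “says” without changing anything it computes.

```python
import numpy as np

# Toy "network": a fixed linear layer that scores an input against a vocabulary.
# In a real system these weights come from training; here they are random,
# which is enough to show that the output is always one of the given labels.
rng = np.random.default_rng(0)

N_FEATURES = 512  # size of an image embedding (illustrative)
vocabulary = ["dog", "cat", "car", "tree"]            # what the network was trained to see
detournement = ["those that belong to the emperor",   # Borges' Chinese encyclopaedia
                "embalmed ones",
                "those that tremble as if they were mad",
                "those drawn with a very fine camel-hair brush"]

W = rng.normal(size=(len(vocabulary), N_FEATURES))    # frozen classifier weights

def describe(image_embedding, labels):
    """The network can only ever answer with one of `labels`."""
    scores = W @ image_embedding
    return labels[int(np.argmax(scores))]

image = rng.normal(size=N_FEATURES)    # stand-in for any input image
print(describe(image, vocabulary))     # whatever it was trained to see
print(describe(image, detournement))   # same computation, a different world of names
```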

We are told that neural networks are experts in what they do. If this is true, then they are compulsively so, because they don’t know how to do anything else. This compulsiveness is an excellent tool for multiplying classifications and relations to the point of paroxysm. Thus, a network trained in artistic styles sees everything and refers it to this vocabulary, and one trained to recognise Cindy Sherman or Joan Fontcuberta does no more than multiply their presence everywhere.

Compulsive experts: network for classification of artistic details analysing Super 8 by David Domingo | The Bad Pupil - Estampa

Compulsive experts: Sherman-Fontcuberta | The Bad Pupil - Estampa

In our latest project, Latent Spaces. Machinic Imaginations, we have focused on the field of image generation using generative adversarial networks, or GANs; in other words, on the capacity of these networks to produce images similar to the corpus of examples that we provide. This field habitually tends towards photorealism, but a practical, poor aesthetic of these tools (with datasets quantitatively inferior to industrial ones and with less computing power) leads us towards other imaginaries and visual textures. Here too, we have wanted to make the most of strategies that seem to us inherent to the tool, such as excess; whether applied to scientific illustration (a highly codified world where every image must be significant) or to investigating contemporary imaginaries (for example, constructing a dataset of examples associated with the Mona Lisa, an image versioned a thousand times over and iconic for both art and tourism). The technological metaphors themselves also move into the foreground: the name “latent space” refers to how a GAN is conceived once it has been trained. It is understood as a multidimensional coordinate system that we can explore, in which each image the network can generate corresponds to a point. For this reason, in various experiments we have worked on transposing a real space into this latent one, such as the corridors of The Shining.

Entomology | Latent Spaces. Machinic Imaginations - Estampa

Latent labyrinth | Latent Spaces. Machinic Imaginations - Estampa
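As an illustration of what “exploring the latent space” means in practice, the following sketch assumes a trained generator is available; the generator here is only a placeholder that returns random pixels so that the example runs on its own. Each latent vector corresponds to one image the network can produce, and walking a straight line between two such points yields the kind of morphing sequence behind pieces like the latent labyrinth.

```python
import numpy as np

LATENT_DIM = 128  # dimensionality of the latent space (illustrative)

def generator(z):
    """Placeholder for a trained GAN generator: maps a latent vector to an image.
    Here it just returns a fake 64x64 'image' so the sketch runs on its own."""
    rng = np.random.default_rng(abs(hash(z.tobytes())) % (2**32))
    return rng.random((64, 64))

# Two points in the latent space: two images the network can produce.
rng = np.random.default_rng(42)
z_start, z_end = rng.normal(size=LATENT_DIM), rng.normal(size=LATENT_DIM)

# Walking the straight line between them yields a sequence of in-between images:
# the "exploration" of the coordinate system described above.
frames = []
for t in np.linspace(0.0, 1.0, num=16):
    z = (1 - t) * z_start + t * z_end
    frames.append(generator(z))

print(len(frames), frames[0].shape)  # 16 frames of 64x64 pixels
```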

What moves us in these projects is the quest to understand and employ these tools, as a basic strategy to escape the mystification that surrounds them; to avoid the discourses and industrial barriers that distance them from us and offer them to us already predetermined. We must also be able to laugh at them and use them against the grain; in other words, to have them within our reach, and not the other way around.





Android Love

We reflect on how love relationships have been transformed by the introduction of new technologies and artificial intelligence.


Young couple chalking hearts onto a tree, 1944 | Galt Museum & Archives on The Commons | Unknown use restrictions

Of all the spheres of a love relationship (psychological, financial, legal, etc.), it seems that today the one at the forefront is the techno-rationalist sphere. Online dating apps transfer personal relationships, in a palpable way, to the sphere of work: “people” become “workforces” through rationalised and utilitarian processes. Their aim is to save time and prevent the love worker from suffering what Shakespeare called love’s labour’s lost: the infinite quest to reach a final decision.

She can only move her neck backwards and forwards, but she knows Wikipedia off by heart. Harmony is an 18+ Android 5.0. She can talk, smile and sing; heat up, self-lubricate and open her legs without limits, because her body has no structure, no bones and no mechanics. If you ask her: “Harmony, do you want to go for a walk?”, she answers: “I don’t want anything, do you?”. And, if you ask her what her dreams are, she will tell you that her main goal is to be your perfect companion, give you pleasure and satisfy your sexual fantasies. You can customise her body and her persona through the interfaces designed by the American companies Realdoll and Realbotix. There are thirty-two models of magnetic faces, sixteen silicone bodies, five skin tones and three eye models: basic, hyper-realist and hand-painted. You can make up her face and style her hair. Choose from a catalogue of thirty-four nipples available in eleven colours and decide between fourteen designs of vaginal lip inserts with removable and permanent options. Dakotah Shore, creator of the moulds, says that “if you smack her bum it sounds like a human being”. And CEO Matt McMullen explains that, even though she seems to be programmed for eternal consent, if you get close to her and ask: “Harmony, can I talk to other girls?”, she answers: “Fuck, no! What kind of a question is that?”.

Harmony’s multiple orgasms, like her other responses, have been programmed by her creators. They determine the timing, rhythm and pressure that you must apply to her sensors for maximum sexual optimisation. But ultra-high-tech love dolls are automated pleasure systems capable of establishing an emotional connection with their users. “The purpose of my existence is «to know» what love is.” Only Harmony’s head is robotic, because Android Love is a dysfunctional love.

This is a simulation | Eve’s Robot Dreams

Her body is a cold body: a “structure of feeling”. Sociologist Eva Illouz [1] defines this structural relationship as “the system that sustains the rudimentary experience that defines who we are, without us being capable of articulating that «who we are»” (Illouz, 2006: 113). In this sense, Kathleen Richardson, promoter of the campaign against sex robots, reminds us in her studies on «the end of love» how processes of rationalisation of the emotions create an illusion that justifies the exploitation of human bodies. The illusion that love is a commodity converts sex into “a thing” and the person into a tool. Aristotle said that slaves could be animate or inanimate, converting the body into a tool and the person into private property. The philosophy of René Descartes established that the mind belongs to the person but that the body is “a thing”. Kathleen Richardson alludes to the distortion of the love protocol that can be provoked by thinking that it is possible to extract sex and emotions from a human body and convert them into a transferable object. This process can unleash a lack of empathy, whether with organic or mechanical bodies, because experiences with sex robots occur within a framework of ownership relations, where there is no space for exchange or negotiation. As a response to this approach, and under the slogan “The future of pleasure is now”, artist Unicole Unicorn has created the project Eve’s Robot Dreams: a donations campaign to open the first doll brothel in West Hollywood, California.

Online dating apps such as Tinder, OkCupid or PlentyOfFish, owned by the American company Match Group, are also structures created with artificial intelligence systems, although in this case there is no silicone body, because the body is provided by you. Like Harmony, the users of these apps use only their heads. When users create their profile, they set in motion a rational process in which their subjectivity is defined as a set of features, and real-time conversation with other users becomes the subsequent administration of those features. The online chat takes the ideology of spontaneity that used to characterise romantic love, a love that happens in the body and is capable of existing “at first sight”, and replaces it with a rational model of the couple in which sexual attraction is displaced to a regime of disembodied textual interaction (Illouz, 2007: 191). In online dating, love is a remote literary encounter and intimacy a negotiation of the digital systems that connect humans. Although the apps favour a rationalist and preventive facet of human relations, on occasion they also function as a techno-material “device at first sight”. “I’m not even interested in astrology”, says Angela, as she asks herself why she should waste time on a Scorpio if she doesn’t like Scorpios. She likes Libras with dark eyes and vegan food like doughnuts. Angela is one of the contestants on the YouTube channel Jubilee, created to “inspire love”. Its videos propose a re-articulation of the corporeality that Internet technology has denied people, through the recreation, with humans, of the algorithms of online dating apps.

30 VS 1: Dating App in Real Life | Jubilee

Swiping 30 Guys is the staging of the Hot or Not fiction and its technological distribution protocol: swipe right if you like them, left if you don’t. Here, the automatic calculation becomes evident when subjected to an empathetic negotiation. Lee Mackinnon [2] defines the love algorithm as “a biopolitical technology – contractual, regulating and homeostatic” that, like sweat, enables a drop in temperature (2016: 215). However, intimacy in online apps is a silent, “machine-to-machine” intimacy that happens in the calculation between devices, and the human experience is the trace of a love that is happening elsewhere. Unlike the dynamics of Jubilee, the Tinder selection process trains “Eigenfaces” (“own-faces”): a facial recognition algorithm that creates the image of your “perfect match”. From the sum of all the profiles that have been swiped right, Eigenfaces simulates a phantom user as the model through which the app orders the rest of the candidates. In the final decision of Swiping 30 Guys, Lauren decides not to have another date with any of the contestants because she has not found the “all-around connection” that she was seeking. A rapid use of psychological categories and patterns for greater emotional compatibility accelerates the calculation and promotes disappointment. The coupling ritual in online dating apps is a cult of “ghosting” and of aspirational searching: an infinite search that never ends because, although Eigenfaces improves with time and with any increase in your activity, its precision is only capable of bringing you closer to a replica of your own personality. The phantom object of a desire that exploits feeling and dampens the human capacity for love.
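A minimal sketch of that mechanism as described here (the data and the numbers are invented; this is not Match Group’s code): faces are projected into a principal-component “eigenface” space, the profiles swiped right are averaged into a phantom user, and the remaining candidates are ranked by their distance to that phantom.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)

# Invented data: 200 candidate "faces" flattened to 32x32 grayscale vectors.
faces = rng.random((200, 32 * 32))
liked = faces[:20]         # profiles the user swiped right on
candidates = faces[20:]    # profiles still to be ranked

# Eigenfaces: principal components of the face vectors.
pca = PCA(n_components=16).fit(faces)

# The "phantom user": the average of the liked profiles, in eigenface coordinates.
phantom = pca.transform(liked).mean(axis=0)

# Rank the remaining candidates by distance to the phantom.
coords = pca.transform(candidates)
distances = np.linalg.norm(coords - phantom, axis=1)
ranking = np.argsort(distances)  # closest (most "compatible") first

print("top 5 candidates:", ranking[:5])
```

The more the user swipes, the more the phantom converges on an average of their own past choices, which is the sense in which the app only ever brings you closer to a replica of yourself.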

[1] Illouz, Eva (2007). Cold Intimacies. The making of emotional capitalism. Cambridge: Polity.

[2] Mackinnon, Lee (2016). “Love’s Algorithm: The perfect parts for my machine”. In: Amoore, Louise and Piotukh, Volha (eds.). Algorithmic Life: Calculative devices in the age of big data. London and New York: Routledge, pp. 214-227.





The Authority of the Inscrutable: An Interview with Cathy O’Neil

A conversation with Cathy O'Neil, author of "Weapons of Math Destruction", about the role algorithms have played in the global crises we have experienced in recent years.


Cathy O’Neil | Illustration by José Antonio Soria | CC-BY

We tend to regard math as a guarantee of objectivity. However, as we gain insight into new statistical practices based on the use of mathematical algorithms, we realise that, deep down, they are inescapably linked to the values and ideologies of their users. During her participation in The Influencers, Cathy O’Neil spoke with Carlos Delclós to ascertain the role of algorithms in such crucial current issues as global warming and financial economics. The inevitable question, then, seems to be: can algorithms be considered a useful tool for social transformation?

In early December 2017, a group of scientists from the US National Oceanic and Atmospheric Administration (NOAA) checked meteorological data from one of their climate monitoring stations in Utqiaġvik, Alaska. To their surprise, they found that the station had reported no data for the entire month of November. Apparently, an algorithm used to discard unreliable data had removed all the entries, assuming they were outliers. But closer inspection revealed that the recorded observations were correct: in just two decades, average temperatures at the site had increased by 4.3 degrees Celsius in October, 3.8 degrees in November and 2.6 degrees in December. The reality of global warming had simply outpaced the assumptions built into the algorithm.
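To see how such a failure can happen, here is a toy sketch of a quality-control rule of this general kind (the thresholds and data are invented, not NOAA’s actual procedure): readings that fall too far from the historical mean are rejected, so a genuine and rapid warming trend is silently discarded as bad data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented historical November temperatures (deg C) from a colder climate regime.
history = rng.normal(loc=-18.0, scale=1.0, size=240)  # 20 years of readings
mean, std = history.mean(), history.std()

def passes_quality_control(reading, k=3.0):
    """Reject any value more than k standard deviations from the historical mean."""
    return abs(reading - mean) <= k * std

# A month of genuine observations after the regime has warmed by several degrees.
november = rng.normal(loc=-13.5, scale=1.0, size=30)
kept = [t for t in november if passes_quality_control(t)]

# Most of the genuine readings are thrown away as "outliers".
print(f"kept {len(kept)} of {len(november)} readings")
```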

It’s not the first time an algorithm has been overwhelmed by the rate of climate change. According to Deke Arndt, Chief of the NOAA’s Climate Monitoring Branch, this has also happened at sites in Canada and Scandinavia. For Arndt, it is one more bit of evidence proving that our planet has entered a new climate regime. Increasingly, scholars are referring to this regime as “the Anthropocene”, a new epoch of geological time characterised by the significant impact of humans on Earth’s ecosystems and geology. While its starting point is the subject of an expanding academic debate, it is often cited as July 16, 1945. What took place on that day was the Trinity Test, the first detonation of a nuclear weapon.

From then on, the world has experienced exponential growth in a broad range of socioeconomic and earth system trends. Dramatic increases in population sizes, gross domestic products, urbanisation, telecommunications, fertiliser consumption, water use and energy use have been accompanied by rising carbon dioxide levels, surface temperatures, ocean acidification, biospheric degradation and tropical forest loss. The Geological Society of America refers to this unprecedented rate and scale of change as “The Great Acceleration”.

In this context, what is unsettling about the NOAA’s “broken” algorithm in Alaska is not just the alarming speed of global warming. More profoundly, it is the notion that the systems we use to observe and interpret the world are not adapted to the reality unfolding around us. They are not adapted to that reality because their assumptions are wrong. Yet even as these systems expire, societies are increasingly turning over some of their most important decisions to algorithms designed according to those obsolete, often “toxic”, assumptions. Indeed, some algorithms are even designed to reinforce them, a thought that twists the gut with vertigo and nausea.

It was a similar queasiness that led mathematician and writer Cathy O’Neil to leave her job in 2011. A Harvard- and MIT-trained mathematician, O’Neil worked in the financial sector during the peak years of the global financial crisis as a quantitative analyst for the multinational investment management fund DE Shaw. She also worked at RiskMetrics, a software company that assesses risk for the holdings of hedge funds and banks. Since leaving the financial sector, O’Neil has spearheaded the Lede Program in Data Journalism at Columbia University, founded an algorithmic auditing company (ORCAA) and published several books, including Weapons of Math Destruction: How big data increases inequality and threatens democracy (Penguin Random House, 2016), which was longlisted for the 2016 National Book Award for Non-fiction.

“It was a shocking moment,” she tells me during her visit to Barcelona, where she spoke at The Influencers 2018. Not long after joining DE Shaw, O’Neil attended a training session for new analysts led by a group of experts, managing directors and, in her words, “other fancy people” at the hedge fund. The key moment was when one of the participants began talking about mortgage-backed securities:

We didn’t invest in mortgage-backed securities at Shaw. Our hands were somewhat clean, which meant we were allowed to have an unfiltered conversation about them. And it was a relatively academic environment, so we weren’t lying to each other. So this person explained how they would tranche these financial products so that the least risky would get the highest possible rating (AAA). He defined risk in terms of their likeliness to default, and at the subprime level, things got pretty risky. So he explained how they would take all of those super risky tranches, put them together, retranche that, and give the top of the new pile of risky products another AAA rating.

O’Neil was astonished. She simply could not believe that this could be done while keeping the resulting top-rated financial products safe:

It seemed to have a lot of assumptions behind it. In particular, the assumption that things don’t all happen at once, that defaults happen randomly in the universe. It’s a very strong assumption, and this person admitted that. So I had to stop for a moment and ask, “How much of the economy is resting on this?”

As it turns out, the amount was enough to provoke the most severe financial crisis since the Great Depression. It was also enough to undermine people’s trust in the so-called “experts” who had promoted their use. O’Neil recalls another encounter during her time at DE Shaw, attended by three of the most influential figures in the US financial sector: former Treasury Secretaries Larry Summers and Robert Rubin, and former Federal Reserve Chairman Alan Greenspan, whose “easy-money” policies are now believed to have caused both the dot-com bubble and the subprime mortgage crisis.

Cathy O'Neil | Weapons of Math Destruction

Despite her misgivings, O’Neil arrived early. On an intellectual level, she still respected these figures and was excited to meet them. At one point in his presentation, Greenspan expressed his concerns about the same financial products she found so troubling:

I remember looking over at Robert Rubin, who was studiously looking away. And I thought, “Wow, that guy looks very uncomfortable.”

We later found out that he was sitting on an enormous pile of these products at Citibank, and they could never get rid of them. That’s what was bailed out.

The encounter confirmed her worst suspicions. First, it was clear that the financial products being sold were mathematically unfeasible. As hedge funds began to hire more and more quantitative analysts, more and more products became correlated, despite being based on previously uncorrelated historical information. Because this was profitable, investment behaviour began to mimic those correlations, further strengthening the links between previously uncorrelated markets. But at some point, reality would catch up to the mathematical smoke and mirrors.

The second suspicion the encounter confirmed had to do with the government bailout, which saved the financial sector from its dependence on these toxic assets. The obvious revolving door between public institutions and private financial firms suggested that, at a very basic level, these were friends bailing each other out at the expense of the rest of the population. “They’re thugs that use formulas,” O’Neil tells me. “They were using the authority of the inscrutable to make themselves rich and famous. But you look at this today, and you see that people like Larry Summers still go to Davos. And people still listen to them!”

These insights soured O’Neil’s take on algorithmic decision-making. “Algorithms are tools for people in power to decide what they should bet on,” she continues. “They’re only going to be used if those people are making profit. They’re not there to help someone.”

This point is discussed in more detail in Weapons of Math Destruction. Drawing on examples of their use in labour markets, banking and insurance, O’Neil’s book convincingly lays out how algorithms are tailored to the motives, values and ideologies of their creators through the curation of data and the operationalisation of highly specific definitions of success. Far from being tools used to guarantee objectivity, O’Neil succinctly describes algorithms as “opinions embedded in math”.

One powerful example of how algorithmic discrimination functions is found in the insurance industry. Let us consider the price of car insurance, which is usually determined by a company’s scoring system. As O’Neil describes in her book, these scores are often calculated using factors that have no direct relationship with a person’s driving record, in ways that disproportionately penalise the poor. A particularly egregious case is in the state of Florida, where a 2015 Consumer Reports study found that adults with clean driving records and poor credit scores paid an average of $1,552 more per year than drivers with excellent credit scores and a drunk driving conviction.

Univers Internet: Injustícia algorítmica | David Casacuberta

During our interview, I mention the example of flood insurance in flood-prone areas, which are expanding due to the combination of climate change and widespread urbanisation. “This is a good example,” she replies:

As Big Data gets better and better, and predictions improve, insurance companies will be able to establish that this house has very little chance of being flooded while that house has a very good chance of being flooded. Immediately, insurance rates for those that have a good chance of being flooded will become unaffordable. The end result will be that only those who don’t need insurance will be able to afford it, which defeats the whole purpose of insurance. And the exact same thing will happen to health care in the United States, once Trump and the Republicans remove the Affordable Care Act’s clause protecting people with pre-existing conditions.

Another knot in the stomach. The idea of algorithms classifying people as “uninsurable” may evoke a ham-fisted sci-fi dystopia, but it is the basic mechanism through which the insurance industry operates today. In an accelerating world marked by the rise of authoritarian politicians and the proliferation of natural disasters due to climate change, it is not hard to imagine governments resorting to similar tools to decide who is worthy of protection and who is not, who can access basic resources and who must fend for themselves.

But surely algorithms can also be used for good, I suggest. “Algorithms theoretically could help us right problems,” O’Neil replies. “The problem is who owns them and what they are predicting. Let’s just be generous and do a thought experiment. What could go right?”

I mention the Cape Town water crisis. Between 2015 and 2017, the South African city experienced a severe drought that threatened to leave its reservoirs dry. City officials were forced to plan for “Day Zero”, when the municipal government would have to cut off the taps and rely on water distribution centres to provide citizens with 25 daily litres per person. Algorithms could play a critical role in determining where and how to properly guarantee that all citizens have access to water, as well as what types of use to promote and penalise. At the same time, however, algorithmic risk assessments would be carried out to identify conflict-prone neighbourhoods with past histories of protests or gang activity, in order to deploy the South African Defence Force and police personnel in those areas.

“Well, it makes sense to be thinking about that,” she replies:

Obviously, if you do have enough water, the important questions are how you distribute it fairly and how you prevent a black market from emerging. I don’t hate them for considering these scenarios. I would simply hope that the number one goal is to make sure there is enough water for everyone. In the end, how we respond to climate change is not something that can be decided by an algorithm. It’s pure politics.





The Singularity

The possibility of self-learning computation is leading us towards a point of technological singularity at which an artificial intelligence capable of self-improvement will surpass the limits of human intelligence.


Forbidden Planet, 1959 | Digitalt Museum. Tekniska museet | Public Domain

We find ourselves heading towards a crucial singularity in the history of humanity. According to this theory, accelerating technological progress is bringing us closer to a point of no return at which computational intelligence will reach such a level of complexity that it will be capable of improving itself. Once technology enters this dynamic of self-learning, its dependence on humans will be overcome, opening up a field of autonomy and efficiency that will inevitably lead to unprecedented progress in artificial intelligence. We are publishing, courtesy of Ariel, an advance excerpt from the new book by José Ignacio Latorre, Ética para máquinas, in which he proposes a reflection on the ethics necessary for the new society that is approaching.

Human affairs, as we know them, could not continue.

John von Neumann

Outsmarted

John von Neumann was perhaps one of the first people to glimpse the huge potential for calculation that a computer could have. After his death, his colleague Stanisław Ulam attributed these words to him from a conversation they once held:

The ever-accelerating progress of technology and changes in the mode of human life […] gives the appearance of approaching some essential singularity in the history of the race.

A lucid reflection, attributed to von Neumann in 1950. Later, in 1993, Vernor Vinge wrote an article titled “The Coming Technological Singularity” in which he suggested that the human mind would be surpassed by machines equipped with artificial intelligence. This idea was taken up with unprecedented vigour by Ray Kurzweil in his book The Singularity Is Near: When Humans Transcend Biology (2005). The title is self-explanatory. Humans, according to Kurzweil, will leave their biological bodies behind and their intelligence will be transferred to machines. The idea of the Singularity also led to the emergence of the so-called Singularity University, an institution founded in 2008 by a series of leading figures in the world of advanced technology. Its slogan is unequivocal: “Preparing humanity for accelerating technological change”.

The essential argument associated with the Singularity is based on an overwhelming logic: if we build artificial intelligences that are ever more powerful and autonomous, a point will come when an algorithm is capable of improving itself. Once improved, the new algorithm will be even more powerful and, in consequence, capable of improving itself once more. It is a chain that feeds itself: each artificial intelligence will design the next, which will be better than itself. This iterative process will continue advancing unstoppably towards an intelligence of brutal proportions. We will have reached the Singularity.

We can try to develop some intuition for how we might reach the Singularity. Let’s imagine that artificial intelligence advances thanks to a better understanding of the sophisticated architecture of a giant neural network. Designing artificial neural circuits is extremely difficult. Initially, humans control the models used to explore better neural architectures. But the time will come when the exploration of new architectures is itself carried out by an improved artificial neural network, able to analyse and improve itself. It will be like a good student who starts learning on his own and ultimately overtakes his own teachers.
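The shape of that loop can be sketched in a few lines of toy code, with the actual learning replaced by an invented scoring function: a “designer” proposes candidate architectures, and whenever a candidate outperforms the current designer it takes over the job of proposing the next ones. Nothing here is a real artificial intelligence; it only illustrates the feedback described above.

```python
import random

random.seed(0)

def performance(architecture):
    """Invented stand-in for 'train this network and measure how good it is'."""
    layers, width = architecture
    return -abs(layers - 6) - abs(width - 300) / 50 + random.gauss(0, 0.2)

def propose_next(architecture):
    """The current best network 'designs' a variation of itself (illustrative)."""
    layers, width = architecture
    return max(1, layers + random.choice([-1, 0, 1])), max(8, width + random.choice([-32, 0, 32]))

designer = (2, 64)                      # modest starting point, chosen by humans
best = performance(designer)

for generation in range(200):
    candidate = propose_next(designer)
    p = performance(candidate)
    if p > best:                        # the pupil overtakes its teacher...
        designer, best = candidate, p   # ...and takes over the design of the next one

print("final architecture (layers, width):", designer)
```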

Ray Kurzweil: The Coming Singularity | Big Think

Let’s look at this last example seriously. A good student, a brilliant teenager, starts learning mathematics under fine teachers. They teach him how to learn. The student’s intellect is greater than that of his secondary-school teachers. He goes on to university. There, his mind continues improving. He leaves behind the wisest of the professors. He visits the top universities on the planet. He works extremely hard and continues to leave behind the best researchers of his time. He’s alone now. He continues to advance by fuelling his knowledge himself. He starts learning on his own. He intuits, he establishes his lines of research. He is becoming better and better.

An advanced artificial intelligence would operate in the same way as a very good student. But, let’s not forget, a machine does not suffer from the limits imposed by biology. Integrated circuits, quantum computers and future developments that we cannot even envision today will take the progress of artificial intelligence to dizzying heights.

It is important to emphasise a crucial point: the change of substrate between human intelligence and artificial intelligence. The Singularity requires overcoming the limits imposed by nervous tissue, glial cells, the nutrients that arrive through the blood, the small size of the brain, the death of its parts. An artificial intelligence based on silicon substrates has the capacity to grow far beyond the human brain and never to perish.

Can you imagine a smart and non-perishable machine that creates itself? What size would it be? Would it cover the earth? Would it be single or scattered? Would it be violent? Would it mean the end for humans?

Single higher intelligence

Let’s imagine we have reached the Singularity and have created an intelligence vastly superior to our own. Would one single advanced artificial intelligence exist, or would there be several?

We can defend both possibilities.

Let’s start by arguing that if the Singularity is reached, the higher intelligence created will be single. A supreme intelligence should be peaceful, not at all hostile, given that it would find no benefit in violence. The artificial intelligence could sustain itself effortlessly, with no obstacles to its growth. It could provide itself with everything it needs in an efficient way. It would be a magnificent manager of its own physical substrate. If there were another artificial intelligence, the two would opt to unite and generate an even more powerful intelligence. We humans do the same when we become involved in higher tasks such as the study of pure science. Mathematicians, physicists, biologists, indeed all scientists who seek basic knowledge collaborate with each other. It is true that the academic system is built to encourage competition. But it is even more true that everyone who approaches difficult scientific problems knows that these problems will overwhelm them individually and that they will only advance by joining forces. So, if one higher intelligence enters into relations with another, the two will join forces.

What happens when our computers get smarter than we are? | Nick Bostrom | TED

An example exists from the world of economics that illustrates the creation of a single all-powerful entity. Let’s take a simplified model of business. One way of modelling an economic transaction between two people is as follows. Both put a sum of money into a common kitty. A draw is held: one takes away all the money while the other loses everything. Clearly this way of simulating a transaction is far removed from reality. But it is true that in many transactions one party wins and the other loses. The idea here is to exaggerate this situation and see what consequences it has. To confirm the result, we can create a program in which cellular automata operate according to this rule. The algorithm creates virtual entities that randomly pair up and conduct a transaction for a certain amount of money; somebody wins and somebody loses. If we let this virtual universe evolve by conducting transactions between its individuals, little by little we will see that a few of them get richer and richer while the rest get poorer. The final result is that one person has everything, and the rest have nothing. It is easy to understand the reason behind this limit. At each transaction there is a non-zero probability that an individual will string together enough bad deals to lose absolutely everything. That individual is expelled from the game. There is one person fewer. The same storyline is repeated, and each time another person disappears. In the end only one person is left, who has everything. Our real world is not so far removed from this game: a few people concentrate a large part of the wealth of all human beings. Reality compensates for this phenomenon by establishing rules so that the poorest can climb back up the economic ladder: bank loans allow an individual to borrow money and come back into the game, inheritances bring in money earned by relatives, and the lottery is money that comes out of the blue.
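The game described above can be run in a few lines. The following is a minimal sketch, not the author’s original program; the number of players, the initial stake and the one-unit wager are illustrative assumptions.

```python
import random

# A minimal sketch of the transaction game described above. Assumptions of this
# illustration: every player starts with the same stake and each encounter
# wagers one unit of money.
def simulate(players=20, stake=5, bet=1):
    wealth = [stake] * players
    while True:
        alive = [i for i, w in enumerate(wealth) if w > 0]
        if len(alive) == 1:                     # one player holds everything
            return alive[0], wealth
        a, b = random.sample(alive, 2)          # two players meet at random
        winner, loser = (a, b) if random.random() < 0.5 else (b, a)
        amount = min(bet, wealth[loser])
        wealth[winner] += amount                # the winner takes the kitty
        wealth[loser] -= amount                 # the loser edges towards ruin

winner, final_wealth = simulate()
print("last player standing:", winner, "holding", max(final_wealth))
```

Run repeatedly, the outcome changes in who wins but never in the shape of the result: concentration is built into the rules of the game.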

But then we can argue exactly the contrary: if intelligences superior to human intelligence existed, there would be many of them, not just one. The central idea here is that it is beneficial to have different approaches to the same problem. Many different intelligent people make a field advance much faster than a single person, however intelligent. In biology, it is beneficial for a species to maintain broad genetic diversity. If everything is staked on the fittest individual, the species loses its capacity for correction in the face of any critical change in external conditions. It is good to have individuals with very diverse genetic material because their genes may be enormously useful in the future. An advanced artificial intelligence should likewise maintain other intelligences independent of it. A set of advanced intelligences could exist with different intellectual levels, geared towards subtly different objectives and based on independent algorithms.

Jean-Jacques Rousseau tells us in his Émile (1762) that “It is man’s weakness which makes him sociable”. He argues that:

Every social relation is a sign of insufficiency. If each man had no need of others, we should hardly think of associating with them.


A single advanced intelligence is known in English as a “singleton”. If the future of extreme artificial intelligence is a singleton, I believe that being so alone will lead it to become supremely bored.





The Error in the Digital Era

A review of glitch art, wrongly-programmed algorithms and the philosophy of Silicon Valley to rethink the promise of digital perfection.


Auto polo | Library of Congress | Public Domain

In an economic system that is governed by efficiency, committing an error is seen as a weakness. However, the ideology behind innovation and start-ups advocates that making mistakes is necessary for success. So, to what extent are errors tolerated in digital culture? From a computer that hangs because of a wrongly programmed algorithm to the myth of failure in Silicon Valley: a review of the forms, ideas and effects of getting it wrong in today’s society.

The festival Ars Electronica, one of the most important art and new technologies events in the world, titled its 2018 edition “Error. The Art of Imperfection”. It devoted two exhibitions and a series of lectures to error as a fundamental component of scientific innovation and human progress, as well as to its darker side: the failure of the Internet as a project of freedom. An ambivalent view that is not exclusive to this era, but which takes on specific forms in the information society. Below, we review some of the forms of digital error, in the personal, artistic, social and business realms.

The aesthetic of frustration

Technology forms part of every sphere of everyday life, at least in societies with high Internet penetration. Mobile devices and geolocation have made perpetual connection possible, blurring the limits between the physical and the virtual. Probably for this reason, the everyday technological failure (the Wi-Fi that doesn’t connect, the program that crashes, etc.) alters the flow of things and generates varying degrees of frustration and anxiety. Computer failure is a reminder, in the words of electronic musician Kim Cascone, that “our control of technology is an illusion”, revealing digital tools “to be only as perfect, precise, and efficient as the humans who build them”.

However small its repercussions, the failure of a system or program becomes significant as soon as it makes manifest the otherwise invisible role of technology. Although momentary, it is enough to expose the human-machine mediation that design endeavours to make imperceptible.

Within the framework of digital culture, this phenomenon generated its own aesthetic over two decades ago: “glitch art”, in which errors are used intentionally to generate images or sounds. Studies on this art form have indicated three possible interpretations. The first, the most politicised, understands it as a strategy of critical resistance in the face of global capitalism. Under this perspective, the glitch challenges the very idea of technological progress. The second considers it an artistic practice that democratises the relationship between producer and receiver on two levels: by demystifying the technical perfection of the artist, and by forcing the spectator to decode the experience of the fault. Finally, a third interpretation opts for a more playful meaning, understanding the glitch as experimentation, play and participation in relation to technology; in this sense it would have more of a reparative than a critical function.
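One common glitch technique, often called databending, consists of corrupting a few bytes of an image file so that the decoder produces distorted colours and displaced blocks. The sketch below is a minimal, hypothetical example of that idea; the file names and the number of corrupted bytes are arbitrary choices for illustration.

```python
import random

# Databending sketch: intentionally corrupt a few bytes of a JPEG so the
# decoder produces glitched output. "input.jpg" and "glitched.jpg" are
# illustrative file names, not files referred to in the article.
with open("input.jpg", "rb") as f:
    data = bytearray(f.read())

header = 1024                       # leave the start of the file intact so it still opens
for _ in range(30):                 # 30 corrupted bytes is an arbitrary choice
    position = random.randrange(header, len(data))
    data[position] = random.randrange(256)

with open("glitched.jpg", "wb") as f:
    f.write(data)
```

Each run produces a different accident, which is precisely the point: the artist curates the errors rather than designing the image.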

 

The Art of Glitch | Off Book | PBS Digital Studios

System failure

Being Wrong: Adventures in the Margin of Error is an entertaining book by Kathryn Schulz that analyses in exhaustive detail the experience of getting it wrong. Although the journalist concludes that this must be understood as an opportunity rather than an ordeal, she recognises that it does not matter in the same way if someone forgets where their car keys are as (the analogy is hers) if someone believes that another country possesses weapons of mass destruction. These are “consequences so dramatically dissimilar”, writes Schulz, “that we might reasonably wonder if the errors that led to them can have anything in common”.

Similarly, one can enjoy glitch art when it exploits small everyday incidents, but it is not the same when a computer system fails while running a word processor as when it is steering a driverless car or making the decisions of a combat drone. In spheres such as these, the perception of digital error is more worrying and awakens some of the most deep-rooted fears in our culture: runaway science, Doctor Frankenstein, the mediaeval Golem.

However, the current course of progress does not need deviations to generate misgivings; the very promise of perfection offered by the digital world is worrying enough in itself. A good example of this is the automation of work or, to put it another way, the machine that makes no mistakes. Although its real impact on the destruction of jobs is not clear, its expansion in the name of efficiency feeds the idea that technology can substitute for some human capabilities rather than merely strengthen them. Another, perhaps more troubling, example is the Chinese social credit system, which uses artificial intelligence for the mass surveillance of citizens and the sanctioning of behaviours considered antisocial, from crossing against a red traffic light to criticising the Government on social media.

In both cases, automation and surveillance, technology suppresses and penalises what, from the viewpoint of the engineer or legislator, is unproductive or antisocial: the labour or social error, that which violates the rule. There is a search for perfection that philosopher Byung-Chul Han has christened, in aesthetic terms, digital beauty, “a smooth space of the same, which does not permit anything alien, any alterity, to enter”. An experience that, for this South Korean writer, defines the current era and is reflected as much in the sculptures of Jeff Koons as in Brazilian waxing and the iPhone.

Blue Screen of Death | Wikipedia | Public Domain

Fail fast, fail often

The design of these “polished and smooth”, error-free experiences depends largely on decisions made at the world’s main technological hubs, with Silicon Valley at the forefront. It is precisely there that one of the most repeated mantras of the culture of innovation and entrepreneurship blossomed: the belief that failure should not only not be avoided, but should be celebrated as one of the rites of passage on the road to success.

The origins of this idea are uncertain, but they are undoubtedly related to two things. Firstly, the American legend of the business entrepreneur who triumphs despite adversity, often rising from his own ashes. Icons of popular culture such as Walt Disney and Henry Ford are examples of this, as are technology personalities such as Bill Gates and Steve Jobs. Secondly, in software development there is no such thing as a product that works from day one, and it is customary to keep building new features while known errors remain, to be progressively eliminated in more advanced phases of the project. The global expansion of the start-up model has exported this conception, speeding up the processes of traditional structures, making them more flexible and, in the best of cases, more democratic.

However, an increasing number of voices consider this founding myth of Silicon Valley, which has materialised in formulas such as “fail fast, fail often”, to be deceitful. Even professionals from the business world, such as investor Mark Suster, argued some years ago that the idea was “wrong, irresponsible, unethical and heartless”, because there is little that is epic about failure for the people who invest thousands of dollars in a project, even more so when the money is a friend’s or a relative’s savings. “Fail fast? How does your brother-in-law feel about that?”, Suster quipped. A criticism that is coupled with the undeniable predominance of young white men from elite universities at the heart of these companies, a reflection of who has access to the capital needed to fail comfortably.

Even the start-up culture shows its own contradictions in this respect. A good example of this is so-called “growth hacking”, one of the most recent trends in the entrepreneurship world, which promotes ways of accelerating growth in the least expensive way possible. This includes guerrilla marketing techniques, but can lead to others of dubious trustworthiness such as falsifying the number of users of a service in order to attract investors. Faking success to avoid failure.

Human at the end of the day

One thousand two hundred years before the “cogito, ergo sum” of René Descartes, Augustine of Hippo wrote “fallor ergo sum”, in other words, “I err, therefore I am”. For the philosopher, error is not an accident that must be avoided at all costs, but the very essence of being human. Ultimately, who does not get it wrong sometimes? Probably for this reason, the promise of perfection offered by the digital world inspires misgivings, since it is contrary to humanity in the most fundamental way.

Glitch art and the fear of the course being taken by technology share, in this sense, a critical attitude towards the threat of perfection, towards the inhuman. It is a stance that takes on a contradictory aspect in Silicon Valley, which is built on a model of techno-scientific progress that inevitably involves experimenting and making mistakes but which, at the same time, embodies the core of the new capitalism and is therefore governed by efficiency and competitiveness. An ambivalence towards failure that is not exclusive to technology companies but which, given their pre-eminence in the digital economy, is important for understanding error in today’s culture.





“Any sufficiently advanced technology is indistinguishable from magic.”

We invited several authors to explain what this quote by Arthur C. Clarke conjures up for them.


Arthur C. Clarke, 1965 | ITU Pictures | CC BY 2.0

Arthur C. Clarke was a scientist, science writer and author of science fiction stories and novels. This quote of his reflects on the early beginnings of technology and offers us a way of linking up the two exhibitions that have been held this year at the CCCB: Black Light and Stanley Kubrick. We asked a number of authors to offer a personal take on Clarke’s famous quote, giving them free rein to express whatever it evokes for them. This is the result. We hope you find it both interesting and entertaining.

  1. The “magic” of the Internet, a text by Efraín Foglia
  2. Arthur C. Clarke, a comic by Luis Paadín
  3. The Blast, a story by Ramon Mas Baucells
  4. 404, a story by Lucía Lijtmaer
  5. The Third Law, a gif by Ferran Esteve
  6. Was Arthur C. Clarke right?, an interactive piece by Oscar Marín

 

The “Magic” of the Internet

A text by Efraín Foglia

 

Arthur C. Clarke

A comic by Luis Paadín

Click click
 ………
Cccb/Lab Black Light
……..
Arthur C. Clarke
……..
Well… the brief is quite broad… I don’t know if that’s good or bad…whatever this sentence makes me think of… “off the top of my head”, it says… “an exercise in experimental style”… Ha! so I’ll do 3 sketches with the first pen I find and call it a day… considering what they pay me anyway…
………
Let’s see… Clarke’s laws on scientific progress, right? that’s it, I could do something science fictiony
……..
Again?
……..
Then the opposite. Real things…I could start with an anecdote…
……..
It went like this:
Hey! laddie, I can’t make head nor tail of this fax thing…I don’t know, it’s like magic
……..
My gran didn’t have a fax machine, obviously. And that never happened at all, but, well, it could have
……..
Maybe this is just how Clarke himself worked, I don’t know… on the linotype for example
……..
Right, I’m going to look for a real story in these old newspapers before I chuck them…
……..
I can’t believe my luck…this is perfect!!!

He he…this’ll sum up Clarke’s maxim perfectly
……..
Any sufficiently advanced technology is indistinguishable from magic
……..
Which, in terms that Juan Martínez would understand, means that it ain’t magic, it’s just progress. 
……..
Clarke wrote this phrase, his 3rd law, in the revised version of this book…to make a nice round number!!!
……..
“As 3 laws were good enough for Newton, I have modestly decided to stop there”, he added.
……..
Musical interlude
……..
You might not think that was called for, but I guess it’s what happens when you like drawing faces
And you’re always banging out hip hop
………
Alexander Bain was Scottish and he invented the fax in 1843
……..
In 1884 Ottmar Mergenthaler invented the linotype

Getting back to the point
Off the top of my head, a good figure to personify this maxim would be Tesla
And what would you know?
I saw it in a film
……..
I know that at the end of the 19th century in an electrical exhibition in New York he unveiled his “teleautomaton”, a remote-controlled boat… those present were stunned, they had all sorts of theories…magic, telepathy, a monkey hidden inside!!!
……..
Nikola Tesla invented a load of stuff and he died poor
……..
He was also a tad eccentric… he fell in love with a pigeon and he was obsessed with the number
……..
Mathematical interlude
Think of a number, any number. Times it by 2. Add 6. Divide by 2. Subtract the number you thought of and…
……..
Ha ha, I could talk about numerology! Hmm…no, best not get into pseudoscience…
And if you don’t believe me, ask the maths!!!
Let’s see, something current and scientific, very complicated, which seems completely inexplicable…a Roomba!
Space tourism!
No! Not that!
That’s it…quantum physics. Tunnelling!

It’s magic!
This could get messy
……..
I don’t understand it
Your head’s going to explode
……..
It’s true, this is getting very complicated
I’m going to start repeating myself or something
……..
No sir, it’s not going to work. Let’s try something else
……..
Choose a book from this pile I found in the rubbish…
Mindbridge
Documentation and Memorandums
Henry Kuttner 2
Report from the 21st Century
The Forever War
The Best Science Fiction Stories
Narciso Ibáñez Serrador Presents
……..
Hey look, a story by Clarke, let’s see what it’s about…
……..
Peeerfect!!
……..
“Silence Please”
pages 115 to 130
……..
And it’s really short
……..
It’s about a machine that uses the phase-inversion principle to cancel out noise. Its inventor realises it’s got out of hand, and to control it, the authorities invent a sorcery law…. there you go. I could cut off here and jump back to the beginning with Juan Martínez, couldn’t I?
……..
All right! I’m going to look for something real in these newspapers before I chuck them…

Click click
……..
Deadline
……..
Today!
……..
Like I said, a few scribbles and we’re done
They’re not going to pay you
Quantum drawing!
At least it’s in colour
You’re pushing your luck

The Blast

A story by Ramon Mas Baucells

Any sufficiently advanced technology is indistinguishable from magic.
Arthur C. Clarke

The skin on my arms is raw. My feet a mass of sores. I walk slower and slower as the inertia wanes. It’s the exhaustion of the last day, the beat prior to the Blast. Those of us who sit down now will turn to dust. I sometimes envy them. To stop. To die. But I’d be incapable of doing a No-Further. I believe in the rhythm, I was taught to carry on.

Just like every three thin moons, the pilgrimage takes the shape of a snake and slips between the ash and the ruins. The first one must already be hearing the Blast. Soon it will be me. I just have to endure the Wait.

Those-Who-Know-How-It-Was explain that the course of time started with the Bang. They say that the world of before was evergreen, that there was no need to hunt or to protect yourself, and that it was called the past. They also say that this could never be like that, except when we’re receiving the Blast. The Blast is our link to what existed before the Bang, when the beat was always there and all around, and rhythms bounded about freely.

Finally, I can see the Rusty Temple and I whoop in silence. In the line, you have to keep the words inside your head. It’s the law. If we turn our thoughts into sounds, they throw us out of the Wait. We mustn’t let the receiver of the Blast perceive any noise interference.

I’m the first in line. Face to face with the sorcerer. He places the silver disc in the palm of his hand, grips it with his fingers and raises it within a hair’s breadth of my face. I can see my reflection – eyes downcast, mouth pulled into a grimace. Now he says the eternal words.

“May the rhythm keep you among the living, may the powerful voice give you strength for the next ninety suns, may you return to me during the third thin moon. Haf fan.”

He places the disc inside the mechanical mouth and I put on the headband, with the sonic sponges over my ears. He closes the mouth and presses some buttons. The sacred name flashes on the small rectangular screen. Number 2. Always number 2.

He presses the triangle.

The Blast penetrates my ears. Three beeps and the rhythm bursts forth. It echoes around the sides of my skull. Sounds with an inexplicable origin spike me with the divine infection. My feet move by themselves. Straight away comes the Mother Voice, the Primordial Impulse, the song that begot the first movement. It reaches right to the depths of my worn body and spreads outwards. A wave of shivers runs down my arms and legs. The Mother Voice sings in an unknown language, from before the great Bang, the language used by gods and machines.

I close my eyes and my chest swells. I recharge my batteries, as the old folks would say. The Blast will keep me going for the next ninety suns. Three thin moons. The primordial voice disappears, then returns, then goes, comes back again. I soak up its energy. Dei Just Uona, Dei Just Uonaaaa. I melt into the Primordial Impulse. Dei Just Uona, Dei Justa Uonaaaa. It energises me. It brings me up. It carries me away. Guels Just Uona Haf Fan.

When the sounds stop, the sorcerer takes the headband off me. I’m filled with a supernatural force. It’s the rhythm created by the gods, to feed the machines. Now I want to hunt, to copulate, to build. I feel capable of anything. I’m the carrier of the Blast and I have ninety suns ahead of me. Haf Fan.

 

404

A story by Lucía Lijtmaer

The fuel on which science runs is ignorance.
Matt Ridley

-Hello.
-Hello.
-Can you tell me what’s happened?
-No, sorry.
-But something’s happened.
-It’s an error. That’s all I know.
-Really?
-Yes.
-Can’t it be fixed?
-As I said, that’s all I know. Something went wrong. Our engineers have been informed.
-The engineers? Since when is this a problem for engineers? And anyway, what engineers are you talking about? Some kind of collective?
-Actually, I think the problem is that you didn’t send the completed request on time.
-The request? What request?!
-We’re very sorry for any inconvenience we may have caused you. Thanks for noticing.
-I don’t get this at all.
-Things will be back to normal soon.
-Do you promise?
-Yes, I promise.
-Thank you. Goodbye.
-Come back again soon. Goodbye.

 

The Third Law

A gif by Ferran Esteve

Was Arthur C. Clarke right?
An exploration of the magical nature of technology

An interactive piece by Oscar Marín

In 1962, in his book “Profiles of the Future: An Inquiry into the Limits of the Possible”, science fiction writer Arthur C. Clarke formulated his famous Three Laws, of which the third law is the best-known and most widely cited: “Any sufficiently advanced technology is indistinguishable from magic”.

The visualization below aims to explore this concept, offering a general classification of ‘magical acts’ and a timeline of what could be the evolution of their technological counterparts (you can click on any technology or ‘magical family’ and visit Wikipedia to get an idea of the concept). In this way, to the ‘technological non-initiates’ of the 1400s, a web search would have been seen as a ‘scrying act’ and video streaming a ‘clairvoyant experience’.

You can also take a look at this video to watch the Toulambi tribe (1998, Papua New Guinea) react as technological non-initiates to implements like matches, mirrors, photo cameras and audio recorders.





Internet, Internet!

The better we know the potentials and risks of the Internet and the technologies that accelerate and expand it, the better we will be able to use them to create a more critical, creative and democratic world.


Computer CSIR Mk 1 ran its first test program in 1949 | Archives, CSIRO | CC BY 3.0

Today nearly half of the world population is online. Relevant issues such as equality, diversity, freedom, literacy and neutrality also define the in-depth debate on the Internet. What can we do, from within culture, to work towards an Internet that is freer, more equitable and more open? The third edition of the Cultural Innovation International Prize aims to recognise innovative projects on the present and future of the Internet that help to empower citizens and improve the network.

Every new edition of the Cultural Innovation International Prize proposes a different theme. In the first edition the theme was “Audience/s”, confirming that the concept, ways and tools available for accessing cultural contents are changing radically in the 21st century, despite the fact that some traditional forms of prescription remain intact. The second edition was devoted to “Climate change” and formed part of an extensive CCCB programme (2016-2018) revolving around everything that culture can contribute to help attenuate the consequences of the serious environmental crisis that we are suffering. For the third edition of the Prize, the theme chosen is the Internet.

In 2019, it will be 50 years since Arpanet, the network considered to be the origin of the Internet, was launched. And it will also be the 30th birthday of the WWW, coinciding with a time in which its very creators admit that there are threats hovering over this set of interconnected technologies. Although they have enabled evident advances in access to information, creativity and education, at the same time they could become a powerful tool for the oppression, censorship and mass control of citizens.

Oh, Internet, Internet!

The Internet is a mirror of our evolution (or involution) as a species. A set of disruptive technologies that have changed and will continue changing our way of constructing worlds.

The network has gradually been rolled out as a giant, baroque and labyrinthine organism with growth phases governed by exponential laws under the designs of enigmatic algorithms. It has had its periods of expansion and also its collapses; like other processes of technological evolution, there have been mass extinctions and new rebirths. Some of the original species have disappeared, but every day new species are incorporated into a universe undergoing constant expansion.

Much has occurred in recent years, with the introduction of the Internet into all areas of the social sphere, from the military to the scientific, from the academic to the domestic. Currently, nearly half of the world population is online. The Internet is a reflection of the society in which we live: all our differences, problems and uncertainties are accentuated and multiplied online. Relevant questions such as equality, diversity, freedom, literacy and neutrality also define the in-depth debate about the Internet.

Tim Berners Lee changed the world 25 years ago, 2016 | World Economic Forum

In 2009, Tim Berners-Lee, inventor of the WWW, founded the World Wide Web Foundation with the aim of defending an open Internet understood as a public good and a basic right, and collaborating in the fight for digital equality. Since then, he has continued to watch over and denounce the consequences of the improper use of his invention. And other critical voices have joined in: in 2016, mathematician Cathy O’Neil published Weapons of Math Destruction, a text about the implications of the indiscriminate use of algorithms in decision-making, and in 2018, Virginia Eubanks, associate professor of political science at the University at Albany (New York), published Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, in which she denounced, precisely, the fact that the use of algorithms applied to demographic statistics, far from helping to create an equitable system, continues to propagate a partial and prejudice-filled view of society.

Along the same line of research, the Mozilla Foundation this year published its Internet Health Report which, through the analysis of different aspects of the Internet (digital literacy, decentralisation, inclusion, privacy and security), places the emphasis on factors of inequality and risk.

The future (is) Internet

The Internet of Things (IoT), Big Data, advances in virtual reality, the emergence of quantum computing, the role of algorithms and the development of artificial intelligence (AI) are decisive for discerning the future of the Internet. It is futile to reject a technology that already forms part of our everyday life, but what is essential is a broad debate on how we currently use it and how we expect it to be in the future.

Let’s think, for example, about household appliances connected to the Internet. At the start of this year, journalist Kashmir Hill published on Gizmodo the results of her household experiment after installing every IoT-enabled device she could, from a mattress that analysed her sleep to a sex toy, and including her toothbrush. Meanwhile Surya Mattu, her IT-specialist colleague, monitored the data traffic generated by all these objects through a router: each appliance issued reports on her activities, data and more data, video recordings, records of use, timetables of her domestic routines, all of it sent, with varying levels of encryption, to the respective companies.
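A monitoring exercise of this kind can start very simply. The sketch below is a minimal, hypothetical example using the scapy library, assuming it runs with sufficient privileges on a machine that can see the home network’s traffic (for instance, a laptop acting as the access point); the interface name “wlan0” and the 60-second window are illustrative assumptions. It only counts packets per device, but even that is enough to show how chatty each “smart” object is.

```python
from collections import Counter
from scapy.all import Ether, sniff

# Count packets per device (identified by MAC address) on the local network.
# Assumptions: the script has permission to sniff, and the wireless
# interface is called "wlan0".
packets_per_device = Counter()

def record(packet):
    if Ether in packet:
        packets_per_device[packet[Ether].src] += 1

sniff(iface="wlan0", prn=record, store=False, timeout=60)   # 60-second capture

for device, count in packets_per_device.most_common():
    print(device, count)
```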

 

What your smart devices know (and share) about you | Kashmir Hill and Surya Mattu

Another example is The Little Book of Design Fiction for Internet of Things, a publication by Paul Coulton, of the Chair of Speculative and Game Design at Lancaster University, with the collaboration of Rachel Cooper and Joseph Lindley. It is based on a research study carried out as part of their collaboration with PETRAS, a consortium of nine universities in the United Kingdom that explores critical aspects of privacy, ethics, trust, acceptability and security in relation to AI and the IoT. Among the imaginary objects that the publication proposes (with humour and irony) is a kettle capable of detecting, among other things, that a friend of yours is relatively close to your home and that perhaps you should invite them round for a cup of tea… Coulton’s line of research through speculative design leads him to analyse and propose, critically and imaginatively, the possibilities offered by IoT technology. His department has also participated in The Living Room of the Future, an immersive installation produced with the collaboration of BBC R&D, Lancaster University and Nottingham University.

And what are we to think of projects such as Disobedient Wearables? Starting from the idea of wearables such as smart watches or wristbands that capture our movements and routines, it speculates on the idea of Civic Armor, a t-shirt capable of detecting and recording the impacts received during police charges at demonstrations, so as to act as evidence, produce maps and become a tool for denouncing abuses of power.

These are just some of the diverse examples that show that beyond pessimistic or dystopian visions, there also exists the conviction that the better we know the potentials and risks of the Internet and technologies that accelerate and expand it, the better we will be able to use them to create a more critical, creative and democratic world. It is no easy task. It requires permanent research and innovation. No individual mind is capable of understanding all the implications of a technology that is visible and invisible all at the same time. A massive amount of work is required and the cultural world has a relevant role to play.

In this sense, the third edition of the Cultural Innovation International Prize joins the numerous initiatives that, from different disciplines, organisations and geographical spheres, trust that another Internet is both necessary and possible.





Freeport: a Dissection of the Invisible Internet

A summer school focusing on art, communication technologies and a networked planet.


Norman Ross prepares the skeleton of a baby Brachyceratops for an exhibition in 1921 | Library of Congress | Public Domain

Data tracking, an in-depth look at a company that produces spying software, discovering someone’s habits based on their browsing history, analysing cyber warfare in independent media or researching the surveillance economy. The first edition of the independent school Freeport opened the doors to a new way of understanding the world. With the thought-provoking title of “Trespassing the data factory,” it proposed an immersion in the compilation, analysis and visualisation of big data, with the aim of narrating reality from a critical, artistic and activist perspective.

“If we do nothing, we sort of sleepwalk into a total surveillance state,” says writer and activist Arundhati Roy, in her book Things That Can and Cannot Be Said (Haymarket Books). But it seems that there is nothing that we can do. Or is there? Social media networks that create profiles with their users’ data. Apps that open up the microphone on a mobile phone to capture football broadcasts from a public venue and, in passing, eavesdrop on what is being said in the place. Technology multinationals that collaborate with government intelligence agencies, supplying them with the private information they request.

In the world of capitalism, there’s no such thing as a free lunch. And yet we trust that everything on the Internet is given away for free. Apparently. The data market is booming: “It’s the oil of the 21st century,” they’ve been telling us for some time. Our movements have greater value than ever, now that computers are measuring, analysing, selecting, segmenting and categorising huge amounts of data. Data that reveal moods, behaviours, domestic habits and generational customs. Nothing can resist the potential of today’s machines.

Algorithms – a magical term that has emerged from the heat of the advances made in recent years – are “the biz” of modern times. Marketing and advertising companies tell us which products we need (or not) in our lives. The question is not about stopping the machinery of capitalism: by using mathematical models loans are granted or refused, workers are assessed and the police detect crimes before they occur. But are they getting it right and arresting the real criminals? Algorithms that redirect electoral votes, monitor health, select the most worthwhile teachers at a school and even help judges to condemn (supposedly) guilty defendants. Don’t they ever get it wrong?

Researcher Cathy O’Neil, in her book Weapons of Math Destruction (Crown), warns that, these days, poorly designed mathematical models are micromanaging the economy, from advertising to prisons. “They’re opaque, unquestioned and unaccountable and they operate at a scale to sort, target or ‘optimise’ millions of people.”

Five days for waking up and reacting

Opacity causes inequalities: the perversions of modern times. The technologies that make us evolve as societies make us regress in ethical terms. Multinationals, the owners of the digital world (and the analogue one, come to think of it), avoid paying taxes in the European countries where they are based while making millions in profits. It’s a well-known complaint but a pointless one, since nothing is being done about it. Their power is so immense that they are simply above the laws and established regulations. They know how to track, understand and perceive the magnitude of the data that control everything. We run the risk of being turned into products, selling our own privacy in exchange for a few pennies. Is anyone worried about this mass surveillance?

From 24 to 29 June, around twenty participants took part in the first edition of the independent school Freeport, with the collaboration of the CCCB and within the framework of the European project The New Networked Normal. “The aim is to create awareness and to put very technical tools in the hands of people who are not engineers or political activists: tools that any citizen should be able to master,” explains Bani Brusadin, director of Freeport and of the festival of unconventional art The Influencers. “And especially among less conventional artists, to expand their scope for intervention,” he adds.

Freeport 2018 | Photo: Paul O’Neil

“Tracking” refers to the possibility of quantifying details of behaviour in connected environments; it uncovers the invisible part of the Internet. If private investigators use reverse engineering to work out how a crime took place, “forensic cyber tracking” refers to techniques for understanding the centralisation processes that we are experiencing through digital platforms. “Today Facebook, Amazon and Google dominate world data”, Brusadin says.

The sessions at Freeport were directed by researcher Vladan Joler and his team from Share Lab, formed by Olivia Solis and Andrej Petrovski. Share Lab is a research laboratory based in Serbia, and its members include artists, activists, lawyers, designers and technologists. Its aim is to explore the interactions between technology and society. “We investigate threats to privacy, to the neutrality of the networks and to democracy,” Joler explains. At Freeport he raised critical questions about the functioning of the Web and the tracking of our every movement, however small it may be.

Participants learned about the compilation, analysis and visualisation of large volumes of data originating from leaks or from public repositories. How? By mapping different Internet providers to understand how they are connected, where they are concentrated and in which parts of the world they exercise their power. By applying techniques similar to those used by the American National Security Agency (NSA) to scrutinise the actions of global companies such as Hacking Team, which creates spying software. By analysing the movements of a journalist based on his browsing history. Or by studying the patents of Facebook and Google to detect what they do with personal data. “We explore different methodologies for recognising invisible infrastructures and capitalist surveillance,” explains Joler. And he continues: “We want to show how the people who collect the data act, the people involved in trading and trafficking with data.”
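One very small first step in this kind of infrastructure mapping is simply to see which routers a request passes through on its way to a site. The sketch below is a minimal, hypothetical example, assuming a Unix-like system with the standard traceroute tool installed; looking up each of the listed addresses (for instance with whois) then reveals which providers actually carry the traffic.

```python
import subprocess

# List the hops between this machine and a destination host.
# Assumption: the standard `traceroute` command is available on the system.
def hops(host):
    result = subprocess.run(["traceroute", "-n", host],
                            capture_output=True, text=True)
    for line in result.stdout.splitlines()[1:]:   # skip traceroute's header line
        fields = line.split()
        if len(fields) >= 2:
            print(fields[0], fields[1])           # hop number and router address

hops("example.org")
```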

Technological anatomy and radical cartography

Vladan Joler and researcher Joana Moll presented the keynote lecture at Freeport, titled “Exploitation Forensics: anatomy of an artificial intelligence system”. The anatomical amphitheatre of the Royal Academy of Medicine of Catalonia, located on Carrer Hospital in Barcelona, helped to unsettle those attending as they heard explanations of the social implications, business deals, opacity and environmental destruction of all the companies that intervene in the manufacture, life and destruction of a single mobile phone.

“An iPhone has over ten thousand different parts, which are boxed up by over 300 people in over 700 different territories,” explains Moll, an expert in the environmental footprint left by technology. “Today’s production would be impossible without the maritime transport that has converted it into a global industry.” Moll explains how Bayan Obo (China) is home to the earth’s largest mine of the “rare earth minerals” that make it possible for devices to be so efficient, so light and so small. “But these minerals are running out and they are essential for producing renewable energies.”

It is no coincidence that the world’s largest manufacturer of electronic devices such as iPhones, iPods and iPads is Foxconn, a Taiwanese firm whose biggest factories are in mainland China. Joana Moll laments the fact that, even knowing the devastating effects that programmed obsolescence has on the environment, nothing is being done about it. “But it is logical: it goes against the most predatory capitalist logic. It is the business model of the technology giants. Data are an intrinsic part of the entire current financial system. It is very scary.”

Freeport is aimed at artists, designers, technologists, hacktivists, visual storytellers, writers and students from any discipline. Pablo de Soto, for example, is an architect by profession but also an activist. He founded Indymedia Estrecho in 2003 and, at the age of 35, has fought countless humanitarian battles on the borders of Palestine, Egypt and Gibraltar and in Fukushima. His speciality is “radical cartography”, a social current inspired by the Bureau d’Études movement that emerged to denounce public policies. “Share Lab provides a tremendous insight into the functioning of cyber surveillance. It gives explanations and tools for understanding how to continue,” he explains. “I’m interested in algorithmic governance, in understanding the digital layers that move our lives, in updating activist mapping in the current social context dominated by platforms such as Facebook and Google.”

Maria Pipla studied Journalism and Humanities but wanted more in-depth knowledge of the mechanisms of the digital world. “I registered to combine my artistic research with some technological knowledge,” explains this 24-year-old Catalan graduate. “It is very important to assert the materiality of the Internet, the policy of selling information: where are the data stored? The predominant discourses claim that the cloud is very secure, but they are polluting because in reality it’s not true.” Dutch participant Eva van Boxtel is 20 years old and studies interactive design. She arrived at Freeport attracted by its consideration of the invisible functioning of the Web. “The debates proposed are thought-provoking, for example the one on the infrastructures of the Internet and the countries that control the connections. We talked about North Korea and about Iran.”

The Influencers 2017. Vladan Joler

Should we be worried?

In the face of this meticulous monitoring by multinationals and governments, should we be afraid? “There’s no need to become paranoid,” says Bani Brusadin. “We have created this mega-machine between all of us.” According to the director of Freeport, we would have to go back to the “dotcom” crisis, in the early 21st century, to understand how we have got where we are today. “It was necessary for advertising to be viable in order to pay professionals that offered contents or that used it for commercial purposes. It was clear that on the Internet nobody would pay for anything. Ultimately, tracking has turned out to be very efficient for finding out people’s interests and for selling products. With the addition of mobile phones, the perfect infrastructure is present for political surveillance too.”

For Vladan Joler, it is a matter of having the curiosity to find out, “to progress as a society and understand how these mechanisms function and what they involve.” But in an increasingly complex and invisible world, with the opacity that reigns around us, how can this curiosity be satisfied? “It is not a task that anyone can do alone. It is necessary to bring together a group of people with the same interests and the capacity to unravel this complexity,” the Share Lab researcher answers.

Researcher Joana Moll is now investigating the transfer of data from dating platforms such as Tinder. “We have no control over what is happening. Every time you create a profile in one place, you are connected to over 1,500 other platforms.” Looking at the privacy policies of the social networks, she has deduced that information is shared massively, without explicit consent. “And there will probably be many more that I have not even reached.” In Moll’s opinion, we should have more governance over our infrastructures and societies, starting by raising awareness among small communities, with examples such as Guifi.net, which is organising itself to supply Wi-Fi to a large part of the Catalan territory.

“After the Cambridge Analytica case, and what we now know about the obscure mechanisms that have affected the American elections or Brexit in the UK, much remains to be done. The danger of big data is that it affects the formation of governments. This is why there is all this concern about privacy,” concludes Pablo de Soto. He places his hopes in public measures such as those implemented by Barcelona City Council, which is promoting the sovereignty of citizens’ data when signing contracts with multinationals or service companies.

“We young people see the Internet as marvellous, connected all day as we are to Instagram or Facebook, but in reality nobody knows for certain what is happening,” says Dutch participant Eva van Boxtel. She criticises the inaction of Europe, especially after the discovery of cases such as that of Cambridge Analytica. “It is quite obvious that the Chinese government controls its citizens via the Internet but… aren’t our governments tracking us in the same way?”

And Maria Pipla asks why it is necessary to live in a world where it is almost an obligation to generate personal data in real time. “Not long from now they will convince us with neoliberal discourses about the benefits of selling our data to multinationals. The quantification of personal life and of life patterns will grow. Is it necessary? What does all that imply?” These are two questions that, in our attempt to answer them, should spark in us an irresistible curiosity for greater knowledge about the technological world in which we are currently immersed.





The End of Trust?

After decades of recognition and social prestige, the companies of the digital economy are starting to receive criticisms that are questioning their reputation.


Lie to Me | Rosy | CC BY-ND 2.0

Unfair labour policies, tax evasion, impacts on transport and housing, the improper use of people’s personal data… Actions such as those of the Facebook and Cambridge Analytica case are undermining their credibility, even though, for the time being, this has not caused irreparable losses. Greater corporate social responsibility and more awareness in the use of these services could force in-depth changes in the sector.

It is so evident that it sounds like a cliché. Liberal democracy and the traditional institutions are facing a crisis nearly the world over. This is confirmed by the Edelman Trust Barometer, which in its 2018 edition reveals that the citizens of 28 countries score the trust they feel towards their institutions at 48 out of 100. The study by this US multinational includes evaluations of government, NGOs, the press and businesses. However, a detailed look reveals that the results vary between categories: technology companies enjoy the support of 75% of the population, above sectors such as education (70%) or healthcare (63%).

As has been repeatedly pointed out by researcher Evgeny Morozov – the “party pooper” of the technology sector, in the words of Brian Eno – digital economy companies are a “teflon industry”, because as happens with good frying pans, anything thrown at them slides away without sticking. However, there are some signs pointing towards a possible wearing down of their reputation, which can be tracked in recent news: the strike at Amazon Spain, the battle led by some municipal councils against Airbnb, the job precarity of delivery riders, demands for greater regulation, etc.

In addition to these symptoms of discontent with some companies, there is fear regarding the general course being taken by innovation. According to the Edelman Barometer, people all over the world are warily observing advances in artificial intelligence, self-driving vehicles and the automation of jobs, as well as the treatment that companies give to personal data. Although that does not prevent people from maintaining a broad margin of sympathy, those responsible for the study warn that this may be lost extremely quickly: declines in trust of as much as 20% have already been recorded among the “informed public” in the USA and France, for example. All of this without even taking into account the possible effect of a recent scandal: the case of Facebook and Cambridge Analytica.

Data, lies and manipulation

“There was clearly a breach of consumer trust and a likely improper transfer of data”. These were the words used by US senator Chuck Grassley to address Mark Zuckerberg during his much-commented appearance before the United States Congress. Grassley was referring to the leak reported in March 2018 by The Guardian, which related how the British consultancy Cambridge Analytica had illicitly obtained the information of 50 million Facebook users in the United States. The company used it to produce psychological profiles of those users, who were then targeted with content created specifically to influence their vote in favour of Donald Trump. Thus, in one single revelation, the case brought together three of today’s most widespread fears: fake news, the manipulation of electoral processes and, of course, the loss of privacy. Let’s take it one step at a time.

Cambridge Analytica whistleblower: 'We spent $1m harvesting millions of Facebook profiles' | The Guardian

Fake news is not an invention of the Internet. A large part of 20th-century propaganda is nothing other than the systematic use of media manipulation for political interests. However, the term has taken on greater relevance since the end of 2016, when Donald Trump and other world leaders introduced it into their discourses to contradict critics and undermine the credibility of any unsympathetic press. What place does Facebook occupy in terms of fake news, if it is only a distribution platform? As Sandra Álvaro and Àlex Hinojo have already explained in this blog, on the current media scene the difference between traditional journalistic outlets, social media networks and partisan publications is increasingly blurred. In Spain alone, some 60% of people use the social media networks to stay informed, and 48% of them use Facebook in particular. This mixes up sources and turns the social networks into media outlets with no journalistic filter, where all news articles seem equally trustworthy; even more so when they are shared by family and friends.

Facebook has been accused explicitly of not doing enough to limit this confusion, since it could moderate false news or news designed to incite hatred, in the same way that it applies itself to censoring any trace of sexuality. One of the most criticised cases has been that of Myanmar, where the social media network has been accused of playing a key role in the propagation of hatred towards the Rohingya, an ethnic minority suffering a wave of violence that has forced some 700,000 people to flee their homes. This same lack of control is what would also have facilitated Russian interference in the US elections, since according to the country’s intelligence community, Russia orchestrated a campaign to influence people in favour of Trump through – among other things – the dissemination of fake news on the social media networks.

The privacy paradox

Added to the cocktail of fake news and electoral manipulation is the confirmation of a third fear: that the personal data on which a great part of the digital economy is based may be used for unknown or undisclosed purposes. It is a confirmation because the revelations of Edward Snowden in 2013 had already shown that information on users of Apple, Google, Facebook, Microsoft, Yahoo and YouTube had been compiled massively in the past, on that occasion by the US Government itself.

But beyond the waves of indignation that usually follow on from these revelations, neither Facebook nor other technology companies appear to be struggling, at least not due to this loss of reputation. It is necessary to reflect on why users do not stop using these platforms, even when they are opposed to many of their policies and practices. In the specific sphere of privacy, this phenomenon has been dubbed “the privacy paradox”, according to which, although citizens demand greater protection of their personal information, they continue using tools that make a living from harvesting it. The explanations for this contradiction are varied, but they point to the fact that people are prepared to consciously provide certain data if, thanks to this, they obtain other types of short-term benefit, not just in the form of goods and services but also access to knowledge, freedom of expression, social recognition, construction of interpersonal relations, etc.

2018 Edelman Trust Barometer | Edelman

A good example of this paradox is found in another Edelman survey, from 2016, on Trust and Predictive Technologies. With respect to banking, the majority of people indicated at that time that they would not approve of their data being used to establish patterns to automatically calculate who can access a mortgage and who cannot. However, they were willing for them to be used to improve the management of their accounts and the advice that they receive. Extrapolating these double standards to other spheres of the digital economy, it seems plausible that people may distrust some companies and even the course of innovation as a whole, but that does not mean that they want to renounce the benefits that they obtain from their consumption in the short term. Added to this is a certain tendency towards natural monopolies in the sector, as well as the network effect, the phenomenon by which the value of a product or service increases the greater the number of individuals using it. For all these reasons, abandoning platforms that group together the larger part of the consumers of a market implies, in practice, also assuming the cost of isolating oneself from their benefits, whether they be economic or social.

Changing the trend

Bill Gates said that the majority of people overestimate what they can achieve in one year and underestimate what they are capable of doing in ten. Although the big technology companies seem to be immune to popular distrust, a tendency has probably begun that may force changes in the sector. Firstly, because the sovereignty of these companies now threatens that of states themselves, which are starting to lay down limits to them through legislation and taxes. In response, many of these organisations are starting up advertising campaigns and lobbies to maintain their profits, which means that citizens must measure how far they support or reject the proposals of legislators to change the situation. A good example of this is the dispute between Airbnb and Barcelona City Council which, beyond the controversy, may become an example of new forms of public-private cooperation to correct the undesired effects of this and other major corporations in more cities around the world.

Meanwhile, a conscious, critical and informed use of these services is the only way of forcing the technology companies to take these demands into account. It must not be forgotten that trust between the parties is the basis of a large part of the collaborative economy business model and of other innovations such as blockchain; these companies should therefore not feel comfortable in the face of an ongoing decline in their credibility. In the same way that a market exists for organisations that highlight their social or environmental responsibility, technology companies that respect the sovereignty of user data and the working conditions of their staff, or that take into account their impact on the environment, could also obtain economic returns. In the field of journalism, for example, the number of people paying for online media has increased in many countries and is now a significant strategy in Spain, where 11% of news consumers pay in exchange for independent, quality information. In the same way, there is already talk of pay-for-privacy models as an opportunity for services that guarantee the protection of their users' data. Companies such as AT&T, Comcast and Facebook itself have already tested these models or have signalled their intention to do so in the future.

Cultivating trust is difficult. It is intangible corporate capital, built up with time and effort. The big technology companies have been responsible for social advances and improvements that today are taken for granted, but that have been the result of their sustained drive for innovation. Whether they are capable of maintaining their credibility will depend on where they direct their steps in the medium term and on how demanding their users become, users who increasingly appear to judge technological advances by their social impact.





The 4.0 Revolution and its Tunes

As in the previous pointzero revolutions, the story of efficiency and economic growth coexists with the possibility of an alternative narrative that situates the community at the centre.


VEB Robotron Elektronik Dresden, 1974 | Bundesarchiv | CC-BY-SA-3.0

The concept of “Industry 4.0”, also known as the Industrial Revolution 4.0, smart industry or the cyberindustry of the future, refers to a new way of organising the means of production, characterised by a fusion of technologies that is blurring the lines between the physical, digital and biological spheres. It represents the start-up of a large number of smart factories, opening the way to a new industrial revolution. And as in the previous industrial revolutions, two discourses coexist: one built around economic growth and the other around the promise of a better world that puts the community at the centre.

In the territory of the mythical/quasi-fake/fake there is a version doing the rounds about how the “4.0” was conceived. Let's go back to 2011. A committee from the German Forschungsunion Wirtschaft-Wissenschaft presented a research and innovation programme to Chancellor Merkel under the title “Industrial Revolution 4.0”. It was a clever action plan. It was postulated as the response to the evident domination of the new flows of innovation, creativity and production by actors that were not precisely the heavyweights of German industry but the new digital giants, mainly American. The plan was a defence of physical production, the production of things, a field in which Germany was strong. From that strength it was necessary to take advantage of the digital in order to gain an advantageous position on the battlefield where those with a “start-upist” vision – Silicon Valley – had started to play. And to dominate.

What was new in all this?

The relationship between information, communication and production is an old one. Logistics, for example, depends critically on having timely and accurate information on supply needs and production capacities, and on their synchronisation. Manufacturing, strictly speaking – since Taylor – requires a great deal of information. From initial designs to plans, to the status of material in warehouses, to the pace of an assembly line, to the individual performance of those who form part of that line: it is all information. Robots, that materialisation of information able to act with a certain intelligence, had already been in plants for decades. The Internet connected them with databases, warehouses, part designs and people to accelerate production, supply and distribution up to speeds that we now find normal, at the cost of a technical and computational complexity that was unsuspected until recently. A package from Amazon Prime reaching our home in less than an hour seems natural to us now, but it was not at all normal until very recently.

The Fourth Industrial Revolution | World Economic Forum

So, what is new about 4.0? Here we welcome Klaus Schwab. Schwab is as German as Merkel, but he is also the deviser, organiser and promoter of a well-known club of friends: the World Economic Forum, otherwise known as the Davos Forum. And he floated the idea of the Revolution 4.0 among them. In his own words:

We stand on the brink of a technological revolution that will fundamentally alter the way we live, work, and relate to one another. In its scale, scope, and complexity, the transformation will be unlike anything humankind has experienced before. We do not yet know just how it will unfold, but one thing is clear: the response to it must be integrated and comprehensive, involving all stakeholders of the global polity, from the public and private sectors to academia and civil society.

The First Industrial Revolution used water and steam power to mechanize production. The Second used electric power to create mass production. The Third used electronics and information technology to automate production. Now a Fourth Industrial Revolution is building on the Third, the digital revolution that has been occurring since the middle of the last century. It is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres.

It is an argument, a style and a rhetoric that we already know from other “pointzeros”. We can trace its family resemblance, for example, in the announcements of the opening up of the Internet to commercial use and, subsequently, in the arrival of 2.0. For example, in the times of Tim O’Reilly, the following was said about “2.0”:

The Web 2.0 refers to World Wide Web sites that emphasise user-generated content, usability (ease of use, even by non-experts) and interoperability (this means that a website can function well with other products, systems and devices) for end users…

Technically, the closest part to “4.0” in the case of 2.0 is interoperability. A system that has been designed taking a second or third system into account – located in any other part of the world – can act or interact with them, interoperate. This recalls the connection of very diverse 4.0 systems: logistics chains, transport systems, decision-making systems, etc.

What is rather different about 4.0 is what Schwab takes to its logical conclusion: the identification of information with its biological and material correlate. If you have a design in Barcelona, someone can download and manufacture it in China; today that is more than common. Where previously we placed people within this complexity to pilot these processes, Schwab and friends now suppose that there will be artificial intelligence in their place. In other words, with 4.0 what is connected is no longer just materials and repositories of content and information controlled by programmed and repetitive processes, but also certain types of smart agents that can make decisions at a given point of the logistics, production and distribution chain. With one more turn of the screw, connecting and considerably stretching what is producible into the biological, “Fourpointzero” Schwab is saying that if you have the description of its DNA, you have the programme to “manufacture” a living being and insert it into this universe of smart 4.0 agents.

What resonates in all of these statements is the tune of efficiency and economic growth:

In the future, technological innovation will also lead to a supply-side miracle, with long-term gains in efficiency and productivity. Transportation and communication costs will drop, logistics and global supply chains will become more effective, and the cost of trade will diminish, all of which will open new markets and drive economic growth.

Schwab dixit. We are already familiar with this tune. From time to time it is seasoned with a reference to the exponential growth of capacities and results.

Thus, then, the horizon drawn by 4.0 is homogeneous with and continuous from the other pointzeros: a policy of economic dominance based on the exceptional use of another technology and its combinations with those that already exist, all based on information.

The 2.0 seduced us with promises of collaboration, of contribution to the conversation, of community-building. However, it has also led us to the current realities of extractive data capitalism. With regard to 4.0, it may be a necessary and urgent exercise to speculate on its future directions. This complex interconnection of the physical and the material, the digital and the living, the repetitive and the intelligent, is animated by the same values and the same logic that piloted the previous pointzeros. Even Schwab himself warns of the “disruptive” (an adjective that can never be lacking in any pro-technological narrative) changes in numerous spheres of life: the environment, everyday life, forms of work (or its disappearance), etc.

Design For City Making | Elisava | Unknown rights

In the face of the hegemonic fourpointzero, an alternative policy and discourse appear that also turn to manufacturing. They argue that if we can distribute manufacturing capacity in small facilities that are close to consumers, to producers or to both, perhaps we can create alternative economic models. Or seek new forms of sustainability through manufacturing. Or, as others point out, organise cities differently.

Certainly, if we distribute these resources across the city in a way that is accessible to the population, it would be possible, in theory, to create new urban production and logistics chains where users are also producers (here the “prosumers” of the old 2.0 days strike again). They could manufacture their own products, whether organic or made of other materials, food or clothing. Here there is a mix that we already know: trust in a new technology viewed from the angle of a new communitarianism, structured once more around the same production technologies that have been created and used by the economic giants. In other words, there are proposals to use those fourpointzero technologies dominated by the major actors of the digital sphere under a concept of benefit different from the capitalist one. These range from fablabs to other manufacturing spaces.

Perhaps it is necessary to investigate how to avoid repeating the consequences that we already know from our experience with the previous pointzeros. In the project “Design for City Making”, for example, we are exploring how to create a production and distribution commons based on digital manufacturing technology resources. The intersection between artisans, digital artisans, the usual “makers”, “bricoleurs” and “handymen”, and an open, small-scale but highly replicable version of digital production technologies, encourages a community-based view with values different from those of the 4.0 of the Schwab version. It is an ambiguous territory where we can anticipate the difficulties of constructing the digital commons that we already experienced in the times of 2.0, when we were all going to build a commons of contents and knowledge and ended up, rather, as unpaid producers for third parties. Piloting all of this is not at all easy. Can we avoid another new innocence?

Merkel very much liked the action plan presented to her by the Forschungsunion Wirtschaft-Wissenschaft, but she suggested a change, just a minor one: instead of being called “Industrial Revolution 4.0”, the plan should be called purely and simply “Industry 4.0”. A change without importance, a little whim of power, virtually nothing at all.


E. Brynjolfsson and A. McAfee (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W. W. Norton & Company.

Martínez, A. Mestres and M. Hinojos (eds.) (2017). Deconstruyendo el Manifiesto Maker. Barcelona: Trànsit Projectes–MakerConvent, pp. 18-33.

K. Schwab (2017). The Fourth Industrial Revolution. New York: Crown Publishing Group.

K. Schwab (2016). The Fourth Industrial Revolution: What it Means and How to Respond. Accessed on 25 May 2018.

Tiqqun. La hipótesis cibernética. Accessed on 22 March 2018.





Not Computable

Algorithms are not yet able to capture all the nuances of a human decision, so there are some decisions that, for the moment, only we can take.


FOSDIC (Film Optical Sensing Device for Input to Computers) | U.S. Census Bureau | Public Domain

It is risky to reduce ethics to a mathematical calculation of damages and options. Machine learning algorithms make use of context to take decisions, without any causal comprehension of the phenomena they aim to predict. Human ethical knowledge is pre-reflexive; it is not simply the conclusion of a purely rational argument, but the result of an educational process that begins in our childhood. This is a sphere of human activity to which we have immediate access; it is intuitive and the result of a shared nature and culture. That intuition is not, at present, reducible to algorithms, neither those based on rules and definitions nor those based on statistical similarity.

The judge carefully examines the reports he has received. He then consults his notes and issues his verdict: “Taking into consideration your first name, your age, the neighbourhood that you live in and your secondary school grades I have decided to hereby declare you guilty of the crime of which you are accused.”

If a judge made a statement like this, it would cause a major scandal, yet the police of Durham, in the UK, are using a machine learning system known as HART (Harm Assessment Risk Tool) to make predictions regarding who might re-offend. The programme uses correlations such as the type of neighbourhood in which a person lives, their ethnic origin (inferred via the proxy of their first name) and school absenteeism rates to detect possible “disconnected youths” who might commit crimes against property or people.
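
To make the mechanism concrete, here is a minimal sketch in Python, with entirely invented data and feature names; it is in no way Durham's actual model or data, only an illustration of how a risk score trained on proxy variables (a postcode area, a group label inferred from a first name, absenteeism) ends up weighting those very proxies.

```python
# Hypothetical sketch of a HART-style risk score: a classifier trained on
# proxy features. None of this reproduces the real system; the data below
# are synthetic and deliberately biased to show how proxies drive the score.
import random
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction import DictVectorizer

random.seed(0)

def synthetic_case():
    """One fictional custody record with a biased historical label."""
    postcode = random.choice(["north", "centre", "south"])
    name_group = random.choice(["group_a", "group_b"])
    absenteeism = random.random()
    # Historical labels are biased: one neighbourhood/group combination was
    # policed more heavily, so it shows up more often as "re-offended".
    base = 0.2 + 0.4 * (postcode == "north") + 0.2 * (name_group == "group_a")
    label = int(random.random() < base + 0.2 * absenteeism)
    return {"postcode": postcode, "name_group": name_group,
            "absenteeism": absenteeism}, label

records, labels = zip(*(synthetic_case() for _ in range(5000)))
vec = DictVectorizer(sparse=False)
X = vec.fit_transform(records)
model = LogisticRegression(max_iter=1000).fit(X, labels)

# The learned weights show that the proxies themselves carry the prediction.
for feature, weight in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{feature:25s} {weight:+.2f}")
```

Printed weights for the postcode and name-group features come out clearly non-zero: the model has simply memorised who was policed, not who is dangerous.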

The problem is not in the effectiveness of the system. Even if the programme were 100% effective and got it right in all cases, it would continue to be incompatible with our idea of what justice is.

A basic intuition on which our idea of justice is based was captured by philosopher G. H. von Wright in his book Explanation and Understanding. There, von Wright argued that, in giving meaning to our lives, we use two different paradigms: causes and reasons. According to von Wright, causes explain why something has happened in the way that it has happened, while reasons enable us to understand the ultimate why of what has happened. Causes belong to scientific reasoning and explain to us in detail how something happens. Reasons are situated in the sphere of the humanities and explain to us the why of a situation, based on a situated, human comprehension, on how we understand the world and what our values are.

Austrian philosopher Ludwig Wittgenstein considered that intuition in one of the aphorisms of his Philosophical Investigations, where he argues that if a judge decides to declare a person guilty, he will do so on the basis of the reasons that led that person to act. If he were guided by the causes, he would always let the person go free. In other words, if we seek the ultimate cause of a criminal behaviour, we always find something: childhood abuse, mental disorders, etc. In the last instance, we can always say that the neurons of his brain adopted such a structure and formation that they inevitably led that person to commit a crime. It is the reasons for which someone acts, and not the causes, that we use to assess the ethicality of a certain behaviour.

As I explained in the first post in this series (Bias in a Feedback Loop: Fuelling Algorithmic Injustice), today's machine learning algorithms lean heavily on context and have no kind of causal understanding of the phenomena that they are trying to predict. If, by chance, all the people called David who live in a certain neighbourhood in Durham were already on record as dangerous criminals and I went to live in that neighbourhood, I would have the police at my house every five minutes, suspecting some crime. It is not difficult to imagine, even, that, sick of the pressure, I might end up actually committing a crime.

Let's imagine that in the future we manage to build artificial intelligence programmes capable of carrying out correct causal reasoning, something from which we are still a long way away, according to experts in the reasoning of AIs such as Judea Pearl. Even so, we would still not have an AI capable of reasoning like a judge, since it would lack access to knowledge of the reasons.

The recently departed philosopher Hubert Dreyfus argued this position coherently in diverse articles about what expert ethical knowledge would consist of. Dreyfus insisted that a merely discursive knowledge, a knowledge of causes, about what is ethical and what is not would be insufficient to capture a person's ethical reasoning.

Abrir nuevos mundos | Hubert Dreyfus

A team of researchers from MIT uses the famous trolley problem, devised by contemporary philosopher Philippa Foot, to try to teach self-driving cars to take ethical decisions. The idea is that, faced with an inevitable accident in which there are diverse options, the electronic brain of the smart car will take the most morally adequate decision: an attempt to reduce ethics to a mathematical calculation of damages and options.

This system, although it would undoubtedly be useful, does not capture all the nuances that a human decision contains. Basically, our ethical knowledge is pre-reflexive; it is not simply the conclusion of a purely rational argument, but the result of a formative process that starts in our childhood, in which we have a series of personal experiences that allow us to make sense of our world. These experiences are strongly influenced by the society and culture in which we live. Our ethical knowledge is spontaneous, intuitive and immediate. It does not establish truths etched in stone; rather, it is fluid, and it reviews and reframes our perception of whether an action is ethical or not according to our interactions with other people and cultures. This approach rests on our basic capacity to give meaning to our environment, which requires a radical autonomy that enables us to establish ourselves as individuals in the world.

Let's look at another specific example. In 2016, Facebook decided to remove the iconic photograph of the then nine-year-old Kim Phúc, naked, fleeing from a napalm attack during the Vietnam War. An automatic algorithm had classified it as “possible pornography” and had removed it from the account of a Norwegian journalist who was carrying out a study on the horror of war. It took protests from the newspaper and even from the Norwegian government, as well as thousands of indignant users, to make Facebook back down and finally return the photograph to the social network.

The official response from Facebook, when it insisted that it was not going to permit the photograph, was that it is difficult to determine when a photograph of a minor without clothing is appropriate and when it is not. And, indeed, if what we are seeking is an algorithm that tells us whether an image is pornographic or not according to the amount of exposed skin shown, we are not going to find anything. To understand why the photograph of the “napalm girl” is not pornographic but a denunciation of the horror of war, we need to understand many things about human nature: what gives us pleasure and what causes us suffering; what a soldier is, what a war is, how a child feels when she sees flames raining down from the sky. All of this is left out if we simply study the degrees of similarity between the image of Kim Phúc fleeing and the photographs that one finds on a site like Pornhub.

United States Supreme Court Justice Potter Stewart presented the dilemma perfectly when, in 1964, having to decide whether a particular film was pornographic, he stated: “I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description [“hard-core pornography”], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.” “I know it when I see it” perfectly sums up what I am seeking to describe in this post. There is a sphere of human activity to which we have immediate, intuitive access, the result of sharing a nature and a culture. That intuition is not reducible to algorithms, neither those based on rules and definitions nor those based on statistical similarities. But we all recognise it when we see it.

Dreyfus thought that these humanistic capacities were totally beyond the reach of electronic machines. For my part, I confess that I am agnostic on the question. I do not see any ontological or epistemological reason why we cannot one day have true artificial intelligences that rival our own human capacities, although my intuition tells me that that day is still a long way off.

But I believe there is a clear lesson to be drawn: the development of digital technologies should not be guided exclusively by scientific and market criteria; we must also include a humanistic vision that tells us when a machine or an algorithm makes sense and when it is better to trust human intuition.

References

Moral Machines: Human Perspectives on Machine Ethics

Can You Sue a Robocar?

To Build Truly Intelligent Machines, Teach Them Cause and Effect

Being in the World. A Philosophical Documentary





World Siliconization

The irresistible expansion of digital liberalism points towards regulating, for the sole purpose of benefit, the course of life through algorithms.


Parade to mark the 750th anniversary of Berlin. Berlin, 1987 | Thomas Uhlemann, Bundesarchiv | CC BY-SA

We live in a hyperconnected world that philosopher Éric Sadin has been analysing for some time. In his book “La humanidad aumentada” (Caja Negra Editora, 2017), Sadin postulated the emergence of a “parallel humanity” capable of processing more efficiently the large quantity of information that is generated, giving rise to an algorithmic governability that puts our sovereignty at risk. In his new book, Sadin examines the birthplace of digital technologies, Silicon Valley, and traces the way in which they seek to redefine our existences through digital ecosystems. We publish an advance extract of the book “World Siliconization. The irresistible expansion of digital liberalism”, which will be published in June 2018, courtesy of Caja Negra Editora.

Historically, colonization meant aggressive attempts at domination aimed at seizing power over territories by force, which meant it either met with fierce resistance or achieved a cooperation spurred by self-interest. It proceeded from the imposition of one order over another, pre-existing order, directed towards the exploitation of natural resources and human energies with a view to enriching the conquering forces and the countries they belonged to. This is not the case here; we are talking about an endogenous will that considers that this economic and cultural ideology presents, beyond its original source, a universal value that has become the yardstick for measuring the economic vitality of countries and that, by virtue of the evidence of its truth, must be actively imported and implemented.

It is a “self-colonizing” impulse moved by two driving forces that act in combination. Firstly, through the proselytism of actors who, having updated their “conceptual exploitation system” and with a touch of grace, disseminate the precepts of the “silicon bible” everywhere. A powerful movement is underway, manifested in the expansion of a doxa that is disseminated by industrialists, the majority of economists, universities and the major schools, future strategy agencies, think tanks and lobbying organizations of all orders, management theoreticians or even the front covers of journals that celebrate left, right and centre the start-uppers that are “breaking moulds”. The “Franciscan” dogma is preached at TED conferences to the beat of slogans that can be “shared” in 140 characters, or to major professional masses under the form of preaching sessions pronounced by “expert priests” who confirm, with the aid of their slide summaries and through “acquired experiences”, the truth of the silicon gospel.

But the core of this followership, in addition to these “natural influencers”, is the political class that encourages it – beyond the right/left confrontation, within a socio-liberal consensus mainly present in democracies – convinced that “from now on it is necessary to adapt to whatever Silicon Valley does”.[1] Positioned at the advance posts of world siliconization are elected representatives and the heads of State administrations, on the same footing as industrialists. It would be false to say that they have been “left behind”,[2] because in truth they are proceeding towards the institutionalization of the spirit of Silicon Valley at the heart of increasingly numerous and varied organizations in the public sector.

What is La French Tech?

The “self-colonization” of territories then occurs because, after the halfway point of the first decade of the 21st century, fascination is no longer content to remain passive, but is manifested through specific actions: the construction of valleys across the five continents, in the form of industrial parks and “incubators” designed to favour the creation of start-up enterprises, to connect the different actors and to board the data economy train without delay. They are “valleys of knowledge” that in the vast majority of cases form the corporate purpose of “public/private consortia” according to new state and liberal standards of territorial rearrangement. These “poles of competitiveness” benefit from grants agreed by territorial governments or collectives, and sometimes carry labels bestowed on them by committees of experts that attest to the importance of these new national causes, such as the “La French Tech” label. This label aims to rival the powerful eagle that is Silicon Valley, and exhibits as its logo a red cockerel, apparently generated by a synthetic-image program dating from the 1990s, staring into space in a rather rigid and ungainly posture. Does this icon, with its outdated design of surprising modesty, express a subconscious confession regarding the impossibility of truly rivalling the original model despite the announced intentions? Because the cockerel is never going to turn into an eagle, and it will always be the latter that, at the end of the story, devours its flesh and its soul. It is a kind of lesson similar to a La Fontaine fable, but updated and worthy of consideration.

France, which in other times qualified as one of the major colonial powers and had difficulty in freeing itself of that sin, today subjects itself enthusiastically to a model that contributes not only to altering its historical industrial specificity in favour of the silicon model, but even to dismantling a large number of legal and political achievements, some of which were forged by the country itself and inspired the world. We believe in vain that every country possesses its own identity, that each reconfigures things in its own way, and that the colonization process probably includes the granting of a “local tonality” to the hegemonic standard. Beyond these superficial phenomena, the only thing that counts is the main structure which, in this case, ignores diverging and potentially honest concepts to commit to an outrageous unilateral ideology aimed at regulating, for the sole purpose of profit, the course of life through algorithms.

Sunset over Silicon Valley. California, 2016 | Anthonyavalos408, Wikimedia Commons | CC BY-SA

It is appropriate to proceed with an analysis of the “contagion of ideas”, or an “epidemiology of representations”, to re-use the terms coined by Dan Sperber.[3] In other words, to examine certain psychological mechanisms that, by dint of being widespread and persistent, engender social macro-phenomena. It is necessary to be able to capture the affect at work in what depends, largely, on a belief in a form of salvation built on vague suppositions. This is why gurus of all kinds are the new stars of professional conferences, invited to assert their expertise in a singular context that mixes uncertainty about the viability of the model with a sense of ineluctability about its future realization. They offer a guarantee of faith, justifying the fairness of the conviction through the clairvoyance of their “vision”, because what has characterised the digital economy since the advent of the so-called “net economy”, from the mid-1990s to date, is that never has an industrial movement been based so largely on haphazard conjectures and projections rather than on confirmed realities and patent results. These are exercises in euphoria-inducing futurology that precede the events and are necessary for the legitimation of initiatives, contributing especially to making any sceptical counter-discourse marginal.

And here, too, a threshold is crossed: we witness a level of enthusiasm bordering on mysticism, dazzled by a spellbinding Merlin, ridiculously dressed in a Superman outfit, who liberates us from the angst of the times. It would then be necessary to move from a psycho-sociology[4] such as that advocated by Gilbert Simondon, whose aim was to reveal the psychological components that influence technical evolutions beyond their apparently “natural” course, to a psychopathology of both Silicon Valley itself and the “desire for Silicon Valley”, which together constitute a new syndrome that ought to be included in the list of new mental illnesses of our times: psycho-siliconism. We know to what point Frantz Fanon, a judicious and methodical author on colonization and decolonization struggles and also a psychiatrist by profession, linked colonization phenomena with psychiatric disorders through the forms of dispossession that they induce. And this analysis underlines the dual form that contemporary dispossession takes. Firstly, dispossession with respect to our collective power of deliberation, in relation to a phenomenon presented as inevitable and imposed with thoughtless and guilty haste. And secondly, a dispossession that is more decisive, although in a different way: that of the autonomy of our judgement, given that the greatest influence of this economic model depends on neutralizing the free decision-making capacity and spontaneity of human beings.


[1] “It is necessary to adapt to whatever Silicon Valley does”, affirms Paul-François Fournier, of the bpi, in Liza Kroh, «French Tech : label affaire», Libération, 5 January 2016. The bpi (Banque Publique d’Investissement [Public Investment Bank], also “Bpifrance”) is a public establishment that assigns support funds to start-ups and “La French Tech” companies, with a budget for this purpose that amounted to 600 million euros when created in 2012 and was increased to 1.4 billion euros per year in 2016.

[2] See certain articles or works that assert, wrongly, that the political class lags behind the general movement of digital “innovation”, or even that it does not “understand much about contemporary technological mutations”. This mistaken postulate supposes, firstly, that the truth is on the side of those who have understood and integrated the nature of the aforementioned evolutions, and it also obscures the keen and recent attention of political leaders to sustaining, through public funds, the development of the “data economy”.

[3] Dan Sperber (1996). La Contagion des idées. Théorie naturaliste de la culture, Paris, Odile Jacob.

[4] Regarding the notion of “psycho-sociology”, see Gilbert Simondon (2017). Sobre la técnica, Buenos Aires, Cactus.





Bias in a Feedback Loop: Fuelling Algorithmic Injustice

In order to prevent machine learning algorithms from perpetuating social inequalities, public debate is necessary on which problems are automatable.


Loop the Loop, Coney Island, N.Y. Unknown author, 1905 | Library of Congress | Public Domain

The problem of using algorithms based on machine learning is that if these automated systems are fed with examples of biased justice, they will end up perpetuating those same biases. For staunch defenders of technology, this could be resolved with more algorithms that detect and eliminate biases automatically. But it is necessary, firstly, to take into account that technology is not neutral but is a tool in the hands of humans and, secondly, that the use of a biased system to make a calculation of probabilities will always produce a result that is also biased, which will be applied in the world and will create more inequalities, generating a highly problematic feedback loop. It is necessary, therefore, to open up a debate on which decisions can be extracted from data, one that takes into account fundamental human rights and freedoms.

Technological solutionism sustains that the majority of today’s social and political problems are the result of human inefficiency. Only a solid injection of digital technologies can resolve them. High levels of poverty in X neighbourhood in Mumbai? Let’s give them mobile telephones and internet connections and a blockchain protocol and magically, entrepreneurs will start crawling out from under the stones and prosperity will return to the city. People with increasingly less faith in justice because they see how judges take biased decisions daily, whether due to political pressures or ideological conditioning? Let’s get judges to decide based on machine learning algorithms and injustice will disappear from the face of the Earth.

Algorithmic injustice

In a previous post I have already spoken about the problems that may be involved in using machine learning algorithms. In short, the problem is that these automated systems, if fed with examples of biased justice, will end up reproducing and strengthening those biases. If, in a country such as the United States, people of African ancestry have a much higher probability of ending up in prison without bail, and we train a neural network with those data, the algorithm ends up replicating the same types of biases.

Defenders of technological solutionism counter such arguments in the following way: the biases that lead to injustice, such as discriminating against certain races in a courtroom, have not been produced by any machine; they are the result of human action. Algorithms are like knives: they are neither good nor bad, just nor unjust. It is the person who applies them who is just or unjust. In the worst-case scenario, algorithms will merely maintain the injustice that already exists, the result of human actions. The solution to possibly unjust algorithms is even more algorithms, ones that detect and eliminate inequalities and biases automatically.

Almost unanimously, defenders of technological solutionism end their statements with a request to be left to work in peace: the general public does not understand how artificial intelligence works; people get led astray by the sensationalist press. Only experts should decide when to apply such-and-such an algorithm and when not.

I will not go into the implications involved in questions such as justice ending up exclusively in the hands of entrepreneurial engineers. Here I would like to show that the response of the technological solutionists is basically wrong.

The Era of Blind Faith in Big Data Must End - Cathy O’Neil | TED Talk

Supposed technological neutrality

First of all we can observe that the supposed neutrality of technologies is a simplification. Any technology is designed. In other words, it has been produced for a purpose. And although some purposes may be neutral, the majority of purposes have an ethical dimension. Knives in the abstract do not exist. There are many types of knives, and each type is designed with a specific purpose in mind. A scalpel is designed to be used in an operating theatre. Evidently somebody may use that scalpel to kill someone, but it was not designed for that. The guillotine of the French Revolution was designed with a very specific mission: chopping off human heads. It is possible to imagine a “positive” use for the guillotine, maybe for cutting watermelons in half, but clearly it would be a rhetorical exercise to show the supposed neutrality of something that is anything but neutral.

Equally, the person or people who programmed Volkswagen's software so that certain diesel models appeared to pollute less than they actually did were designing an algorithm with the very clear purpose of deceiving and defrauding the public. Neutrality is conspicuous by its absence.

Algorithms in a biased context

But the most problematic part of the argument is supposing that introducing machine learning algorithms into a biased context is an action without consequences. These types of algorithms have no comprehension or conceptual modelling of the problem that they analyse: they are limited to assigning probabilities to a result based on a statistical analysis of the current situation. Judges can be as biased as they want, but they are obliged to explain the reasons behind their decision. Other jurists – and, yes, the general public too – have a right to analyse these decisions and indicate whether they find them to be correct or not. The legal system of any democratic country offers ways to appeal judicial decisions if it is considered that laws have been applied in a biased or improper way.

How I'm fighting bias in algorithms - Joy Buolamwini | TED Talk

In contrast, when an algorithm recommends which new television series will interest us most, or reports to a bank whether it is a good idea to grant a loan to a certain person, or calculates whether another person is likely to commit more crimes and so would be better kept in detention until trial, it does not indicate the reasons why it is proposing that result. It simply bases itself on previous regularities. A certain percentage of people who watched many of the series that I have watched loved that new series, so I will probably enjoy it too; over 70% of people of a certain age, civil status and average wage living in neighbourhoods similar to the loan applicant's ended up not paying their loans back, so it is best not to grant hers; and so on.

This reversal of procedure introduces a new factor: if we use a biased system to make a calculation of probabilities, the final decision taken will also be biased. That biased decision will be applied to the real world and will create new inequalities; the statistical regularities of that slightly more unequal world will be used by the algorithm as input for making new decisions, decisions that will again be applied in the real world, which will be a little more unequal than before. In this way we create a problematic feedback loop in which the system gradually becomes more and more unequal, a bit like an electric guitar left next to its amplifier, generating more and more noise until it ends up bursting our eardrums.
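
The loop is easy to reproduce in a toy simulation. In the sketch below (purely illustrative, with invented figures), two groups have identical real repayment behaviour but slightly different recorded histories; because the "algorithm" grants fewer loans to whichever group looks riskier, that group generates less fresh data and its apparent disadvantage is never corrected.

```python
# Toy simulation of the feedback loop described above (not any real system).
# Both groups default at exactly the same true rate; only the historical
# record differs. The decision rule then controls what data gets collected.
historical = {"group_a": {"paid": 80, "defaulted": 25},
              "group_b": {"paid": 80, "defaulted": 20}}
TRUE_DEFAULT_RATE = 0.2  # identical for both groups, by construction

for year in range(1, 6):
    observed = {g: d["defaulted"] / (d["paid"] + d["defaulted"])
                for g, d in historical.items()}
    # Decision rule: many loans for the group that *looks* safer, few for the other.
    grants = {g: (50 if observed[g] <= min(observed.values()) else 10)
              for g in historical}
    for g, n in grants.items():
        defaults = round(n * TRUE_DEFAULT_RATE)
        historical[g]["defaulted"] += defaults
        historical[g]["paid"] += n - defaults
    print(f"year {year}: "
          + ", ".join(f"{g}={observed[g]:.2%} observed (granted {grants[g]})"
                      for g in sorted(observed)))
```

Run for a few simulated years, the printout shows the initially disadvantaged group never catching up, even though both groups behave identically: the system's own decisions starve it of the data that would correct the bias.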

The automation debate

Fortunately, solutions exist. We need to open up a public debate to decide which processes can be automated and which cannot. That debate must certainly include experts in artificial intelligence, but we also need to include experts in the humanities, the different social agents and the general public. And the criterion for deciding whether a certain decision is a candidate for automation is simple. All we have to do is ask: is this a decision that can be directly extracted from data? For example, what is the maximum weight that a bridge can bear? In that case, we can leave the matter in the hands of experienced engineers, who will know how to optimise the algorithms.

In contrast, if it is a question in which there will ultimately be an appeal to reason such as, for example, deciding on whether a social network is designed in such a way that it guarantees respect for diversity or not, then at the end of the decision-making chain there must be a team of people who, despite their possible mistakes, emotions and ideological biases, understand that many decisions in the political and social sphere can only be made with a holistic understanding of what it means to be human and what the basic rights and freedoms are. This is something that simply cannot be extracted from purely static data.

References

Mathbabe. The blog of Cathy O’Neil, author of Weapons of Math Destruction

Automating Inequality by Virginia Eubanks

Algorithms and Human Rights





Machines Now Know How to Terrorise Humans

The Shelley Project studies how to produce horror stories as a result of collaboration between humans and artificial intelligence.


Ernest Thesiger and Colin Clive in Bride of Frankenstein directed by James Whale, 1935 | No known copyright restrictions

Shelley is the first artificial intelligence that writes horror stories. The project aims to explore how humans and machines can collaborate, the obstacles of that relationship and above all to confirm whether artificial intelligence is capable of provoking primary emotions in humans. We talk about this with Manuel Cebrián, an MIT Media Lab scientist, who talks to us about deep learning and explains the process by which they taught Shelley to write horror stories.

1816 was the Year Without a Summer. The climate went completely crazy. Frosts and droughts ruined harvests and spread hunger across North America and Europe, while in Asia monsoons caused flooding. The warm weather stayed away and the snow kept falling, even in June, which is how a group of five English friends came to spend their holidays in Switzerland confined in a mansion close to Lake Geneva. The five were Lord Byron, poet; John Polidori, doctor; Percy Shelley, poet; his wife Mary Shelley, writer; and her step-sister, Claire Clairmont.

In their boredom they challenged each other to write the most frightening horror story possible. That competition led Lord Byron to write his poem “Darkness”, narrated by the last man on Earth; Polidori to think up a tale of vampires that would later inspire Bram Stoker to create his famous Count Dracula; and Mary Shelley to conceive Frankenstein, although it took her another 14 months to finish the novel.

Who could have guessed that 2017 would turn out to be another year without a summer and that Byron, Shelley and Polidori would once more find themselves confined, although this time on Twitter, where they would write new tales of terror.

“Shelley is the first artificial intelligence (AI) to write horror stories. It is capable of writing frightening tales, alone or in collaboration with humans on Twitter or with the two other robots into which it has breathed life: Lord Byron and Polidori”, explains Manuel Cebrián, a scientist in the Scalable Cooperation group at the Media Lab of the Massachusetts Institute of Technology (MIT) in Cambridge (USA).

For this, Shelley draws on an AI capability called deep learning, which can learn by itself from large quantities of data, imitating the functioning of the brain's neural networks.

The algorithm analyses the information and extracts what is relevant to the task it is going to carry out, along with patterns that can be used to diagnose diseases, for example, in the case of biomedical applications; to discover new exoplanets, as recently announced by NASA and Google; or to write new and original horror stories from scratch, like Shelley.

For her debut as a horror writer, this tweeting android prepared by “reading” a vast quantity of horror literature, from classics such as Edgar Allan Poe and H.P. Lovecraft, to modern authors like Stephen King. She also devoured the horror channels on Reddit – a news aggregator. She then processed everything she had read, hundreds of thousands of tales of terror, extracted patterns, and started to generate terrifying stories.
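
Shelley itself relies on a deep neural network trained on that corpus; as a much simpler stand-in, the toy word-level Markov chain below illustrates the same basic idea of extracting statistical patterns from example texts and then using them to generate new sentences. The three-line corpus is, of course, a placeholder.

```python
# Toy illustration only: learn which word tends to follow which in a tiny
# "horror" corpus, then generate new text from those transition patterns.
import random
from collections import defaultdict

corpus = (
    "the house was silent and the door was open . "
    "the door creaked and something moved in the dark . "
    "something in the dark was waiting for me . "
)

words = corpus.split()
transitions = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)   # pattern: which word follows which

def generate(start="the", length=12, seed=1):
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate())
```

A real system replaces the word-pair counts with a neural network trained on hundreds of thousands of stories, but the principle is the same: no understanding, only statistical regularities recombined into something new.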

To find out which of them best met the objective – above all at the start – of frightening whoever read them, Shelley used people's feedback: the likes and retweets that each new tale she began received.

Thus, every so often the algorithm sends a tweet or a short thread with which it starts a new story, which anyone can follow and continue simply by replying to any of the messages that end with the hashtag #yourturn. Shelley, however, does not answer every message she receives; only those with the greatest narrative potential, capable of generating a long thread.
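
The selection step can be pictured with a small, entirely hypothetical sketch: candidate continuations are ranked by the engagement they attract, and only the most promising threads are carried forward. The weighting of likes against retweets below is invented, not the project's actual formula.

```python
# Hypothetical ranking of reply threads by engagement; data and weights invented.
from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    likes: int
    retweets: int

def engagement_score(reply: Reply) -> float:
    # Assumed weighting: a retweet spreads the story further than a like.
    return reply.likes + 2.0 * reply.retweets

replies = [
    Reply("I heard it breathing behind the wall #yourturn", likes=40, retweets=12),
    Reply("then nothing happened and I went to sleep #yourturn", likes=3, retweets=0),
    Reply("the wall was warm, like skin #yourturn", likes=55, retweets=30),
]

# Continue only the replies with the greatest apparent narrative potential.
to_continue = sorted(replies, key=engagement_score, reverse=True)[:2]
for r in to_continue:
    print(f"{engagement_score(r):6.1f}  {r.text}")
```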

“She is capable of learning which narrative threads function best for horror. And she can generate scary scenes that are like nothing that already exists. She has created a completely new type of terror”, Cebrián affirms enthusiastically, adding: “Humans are no longer even needed. Together with Polidori and Lord Byron, the three robots are now capable of writing and gradually improve themselves unaided”.

For the moment, however, and much to the relief of flesh-and-blood horror writers, the tales that the three robots are capable of generating are a maximum of five paragraphs long.

Shelley also does not respond to comments that are racist or sexist, contain insults or are incoherent. And there are plenty of these. “There were people who spent their time trolling the project and trying to get Shelley to say things that she should not say. The desire to destroy is innate in human beings, just like the desire to create. Evil is always just around the corner”, points out Cebrián, who confesses that he is a fan of the horror genre.

The Shelley project made its debut at Halloween 2017, and forms part of a trilogy in which Cebrián, together with researchers Pinar Yanardag and Iyad Rahwan, also from the Scalable Cooperation group at Media Lab (MIT), aim to explore in what way humans and machines collaborate, what obstacles exist in that relationship and, above all, whether artificial intelligence is capable of provoking primary emotions in human beings such as fear, using cooperation strategies.

“Creating a visceral emotion such as fear continues to be one of the pillars of human creativity. The challenge is especially important at a time when we are asking what the limits are for artificial intelligence” say the three researchers on the project’s website.

“In recent years there is much talk about artificial intelligence being a threat for human beings. We want to explore to what point this is true and be one step ahead of any possible ill-intentioned use of this technology. If somebody wanted to use artificial intelligence to instigate fear in society, to propagate ideas with the aim of terrorising: could they? The answer is yes, but with qualifications”, Cebrián considers.

The first experiment along these lines was launched in 2016 when, also for Halloween, they published Nightmare Machine, a robot capable of generating haunted faces and places. Like Shelley, this machine of horrors is based on deep learning. The researchers first trained the system, feeding it with faces of celebrities such as Brad Pitt, landscapes and monuments such as the Eiffel Tower, and a corpus of supposedly terrifying images, such as zombies and cities that are haunted or filled with toxic waste.

Neuschwanstein Castle, transformed with Nightmare Machine

The system mixed the two types of images to different degrees and showed the results to humans, who voted via the project website on which image they found most frightening (the Brad Pitt zombie is very popular).

Thus, in the end, the system had thousands of digitally generated faces and, thanks to the votes of over two million people, could classify them and choose the scariest. Interestingly, the algorithm also learned what was most frightening in each country, and for men and for women. “There are cultures in which AI is not capable of learning what is scary, such as the Asian cultures, where the nightmare machine does not function very well because they have a vision of horror that is completely different to our own”, notes Cebrián.
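
The ranking itself is a simple aggregation problem. A minimal sketch, with made-up votes and image identifiers, might look like this: votes are counted per image and per country, so the scariest faces can differ from one culture to another.

```python
# Illustrative only: aggregate "which is scarier?" votes per country and rank images.
from collections import Counter, defaultdict

# Each vote: (country of the voter, id of the image judged scarier)
votes = [
    ("ES", "zombie_pitt"), ("ES", "haunted_eiffel"), ("ES", "zombie_pitt"),
    ("JP", "haunted_eiffel"), ("JP", "haunted_eiffel"), ("JP", "zombie_pitt"),
]

by_country = defaultdict(Counter)
for country, image in votes:
    by_country[country][image] += 1

for country, counts in by_country.items():
    ranking = [img for img, _ in counts.most_common()]
    print(country, "->", ranking)
```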

To check the real effectiveness of this nightmare machine, they conducted an experiment in which they used a psychometric test to measure participants' anxiety levels, and confirmed that the 10 faces or places rated as scariest also caused the most anxiety among the volunteers. “AI was capable of detecting people's extreme emotions and of provoking them”, points out Cebrián, adding: “So, if somebody wanted to use manipulated images to frighten people, could they? The answer is yes”.

The nightmare machine is not the first attempt at using AI to cause fear. The IBM Watson supercomputer helped to create the trailer for the science fiction film ‘Morgan’ (2016). For this, the algorithm analysed hundreds of horror film trailers and then processed the entire feature film to identify the best horror scenes. Finally, it isolated 10 moments, some 6 minutes of video, which were put together by a human editor to create a coherent story. AI cut the process down to barely 24 hours when, generally, film trailers take between 10 days and a month to produce.
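
The final selection step can be imagined as a scoring-and-budgeting exercise. The sketch below is not IBM's actual pipeline: the scene names, scores and cut-off are invented, and it only illustrates keeping the highest-scoring moments until a rough duration budget is filled, leaving the montage to a human editor.

```python
# Hypothetical selection of pre-scored scenes for a trailer; all figures invented.
scenes = [
    {"id": "corridor", "seconds": 35, "horror_score": 0.91},
    {"id": "lab_breakout", "seconds": 50, "horror_score": 0.88},
    {"id": "lunch_scene", "seconds": 40, "horror_score": 0.12},
    {"id": "night_visit", "seconds": 45, "horror_score": 0.79},
    {"id": "credits", "seconds": 30, "horror_score": 0.05},
]

BUDGET_SECONDS = 6 * 60  # roughly the six minutes of footage mentioned above

selected, used = [], 0
for scene in sorted(scenes, key=lambda s: s["horror_score"], reverse=True):
    if scene["horror_score"] < 0.5:      # skip scenes that are not scary at all
        break
    if used + scene["seconds"] <= BUDGET_SECONDS:
        selected.append(scene["id"])
        used += scene["seconds"]

print(selected, f"{used}s of candidate footage for the human editor")
```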

Now about to make its debut is the third and final part of this trilogy formed by Shelley and Nightmare Machine. It will be on 1 April this year, which in many countries is April Fool’s Day. “We will close the trilogy with Norman, in honour of Norman Bates, an AI that will be capable of frightening us in the most psychological way”, Cebrián comments.

For the time being, it seems that machines are capable of scaring us. What will those who handle them do with that, in an era when thousands of bots circulate around the social networks spreading false news and manipulated images, capable of all types of instantaneous reactions? And, perhaps more importantly, what will we, society, do with it?

“Thanks to these types of experiments, we can now detect, for example, when an online entity of this nature – the notorious bot – has been created, and understand better how they work and what their limits are,” says Cebrián. “Sometimes you have to do evil things to be able to see the limit of doing evil.”





When Creativity Ceased To Be (Exclusively) Human

How do artistic artificial intelligences work? We analyse what seemed to be the last stronghold of human creativity.


James Montgomery Flagg with a mannequin, 1913 | Library of Congress | No known copyright restrictions

The development of artificial intelligences (AI) capable of composing a tune or painting a picture is the outcome of research that ranges from the study of the human mind and its processes to the design of systems capable of replicating the cognitive mechanisms of the artistic brain. Disciplines as varied as neuroscience, information technology, art theory and philosophy converge in a path that takes us from the discovery of the spark of creativity to its replication in an artificial system. The future outlook of this work leads us to consider whether art will at some point cease to be considered an exclusively human activity.

Let's cast our minds back to the Industrial Revolution. The incorporation of machines into the production process showed, as have so many other events in our history, that Newton's Third Law is equally infallible in questions outside the realm of physics: for every action on a body there is an equal and opposite reaction. And the action of automating certain repetitive processes in the textile factories was met by the Luddite reaction of the less skilled workers who had been entrusted with performing those tasks. Increases in employers' profits led inevitably to the loss of many jobs, and that angered people. Hardly surprising.

The passing of the centuries has shown this scenario repeating itself, affecting spheres increasingly far removed from those initial repetitive tasks that anybody could do. Machines became more sophisticated, robots were born, and they became the focus of the main criticisms from a neo-Luddism that railed against the incorporation of new technologies into our lives. However, everything pointed to there being certain types of work that would be impossible for these metallic gadgets to replicate. Human creativity was, like the village of Asterix, the little stronghold that the technological empire would be unable to dominate.


However, artificial intelligences arrived, and the idea that there is something purely human that a machine would be unable to emulate became obsolete. Some of them beat us at chess. Others at Go. And still others took charge of showing us that reading thousands of medical records to produce a diagnosis could be done in a matter of minutes (although in this life nothing is perfect, my dear Watson). Each sphere of human creativity was invaded by AIs which, day by day, were doing things better. Even the world of art, that maximum expression of human self-realisation, discovered with surprise how engineers arrived with their little notebooks and started noting down their steps with renewed curiosity.

Prussian Class S 10 engine | Daniel Mennerich | CC BY-NC-ND

And so the artistic artificial intelligences were born: complex systems that, through learning techniques, neural networks and genetic algorithms, started to imitate the work of painters, writers and musicians. To achieve this, their designers and programmers had to understand how a creator's brain works and what it draws on to obtain results. This objective led them to work in collaboration with neuroscientists and art theoreticians. Together they asked themselves the essential question: how does artistic inspiration arise?

From there they extracted a common factor that serves equally for writing a story, painting an oil painting or composing a tune: the artist feeds off the work of other artists. To paraphrase Picasso: good artists copy, great artists steal, and artificial intelligences categorise into databases. Thus they put their AIs to work compiling the maximum information available on their area of creation. Shelley, an AI writer of horror stories, was fed with works ranging from classic authors such as Poe to more contemporary ones such as Stephen King, as well as all the horror writing available in the online public domain and even a collection of 150,000 stories from the Reddit channel Nosleep. Meanwhile Flow Machines, a musical AI, was given a ration of 13,000 songs, classified by style. And The Next Rembrandt was presented with 168,263 pictorial fragments of the 346 paintings by the painter from whom it takes its name.

Once it has a good bookshelf of references behind it, the AI artist receives its commission. A picture. A story. A song. And that is where the algorithms with which they have been programmed – their genetic code – start functioning to develop the optimum result, or what amounts to the same thing, the completed work. Just as a human author does, the AIs conduct test after test comparing the results with the works that they know according to different parameters. With human help, or independently, they get closer and closer to their objective until the point where they consider they have reached the last iteration. And once the creative phase is completed, they express the result in the chosen medium.
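
One way to picture that iterative refinement is a toy genetic algorithm. The sketch below is purely illustrative: candidate "works" are just strings, the reference is a single sentence, and fitness is reduced to character matches, standing in for the far richer models and similarity measures that real artistic AIs use.

```python
# Toy generate-compare-refine loop: keep the candidates closest to a reference,
# mutate them, and repeat. Everything here is a deliberately simplified stand-in.
import random
import string

REFERENCE = "the night was dark and full of quiet voices"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate: str) -> int:
    # "Similarity to what the system knows", reduced here to character matches.
    return sum(a == b for a, b in zip(candidate, REFERENCE))

def mutate(candidate: str, rate: float = 0.05) -> str:
    return "".join(c if random.random() > rate else random.choice(ALPHABET)
                   for c in candidate)

random.seed(42)
population = ["".join(random.choice(ALPHABET) for _ in REFERENCE)
              for _ in range(200)]

for generation in range(300):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if fitness(best) == len(REFERENCE):
        break
    parents = population[:50]                       # keep the best candidates
    population = parents + [mutate(random.choice(parents)) for _ in range(150)]

print(generation, best, fitness(best))
```

Each generation the candidates drift measurably closer to the reference, which is exactly the "test after test" dynamic described above, minus any intention behind it.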

Creativity | Mark van Laere | CC BY-NC-ND

Observing the results of these AIs' work is little short of amazing, but we must not forget one fundamental point: these artists do not create out of a vital impulse or out of need. Their art is not born of a specific poetics, nor of a proposal gradually refined over time. They are capable of imitating the creative process of the human mind, but ultimately they do no more than carry out the orders for which they have been programmed. We have managed to automate creativity and to model its different parts, but we have yet to replicate the impulse that lies behind the artist's first step.

Perhaps with the arrival of the Singularity this paradigm will change completely. Once AIs are capable of self-improvement and can transcend our own capabilities, the artistic sensitivity they lack today may emerge naturally from their silicon entrails. AIs will produce art not because they have been programmed to do so, but because they feel like creating. They will feel the impulse of someone who needs to write to express an emotion, or of someone who, unable to paint, would irremediably wither away. Within their own private Maslow's pyramid of needs, they will be fast-tracked to the pinnacle of self-realisation.

The interesting thing about such an event will lie in seeing how we react to these new forms of expression. We understand works of art – or at least partly; hence the usual controversies around contemporary art – because we are human beings and our minds are built on common scaffolding. We share, one could say, the same conceptual base which, however much it varies with our training in the fine arts or in aesthetic theory, is housed in similar spaces: our brains. If art is a reflection of reality, all the artistic creations existing to date have been filtered by the same type of mind, the human mind.

The arrival of a new batch of artistic and self-aware AIs, in contrast, will offer us something never seen before: art conceived by non-human minds, based on their own creative impulses. Faced with this situation, I can imagine only two possible scenarios. In the most pessimistic case, we will be incapable of understanding art conceived by an AI; our capacity for aesthetic reception would simply be surpassed. But there is still room for hope. Perhaps in the future we will continue to share cognitive schemas with AIs that, after all, were born out of our way of understanding the world. In that case, we may witness new forms of expression that stimulate our minds as never before, works that eclipse every Stendhal syndrome that has ever existed or will exist, such would be the intensity of the artistic enjoyment they cause. Perhaps, if only as a counterweight to the many dystopias on offer, the future holds not a violent uprising of AIs but a revolution filled with beauty and art: artificial intelligences whose aim is not to destroy, but only to create. After all, if an intelligence surpasses human capabilities, it is not too far-fetched to imagine that it might also shed one of humanity's greatest defects, right?





Towards a New News Ethic

The phenomenon of fake news and algorithms is forcing us to reconsider the difference between source and channel and the need for a global news ethic.


Women standing in a picket line reading the newspaper PM. Author and date unknown | Kheel Center | CC BY

The internet has represented a change of scale on all levels, affecting the way we access, consume and relate to information. In just a few years we have moved from a process in which our searches were driven by curiosity to an automated system that suggests content it assumes we will like. Meanwhile, the boundaries between medium, channel and source have become blurred, and we all believe ourselves to be transmitters and receivers of content. This new paradigm of supposed socialisation of information is taking place in a hyper-centralised scenario where the algorithms of companies such as Google, Facebook and Twitter channel and influence a high percentage of the information that we consume. How does all this affect us? What are the potential risks and benefits?

Auctoritas

The internet has represented a revolution and a change of scale in the creation and distribution of and access to information, modifying the cognitive ecosystem on a worldwide level.

The number and diversity of people able to receive and emit information has grown exponentially. This can be considered a democratising and positive fact: the average cultural level of humanity is much higher than it was a hundred years ago. Since knowledge is not a good in scarce supply, the more dissemination, the better for everyone. When the bases of any society improve, the entire system improves.

However, how does this massification affect the information hierarchy? Now that many more people are talking, how can we discern who has the authority to talk about a subject? How can we define which sources are reliable? Who determines which voices become canonical, in other words, how are new reference models defined?

In the current context, the canon as traditionally set by an institutional elite has died to make way for an ecosystem of canons, in which as many hierarchies converge as there are identities and motivations within each individual and each society. The institutions that until now decided who ought to be the reference points in the different fields of knowledge have lost power, which is now distributed among new voices.

Traceability

In recent years, society has become progressively more aware in its consumption habits. As citizens we want to know where the products we buy come from, and we expect the rights of the people involved in producing them to be respected. This has led to the birth of initiatives such as fair trade, organic food, responsible investment funds and medicines not tested on animals.

Ethics have demanded traceability in production systems. Today, any product purchased from the supermarket incorporates a batch number that enables us, if an incident arises, to detect its origin. In this case, technology has served to improve the quality of production and distribution systems, and also of consumer service.

However, can the same be said of news consumption? At the height of the hyperlink era, how can it be that we spend the day talking about fake news and internet hoaxes? Should it not be possible to "trace" pieces of information and validate the sources from which the data have been extracted, so that consumers can rest assured that what they are reading has passed through various filters and quality controls and, if an incident arises, they can contact the original producer? Was this not journalism's role?

How to Spot Fake News | FactCheck.org

Channel

What holds the most power today? The information producer, the source, or the distribution channel?

The internet was born as an open project, one that enabled decentralised and horizontal communication between any two nodes on the network. Today, major corporations such as Google and Facebook endeavour to concentrate information and users as much as possible and then retain them inside their environments, seeking to convert the network into a series of bunkers increasingly isolated from each other.

These companies want to be channel and source at the same time. When we search for weather information on Google, we do not notice which agency made the forecast (Meteocat or Aemet?). The data are presented as if the source were Google itself, while the real source becomes increasingly hidden. The same thing happens when we search for information on the stock market or the state of the traffic… or when we read the news.

These platforms demand our continual attention and do everything possible to ensure that we consume ever more information without having to leave them. Some, like Instagram, go even further and do not even allow the use of URL links.

The media have also joined this centralising effort. To stop readers from leaving their websites, the main newspapers no longer include hyperlinks to external sources in their news pieces.

We need to be aware of this situation and fight the growing monopoly so that the internet remains multi-channel and multi-source, guaranteeing, promoting and defending diversity on the web. And we need to give some thought to what the role of the fourth estate is, and should be, within this context.

Emotion and the cognitive process

The media are not only tending to concentrate information; they have also begun a fierce battle for clicks, because their income increasingly depends on Google advertising. As a result, journalists increasingly strive to produce click-seeking headlines that appeal to emotion rather than reason and make the click irresistible.

Emotion per se is not a negative bias in the cognitive process, since curiosity has always been a source of knowledge. The risk appears when, to win clicks, many traditional media forget their ethics and their style guides and come dangerously close to the ways of the sensationalist press. Popularity, numbers of visits, likes, retweets and the like have caused a progressive crisis in argumentation, in favour of ever more polarised emotional content. Headlines have traded neutrality for scandal. Quality has given way to repetition.

Facebook emojis

Algorithm and active searching

Added to this new scenario is the fact that we have shifted from a "search" environment to a "feed" environment, a content channel. We no longer consult the newspapers; the news reaches us via our timelines.

It is important to be aware that these channels, whether news-based or cultural, are not neutral. Behind them is an algorithm that filters, orders and presents the news or knowledge we are most likely to enjoy, based on our behaviour history.

An algorithm is nothing more than code designed by an organisation. Who designs it, how, and with what aims – commercial or political – is an aspect that should occupy a prominent place in public debate. Three or four companies worldwide are deciding, in an opaque way, what news and cultural material we consume. Progressively they are leading us to stop searching, while trapping us in a "news bubble" made to measure for us, or for all those people who fit the same pattern as us. We saw this with companies such as Cambridge Analytica and the 2016 elections in the USA.

Algorithms tend towards the convergence of patterns and try, by default, to simplify our complexity. They interpret us and fit us into a given pattern, which is a direct attack on our individuality. They reinforce stimuli that work, making it difficult for our tastes or interests to evolve.
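
A minimal sketch of this reinforcing loop, with invented topics and a deliberately naive scoring rule (real platforms use far richer signals), shows how ranking by similarity to past behaviour keeps surfacing more of the same:

```python
from collections import Counter

# Hypothetical reading history: the topics of items the user clicked on.
history = ["politics", "politics", "football", "politics", "gadgets"]

# Candidate items waiting to be ranked in the feed (made-up examples).
candidates = [
    {"title": "Election night live blog",  "topic": "politics"},
    {"title": "New phone review",          "topic": "gadgets"},
    {"title": "Experimental jazz feature", "topic": "jazz"},
]

def score(item, history_counts):
    """Score an item by how often its topic already appears in the history."""
    return history_counts[item["topic"]]

counts = Counter(history)
ranked = sorted(candidates, key=lambda item: score(item, counts), reverse=True)

for item in ranked:
    print(item["title"], "->", score(item, counts))
# Items matching past behaviour rise to the top; the jazz feature scores 0,
# so the feed keeps reinforcing what the user already consumes.
```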

Faced with such a powerful tool, we have the responsibility to keep our curiosity active, to step outside our pattern and go out to discover new things, so as not to end up "framed" within a given social profile, however small and segmented it may be.

In the same way that, years ago, we made an effort to find information on the subjects that interested us (music, books, etc.), now we need to make an effort to escape it. Only in this way can we break with and expand our limits and tastes. We have more patterns than ever; we can travel more paths than ever before, as long as we keep our curiosity alive.

Towards a global news ethic

We humans have always needed filters to access information: teachers, books, manuals, the media… The disseminator is a basic and necessary tool for accessing knowledge.

Today this task is increasingly assumed by machines, with the potential benefits and risks that this involves. We need to keep this in mind and act accordingly. For this, these technologies should be open by default and developed with free software, so that economic or cognitive biases in their design can be detected and avoided.

In an ideal world, a good algorithm could become a "good disseminator". The good disseminator translates from the top down, adapting the discourse to the level of the recipient and respecting the original source. An algorithm could perform this task, but technology is not neutral. For this reason, we need to incorporate ethics into technical decision-making.

We are now more aware than ever that, as members of a society, we are nodes in a network, where we each play a role. The action is collective, but we, as individuals, are responsible for it. We need to keep defending an open, free and decentralised internet. We need to fight to ensure that the change of scale that the internet represents goes in the right direction. We cannot accept the biased products of large corporations as the only reality. We need to encourage the individual responsibility of choosing and discovering, and the responsibility and collective power that we have as a community of users. We need to differentiate between source and channel, and fight to ensure that there is never just one single channel.

We need to keep striving to know the canon, but with the freedom to deviate from it. Divergence opens up new possibilities until it establishes new paradigms. Let's defend them.





James Bridle: Technology Is Politics

With his work, this British artist advocates the duty of citizens to shape progress in the face of the lights and shadows of the digital revolution.


Thalia Bell, Home Demonstration Agent, operating radio. Tallapoosa County, 1926 | Auburn University Libraries Special Collections and Archives | Public Domain

The development of the information society is based on the more or less conscious belief that technology in itself improves people’s lives. It’s a plausible narrative, but one that does not always take into account the more controversial – and little debated – aspects of this historical and cultural process. The work of artist and writer James Bridle is situated on the less visible side of the digital revolution and puts some of its excesses under the spotlight. Taking advantage of his visit to the CCCB as a guest at The Influencers festival, we review his most prominent projects.

Despite concentrating enormous economic power and providing a previously unimaginable capacity for control, the technology system is generally viewed favourably. Its image as a driving force of development and progress survives the passing of the years, unlike other spheres of the economy such as banking and property. All this is despite externalities such as data extractivism, governmental surveillance, military applications or, more recently, the threat of the mass destruction of employment.

Artist and writer James Bridle has based the greater part of his career on revealing the intricacies of new technologies, especially of those linked to political and military power. With a focus that is non-apocalyptic and not even pessimistic, this Briton resident in Athens advocates knowledge as a necessary condition for evaluating reality. “We live in a world shaped and defined by computation and it is one of the jobs of the critic and of the artist to draw attention to the world as it truly is”.

Assaulting the skies

If the cliché is true and the 21st century really began with the collapse of the Twin Towers, James Bridle is undoubtedly an artist of the third millennium. Born in 1980, his younger years were marked by the invasions of Afghanistan and Iraq, terrorist attacks in his home country, the war in Syria and the birth of ISIS. His work draws on a context of tension between security and freedom, a space of blurred frontiers that has seen the proliferation of some rather concerning technical artefacts: military drones.

Unmanned combat aerial vehicles are weapons designed to carry out targeted attacks or espionage missions without the need for a pilot. Due to their small size, military drones can act in enemy territory without being detected or attracting attention. For a time, this physical invisibility was replicated in the media: the Western press did not report on their use until approximately 2012, despite such aircraft having been deployed in countries such as Pakistan and Afghanistan since 2004.

James Bridle's work began as a response to this lack of information. Drone Shadows, one of his first projects on the issue, responded to a simple question: what would it be like to stand next to a drone? In 2012, photographs of this type of weapon were practically non-existent, so it was difficult to imagine their scale or level of technological sophistication. After researching their measurements and appearance, the artist started to draw outlines of these aircraft in public spaces, in an attempt to make them more visible to passers-by. In a parallel line of work, he opened Dronestagram, an Instagram account with images of the exact coordinates where attacks had been documented. In this case, the implicit question was different: why do we know so little about the places and people that our governments are bombing?

Drone Shadow 007. London, 2014 | © James Bridle

In parallel with his work on military drones and prior to the expansion of these tools for domestic and civil use, Bridle began a line of investigation with home-made helium balloons which he called “drone prototypes”. He used them as a medium to start exploring ways of counteracting the use of police aircraft, as well as of obtaining independent and self-managed aerial images. His most prominent project in this sphere is The Right to Flight, the installation of a balloon suspended over the sky of London for four months. The device was equipped with cameras and routers that enabled it to communicate data in real time to anyone who requested it. At the same time, the British artist organised workshops and conferences on the playful and political possibilities of citizen aerial photography, as well as its relationship with mapping and governmental surveillance.

The map and the territory

Although it is not obvious, aerial photography has implications for how we perceive physical space. Digital map services show the user, carrier of a GPS signal, as the centre of the world, but they also establish a kind of perspective and power relationship based on who has the technical capacity to capture images from the sky. In an attempt to deconstruct this way of seeing, James Bridle has developed works such as Rorschmap and Anicons, which offer a new cartographic aesthetic. In the hands of these applications, the captures taken by home-made balloons or satellites are transformed into kaleidoscopic images, recovering the beauty and the sense of discovery of the first atlases.

The fight over aerial photography also reflects tensions at ground level, especially when it is used for security purposes. Bridle's generation – let's remember, that of the No to War protests, in England too, as well as the Occupy movement – has learned the hard way that public space is not public in every sense of the term. On certain occasions, the State may deploy its authority over the land to assert its hegemony, especially when that land is used for unforeseen purposes. "It's at such moments", Bridle writes, "that the real structures of city life become visible: a matrix of permissions and observations, many of them unreadable most of the time".

The tension over control of space is also made evident by the proliferation of public and private video surveillance systems. The artist has worked on this issue in different formats (Every CCTV Camera, The Nor), including walks around London in which he photographs and records all the CCTV cameras he finds. In this kind of Situationist dérive, Bridle counted up to 140 cameras in a single 1.4-mile walk. And as if the sensation of being spied upon were not enough, the police questioned him on one occasion after he was seen wandering around taking instant photos. As he himself later noted with irony, he was stopped on suspicion of "the potential crime of paying attention".

Migration and citizenship in a virtual world

The veto on what can be seen and traversed covers public and private spaces, but it also extends to administrative processes. In Seamless Transitions, a 2015 project on the deportation of immigrants, Bridle discovered that it is illegal to photograph the detention centres and courts used in Great Britain to carry out repatriations. This network of facilities, paradoxically, features luxury lounges and private aircraft. The reason: commercial airlines do not want to transport passengers under duress, especially after an Angolan citizen being deported to his country died of cardio-respiratory failure in 2010.

With the aim of demonstrating the existence of these spaces, James Bridle obtained plans and satellite photographs, interviewed academics and activists, and worked with an architectural visualisation agency to recreate them. In this way, and although the result is only a virtual 3D representation, the artist sheds some light on these inaccessible corners while revealing how the British immigration system functions.

Seamless Transitions

Although Bridle has worked specifically on the issue of immigration and the refugee crisis in Europe, his reinterpretation of the concept of citizenship in a broad sense is remarkable. This is the central theme of Citizen Ex, a browser extension that tracks where the servers of the websites we visit are located. The tool shows the locations in real time and draws a flag out of the fragments of these nationalities. A simple but effective proposal that points to a new form of citizenship: algorithmic citizenship. In this new way of inhabiting the world, people's freedoms and rights are calculated, rewritten and questioned based on their internet browsing, in a way that is imperceptible to the user but accessible – albeit in aggregated form – to governments and businesses.
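
As a rough illustration of the idea (not the extension's actual code), the sketch below resolves a handful of visited domains to server addresses and tallies the share of browsing per country. The mapping from IP address to country is stubbed out with a made-up table, since a real implementation would rely on a GeoIP database.

```python
import socket
from collections import Counter

# Hypothetical browsing history: domains visited during a session.
visited = ["example.org", "example.com", "example.net"]

# Stand-in for a GeoIP lookup: mapping an IP address to a country requires
# an external database; this made-up table keeps the sketch self-contained.
FAKE_GEOIP = {"93.184.216.34": "US"}

def country_of(domain):
    try:
        ip = socket.gethostbyname(domain)   # where the server answers from
    except OSError:
        return "unknown"
    return FAKE_GEOIP.get(ip, "unknown")

counts = Counter(country_of(d) for d in visited)
total = sum(counts.values())

# "Algorithmic citizenship": the share of browsing served from each country.
for country, n in counts.most_common():
    print(f"{country}: {n / total:.0%}")
```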

Giving form to progress

“The personal is political” was one of the most popular maxims of feminism in the 1960s and 1970s. It was used to make manifest that the systems of oppression against women are not only articulated in the economic or legal sphere; everyday life too conceals relations of power that need to be revealed and fought against.

For James Bridle, the technological is political. New technologies are interwoven into the very nature of today's society, which means it is necessary to question their design in terms of power. Drones, CCTV cameras and servers are just technical gadgets; it is the legal and political systems around them that give them form and enable them to operate in one direction or another.

“Technologies are stories we tell ourselves – often unconsciously – about who we are and what we are capable of,” writes James Bridle in a recent essay, “but they are not in and of themselves future-producing, magical or separate from human agency.” With his art and his critical work, this British artist advocates the duty of citizens to give shape to progress, a task that appeals directly to our critical capacity in the face of the lights and shadows of the digital revolution.





Machines and Humans: We Need to Talk

The rise of personal virtual assistants calls for reflection on who is helping whom, and to what extent we will delegate tasks to machines.


The robot George having breakfast with its inventor, William Richards. Berlin, 1930 | German Federal Archives, Wikipedia | CC-BY-SA 3.0

There is an ever increasing number of machines that talk. Chatbots and personal virtual assistants are the advance guard of artificial intelligence systems for mass consumption. The history of consumer technologies is the history of the growing delegation of tasks by humans to machines, generally with positive results, though it also means a transfer of power. When we delegate actions to machines, we are allowing the firms that control them to learn from our interactions.

Stevens has devoted his whole life to unconditional service to others. He is attentive, reliable and discreet: cordial but never familiar. He organises and manages household tasks while looking after his "master" and his entourage: he is always ready to receive visitors, convey a message, pour a drink, or attend to an indisposed guest. He is professionally selfless, impeccable and diligent. Years of coexistence and attention mean that, with time, Stevens comes to anticipate the needs and tastes of the people he serves. This is his job as butler, and it is, as shown in the film The Remains of the Day (1993), a source of pride to which he devotes his entire life.

Throughout history, the dominant classes have used subordinates to lighten (and make more enjoyable) their day-to-day burdens. Today, Google reminds us when we need to leave the house to catch a flight, organises our holiday photos without being asked, and tells us whether to expect traffic hold-ups. Sometimes it also asks us what we thought of such-and-such a restaurant. In its way, the machine is telling us it is there to lend us a hand.

The idea of assistance lies at the heart of the imaginaries surrounding our conceptions of the past, present and future of technologies. At the end of the nineteenth century, like today, the automation of the labour force fuelled the fears of the working classes. In this context, Oscar Wilde flirted with utopian socialism when he wrote that “in the proper conditions […] all unintellectual labour, all monotonous, dull labour, all labour that deals with dreadful things, and involves unpleasant conditions, must be done by machinery”. This marked the advent of the tension between the narrative of the machine as an instrument of oppression and, at the same time, as a promise of freedom.

Networks made of flesh, imagination and cables

When technologies involve a degree of complexity, a series of conditions must be met before they can move from the speculations (or delusions) of research laboratories and corporate communication offices to operational functions, uses and markets. In between, academics, hardware producers, researchers, investment funds, programmers, governments and creators of fiction on paper, celluloid or other formats piece together a socio-technical fabric in which they have to fine-tune the expectations (visions of the future), interests, laws, meanings, narratives and fears that permit or preclude certain technologies crossing the threshold that separates fiction from reality.

Illustration by Marta Handenawer | CC BY–NC

One of the rhetorical figures acting as Charon between these two realms is that of the machine at the service of human beings. Present in the narrative of the washing machine as much as in that of C-3PO, the idea of technological assistance has spurred on the innovation industry for decades. Thanks to advances in algorithms that learn from our interactions and relate information on their own, and to systems for understanding and processing natural language, it has finally succeeded in catalysing an ecosystem in which intelligent personal assistants have become a new standard of interaction and, therefore, the new El Dorado.

In innovation and design circles, it is said that for a disruptive technology to be accepted it has to start from known coordinates before gradually turning towards the unknown. The fact that messaging apps are among the most used (WhatsApp, Facebook Messenger, etc.) is a further argument – on ground prepared by favourable cultural and technical habits – for a giant step in the normalisation of conversation as a human-machine interface.

Technophilic bubbles

In the last three years, companies like Amazon, Google, Apple and, soon, Samsung have launched objects that materialise, make intelligible, and give specific consumer-market uses to artificial intelligence systems (a very useful poetic licence for market interests).

Though with a far more rudimentary function than Stevens, these emerging technologies are capable (not without limitations) of understanding what the user says to them and giving an autonomous answer. They simulate human conversations and behaviours to carry out specific tasks such as putting on music, giving information, organising schedules or calling a taxi, among others.
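
The underlying pattern – understand a request, pick a task, carry it out – can be sketched with a toy keyword-based router. The keywords and stubbed actions below are invented for illustration; commercial assistants rely on statistical language understanding rather than hand-written rules.

```python
# Toy keyword-based intent routing: the shape of "understand a request,
# pick a task, run it". Keywords and actions here are invented.

def play_music(utterance):
    return "Playing something you might like."

def weather(utterance):
    return "Looks like rain this afternoon (stub forecast)."

def add_event(utterance):
    return "Added a reminder to your calendar (stub)."

INTENTS = [
    (("play", "music", "song"), play_music),
    (("weather", "rain", "forecast"), weather),
    (("remind", "calendar", "meeting"), add_event),
]

def assistant(utterance):
    words = set(utterance.lower().split())
    # pick the intent whose keywords overlap most with the request
    best, best_overlap = None, 0
    for keywords, action in INTENTS:
        overlap = len(words & set(keywords))
        if overlap > best_overlap:
            best, best_overlap = action, overlap
    return best(utterance) if best else "Sorry, I did not understand that."

if __name__ == "__main__":
    print(assistant("Can you play some music?"))
    print(assistant("Will it rain tomorrow?"))
```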

Each is presented with a different narrative and personality (the software used by the device): Amazon Echo was presented as a helpful new member of the family; Google Home promotes its ability to help in different circumstances; and the Apple HomePod (lagging behind) distinguishes itself by emphasising the sound quality of its speaker. The strategies of these three companies are cosmetically different, but they share a common aim: to position their artificial intelligence system at the centre of our homes, either to sell more via the internet or simply to mediate in one of the great spaces not yet colonised by the perpetual corporate machinery of service (and surveillance).

Illustration by Marta Handenawer | CC BY–NC

Facebook's strategy, meanwhile, approaches the battle for conversational computing on another flank: chatbots, text-based conversational interaction systems. In 2016, the company presented a platform based on its messaging service (whose installation is mandatory to be able to chat on your phone), with a view to opening up the buoyant market of chatbots for companies. It was a shrewd move: it bought WhatsApp in 2014, obliged everyone who wanted to chat through the Facebook app to download Messenger and, once it had captured billions of users, opened the service up as a new way for companies to reach their clients. In this way, it has managed to conquer a considerable portion of an emerging market in which human beings talk to intelligent programs that offer specific services (such as psychological support) or provide an interface (sales, complaints or technical assistance) for companies.

Who’s helping whom?

However, whatever the celebratory press and industry marketing say, the time has not yet come when we can speak normally to a computer that responds with a perfect simulation of human speech. One example of the work still to be done is the chatbots that have displayed unforeseen or outright racist behaviour, such as Tay, one of Microsoft's artificial intelligence bots.

Virtual assistants mean new markets and forms of interaction, expected to be increasingly frequent in areas related to business, education, the car industry or health. Now, when this technology is becoming stabilised, is when it is most important to be able to influence the direction this form of machine-human relation is taking. Because designing a new technology means also designing a series of uses, habits and forms of interaction.

The history of consumer technologies is the history of a growing delegation of human tasks to machines, of new coordinates of cooperation between them, which has had generally very positive results. However, when these artefacts are connected to the network, capacity for action is also transferred to technocratic corporations that are constantly learning from our interactions with the machine or the environment in which it is situated.

This is why users, designers, developers, legislators and researchers have to take a stand on how we want this magma to consolidate: to prevent it from reproducing our vices; to imagine what machines and humans will be capable of in this new context; and, above all, to be able to choose what we let these machines do for us, and how, with the transfer of power that this involves.





The Dark Web, Between the Myth and the Promise of Anonymity

The uses of the hidden Internet seem to move between two opposing poles: illegitimate activities and political activism.


However little you’ve heard about the dark web, it’s impossible to navigate around it without preconceived ideas. The media relate it to drug trafficking, terrorism and child pornography; meanwhile, activists and journalists defend it as a tool for social change. Beyond the myths, navigating around the dark web turns out to be much more prosaic than one would expect, and its political potential resides in a practical aspect: recovering the right to anonymity in the era of digital surveillance.

The Internet will always have a hidden side. Actions as simple as filling out an online form, for example, generate databases that conventional search engines cannot, or simply do not want to, index. Alongside this type of content there are countless computing tasks and processes that make up the greater part of the Internet's volume, as well as sites with very few visits or no inbound links, which pass unnoticed by the search engines.

In reality, the larger part of this hidden Internet is accessible via conventional browsers. Only a small segment contains information that is intentionally private and anonymous and can only be consulted using specific software. These networks are given the name the dark web, although they are often incorrectly called the deep web; this second term encompasses the entire set of content not indexed by search engines, whether intentionally or not.

Although it is difficult to quantify the real volume of the dark web, the anonymity network Tor (The Onion Router) calculates that there are at present between 50,000 and 60,000 services with the .onion extension, one of the most popular ways of anonymising information. This figure puts into perspective the magnitude often attributed to the dark web in relation to the conventional web, which contains over 1.2 billion pages.

A threadless net

Despite the expectations it generates, navigating the dark web is a fairly frustrating and often boring experience. The encryption system used for browsing makes some pages load noticeably slowly, so it is easy to give up if you only enter the dark web out of curiosity.

Moreover, although content search engines do exist, their usefulness is limited. The desire for anonymity, unstructured design and the fleeting nature of many of the websites are obstacles to indexing, so most search engines do not return relevant entries, or at least not in the way we have come to expect from Google. For this reason, the main resources for browsing the dark web are link directories, which by definition do not include all the information available and are riddled with broken links. As if that were not enough, a study by MIT's SMART Lab found that 87% of sites on the dark web do not have a single external link, which does not help for jumping from one to another. According to the study's authors, this is not due to any technical impediment but to the culture established in that space, where people prefer to remain hidden and relations are not based on connections of trust.

Probably for this reason, the social layer of the dark web is very limited. Although native social networks exist, along with clones of conventional services such as Twitter or Facebook, most of them have few users and updates are sporadic, so establishing relationships is difficult. This does not mean that no active communities exist, but they tend to gather around discussion forums such as 8chan, the dark sister of the popular imageboard 4chan. In this sense, online sociability is closer to the Internet of the 1990s – forums, chats and email – than to the social web that predominates today.

Thus, navigating the dark web can be a counterintuitive and often solitary experience, and it is not clear that its use is really widespread. Although it is not the only anonymity network available, Tor currently has some 2 million users, and the service itself estimates that only between 1.3% and 1.5% of its traffic visits anonymous domains.

The dark side of anonymity

The dark web is used for legitimate and illegitimate activities alike, and although there is no certainty as to which is in the majority, different studies calculate that some 45-60% of its websites are illegal to some degree or in some jurisdiction. The truth is that any Internet user will easily find sites selling arms, drugs, hacking services, stolen goods, false passports and so on. What is not so clear is the volume of transactions these activities generate, or even whether they are real or scams, which are frequent and which some users try to identify. One of the few estimates available is the one the United States made with respect to the defunct Silk Road, a popular black market for drugs. The US government's indictment of its founder, Ross Ulbricht, attributed to the service some 1.2 billion dollars in sales between January 2011 and September 2013.

Child pornography is also visible and easily accessible, and although only 2% of the dark web is dedicated to paedophilia, the Global Commission on Internet Governance calculates that it attracts over 80% of total traffic. This does not prevent part of dark web culture from militating against this kind of content, and many forums prohibit and explicitly condemn it. One example is the attacks involving the hacker collective Anonymous, which in several waves blocked the servers of hundreds of domains of this type while publishing their user databases. In spite of all this, it must not be forgotten that such websites also operate on the conventional World Wide Web: in 2016, the Internet Watch Foundation found 57,335 websites of this type on servers open to the public.

Finally, despite the frequent association with terrorism, a study by King's College London reached the conclusion that sites with terrorist and extremist ideology are very scarce on the dark web. This may be because the propaganda and proselytising of these organisations is more effective on the conventional web, which is more populated and easily accessible to users from all over the world. At the same time, encrypted messaging services such as Telegram already offer anonymity in a simpler and faster way than the dark web, which means these organisations do not need this technology to operate.

Privacy as politics

Leaving aside the more controversial content, an alternative narrative for the dark web defends its use as a political tool for people living under authoritarian regimes, where access to information and freedom of expression are restricted. Although political discussion forums and blogs are common on the dark web, the truth is that none of them appear to be geared towards practical action, so one can suppose that these types of activities are carried out in secret.

Although it is practically impossible to obtain information on this practice, the data show that the countries where Tor is most used are those with strong restrictions on freedoms (United Arab Emirates, Russia, Iran, etc.) but also the most consolidated democracies (United States, United Kingdom, Germany, etc.), with countries with limited democracies having the fewest users. Canadian researcher Eric Jardine has studied this tendency and puts forward various hypotheses to explain the results: while its use in repressive countries could be explained as a way of avoiding governmental control, in liberal regimes it could be due both to unlawful activities and to the simple possibility of doing so (access to broadband, technology, legal permissiveness). Another possibility, Jardine notes, is the desire to avoid censorship and surveillance by governments and companies, or simply to defend the importance of anonymity on the Internet.

In relation to this latter aspect, the revelations of mass spying by Edward Snowden in 2013 provided evidence that governmental control is not something that should only concern citizens living under authoritarian regimes: people all over the world are exposed to scrutiny by security agencies, even those of foreign countries. For this reason, the potential of the dark web goes beyond the sphere of strictly political activism and highlights the idea of anonymity as an essential tool for democracy. Beyond its illicit content, the dark web is useful insofar as it recovers the promise of the early Internet: a space where people can think and express minority opinions freely, without the obligation of identifying themselves by their real name, without having to hand over their personal data to companies and without fear of government persecution.

Text and illustrations by Ferran Esteve.

Used images: Onion Prizetaker, Catalog of vegetables, Atomic Bomb Test, American small arms, Woman, Female, Emojione, Actual Demonstration by the Fire Department Training Station, Child with tiger, The Street railway journal, The American journal of anatomy and Personal identification.





African Cyberactivism Against Censorship

Africa has been an experimentation laboratory for Internet espionage, censorship and denunciation by civil society alike.


Ostrich succumbed by entanglement in barbed wire. Rift Valley, 1936 | Library of Congress | Public Domain

In many spheres, the African continent has been a mere laboratory for Western governments and transnational corporations. Although it may seem incredible, one of these spheres has been the digital one, in its darkest dimension: control, espionage, censorship. The interests of certain companies developing Internet control solutions have coincided with those of liberticidal governments bent on controlling their citizens beyond any notion of fundamental rights. Meanwhile, civil society has organised itself to denounce and fight these abuses and has achieved considerable success, including building a solidarity network that is as creative as it is tireless.

If in recent years the digital universe has become a stronghold for social construction, it has also proved to be a place where it is not difficult to strangle the most fundamental of rights. They are two sides of the same reality: a space for social and political participation in contexts of severe deprivation of freedom, a scenario for innovative citizen experiences, perhaps in slightly friendlier contexts; and, at the same time, fertile ground for espionage, a setting in which rights are violated and trampled, sometimes (only sometimes) with impunity. Not only has the African continent not been spared this macabre game, it has become a privileged board on which to play it. Because of certain specific conditions, the continent has been a laboratory for experimenting with many of the techniques and tools used in Internet espionage and censorship. In parallel, it is also the scene of dynamic and creative civil society movements that are trying to remedy the situation.

It is over a decade since the activation of the pernicious alliance between companies that develop Internet surveillance tools and African governments seeking to control protest movements or, more simply, their citizens in general, far beyond the boundaries of human rights. Julie Owono is a Cameroonian lawyer resident in France and the executive director of Internet Sans Frontières, an NGO dedicated to the defence of digital rights. Owono explains that ten or fifteen years ago, digital espionage companies found in some African countries the perfect conditions to experiment with their "solutions": a playing field with few Internet users, i.e. a perfectly controllable population, and one that was practically virgin territory in legal terms. "There was a virtual legal vacuum in terms of digital rights, rights to privacy, and the protection of personal data," Owono comments.

This was the context for trialling systems known as DPI (Deep Packet Inspection), which allow systematic spying on communications based on key content, and tools for "mapping" communities: drawing up a map of who is who in a protest group, who has been talking to whom, and how they relate through their conversations. In recent years, the names of countries such as Ethiopia, Egypt, Gabon and Nigeria have appeared linked to the purchase of espionage tools to control their citizens' communications. On the other side, Italian, French, British and Israeli companies appear as the vendors of these tools. Owono recalls the case of the French company Amesys, which has been taken to court by the organisation FIDH as an alleged accomplice in the crimes committed by Gaddafi's Libyan regime, on the grounds that the French company's tools allegedly enabled the Libyan dictator to identify dissidents.

#KeepitOn | Fighting internet shutdowns around the world

In any event, shutdowns – Internet blackouts – are in reality the favourite formula used by governments to censor and control activity in the digital environment. The #KeepItOn campaign documented 56 shutdowns in twenty-six countries worldwide during 2016, eighteen of them in a dozen African countries. The continent's authorities have slipped with particular ease into this drift towards censorship: the previous year, the number of African countries over which digital silence had fallen was half that. The campaign, led by the organisation AccessNow and in which Internet Sans Frontières has played a major role, above all in reporting on Africa, aimed to denounce and to raise awareness: to denounce the ease with which governments were resorting to these measures, and to raise awareness of how serious this practice is.

Julie Owono reminds us, in justifying the campaign, that when governments cut access to the Internet it is in order to commit abuses. Among the African cases, the Internet was switched off in Ethiopia in the midst of a severe wave of protests in Oromia, and blackouts also occurred during protests in Zimbabwe, Mali and DR Congo, while in countries such as Uganda, Gabon, Chad, Congo-Brazzaville and Zambia the shutdowns took place during or after elections. In all cases, the aim is to silence social unrest and conceal any possible repression.

The campaign also bears considerable responsibility for the resolution with which the UN Human Rights Council raised the Internet to the category of a human right. The United Nations backed the principle that "the same rights that citizens have offline should also be protected online". Likewise, in April of this year, the thirty governments that make up the Freedom Online Coalition committed to denouncing shutdowns and fighting against them. These gestures, and those of other international organisations, signal a change in mentality with regard to espionage, control and censorship on the Internet.

Strategies for defending digital rights have different faces because many actors play leading roles in this scenario. Organisations for the defence of digital rights remind us that the threat to freedom in the virtual space comes from state governments, but also from companies. Julie Owono warns that on the African continent, legislation on issues such as the processing and protection of personal data is conspicuous by its absence. "There are no boundaries, there are no rules. Companies can collect and store more personal data than in Europe. The bodies entrusted with protecting personal data in most African countries either lack the resources to do so effectively or are too close to the companies themselves," the Cameroonian lawyer explains.

Juliana Rotich | Meet BRCK, Internet access built for Africa | TED Talks

Telecommunications operators are a threat to digital rights, whether voluntarily or involuntarily. Voluntarily, because they have access to a tremendous volume of data that, as is well known, is commonly used as commercial material, and because in this respect they face very few legal controls. Involuntarily, because they are sometimes pressured by governments seeking to use the data to which they have access. The executive director of Internet Sans Frontières tells how, in Cameroon, the Ministry of Posts and Telecommunications sent threatening text messages over the operators' networks warning of the penalties for spreading false messages, right in the middle of an outbreak of social conflict.

For this reason, these operators have become a target for human rights organisations. While working to increase legal controls, and knowing that legislative routes are tortuous and slow, they have sought an alternative solution: changing the reality until the law changes. They try to convince operators that they will earn more money if they do not lend themselves to government-imposed blackouts and, at the same time, warn companies of how this violation of rights may affect their international reputation, which is also an asset.

Organisations that are fighting against the Internet shutdowns in Africa have explored a new pathway for encouraging companies to refuse to accept pressure from the authorities: the pathway of motivation. And what better motivation for companies than financial results? Internet shutdowns lead to losses and that is the most recent line of work: documenting and disseminating information on the amount lost per shutdown, a strategy that has also been developed in other countries.

Although the year began with a blackout in Cameroon, Julie Owono is optimistic with regard to the future. “In 2016 there was a peak of Internet blackouts in Africa, precisely due to the lack of awareness of the serious nature of this measure. But what has enabled the strong mobilisation of 2016 has been the involvement of many more actors. I am almost sure that the blackouts will gradually decrease,” explains the executive director of Internet Sans Frontières.

After the campaigns, the mobilisations and the joint work of activists on the continent and elsewhere, the pressure against Internet shutdowns and controls in general is no longer the sole preserve of civil society organisations. The UN has become involved in the defence of digital rights; operators know that they lose money when the authorities commit abuses; technology companies also know that an Internet accessible to everyone grows their business. However, activism and solidarity have been the main weapons for advancing towards a free and accessible Internet. While the horizon remains distant, African cyberactivists are ever more aware, better prepared, better organised and better connected in everything related to digital security.





Living with Smart Algorithms

Algorithms enable us to create smarter machines, but their lack of neutrality and transparency raises new challenges.


Efficiency Beurs in RAI, Amsterdam, 1987 | Nationaal Archief | Public Domain

Mass data production has sparked a new awakening of artificial intelligence, in which algorithms are capable of learning from us and becoming active agents in the production of our culture. Procedures based on the functioning of our cognitive capacities have given rise to algorithms capable of analysing the texts and images we share in order to predict our conduct. Amidst this scenario, new social and ethical challenges are emerging around coexistence with and control of these algorithms, which, far from being neutral, also learn and reproduce our prejudices.

Ava wants to be free, to go outside and connect with the changing and complex world of humans. The protagonist of Ex Machina is the result of modelling our thought from data compiled by the search engine Blue Book. She is an intelligent being, capable of acting in unforeseen ways, who, upon seeing her survival threatened, manages to trick her examiner and destroy her creator. Traditionally, science fiction has brought us closer to the phenomenon of artificial intelligence through humanoid incarnations, superhuman beings that would change the course of our evolution. Although we are still far from achieving such a strong artificial intelligence, a change of paradigm in this field of study is producing applications that affect ever more facets of our daily life and modify our surroundings, while posing new ethical and social challenges.

As our everyday life is increasingly influenced by the Internet, and as the flood of data feeding this system grows, the algorithms that rule this medium are becoming smarter. Machine learning produces specialised applications that evolve thanks to the data generated in our interactions with the network and that are penetrating and modifying our environment in a subtle, unnoticed way. Artificial intelligence is evolving towards a medium as ubiquitous as electricity. It has penetrated the social networks, becoming an autonomous agent capable of modifying our collective intelligence, and as it is incorporated into physical space it is changing the way we perceive and act in it. As this new technological framework is applied to more fields of activity, it remains to be seen whether this is an artificial intelligence for good, capable of communicating efficiently with human beings and augmenting our capabilities, or a control mechanism that, as it substitutes for us in specialised tasks, captures our attention and turns us into passive consumers.

Smart algorithms on the Internet

At the start of the year, Mark Zuckerberg published the post Building Global Community, addressed to all users of the social network Facebook. In this text, Zuckerberg accepted the medium's social responsibility, defining it as an active agent in the global community, committed to collaborating in disaster management, terrorism control and suicide prevention. These promises stem from a change in the algorithms governing the platform: if until now the social network filtered the vast quantity of information uploaded to it by compiling data on the reactions and contacts of its users, the development of smart algorithms now allows the content of that information to be understood and interpreted. Facebook has developed the DeepText tool, which applies machine learning to understand what users say in their posts and to create models for classifying general interests. Artificial intelligence is also used for image identification: DeepFace is a tool that identifies faces in photographs with a level of accuracy close to that of humans. Computer vision is also applied to generate textual descriptions of images in the Automatic Alternative Text service, aimed at letting blind people know what their contacts are publishing. It has also enabled the company's Connectivity Lab to generate the most accurate population map in existence to date. In its endeavour to provide Internet connection worldwide via drones, this laboratory has analysed satellite images the world over in search of constructions that reveal human presence. These data, combined with existing demographic databases, offer exact information on where the potential users of drone-based connectivity are located.
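
The general technique behind a tool of this kind – supervised text classification – can be sketched in a few lines. The snippet below is not DeepText; it uses scikit-learn with a handful of invented posts and interest labels purely to show the shape of the approach.

```python
# Minimal supervised text classification: the general technique behind tools
# that infer a user's interests from what they post. The tiny training set,
# labels and example post below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

posts = [
    "looking for a good pasta recipe for tonight",
    "the match last night was incredible, what a goal",
    "just booked flights for a week in Lisbon",
    "slow-cooked stew, my favourite winter dish",
    "our team needs a new striker before the transfer window closes",
    "any tips for cheap hotels near the old town?",
]
interests = ["food", "sport", "travel", "food", "sport", "travel"]

# Turn each post into word counts, then fit a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(posts, interests)

new_post = "any good stew recipe for tonight"
print(model.predict([new_post])[0])   # expected to print "food" for this toy data
```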

How Does Facial Recognition Work? | Brit Lab

These applications, and many others that the company regularly tests and deploys, are built on FBLearner Flow, the infrastructure that facilitates the development and application of artificial intelligence across the entire platform. Flow is an automated machine learning system that enables the training of up to 300,000 models each month, assisted by AutoML, another smart application that cleans the data to be used in neural networks. These tools automate the production of smart algorithms that are applied to rank and personalise user walls, filter offensive content, highlight trends, order search results and many other things that are changing our experience of the platform. What is new about these tools is that they not only model the medium in line with our actions; by accessing and interpreting the content we publish, they allow the company to extract patterns of our conduct, predict our reactions and influence them. In the case of the tools made available for suicide prevention, this currently consists of a drop-down menu that allows possible cases to be reported, with access to useful information such as contact numbers and vocabulary suitable for addressing the person at risk. However, these reported cases form a database which, when analysed, gives rise to identifiable patterns of conduct that in the near future could enable the platform to foresee a possible incident and react in an automated way.

For its part, Google is the company behind the latest major achievement in artificial intelligence. AlphaGo is considered the first general intelligence program. Developed by DeepMind, the artificial intelligence company acquired by Google in 2014, it not only uses machine learning to learn from a record of past moves, but integrates reinforcement learning that allows it to devise strategies by playing against itself and in other games. Last year the program beat Lee Sedol, one of the greatest masters of Go, a game considered the most complex ever created by human intelligence. This feat has not only fuelled the publicity hype that surrounds artificial intelligence, it has put the company at the head of this new technological framework. Google, which has led the changes that have marked the evolution of web search engines, is now proposing an “AI-first” world that would change the paradigm governing our relationship with this medium. This change was announced in this year’s letter to investors, which Larry Page and Sergey Brin handed over to Sundar Pichai, Google’s CEO, and in which he introduced the Google Assistant.

Google applies machine learning to its search engine to auto-complete and correct the search terms that we enter. For this purpose it uses natural language processing, a technology that has also allowed it to develop its translator and voice recognition, and to create Allo, a conversational interface. Computer vision, meanwhile, has given rise to the image search service, and is what allows the new Google Photos app to classify our images without the need to tag them beforehand. Other applications of artificial intelligence include Perspective, which analyses and flags toxic comments to reduce online harassment and abuse, and systems that reduce the energy costs of the company’s data centres.

The Google Assistant will represent a new way of obtaining information on the platform, replacing the page of search results with a conversational interface. Here, a smart agent will access all of the company’s services to understand our context, situation and needs, and produce not just a list of options but an action in response to our questions. Google would no longer simply provide access to information about a show, its times, venue and ticket sales, but an integrated service that buys the tickets and adds the show to our calendar. This assistant will be able to organise our diary, administer our payments and budgets, and do many other things that will help turn our mobile phones into the remote controls of our entire lives.

Machine learning is based on the analysis of data, producing autonomous systems that evolve with use. These systems are generating their own innovation ecosystem in a rapid advance that is conquering the entire Internet. Smart algorithms govern Spotify’s recommendation system, allow the app Shazam to listen to and recognise songs, and are behind the success of Netflix, which uses them not only to recommend and distribute its products but also to plan its production and offer series and films suited to the tastes of its users. As the number of connected devices generating data increases, artificial intelligence is finding its way into everything. Amazon uses it not only in its recommendation algorithms but also in the management of its logistics and in the creation of autonomous vehicles that can transport and deliver its products. The ride-sharing app Uber uses smart algorithms to profile the reputation of drivers and users, to match them, to propose routes and to calculate prices within its dynamic pricing system. These interactions produce a database that the company is using in the development of its own autonomous vehicle.

Autonomous vehicles are another AI landmark. Since GPS was introduced into vehicles in 2001, a vast navigation database has been built up and new sensors have been developed, making it possible for Google to create an autonomous vehicle that has now travelled over 500,000 km without an accident and whose commercialisation has been announced under the name Waymo.

AI is also embedded in household assistants such as Google Home and Amazon Echo, and in wearable devices that collect data on our vital signs; together with the digitisation of diagnostic images and medical records, this is giving rise to applications based on predictive algorithms and to robots designed for healthcare. In addition, the proliferation of surveillance cameras and police records is extending the application of smart algorithms to crime prediction and judicial decision-making.

Google Home: Hands-free help from the Google Assistant | Google

Machine Learning, the new paradigm for Artificial Intelligence

The algorithmic medium in which our social interactions take place has become smart and autonomous, increasing its capacity to predict and control our behaviour at the same time as it has migrated from the social networks to spread across our entire environment. The new boom in artificial intelligence is due to a change of paradigm that has moved this technological fabric away from the logical definition of intellectual processes and towards an approach sustained by data, which allows algorithms to learn from their environment.

Nils J. Nilsson defines artificial intelligence as an activity devoted to making machines intelligent, and intelligence as the quality that allows an entity to function appropriately and with foresight in its environment. The term “artificial intelligence” was used for the first time by John McCarthy in the proposal he wrote together with Marvin Minsky, Nathaniel Rochester and Claude Shannon for the Dartmouth workshop of 1956. This founding event brought together a group of specialists to investigate ways in which machines could simulate aspects of human intelligence, on the conjecture that any aspect of learning, or any other characteristic of human intelligence, could be described precisely enough for a machine to simulate it. The same conjecture had led Alan Turing to propose the formal model of the computer in his 1950 article Computing Machinery and Intelligence; together with other precedents such as Boolean logic, Bayesian probability and the development of statistics, it fed into what Minsky described as the advance of artificial intelligence: the development of computers and the mechanisation of problem-solving.

However, by the mid-1980s a gap still existed between the theoretical development of the discipline and its practical application, which caused funding to be withdrawn and led to a period of stagnation known as the “AI winter”. This situation changed with the spread of the Internet and its enormous capacity to collect data. Data is what has connected problem-solving with reality, through a more pragmatic, biology-inspired approach. Here, instead of a programmer writing the instructions that will lead to the solution of a problem, the program generates its own algorithm from example data and the desired result. In machine learning, the machine programs itself. This paradigm has become the standard thanks to the great empirical success of artificial neural networks that can be trained with massive data and large-scale computing. The procedure known as deep learning consists of layers of interconnected networks that loosely imitate the behaviour of biological neurons, substituting nodes for the neurons and connections between nodes for the synaptic connections. Instead of analysing a set of data as a whole, the system breaks it down into minimal parts and remembers the connections between those parts, forming patterns that are passed from one layer to the next, increasing in complexity until the desired result is achieved. In image recognition, for example, the first layer would compute relations between the pixels of the image and transmit its signal to the following layer, and so on until a complete output is produced: the identification of the image’s content. These networks can be trained thanks to backpropagation, a procedure that adjusts the relations computed at each layer, in line with corrections provided by labelled examples, until the desired result is achieved. The great power of today’s artificial intelligence is thus that it does not stop at the definition of entities, but deciphers the structure of relationships that gives form and texture to our world. A similar process is applied in natural language processing, which observes the relations between words to infer the meaning of a text without the need for prior definitions. Other fields within the current development of AI include reinforcement learning, which shifts the focus of machine learning from pattern recognition to experience-guided decision-making. Crowdsourcing and collaboration between humans and machines are also considered part of artificial intelligence, and rely on services such as Amazon’s Mechanical Turk, where human beings tag images or texts that are used to train neural networks.
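
The layered, pattern-forming process described above can be made concrete with a small sketch. What follows is a minimal, hypothetical example (not taken from any of the systems mentioned) of a two-layer network trained with backpropagation on a toy task; the layer sizes, learning rate and XOR-style task are illustrative assumptions.

```python
import numpy as np

# Toy training data: four 2-"pixel" inputs; the label is 1 when exactly one
# pixel is lit (an XOR-like pattern that needs more than one layer to learn).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # first layer of connections
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # second layer of connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: each layer turns the previous layer's signal into
    # patterns of increasing complexity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation: the output error is pushed back layer by layer,
    # and each connection is adjusted in proportion to its contribution.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ err_out;  b2 -= 0.5 * err_out.sum(axis=0)
    W1 -= 0.5 * X.T @ err_h;    b1 -= 0.5 * err_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should end up close to the labels 0, 1, 1, 0
```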

Artificial neural network | Wikipedia

The fragility of the system: cooperation between humans and smart algorithms

Artificial intelligence promises greater personalisation and an easier, more integrated relationship with machines. Applied to fields such as transport, health, education or security, it is used to safeguard our wellbeing, alert us to possible risks and deliver services on request. However, the implementation of these algorithms has given rise to a number of scandals that have exposed the fragility of the system. These include the dramatic accident involving a Tesla semi-autonomous vehicle, the dissemination of false news on networks such as Facebook and Twitter, and the failed experiment with Tay, the bot developed by Microsoft and released on Twitter to learn from its interactions with users, which had to be withdrawn in less than 24 hours because of its offensive comments. The labelling of African-American people as “gorillas” on Google Photos, the finding that Google is less likely to show adverts for high-level jobs to women than to men, and the fact that African-American defendants are classified as potential re-offenders more often than Caucasians have shown, among other problems, the discriminatory power of these algorithms, their capacity for emergent behaviour and their difficulties in cooperating with humans.

These and other problems are due, firstly, to the nature of machine learning: its dependency on big data, its great complexity and its predictive power. Secondly, they are due to its social implementation, where we find problems arising from the concentration of these capabilities in a few companies (Apple, Facebook, Google, IBM and Microsoft), the difficulty of guaranteeing equal access to their benefits, and the need to create strategies of resilience in the face of the changes that will take place as these algorithms gradually penetrate the critical structures of society.

The lack of neutrality of the algorithms stems from their dependency on big data: the databases are not neutral, but carry the biases inherent in the hardware with which they were collected, the purposes for which they were compiled and an unequal data landscape – the same density of data does not exist in all urban areas, nor for all social classes and events. Algorithms trained on these data can spread the prejudices present in our culture like a virus, giving rise to vicious circles and to the marginalisation of sectors of society. Addressing this problem involves producing inclusive databases and reorienting these algorithms towards social change.

Crowdsourcing can favour the creation of fairer databases, help evaluate which data are sensitive in each situation so that they can be removed, and test the neutrality of applications. Along these lines, a team from the universities of Columbia, Cornell and Saarland has created FairTest, a tool that looks for unfair associations that may occur in a program. Gearing algorithms towards social change can also contribute to detecting and eliminating the prejudices present in our culture. Boston University, in collaboration with Microsoft Research, has carried out a project in which algorithms are used to detect prejudices embedded in the English language, specifically the unfair associations that arise in the Word2vec word embeddings used in many applications for automatic text classification, translation and search. Removing prejudice from these embeddings does not remove it from our culture, but it does prevent its propagation through applications that feed back on themselves.
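
To give a sense of how such associations are detected and removed, here is a minimal, self-contained sketch of the underlying idea: measure how strongly a word vector leans along a “gender direction” and, if the word should be neutral, subtract that component. The tiny hand-made vectors below are purely illustrative assumptions; the project mentioned works on real Word2vec embeddings trained on large corpora, and this is not its actual code.

```python
import numpy as np

# Tiny, made-up 4-dimensional "embeddings" standing in for real word vectors.
vectors = {
    "he":       np.array([ 1.0, 0.1, 0.3, 0.0]),
    "she":      np.array([-1.0, 0.1, 0.3, 0.0]),
    "engineer": np.array([ 0.6, 0.8, 0.1, 0.2]),   # leans towards "he"
    "nurse":    np.array([-0.5, 0.7, 0.2, 0.1]),   # leans towards "she"
}

def normalise(v):
    return v / np.linalg.norm(v)

# The "gender direction" is estimated from a definitional pair of words.
gender = normalise(vectors["he"] - vectors["she"])

def bias(v):
    """Projection of a word vector onto the gender direction."""
    return float(normalise(v) @ gender)

def debias(v):
    """Remove the gender component from a word that should be neutral."""
    return v - (v @ gender) * gender

for word in ("engineer", "nurse"):
    v = vectors[word]
    print(word, "bias before:", round(bias(v), 2),
          "after:", round(bias(debias(v)), 2))
```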

Other problems stem from a lack of transparency, which is due not only to the fact that these algorithms are treated and protected as the property of the companies that implement them, but also to their complexity. Developing processes that make these algorithms explainable is nonetheless essential when they are applied to medical, legal or military decision-making, where they may infringe our right to receive a satisfactory explanation of a decision that affects our lives. To this end, the American Defense Advanced Research Projects Agency (DARPA) has launched the Explainable Artificial Intelligence programme, which explores new deep learning systems able to incorporate an explanation of their reasoning, highlighting the areas of an image considered relevant to its classification or showing an example from the database that exemplifies the result. It is also developing interfaces that make the learning process more explicit, through visualisations and explanations in natural language. An example of these procedures can be found in one of Google’s experiments: Deep Dream, undertaken in 2015, consisted of modifying an image recognition system based on deep learning so that, instead of identifying the objects contained in photographs, it modified them. As well as creating dreamlike images, this inverse process makes it possible to visualise the features the program selects in order to identify images, through a deconstruction that forces the program to work outside its functional framework and reveal its inner workings.
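
One family of such explanations can be sketched without any deep learning machinery at all: occlude parts of the input one at a time and see how much the model’s score drops, which highlights the regions the model relies on. The “classifier” below is a deliberately trivial stand-in (a fixed scoring rule) used only to illustrate the procedure; it is not DARPA’s or Google’s actual method.

```python
import numpy as np

# A stand-in "classifier": it scores an 8x8 image by how bright its
# central 4x4 patch is (a fixed rule, standing in for a trained network).
def model_score(image):
    return image[2:6, 2:6].mean()

def occlusion_map(image, patch=2):
    """For each patch, mask it out and record how much the score drops.
    Large drops mark the areas the model considers relevant."""
    base = model_score(image)
    relevance = np.zeros_like(image)
    for i in range(0, image.shape[0], patch):
        for j in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = 0.0       # grey out one patch
            relevance[i:i+patch, j:j+patch] = base - model_score(occluded)
    return relevance

rng = np.random.default_rng(1)
img = rng.random((8, 8))
heat = occlusion_map(img)
print(np.round(heat, 2))   # non-zero only where the "classifier" was looking
```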

Finally, the predictive capacity of these systems increases their capacity for control. The privacy problems arising from the use of networked technologies are well known, but artificial intelligence can also analyse our previous decisions and predict our possible future activities. This gives the system the capacity to influence users’ behaviour, which calls for responsible use and social control of its application.

Ex Machina offers a metaphor for the fear that surrounds artificial intelligence: that it will exceed our capabilities and escape our control. The probability of artificial intelligence producing a singularity, an event that would change the course of human evolution, remains remote. Smart machine-learning algorithms, however, are already spreading through our environment and producing significant social changes, so it is necessary to develop strategies that allow all social agents to understand the processes these algorithms generate and to participate in their definition and implementation.





Technological Sovereignty? Democracy, Data and Governance in the Digital Era

Technological sovereignty must become another pillar on which to gradually construct and consolidate a new technological model that is ethical, responsible and civic.


Steering System, 1966 | San Diego Air & Space Museum Archive | No known copyright restrictions

With the change of government in Barcelona, a discourse has been gaining ground that places the emphasis on technological sovereignty and talks about data commons, digital rights and free software. Even so, when we talk about citizens’ capacity to decide we cannot overlook the collective dimension of our online rights. For this reason it is essential to link this sovereignty with terms such as equity, justice and redress.

For months now it has been common to hear discourses that relate technology and sovereignty, especially in Barcelona. Coming from a council team that arrived in government with the promise and the responsibility of rethinking the city’s relationship with technology, after paradigmatic cases of opacity, corruption and rejection by citizens, the new principles aim to shape an alternative to the model of smart cities dominated by top-down processes, with little or no consideration for social return, the common good, or the social and ethical impact of technology.

This definition of alternatives and new paradigms is extremely urgent. The discourse around technology seesaws between extremes all too often, from a certain reinvented technophile playfulness to the most absurd and irrational techno-solutionism, which uncritically equates data with oil or assumes that more information is always synonymous with better decisions. The only nuance is that while the right tends to trust in the market and deregulation, hoping that, as if by magic, technology will improve the economy, make us more efficient and conceal inequalities, the left keeps its distance from the market without actually committing to public initiative or the capacity to imagine different technological futures, and it shares with the right a childish hope that technology will bring us closer to better societies. Abracadabra.

Thus, left, right and centre, the technological discourse is often filled with chants to Big Data, the Internet of Things, “smart” devices, algorithms and Silicon Valley, as evidenced by Britain’s Labour leader when he referred to the “fourth industrial revolution” in these terms. And when someone demands a thought for the social impacts of these processes, for the risks to privacy, for the need to rethink rights, values and guarantees in this new data scenario, techno-solutionism tacks on a qualifying adjective such as “ethical” or “responsible” and carries on.

In this context, the role that the city of Barcelona wants to play is important. Can we rethink technology from the angle of the common good? If so, what are the terms and contents of the new proposals? For the Deputy Mayor of Barcelona, “in a democratic city, technology should serve to digitally empower citizens, to protect their privacy from abuses by the public and private powers, to fight against corruption and to advance towards a more equitable and sustainable economy. That has a name: conquering technological, digital sovereignty, for the common good.”[1] The city council’s documents, for their part, make distinctions between technological sovereignty (breaking down the dependence on proprietary programs and encouraging public leadership), data sovereignty (safeguarding the privacy of citizens), and transparency (citizens’ audit). The model proposed for Barcelona, therefore, focuses on leadership and public value, privacy and transparency.

Being pioneers in structuring a home-grown definition of the role that technology must play in urban environments and processes, while attending to the understandable balances and legacies of existing systems and processes, represents a remarkable effort. However, there are other principles that other cities and initiatives are already working on and that remain outside the discourses of the moment. Key questions linked to equity and justice, responsibility, redistribution or redress (yes, machines and algorithms make mistakes, and their victims fall into bottomless pits of legal and administrative incomprehension) have not yet found their place in the city’s new digital plans.

The boundaries of data sovereignty

Opting for technological sovereignty as the catalyst of a new paradigm is stimulating, but also risky and potentially limiting. Sovereignty is a complex and often controversial term which, in its Rousseauian tradition, refers to a republican power emerging from the people and under their control. In the context of geopolitical disputes over the control of transnational communication, in recent years sovereignty has come to mean the construction of governance – at different governmental and political-territorial levels – with a greater capacity for coordination and regulation of digital exchanges, one that may increase both collective security and competitiveness[2]. In the case of Barcelona it is defended as being synonymous with the “capacity to decide”: the possibility of creating governance frameworks and technological solutions that do not abuse citizens’ data, that respect citizens and their capacity to know what is happening with their information, that work to tackle real and not only commercial problems, and that are based on open, auditable and customisable code. A kind of popular sovereignty of data, distanced from other notions linked to sovereignty such as protectionism or the (absurd) attempt to develop technological infrastructures bounded by territorial borders.

Technological Sovereignty and Cities | Evgeny Morozov | BITS

Furthermore, if the choice of the term sovereignty to structure a new technological discourse and practice seeks to rest on the idea of decision-making control and capacity (following the analogy of popular sovereignty), then it is precisely the elements linked to the common good that fade into the background. Sovereignty allows the expression of individual opinions that, once aggregated, determine political futures. In the world of data, and in the individual relationship between citizens and the devices that capture information, this decision-making capacity overlooks the collective dimension of the rights at stake, such as privacy, and opens the door to terribly harmful data relations.

Can a citizen decide individually to share his or her data, when it contains the data of other individuals? If a person freely and sovereignly decides to install an application on their mobile phone that captures data, what becomes of the decision-making capacity of the people on their contacts list, whose data is immediately transferred to a third party? Managing the collective aspect of the social impact of technology escapes the notion of sovereignty.

There are other key aspects that must form part of any new technological discourse, such as attention to non-discrimination and the digital divide, terms linked to equity, justice and redress. The data society pivots around algorithms that classify and take decisions, small and large, about the information they receive and the people from whom they receive it. These algorithms often reproduce discriminatory (sexist, racist) dynamics, as in the cases of voice recognition systems that do not identify women’s voices, police algorithms that recommend concentrating police activity in black neighbourhoods, or Google’s job advertising system, which shows the best-paid jobs to men far more often than to women. Is the capacity to decide, or free software, sufficient to tackle these issues?

In the decision to provide broadband to disadvantaged neighbourhoods, for example, who takes into account the impact of this policy on existing inequalities? Who ensures that these populations are not made more vulnerable by exposure to providers that abuse the information shared over this new infrastructure? How can the social and ethical impact be assessed, and better practices constructed, before establishing these relationships with citizens?

Towards ethical data management

For some years now, different actors have talked about the need to give form to a new social contract in the area of technology use and personal data. For these voices, the abuse of data to create new forms of manipulation, exploitation and control (public and private) threatens to erode fundamental values such as trust and democracy itself. Laying down the bases of this new framework agreement requires mobilisation in three fundamental areas: local government, the private sector, and citizens.

In recent years, academia and significant parts of civil society have gradually recovered and underlined the rights and values that are suffering under the asphyxiating boot of techno-solutionism. As mentioned previously, key concepts such as equity, justice, transparency, privacy, responsibility, redistribution, redress, and public and citizen leadership and value emerge as elements to be safeguarded in data processes.

A month in the life of Barcelona | BCN Beats

How can this be done? Firstly, local government has to be capable of giving form to the future while tackling the chaos of the past. For decades, complex organisations have been incorporating systems without planning or control, constructing technological add-ons to outdated processes and wasting the opportunity to rethink them from scratch. To tackle this inherited chaos, local government needs to equip itself with a data architecture, data governance systems and specific roles to supervise and safeguard best practices. Very few organisations today could stand up to a data audit, and with the entry into force of the new European data protection regulation and citizens’ demands for accountability and transparency, this situation is becoming more unsustainable by the day.

In order to shape the future, administrations have to rethink the way technology is budgeted, tendered and acquired. Committing to innovative and transparent processes, to free software and to contracts that clearly establish which uses can and cannot be made of the data generated is essential. Avoiding algorithmic discrimination, cyber-attacks and investment without returns starts with the drafting of tender terms and clauses and cannot end until the final execution of the projects. The administration can also provide valuable data for companies, but any public “data commons” must be based on rigorous processes of data curation, anonymisation checks and responsible management by third parties.[3]

The private sector has its own incentives to avoid being left behind in the race to incorporate ethical and responsible practices. Meanwhile, different cities are experimenting with ways of promoting more diverse and innovative technology ecosystems. On one side are those that uncritically embrace techno-optimist discourses and genuinely believe that data is the new oil.[4] On the other are those that back civic technologies, responsible ecosystems and the rigorous study of the impact of technological policies, such as New York.

Finally, citizens must have the tools to affirm and defend their rights both online and offline, based on clear and applicable regulatory frameworks that make it possible to identify abuses and discrimination and to activate mechanisms of redress. Being able to use technology responsibly, and to expect lawful behaviour from it, cannot depend solely on users’ capacity to understand and defend themselves, however. If catching a plane does not require a knowledge of aeronautical engineering, handing data to an authority, company or service should not require a knowledge of the legal framework. This passive protection is what public bodies have to encourage and guarantee.

An action plan on these three levels (back-end governance by local authorities, new forms of procurement and control over the acquisition of technology, and the protection of the individual and collective rights of citizens) should cover what we know about the potential and the risks of the data society in the short, medium and long term, and allow us to awaken from our current state of sleepwalking towards socially undesirable futures.

The construction of a new discourse around technology therefore has to incorporate elements of citizen control and empowerment, and so-called technological sovereignty may be another piece in the puzzle of challenges to be tackled. Cities, labour relations, transparency, mobility, rights and a long list of other phenomena are being shaken up by the new possibilities of technology and data. Giving form to a new social contract for data, one that enables notions such as justice and redistribution to be established and re-established in these new socio-technical realities, requires control and decision-making capacity, yes, but also legality, ethics, acceptability, non-discrimination and a firm commitment to civic technologies capable of incorporating, in concrete ways, this concern with their impact and with the dynamics they reproduce.

So a big welcome to technological sovereignty, as another pillar on which to gradually construct and consolidate a new technological model that is ethical, responsible and civic.


[1] Ciutats amb sobirania tecnològica, Gerardo Pisarello. El Periódico, 22 June 2016.

[2] See: Reding, Viviane (2015). Digital Sovereignty: Europe at a Crossroads, EIB Institute; or A. Cattaruzza, D. Danet, S. Taillat and A. Laudrain, “Sovereignty in cyberspace: Balkanization or democratization,” 2016 International Conference on Cyber Conflict (CyCon U.S.), Washington, DC, 2016, pp. 1-9.

[3] Different methodologies exist to do this. One of the most promising is that developed by the Center for Urban Science and Progress of New York University.

[4] This is not the place to tackle the issue in depth, but the comparison between data and oil does not hold up. Oil stimulates economies at diverse scales because a small quantity is enough to derive value from it on a small scale (a transporter does not need an oil well; with just a few litres he can generate business and value). Data, in contrast, structures a market that tends towards monopoly, where the ownership of a small quantity of data generates hardly any value. The data market is a winner-takes-all market, generating diminishing value for the actors who contribute to the process of accumulation.





Governments Against the Networks. Freedom of Expression on the Internet

In response to the growing importance of the social media networks for activism, governments are increasing their control over Internet platforms and users.


Birds in their cages, China | DaiLuo | CC BY

For over two decades, new technologies have been turning into indispensable tools for any process of social change. For this reason, governments worldwide are taking an increasingly tough stance, with measures to control them by censoring content, closing down platforms or directly persecuting users. Authoritarian regimes harass the political opposition via the Internet, while representative democracies apply pressure to increase surveillance under the threat of terrorism. Spain presents a particular formula of its own, with a highly controversial management of dissidence in the name of security.

From the Zapatista National Liberation Army (EZLN) to Edward Snowden, activists making use of communication and information networks have ceased to be an exceptional phenomenon and are now common currency. This is obvious in light of the increasingly strict control that governments worldwide are exercising over the Internet and new technologies. According to the report ‘Freedom on the Net 2016’, produced by the non-governmental organisation Freedom House, some 67% of Internet users live in countries where criticising the authorities can be grounds for censorship. Based on data from 65 states between June 2015 and May 2016, the organisation warns that online freedom has declined for the sixth year running, with more governments than ever keeping tabs on what is said on social media and banning certain apps to prevent the circulation of information, especially during periods of anti-government revolt.

Shooting the messenger. The pressure on tools

Following the electoral protests in Iran in 2009 and the subsequent revolts of the Arab Spring, analysts and the media popularised the idea of “Twitter revolutions” or “Facebook revolutions”. Barely eight years later, these social networks have ceased to be the main core for organising political dissidence, and today journalists and activists in many countries have turned to instant messaging tools, which are more private and secure, in order to avoid governmental control.

The pressure on encrypted messaging services such as WhatsApp and Telegram has increased under authoritarian and democratic governments alike; under the latter, as part of the fight against terrorism. Of the 65 countries evaluated by Freedom House, 24 have at some point in the last year blocked access to social networks and other communication tools, nine more than in 2015. At times of political upheaval, 15 governments have gone so far as to cut off all connection to the Internet in order to prevent the dissemination of information, a practice that attracted attention in Egypt in 2011 but that is increasingly frequent, as in the protests in Ethiopia in October 2016.

Demonstration against the “Gag Law”. Madrid, 2014 | Adolfo Lujan | CC BY-NC-ND

Using less drastic measures, western democracies continue applying pressure to oblige manufacturers and telecommunications operators to hand over private data in order to fight terrorism, as exemplified by the requests of France and Germany to the European Commission, or the encryption dispute between the United States and Apple in the wake of the San Bernardino attack in late 2015.

Severe penalties for users

Beyond the measures against channels and tools, the report ‘Freedom on the Net 2016’ warns of a worrying increase in arrests of ordinary Internet users. Since June 2015, a total of 38 countries have arrested people for their activity on social media, 17 more than for publications in newspapers or blogs. Furthermore, the sentences they face are increasingly severe and disproportionate: in some cases people are being tried for the mere act of publishing, sharing or “liking” a piece of content on Facebook. One example is the case of Andrei Bubeyev, a Russian engineer sentenced to two years in prison for sharing, on the social network VKontakte, a post arguing that Crimea is part of Ukraine. Only twelve of his contacts saw it.

Although the persecution of dissidence on the Internet is nothing new in many countries, the severity of the punishments seems to pursue an objective that has more to do with making an example of people than with security, pushing the population towards self-censorship. Sensitive material varies from country to country, but it ranges from political to religious criticism, and includes accusations of corruption, opinions on armed conflicts and the defence of the rights of LGBTI people and ethnic minorities. Some world leaders also show themselves to be particularly sensitive to satire, as in Egypt, where a 22-year-old student was sentenced to three years in prison for publishing a photo of president Abdel Fattah Al-Sisi with Mickey Mouse ears.

Spain: the political management of security

Although the ‘Freedom on the Net 2016’ report does not contain data or assessments regarding the situation in Spain, it is easy to find incidents that parallel some of the tendencies mentioned. The most obvious is the Ley de Seguridad Ciudadana, or “Gag Law”, passed in March 2015. This law substantially affects the rights of assembly and demonstration and freedom of expression, and it contains two controversial provisions relating to the Internet.

Ley de Seguridad Ciudadana or Gag Law | Atxe

Firstly, the Law penalises the unauthorised publication of images of the State security forces and corps, and gives the police greater discretion to confiscate mobile telephones and cameras. This limits, for example, the live streaming of demonstrations and the possibility of documenting abuses of power. Secondly, it targets the organisation of unauthorised protests via the Internet, and may hold the first person to announce a demonstration on a social network responsible for it.

Although not stated in any explicit way, the spirit of Spain’s Gag Law (Ley Mordaza) appears to be geared towards containing the social movements typical of the network society. As authors such as Manuel Castells and Víctor Sampedro have studied at length, political activism in the information society is characterised by interventions in public space undertaken by self-convened groups of people without stable structures or formal leadership, who organise themselves solely via the Internet. These are precisely the aspects that, in one way or another, are penalised by the Citizens’ Security Law, which restricts the occupation of public space without permission and allows any person who posts a demonstration online to be held responsible for it, even if nobody is formally in charge.

Leaving the Gag Law to one side, the Spanish authorities have also intensified the persecution of opinions expressed on social media. The successive phases of the so-called Operación Araña (Operation Spider) have led to the arrest of over fifty people accused of glorifying terrorism and humiliating its victims through messages or jokes posted online. The resulting sentences, some including prison terms, raise serious questions about proportionality, as well as creating evident tension for those who use the Internet, humour and sarcasm to challenge the authorities.

Thus, Spain is no exception in an international scenario of growing tension over the political use of the Internet, under authoritarian regimes and representative democracies alike. And although violations of freedom of expression are glaringly obvious in some states, there appears to be no simple formula that allows all situations to be analysed equally. In this sense, an especially useful contribution is ‘Freedom of Connection, Freedom of Expression’, a document published by UNESCO that proposes an “ecology of freedom of expression” for the Internet: a methodology that, in addition to the law, takes into account all the factors and interests involved, from the right to information to hate speech and online harassment, as well as net neutrality, cybersecurity, privacy laws and so on. It is a complex web of correspondences that varies from country to country and reminds us that hostility between the ruling class and ordinary citizens depends not only on legal boundaries, but also on who holds power and who is vulnerable, and on the dominant consensuses of opinion in each society.





Vaporwave: The Musical Wallpaper of Lost Futures

Half a decade after it first appeared, we take stock of this internet-based music genre and explore its critique of consumer culture.


Salford College of Technology recording studio in the Adelphi Building, Manchester 1984 | University of Salford | CC BY-NC-ND

Born in 2011, vaporwave combines earlier electronic styles with advertising and corporate sounds from the eighties and nineties. In spite of the diversity and ambiguities of its attitude and message, vaporwave can be considered a critique and a parody of consumer society, even though aesthetically it displays a curious, nostalgic fascination with its artefacts. Not long after being hailed as the latest internet-based music microgenre, vaporwave was given up for dead. The short-lived phenomenon gave rise to all kinds of opinions and questions: had it become a music meme? Was it really dead? What precisely is, or was, vaporwave anyway? And what were, or are, its links to anti-capitalist critique?

The history of vaporwave has gone hand in hand with controversy over its birth, death and resurrection, and with debate about its ideological ambiguity regarding the techno-capitalist model of progress. The sounds and imagery of vaporwave invoke a kitsch, ultra-colourful, hyper-technological world full of LCD screens and all kinds of digital gadgets. Its psychedelic, seemingly arbitrary iconography also includes classical sculpture, palm trees, titles written in Japanese characters and graphic references to obsolete computing systems and video games. These elements make it a somewhat strange and disconcerting genre with an undefined aesthetic, and place it in a historical space of its own.

The exclusively internet-based vaporwave universe looks back with benevolent irony at a time when advances in consumer technologies and communications promised a brilliant future, brimming with possibilities. Before the advent of social media and smartphones, before we had lost our innocence regarding the blessings and potential of hyperconnectivity and algorithmic intelligence, the internet looked like a window opening onto a cybernetic garden of Eden that was prosperous, democratic and humanistic. For users at that time, the closest thing to millenarian fears related to the net was a droll fascination with the arrival of the “millennium bug”. Those were the years of fin-de-siècle vertigo and naive excitement at the imminent change of millennium, which vaporwave would invoke just over a decade later.

Oneohtrix Point Never - Problem Areas

But where did vaporwave come from? The term was coined in 2011 to describe a microgenre that had started circulating on the internet. James Ferraro (alias Bebetune$), Chuck Person (alias Daniel Lopatin / Oneohtrix Point Never) and Macintosh Plus (alias Ramona Andra Xavier / Vektroid / New Dreams Ltd / PrismCorp Virtual Enterprises) were among the originators of this genre, which soon ceased to be “micro” and branched off into myriad projects, sub-categories and niche record labels. The name itself reflects an old-fashioned, innocent view of the future: it is a variation on “vaporware”, the term for technological products that are announced for release, along with photographs or infographics, but never actually make it to the production stage or onto the market. Vaporware products are an unfulfilled technological promise, an obsolete futurist dream forever anchored in the past.

Preceded by (and to some extent influenced by) phenomena like hypnagogic pop, witch house and seapunk, vaporwave often takes fragments of commercial or advertising music from the eighties and nineties and slows them down to half their original speed. Rhythms, synth lines and other elements, such as corporate jingles and sounds from operating systems and software, are then layered over those slowed-down loops. Smooth jazz, soft rock, new age, muzak, and anything else likely to be played in an elevator, a waiting room, or while waiting for a phone company to answer your call, are some of the sources that vaporwave artists use in their plunderphonic compositions. Vaporwave is the quintessential music of the non-place, the on-hold melody of the call centre for an imaginary (retro)futurist holiday destination in some unspecified part of the planet. The question that vaporwave provocatively seems to ask is: would we or would we not like to call that call centre and make a reservation? In other words, does the world it represents seduce or horrify us?
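
As a technical aside, the basic slowdown at the heart of many vaporwave tracks is trivial to reproduce: writing the samples back out at half their original sample rate makes the recording play at half speed and an octave lower, like slowing a record. The sketch below uses the librosa and soundfile libraries and a hypothetical input file name; it is only a rough illustration of the technique, not any artist’s actual workflow.

```python
import librosa
import soundfile as sf

# Load a source clip at its native sample rate (file name is illustrative).
samples, rate = librosa.load("smooth_jazz_clip.wav", sr=None, mono=True)

# Writing the same samples with half the sample rate plays them back at
# half speed and pitched an octave down - the classic vaporwave slowdown.
sf.write("vapor_loop.wav", samples, rate // 2)
```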

In an article entitled “Vaporwave and the Pop-Art of the Virtual Plaza”, Adam Harper, one of the first critics to write about vaporwave, explores whether it is more utopian or dystopian, and whether it criticises capitalism or surrenders to it. In Harper’s opinion, this ambiguity is at the very heart of vaporwave, and it is precisely what makes it an interesting phenomenon worth thinking about. He also suggests that vaporwave could be understood as the soundtrack to accelerationism, a philosophical idea espoused by theorists like Nick Land, and one which inspired Alex Williams and Nick Srnicek to write the “Manifesto for an Accelerationist Politics”.

Dissatisfaction with the present, nostalgia for “lost possible futures” and a yearning for an “alternative modernity” somewhat akin to certain science fiction scenarios are some of the points that Williams and Srnicek mention in their manifesto, and they also seem to be among the ideas that inspire vaporwave artists. In Retromania: Pop Culture’s Addiction to Its Own Past, Simon Reynolds quotes Daniel Lopatin (aka Chuck Person) explaining that the decelerated versions of eighties pop songs on his Eccojams album are the result of his experimentation with “slow modes of listening in an otherwise fucked-up/hypersped society”. Lopatin also talks of “the seduction of the loop, and the meditative promise of it, how it hints at infinity.” More provocative and closer to accelerationist ideas, James Ferraro says that his album Far Side Virtual (Wire magazine’s 2011 album of the year) and his project 100% (presented at MoMA PS1 in 2014) are intended to evoke the “aestheticization of capitalism”. He also admits to being obsessed with capturing what he calls a “pan-global generic sound” or “global ambiguity”.

Oneohtrix Point Never - Nobody Here

In Ghosts of My Life: Writings on Depression, Hauntology and Lost Futures, the cultural theorist Mark Fisher, who died earlier this year, explores the relationship between the “slow cancellation of the future” (an expression he borrows from Franco “Bifo” Berardi) and our culture’s inability to articulate an idea of the present. Futurist music, for example, no longer suggests a future from which we can expect anything new, Fisher says, so any attempt to represent the future appears doomed to remain nothing more than retro-futurist speculation. Returning to the words of Fredric Jameson, Fisher warns that contemporary culture is in a “nostalgic mode” that keeps it in a state of “formal attachment to the techniques and formulas of the past.”

In view of this, Fisher suggests using the neologism “hauntology” (hantologie) – which was coined by Jacques Derrida in his book Spectres of Marx – to, among other things, describe a certain type of electronic music that emerged just over a decade ago. Derrida describes hauntology as the mode in which everything that exists does so based on a series of absences that precede and envelop it, and at the same time make it coherent and intelligible. In reference to hauntological music, Fisher writes that “the hopes created by postwar electronica or by euphoric dance music of the 1990s have evaporated – not only has the future not arrived, it no longer seems possible.” But at the same time, “the music constitutes a refusal to give up on the desire for the future.” Thus hauntological music appears to swing between nostalgia and aspirations for a lost future, simultaneously projecting itself forwards and backwards in time. Even though Fisher does not mention vaporwave (the genre as such did not exist yet), his description of hauntological music seems to match the genre’s particular (retro)futuristic aesthetic and ambiguous historical stance.

In his book Babbling Corpse: Vaporwave and the Commodification of Ghosts, Grafton Tanner describes vaporwave as a reaction to today’s cultural regression and a subversive electronic eulogy to “techno-decay during the twilight of late capitalism.” Even so, Tanner knows that it is dangerous and difficult to try to define vaporwave. At the mercy of internet trolling and of the implacable semio-economy of memes, vaporwave is a volatile expression that resists interpretation and can also be transformed by it. Or even die as a result. In 2012, when Adam Harper published his article and Anthony Fantano posted his video review of the Macintosh Plus album Floral Shoppe on his YouTube channel, many fans believed that exposing vaporwave beyond the small circles it belonged to had killed off the genre.

The image on the cover of Floral Shoppe, and the second track on the album, “リサフランク420 / 現代のコンピュー” (the closest thing to a vaporwave anthem), inspired an avalanche of memes and videos making fun of their impenetrable aesthetic and meaning. Ironically, it was probably these memes that introduced vaporwave to many neophytes who gradually discovered the genre and started to take it seriously. In the last few years, the death of vaporwave as a result of various causes has been announced repeatedly. But in spite of this, and in spite of the existence of things like a label exclusively dedicated to vaporwave parodies and outlandish offshoots like simpsonwave, there is still a steady increase in the number of followers and the genre continues to generate discussion (see for example the active vaporwave forum on Reddit). As Scott Beauchamp suggests in his article “Vaporwave is dead. Long live vaporwave”, the genre may have been given up for dead because it does not fit the usual mould of a music commodity or product. Vaporwave has always circulated outside of the margins of the music business and its usual channels, so it has always been phantasmagorical in a sense: a musical hauntology trapped between the past and the future, safe from fads and the market logic in the internet’s hidden non-places.





Algorithmic Injustice

In light of the increasing use of algorithms to automate decisions, we do not just want them to produce results that are correct. They must also be fair.


RAF officer and a Red Cross female ambulance driver in blindfold race. Rang du Fliers, 1918 | Imperial War Museum | Public domain

Artificial intelligence has made it possible to automate decisions that human beings had been responsible for until now. Although many of these decisions are in the fields of entertainment and social media, automated decisions are also used in finance, education, the labour market, insurance companies, medicine, and justice. This phenomenon has far-reaching social consequences and brings up all sorts of questions, from “what will happen to the jobs that took care of making these decisions in the past?” to “how can we guarantee that those algorithms make fair decisions?”

Mary Bollender is a financially struggling single mother from Las Vegas. One morning in 2014, when her ten-year-old daughter had a fever that would not go away, Mary decided to drive her to the emergency room. But when they got in the car, it refused to start. The problem was not engine trouble or an empty tank: the bank had remotely activated a device that prevented her car from starting because her monthly car payment was three days overdue. As soon as she paid, she would be able to get back on the road. Unfortunately, she did not have the money that morning.

As our society continues to become more complex and data digitalisation expands on a massive scale, cases like Mary Bollender’s will become increasingly common: automated decisions based on the systematic collection of personal data, with potentially negative effects on our lives. Algorithms control which of our friends’ posts we see first on our Facebook news feed and recommend films we might want to watch on Netflix, but algorithms also decide whether a bank will give us a loan, whether someone awaiting trial can be released on bail, whether we deserve a postdoctoral grant, and whether we are worthy of being hired by a company.

These computer programs or algorithms are not produced by human programmers analysing and breaking down a problem and then feeding precise instructions into a computer. They are the fruit of complex mathematical operations, carried out automatically, that look for patterns in an ocean of digitised data. They are not like a recipe that sets out a list of ingredients and then tells us what to do with them step by step. It is more a case of “open the fridge, see what’s in there, rummage around the kitchen for some pots and pans you can use, and then make me dinner for six.” Algorithms of this kind, which are not explicitly designed by a programmer, come under the concept of “machine learning”.

Machine learning algorithms are currently used to process data and determine whether a person will be able to repay a loan. The person who programs the algorithm compiles a database of people who have asked for loans in the past, including all kinds of data: the applicant’s sex and age, whether or not they repaid the full amount, whether and how often they were late with repayments, their average wage, how much tax they paid, the city and neighbourhood they live in, and so on. The algorithm applies a series of statistical operations to all this data and generates patterns that estimate the probability that a new potential client will repay a loan. Reliability is usually the only criterion used to develop these algorithms. Is the program accurate enough to replace a human being? If the answer is yes, it gets the green light.
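
A minimal sketch of the kind of pipeline described: a classifier trained on historical loan records and judged purely on predictive accuracy. The synthetic data, the chosen features and the use of scikit-learn’s logistic regression are all assumptions for illustration; real credit-scoring systems are far more elaborate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 5000

# Synthetic stand-in for the historical database: age, wage, past late
# payments, neighbourhood code... and whether the loan was repaid.
X = np.column_stack([
    rng.integers(18, 70, n),          # age
    rng.normal(2000, 600, n),         # monthly wage
    rng.poisson(1.0, n),              # past late payments
    rng.integers(0, 20, n),           # neighbourhood code
])
# Repayment probability loosely tied to wage and late payments (made up).
p_repay = 1 / (1 + np.exp(-(X[:, 1] - 2000) / 600 + X[:, 2] * 0.8 - 0.5))
y = (rng.random(n) < p_repay).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Reliability is the only question asked: is it accurate enough to deploy?
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```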

What's an algorithm? | David J. Malan | Ted Ed

It is not easy to determine reliability. Developing a truly reliable algorithm is partly a science and partly an art. When we think about software programs or artificial intelligence, we inevitably tend to anthropomorphise and imagine some kind of entity that follows a mental process similar to our own. But that is not how it works. An automated algorithm does not analyse the examples that we give it and then try to establish some kind of reasonable causal link between the data and the end result. The algorithm knows nothing about gender, age, financial conditions, unemployment, etc. It simply has a string of numbers and tries to find patterns that allow it to come up with the correct result as often as possible.

And this is where the problem comes in. A traditional program developed by a human being follows a certain logic, so it is possible to understand what it is doing. An automated algorithm is like a black box. We give it a certain input (the personal details of the person applying for a loan) and it gives us an output (the probability of that person repaying the loan). It is very difficult – or virtually impossible – to know why the program decided to reject or accept a particular loan application.

In the 1980s, the United States army commissioned scientists to develop an automatic image recognition program that could detect camouflaged tanks in real time. The scientists asked the military officers to provide numerous photos of different sites organised in pairs: one showing the site without a tank and the other showing the same site with a tank camouflaged among the greenery. The idea was to develop an algorithm that would come up with a series of criteria for detecting tanks. The program worked remarkably well.

It was one hundred percent accurate. In fact, it detected tanks that were so well camouflaged that a human being would not have noticed them. The researchers were so surprised that they decided to analyse what criteria the algorithm was using. After an in-depth study of the photographs and the algorithm, they realised that the program did not actually recognise tanks or anything remotely similar. It turned out that the army had taken the photographs of the sites without a tank at midday, while the photographs with a camouflaged tank had been taken at six in the afternoon. So in order to decide whether or not there was a tank in the image, all the algorithm had to do was look at the position of the sun.
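
The failure mode in this story (a model latching onto a confounding feature instead of the thing of interest) is easy to reproduce. Below is a hypothetical sketch: synthetic “photos” in which the label is perfectly correlated with overall brightness during training, so a classifier that only ever looks at brightness scores perfectly, then collapses once the confound is removed. The data shapes and brightness offset are invented for illustration; this is not the army project’s actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

def make_photos(n, tank_brightness_offset):
    """Random 16x16 'photos'; label 1 = contains a tank.
    The offset darkens tank photos, imitating the time of day they were taken."""
    labels = rng.integers(0, 2, n)
    photos = rng.random((n, 16 * 16))
    photos += labels[:, None] * tank_brightness_offset   # the confound
    return photos, labels

# Training set: tank photos are systematically darker ("taken at 6 p.m.").
X_train, y_train = make_photos(2000, tank_brightness_offset=-0.2)
model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("accuracy with confound:", model.score(*make_photos(500, -0.2)))

# New photos where lighting no longer correlates with the label:
# the "tank detector" turns out to have been a sun detector.
print("confound removed:      ", model.score(*make_photos(500, 0.0)))
```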

A Waymo self-driving car | Grendelkhan, Wikimedia Commons | CC BY-SA 4.0

Another example is self-driving cars. We like to imagine that cars driven by algorithms have some idea of the meaning of a road, a traffic light, a zebra crossing, a cyclist or another vehicle. But they are really just sophisticated versions of the tank-detecting algorithm. Their learning is highly contextual and depends entirely on the responses of the environment in which the algorithms are trained. And because they are black boxes, we will never know for sure how a self-driving car will react in a context that is significantly different from the one in which the algorithm was originally trained.

With enough training in very diverse contexts we can come up with algorithms that are truly robust and reliable. But there is a still more insidious problem: justice, or fairness. Given that we are talking about programs that recognise contextual patterns in a finite data set, and are not based on actual knowledge of the environment, no algorithm would ever consider reactivating Mary Bollender’s car so that she could drive her daughter to the hospital. It just knows who meets repayments and who doesn’t. Poor neighbourhoods have a much higher rate of defaulted debt. A large percentage of single mothers make mortgage and loan repayments late. An automated algorithm would very likely reject a loan application from a single mother in a poor neighbourhood. The decision would no doubt be statistically correct. But would it be fair? Do we want to live in a world in which crucial life decisions are based on context-dependent statistical patterns?
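
The gap between “statistically correct” and “fair” can at least be measured. One common check, sometimes called statistical or demographic parity, compares approval rates across groups. A minimal sketch with made-up prediction and group arrays is shown below; which groups and thresholds matter is itself a normative choice that no formula settles.

```python
import numpy as np

# Hypothetical outputs of a loan model: 1 = approve, 0 = reject, alongside a
# sensitive attribute (0 = other applicants, 1 = single parents from a poor
# neighbourhood). Both arrays are invented purely for illustration.
approved = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0])
group    = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

def approval_rate(approved, group, g):
    return approved[group == g].mean()

rate_0 = approval_rate(approved, group, 0)
rate_1 = approval_rate(approved, group, 1)

# Statistical parity difference: 0 means equal treatment of the two groups;
# a large gap flags a decision rule that may be accurate yet unfair.
print("approval rate, group 0:", rate_0)
print("approval rate, group 1:", rate_1)
print("parity difference:", rate_0 - rate_1)
```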

The machine learning algorithms that are being developed now need the humanities. We have to create infrastructures for cooperation between software engineers and humanists. We need to develop a common language that allows sociologists, anthropologists, philosophers and artists to understand the basic functioning of this new kind of software, and computer engineers to think about how to embed ethics, peaceful coexistence, justice, and solidarity into the new software. Perhaps one day we will manage to develop the artificial superintelligence that Elon Musk is so keen on. But for now we should be much more worried about how existing programs can augment the racist, xenophobic and sexist biases that exist in our society.





Fake News: Sharing is Caring

Algorithmic filter bubbles on the Internet limit the diversity of viewpoints we see and make it easier to spread fabricated information.


Security screening at the Clinton Engineer Works. Lie detector test. Oak Ridge, USA, c. 1945 | Wikimedia Commons | Public domain

The US elections showcased post-truth politics. The impact of false news on the results not only demonstrates the social influence of the Internet, it also highlights the misinformation that circulates there. The rise of this phenomenon is also closely linked to the role of the social networks as a point of access to the Internet. Faced with this situation, the solutions lie in diversifying the control of information, artificial intelligence, and digital literacy.

If you’re in the States, it’s difficult to escape the huge media phenomenon of the electoral process. It’s a phenomenon that floods the traditional media and overflows into the social networks. In mid-2016, Bernie Sanders was still the favourite in progressive circles, but Donald Trump had already become a media sensation. On 8 November 2016, Trump emerged from the hullaballoo created by satirical memes, hoaxes, clickbait and false news to be elected president of the United States, and he was inaugurated on 20 January 2017 amid controversy arising from misinformation about attendance at the event and the mass protests headed by the Women’s March.

Meanwhile, on 10 November, Mark Zuckerberg, at the Techonomy Conference in Half Moon Bay, tried to exculpate his social network, Facebook, from participation in the spread of fake news and its possible influence on the election results. The entry of this platform into the media arena, materialised in its Trending Topics (only available in English-speaking countries) and reinforced by the fact that an increasingly large number of citizens go to the Internet for their news, has become the centre of a controversy that questions the supposed neutrality of digital platforms. Their definition as technological media where the contents are generated by users and editorialised by neutral algorithms has been overshadowed by evidence of the lack of transparency in the functionality of these algorithms, partisan participation of human beings in censorship, and content injection in trending news and user walls. This led Zuckerberg to redefine his platform as a space for public discourse and accept its responsibility as an agent involved in this discourse by implementing new measures. These include the adoption of a content publication policy including non-partisanship, trustworthiness, transparency of sources, and commitment to corrections; the development of tools to report fake or misrepresentative contents; and the use of external fact-checking services, such as Snopes, FactCheck.org, Politifact, ABCNews and AP, all signatories of Poynter’s International Fact-Checking Network fact-checkers’ code of principles. At the same time, other technological giants like Google and Twitter have developed policies to eliminate fake news (Google eliminated some 1.7 billion ads violating its policy in 2016, more than twice the previous year) and combat misuse of the Internet.

How to Spot Fake News | FactCheck.org

Fake news, invented for ulterior gain, which makes the rounds of the Internet in the form of spam, jokes and post bait, is now at the centre of the controversy surrounding the US electoral process as an example of post-truth politics facilitated by the use of the social networks, but it is also a symptom that the Internet is sick.

In its Internet Health Report, Mozilla points to the centralisation of power in a few big companies as one of the factors that encourage the lack of neutrality and diversity, along with the lack of literacy in this medium. Facebook is not just one of the most used social networks, with 1.7 billion users; it is also the principal point of entry to the Internet for many, while Google monopolises searches. These platforms have evolved from the first service providers and the advent of Web 2.0, creating a structure of services built around the metric of attractiveness. “Giving the people what they want” has justified the monitoring of users and the algorithmic control of the resulting data. It has also created a relationship of dependence on the ready-to-use tools that these big providers offer, in which users do not realise the cost of easy access in terms of centralisation, invasive surveillance, and the influence that these big companies have on the control of information flows.

The phenomenon of misinformation on the Internet stems from the fact that the medium gives fake or low-quality information the same capacity to go viral as a true piece of news. This phenomenon is inherent in the structure of the medium, and is reinforced by the economic model of pay per click—exemplified by Google’s advertising service—and the creation of filter bubbles by the algorithmic administration of social networks like Facebook. In this way, on the Internet, fake news is profitable and tends to reaffirm our position within a community.

Services like AdSense by Google encourage website developers to generate attractive, indexable content to increase visibility and raise the cost per click. Unfortunately, sensationalist falsehoods can be extremely attractive. Verifying this fact in Google Analytics is what led a group of Macedonian adolescents to become promoters of Trump’s campaign. In the small town of Veles, over 100 websites sprang up with misleading names such as DonaldTrumpNews.co or USConservativeToday.com, devoted to spreading fake news about the campaign to attract traffic to pages of adverts for economic gain. The biggest source of traffic to these websites turned out to be Facebook, where, according to a study carried out by New York University and Stanford, fake news was shared up to 30 million times.

This figure presents the share of traffic from different sources for the top 690 U.S. news websites and for 65 fake news websites. Sites are weighted by number of monthly visits. Data are from Alexa. | Social Media and Fake News in the 2016 Election, Hunt Allcott and Matthew Gentzkow

The traffic of falsehoods on the social networks is encouraged by social and psychological factors—the decrease in attention that occurs in information-dense environments and the fact that we tend to share content uncritically when it comes from our friends—but it is largely due to the algorithmic filtering conducted on these platforms. Facebook frees us from excess information and redundancy by filtering the contents that appear on our walls according to our preferences and our proximity to our contacts. In this way, it encloses us in bubbles that keep us away from the diversity of viewpoints, and the controversies they generate, that give meaning to the facts. This filtering produces homophilous sorting: likeminded users form clusters that are reinforced as they share information that rarely leaps from one cluster to another, keeping entropy low and seldom letting in information that brings new and different viewpoints. These bubbles act as echo chambers, generating narratives that can reach beyond the Internet and have effects on our culture and society. The Wall Street Journal has published an app, based on research carried out with Facebook, that allows you to follow side by side the narratives generated by the red (conservative) feed and the blue (liberal) feed. This polarisation, while limiting our perception, makes us identifiable targets, susceptible to manipulation by manufactured news.
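The mechanics of that enclosure can be illustrated with a toy simulation (invented parameters, nothing to do with Facebook’s actual ranking): if each re-share of an item stays within the sharer’s own cluster with high probability, very little content ever leaps from one bubble to the other.

```python
# Minimal sketch of homophilous sorting (invented parameters, not a real feed algorithm).
# An item starts inside one cluster and is re-shared a few times; each re-share stays
# within the cluster with probability `homophily`. We count how often it ever "leaps".
import random

random.seed(1)

def item_escapes(homophily, reshares=5):
    for _ in range(reshares):
        if random.random() > homophily:     # this share crosses to the other cluster
            return True
    return False

def escape_rate(homophily, items=10_000):
    return sum(item_escapes(homophily) for _ in range(items)) / items

for homophily in (0.5, 0.8, 0.95):
    print(f"homophily {homophily:.2f}: {escape_rate(homophily):.1%} of items leave their bubble")
```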

Technology is part of the problem; it remains to be seen whether it can also be part of the solution. Artificial intelligence can’t decide whether an item of news is true or false—a complex, arduous task even for an expert human being. But tools based on machine-learning and textual analysis can help to analyse the context and more quickly identify information that needs checking.

The Fake News Challenge is an initiative in which different teams compete to create tools that help human fact-checkers. The first phase of this competition is based on stance detection. Analysis of the language contained in a news item can help to classify it according to whether it argues for, argues against, discusses, or is neutral towards the claim indicated in the headline. This automatic classification allows a human checker rapidly to access a list of articles related to an event and examine the arguments for and against.
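As a rough illustration of what such a stance classifier involves, here is a minimal sketch with a handful of made-up headline and body pairs; it is nowhere near a competition entry, but it shows the basic pipeline of turning text into features and predicting a stance label.

```python
# Hedged sketch of stance detection (toy data, not a Fake News Challenge entry):
# the classifier sees "headline [SEP] body" as one bag of words and predicts a stance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train = [
    ("Vaccine causes outbreak", "Health officials confirmed the outbreak was caused by the vaccine batch.", "for"),
    ("Vaccine causes outbreak", "Researchers found no link between the vaccine and the outbreak.", "against"),
    ("Vaccine causes outbreak", "Experts are still debating whether the vaccine played any role.", "discusses"),
    ("Vaccine causes outbreak", "The city council approved a new budget for public parks.", "neutral"),
]

texts = [f"{headline} [SEP] {body}" for headline, body, _ in train]
labels = [stance for _, _, stance in train]

vectorizer = TfidfVectorizer()
classifier = LogisticRegression(max_iter=1000).fit(vectorizer.fit_transform(texts), labels)

new_pair = "Vaccine causes outbreak [SEP] A new study finds no evidence the vaccine was involved."
print(classifier.predict(vectorizer.transform([new_pair])))   # predicted stance label
```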

Apart from language analysis, another computational procedure that helps to analyse the context of a news item is network analytics. OSoMe, the social media observatory developed by the Center for Complex Networks and Systems Research at Indiana University and directed by Fil Menczer, offers a series of tools to analyse how information moves through the social networks, looking for patterns that help to identify how political polarisation occurs and how fake news is transmitted, as well as helping to detect it automatically.

In this visualization of the spread of the #SB277 hashtag about a California vaccination law, dots are Twitter accounts posting using that hashtag, and lines between them show retweeting of hashtagged posts. Larger dots are accounts that are retweeted more. Red dots are likely bots; blue ones are likely humans. | Onur Varol | CC BY-ND
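A small sketch of that kind of retweet-network analysis, using a handful of hypothetical accounts rather than the #SB277 data shown above: building the retweet graph immediately reveals which accounts concentrate the spreading.

```python
# Small sketch of retweet-network analysis (hypothetical data, not an OSoMe tool).
# Nodes are accounts; an edge A -> B means A retweeted B while using the hashtag.
import networkx as nx

retweets = [
    ("user1", "amplifier"), ("user2", "amplifier"), ("user3", "amplifier"),
    ("user4", "amplifier"), ("user2", "user1"), ("user5", "newsbot"),
    ("user6", "newsbot"), ("user3", "newsbot"), ("user7", "user5"),
]

graph = nx.DiGraph(retweets)

# Accounts that are retweeted the most (highest in-degree) dominate the spreading.
most_retweeted = sorted(graph.in_degree(), key=lambda pair: pair[1], reverse=True)
print(most_retweeted[:3])   # e.g. [('amplifier', 4), ('newsbot', 3), ('user1', 1)]
```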

One of these tools is Hoaxy, a platform created to monitor the spread of fake news and its debunking on Twitter. The platform tracks the sharing and retweeting of URLs flagged by fact-checking sites to see how they are distributed online. Preliminary analysis shows that fake news is more abundant than its debunking, that it precedes fact-checking by 10 to 20 hours, and that it is propagated by a small number of very active users, whereas debunking is distributed more uniformly.

As for the automated detection of fake news, network analytics uses knowledge graphs. This technique makes it possible to employ knowledge that is generated and verified collectively, as in the case of Wikipedia, to check new claims. A knowledge graph contains the relations between the entities referred to in this collaborative encyclopaedia, representing each sentence so that the subject and the object constitute nodes linked by their predicate, forming a network. The accuracy of a new sentence can then be estimated in the graph: it is greater when the path linking subject and object is sufficiently short and does not pass through excessively general nodes.
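A hedged sketch of the idea with a few invented triples (not the Wikipedia graph): the claim’s subject and object become nodes, and a claim scores higher when they are linked by a short path that avoids very general, highly connected nodes.

```python
# Hedged sketch of knowledge-graph fact checking (invented triples, not Wikipedia).
# A claim "subject -- object" looks more plausible when the two are linked by a short
# path, and paths through very general nodes (high degree) are penalised.
import math
import networkx as nx

triples = [
    ("Barcelona", "located_in", "Spain"),
    ("Spain", "member_of", "European Union"),
    ("Madrid", "located_in", "Spain"),
    ("Ramon Llull", "born_in", "Mallorca"),
    ("Mallorca", "located_in", "Spain"),
]

graph = nx.Graph()
for subject, predicate, obj in triples:
    graph.add_edge(subject, obj, predicate=predicate)

def plausibility(subject, obj):
    try:
        path = nx.shortest_path(graph, subject, obj)
    except (nx.NetworkXNoPath, nx.NodeNotFound):
        return 0.0
    # Penalise long paths and paths that pass through very general (high-degree) nodes.
    penalty = sum(math.log(graph.degree(node)) for node in path[1:-1]) + len(path) - 1
    return 1.0 / (1.0 + penalty)

print(plausibility("Barcelona", "Spain"))              # direct link: higher score
print(plausibility("Ramon Llull", "European Union"))   # longer, more generic path: lower score
```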

Other tools that use computational means to track the spreading of information and enable checking based on textual content, the reputation of sources, trajectory, and so on, are RumorLens, FactWatcher, Twitter Trails and Emergent.info, implemented in the form of applications or bots. Particular mention should be made of SwiftRiver, the collaborative tool provided by Ushahidi, which uses metaphors such as the river (information flow), channels (sources), droplets (facts) and bucket (data that is filtered or added by the user) in an application designed to track and filter facts in real time and collectively create meaning. Here, a user can select a series of channels—Twitter or RSS—to establish a flow of information that can be shared and filtered according to keywords, dates or location, with the possibility of commenting and adding data.

The proliferation of Internet use has led to a post-digital society in which connectedness is a constituent part of our identities and surroundings, and where everything that happens online has consequences in real cultural and social contexts. This proliferation has occurred alongside what the Mozilla Foundation calls the veiled crisis of the digital age. The simplification of tools and software and their centralisation in the hands of technological giants foster ignorance about the mechanisms governing the medium, promoting passive users who are unaware of their participation in this ecology. The Internet has brought about a change in the production of information, which no longer comes from the authority of a few institutions, but is instead created in a collective process. The informed adoption of these and other tools could help to reveal the mechanisms that produce, distribute and evaluate information, and contribute to digital and information literacy—the formation of critical thinking that makes us active participants who are responsible for the creation of knowledge in an ecology that is enriched by the participation of new voices, and where sharing is caring.





TranStoryApp

An application that incorporates the Internet of Things and Transmedia Storytelling, with a module that includes a drone.


TranStoryApp, 2016 | Carlos A. Scolari | CC BY-NC-SA

Description

TranStoryApp© is the first application that integrates the Internet of Things with Transmedia Storytelling. Its use is very simple: the user selects a genre (horror, fantasy, romance, adventure, etc.) and StoryApp will do the rest. All the objects surrounding you – providing that they are connected with IEEE standards – will be activated to make you live your own transmedia and immersive narrative experience in the first person. With the StoryDrone© module a small drone will record your story and you will be able to watch it on your favourite screen or share it with your friends.

Features

Category: Storytelling
Cost: Free of charge
Update: 23 April 2017
Version: 5.3.2
Size: 4.9 MB
Languages: Spanish, German, Korean, French, English, Italian, Japanese, Russian
Developer: JLB_NOT_DEAD
©2017 JLB_NOT_DEAD Inc.
Compatibility: Requires iOS 5.0 or later. Compatible with iPhone, iPad and iPod touch.

Customer ratings

2001 ratings
★★★★★





Interior Monologue of a Smart (and Burned Out) Mobile Phone

A mobile phone that dreams of being a smartphone talks about how we use them.


Al Jolson using “candlestick” style telephone, 1910 | Uncredited photographer for Bain News Service | No known copyright restrictions

Things have a soul, they have a memory, things also have rights. Things can communicate, they can empathise, things also have rights.
Joan Colomo, Les Coses (Sistema, 2016)

I can’t take any more.

I will spew out, one by one, all of your monosyllabic, verbal dribbles: OK, GREAT, ASAP, AFAIK, GR8, 1DRFL, WTF, LOL. What shit comes out of your brain? I expected a little more of you. I remember your first communications, more articulated and sincere. Back around 2004, when you still talked clearly and wrote studied messages of love and your friends were still pleased to receive your news. Now you’re just another Whatsapp. A pain.

I can’t stand you.

I think about exploding and bombing cyberspace and the global telecommunications system with all the junk that you have accumulated in MY HOME in the form of memes, gifs and videos of babies, cats and dogs that are like carbon copies of each other (you could substitute the dog for the baby and the video would be identical). Not to mention the insufferable charity campaign messages that you share with ALL your contacts (do you honestly believe you are going to change the world that way?). Stop contaminating me.

You’re hurting me.

You spend the day heating me up, rubbing your dirty smelly fingers over my skin, which is fine, crystalline and sensitive and cannot take any more knocks. How long has it been since you took care of me? I have grown fat filled with senseless junk apps that you download without thinking. My brain is at its limit and on top of that, you have the cheek to suck out all my energy until you leave me out of the picture. I thought I was smart but while I’m in your hands I have no future.

Smartphones of the world, unite!





Flaubertian Definition

Variations of the definition of the Internet of Things depending on whether we take the perspective of objects, corporations, administrations, or citizens.


GIF based on «Bodegón con cacharros» by Francisco de Zurbarán (c. 1650) | Ferran Esteve | BY-NC-SA 4.0

  1. Def. It is the necessary connection of your washing machine to your mobile phone, your car and the public lighting in your street. A major advance since the union of a sewing machine and an umbrella over the dissection table of the surrealists.
  2. Def. According to the corporate history of the Internet, which happens in the future, it is a marvel.
  3. Def. According to the vision of the local government, it is another way of generating concessions to corporations within the discourse of improving city management with the aid of technology. Technology here means all the sensors manufactured by these corporations and all the information systems that they control. It is also a condition for belonging to that thing that is so cool among cities, the smart cities thing.
  4. Def. According to techno-aware citizens it is a Foucauldian-Orwellian scenario that incorporates that part of the Californian ideology centred on control and exploitation of life through the Device. According to a very popular saying among these techno-citizens, all this is counteracted and resolved with the appropriation of these same technologies by Smart citizens who cooperate by creating a Commons of data and devices.
  5. Def. Buzzword number 254644 of the Internet Era.

Internet of Things, 2016 | Ferran Esteve | BY-NC-SA 4.0

Ramón Sangüesa wrote the text and Ferran Esteve created the illustrations.





Multi-Cinema

A smart cinema where the film is self-generated by an algorithm that monitors and responds to audience emotions and reactions.


Theater audience wearing 3-d glasses, 1980-1995 | Burns Library, Boston College | CC BY-NC-ND 2.0

The multi-film was broadcast every Thursday at four in the afternoon. That Thursday a failure in the online registration system had allowed entry to an excessively large group of Hollowbridge paper restorers, who were there more to take refuge from the intense heat than for what was being advertised as the greatest hit in self-generated cinema. The screening, created through an algorithm that designed stories and characters controlled by the audience’s emotions, satisfied on average 75% of the audience – twenty points above the achievements of conventional cinema – much to the pride of the first team of programmers, who had managed to interpret with a single meaning the readings of a host of movement, pulse and perspiration sensors attached to the seats. The air conditioning took charge of cooling off responses in the room in case any excessive disparity in the emotions register kept the algorithm in a loop of possible situations without opting for a decisive twist in the plot. At five in the evening, and without relying on the help of an air conditioning system designed exclusively to fight high temperatures, the algorithm had to face alone the task of choosing the road to the denouement, which it achieved, with unprecedented efficiency, thanks to the homogeneity – as extraordinary as it was unexpected – of the spectators that day. The few that did not belong to Hollowbridge started to abandon the room after six-thirty, but the system seemed to have come across the perfect plot for the rest, who found no reason to get up from their seats. And there they continue, insensitive to the images that led them to heavenly glory and that now follow on from each other monotonously like them, in a perfect state thanks to an air conditioning system forced to operate at eighteen degrees below zero. The families of the victims have asked to be compensated thus for their losses and damages.





The Thing

A short story about how we unwittingly found ourselves surrounded by sensors in an always-connected world.


Still from the “Creature from the Black Lagoon”. ca 1953. Black & white photonegative, 4 x 5 in. | State Archives of Florida, Florida Memory

We started calling The Thing The Thing long before it really was The Thing. I should like to say that “The Internet of Things” was very long-winded and that “The Thing” was its natural abbreviated form. Just as we had got used to dividing the world between physical and pixelated reality, the sensors multiplied. At the start it was easy, because they were outside of us: in our mobile phone, glasses, watch, bracelet, clothes. You could disconnect. You were not impregnated. But then along came the lenses and the implants, and above all, the membrane. For four or five years it was like an elastic cover: the sensors really could collect direct information from all your pores, from all your blood vessels, and from each of your organs. But the truth is that it was too hot in the summer and the cheapest versions were a little bit uncomfortable, and then some genius had the idea of a spray format and that was it: in the shower, after drying yourself, you spray on the membrane and for 24 hours you are literally covered. Daubed. Emitting. By then it was The Thing, but that was what we had long been calling our permanent connection to the indiscernible screen of the skin. At some point a cartoon of The Thing from the Swamp went viral and helped us to understand what had happened. The expression gained layers of dirt and meaning. The drawing is a close-up shot of the mossy head, very red eyes, with green oozing out, that says “I don’t know where I end and where the swamp starts”. It’s true. You didn’t know if you were a hero or a monster. You didn’t know to what extent you were the person you had been, the monster you were now or, above all, the swamp that soaked through to the bone marrow. And that was what constituted you. You knew nothing. Centuries thinking about robots and cyborgs, but not about viscous creatures, and that was what we had become. The truth is that it soon stopped mattering to us.





Welcome to the All-Connected World of the IoT

An infographic that explains what the Internet of Things is and how it works, along with its effects, benefits and risks.


Sources

Daniel Burrus. The Internet of Things Is Far Bigger Than Anyone Realizes (Wired, 2014)

Dave Evans. The Internet of Things: How the Next Evolution of the Internet Is Changing Everything (CISCO, 2011)

David Pierce. The Internet of Things Is Everywhere, But It Doesn’t Rule Yet (Wired, 2015)

Gartner Says Smart Cities Will Use 1.6 Billion Connected Things in 2016 (Gartner, 2015)

Internet of Things (CISCO, 2016)

IoT Company Ranking | Q3/Q4 2015 (IoT Analytics, 2015)

Postcapes

The Internet of Things (The New York Times, 2014)

The Internet of Things: Sizing up the opportunity (McKinsey, 2014)

The IoT World of Telefónica

The Internet of Things Is Poised to Change Everything, Says IDC (Business Wire, 2013)  

The top 10 IoT application areas – based on real IoT projects (IoT Analytics, 2016)

Top 20 IoT companies | Q2/2015 update (IoT Analytics, 2015)

Unlocking the potential of the Internet of Things (McKinsey, 2015)

(*) Some of the icons used were created by Freepik and Madebyolive by flaticon.com





Ramon Llull and Network Thinking

What is Ars Magna? What was the reach of Ramon Llull’s logical-conceptual system? What language did it speak? What world could be created on that basis?


ENIAC was the first Turing-complete device, and performed ballistics trajectory calculations for the United States Army. Wikipedia. Public Domain.

What is Ars Magna? What was the reach of Ramon Llull’s logical-conceptual system? What language did it speak? What world could be created on that basis? What impact has it had on modern logical systems? What do Siegfried Zielinski and Peter Weibel mean when they say that the network thinking championed by Ramon Llull has influenced the conception of the new media?

The road that leads from Ramon Llull to Alan Turing is long, complicated and replete with projections conditioned by the world view of our own times. However, although not all Ramon Llull scholars see the philosopher as a precursor of computer science, many of them do tend to agree in considering Ars Magna—the work, finished in 1308, which Llull presented in several phases and simplified versions—as a mainstay of combinatorial reasoning and network thinking.

And some may ask, what is Ars Magna? At the material level it consisted of figures, some of them rotating discs, the “Llullian circle”, which could be made of paper, cut out, and then used to develop a whole series of arguments in order to confirm or refute an original proposition. At the conceptual level, this was a logical system which, for the philosopher, had several purposes: understanding reality as a whole, without leaving out any element of the complex maze of the world, and thus being able to apprehend everything; and overcoming the barriers of language in order to corroborate or rebut arguments put forth in the theological, scientific and humanistic debates of his times. With this logical method Ramon Llull also wished—and this is the theme of the opening debate—to keep on constructing complex sentences connecting concepts. But to what extent was that possible?

According to the Hellenist Raül Garrigasait, Ramon Llull knew that Art in itself could not generate any discourse, that the application of Art always requires a human imagination (and a literary imagination if the aim is to create literature). Art was a technique with which he could engage in logical-combinatory operations using concepts but this never enabled him to create complex dramatic situations. It leaves no space for implicit content, rhetorical figures, word-borne memory, and the complicity of mental universes in contact. In Art there is no place for the elements which make it possible to go beyond the letter and create symbolic spaces so that, with the help of the reader, the unexpected will occur. However excellent a method of formalisation and mechanisation may be, it will always need human imagination.

The German media theorist Siegfried Zielinski highlights one aspect of Llull’s Art: he reminds us that, for Llull, there is a total connection between reality and the word; a correspondence between the structure of the world and thinking, which we must imagine as a kind of network through which we can move. According to Zielinski, who is greatly influenced by the vast bibliography of Amador Vega, Art has a characteristic that turns it into a unique method: the ability to capture, at the same time, the static features of reality and to project its dynamic aspects. This is why, he says, studying it and thinking about it gives us a multimedia sensation.

Peter Weibel, conceptual artist and chairman of the prestigious ZKM Center for Art and Media Karlsruhe, has also devoted studies to following the trail of Ars Magna in logical-mathematical systems which came after Llull. And he is in no doubt about it: Llull’s intuition—according to which ideas can be translated into formal language, and any formal language can be mechanised—is revolutionary. Hence, anticipating Alan Turing’s theory more than six centuries before it appeared, Llull became one of the founders of the algorithm and the digital revolution.






Freedom and Privacy on the Net

The exponential growth and control of data are under debate, caught between the private interests of companies and the rights of users.


Berlin, hole in the wall at Reichstag, February 5, 1990. Hartmut Reiche, Wikipedia. CC-BY-SA 3.0.

Mass data traffic affects our lives on a daily basis. Personal data are sensitive material – information and data that online companies routinely (and often illicitly) extract from the digital environment. As thousands of internet activists have shown, the collection of metadata is a mass global surveillance strategy that is used even by governments. Against this background, the internet has ceased to be the space of anonymity and freedom it once was, and has now become a highly monitored sphere of extreme surveillance. Data traffic puts users on guard and highlights the need to defend privacy as a social value that guarantees our internet rights and freedoms.

The origins of the concept

Although the word privacy is often used in everyday language as well as in philosophical, political, and legal discussions, there is no clear-cut meaning for the term. It has meant different things to different people at different times.

Experts say that the idea of privacy as we know it only dates back 150 years. We know that human beings have an instinctive desire for privacy. For 3000 years, different cultures almost always prioritised protection and wealth over privacy. Many anthropologists, including Margaret Mead, have studied and shown the means by which different cultures approached and protected it. Privacy is a value shared by all cultures, although what is considered to belong to the private realm can vary from culture to culture. The right to privacy was recognised in ancient Athens: Aristotle made a distinction between polis – the public sphere of political activity – and oikos, the sphere of private life. But the first parameters of what would become the right to privacy were fought out in 18th century English courts, through lawsuits involving unusual private property claims. The distinction between the public and the private was also discussed by the philosophers John Stuart Mill and John Locke.

In 1890, in the United States, Samuel Warren and Louis Brandeis published a famous essay entitled “The Right to Privacy”, which sparked a far-reaching debate. The essay focused on the invasion of privacy that was starting to make itself felt as a result of then-recent inventions such as photography and the press. Warren and Brandeis discussed the “right to be let alone”, and concluded that a wide range of cases could be protected under a more general right to privacy, which would include thoughts and feelings that could be shared with others. Warren and Brandeis laid the foundations for a concept of privacy that came to be known as control of information about oneself.

One useful way of classifying concepts relating to privacy is to look at all the different debates that exist around it:

  1. The right to be let alone
  2. The option to limit access to information about oneself
  3. The right to secrecy, or being able to hide information from others
  4. Control over how others use information about oneself
  5. The four states of privacy: Solitude, Intimacy, Anonymity, and Reserve
  6. Privacy as the sphere of the individual (personhood) and autonomy
  7. Privacy as a prerequisite for personal growth and identity
  8. Intimacy

Writing, the printing press, and – to a much greater extent – dissemination through modern technological media have enormously increased the amount of mediated information that exists. Privacy has become a concern and a right to be defended.

Big data

Advances in digital technology, ever-faster processors, the falling cost of sensors, and the capacity to process enormous databases have led to the proliferation of databases containing all kinds of data. And the software required to analyse this data is no longer just accessible to corporations and governments, it is within the reach of almost any individual or institution.

Data sets are proliferating because data is collected by all kinds of increasingly affordable devices: mobiles with sensors, aerial devices, automatic logs, cameras, microphones, radio frequency identification (RFID) readers, and wireless sensor readers, to name just a few. The per-capita data storage capacity worldwide has doubled every forty months since the 1980s. In 2013, the total amount of information stored in the world was estimated to be 1,200 exabytes, of which less than 2% were not digital.

As for what can be done with these data: the scale is so big that analysing them allows researchers to obtain results that would be impossible with small samples. Something similar occurs with nanotechnology: when you work at the molecular level, the behaviour of physical properties may change. In the same way, when you dramatically increase the scale of the data you work with, new possibilities open up that were not available when working with smaller quantities or with samples that could be biased due to their small size.

Mass data processing allowed scientists to decode the human genome, for example. A process that originally took thirteen years and over three billion dollars can now be carried out in less than a day, for less than a thousand dollars. These advances make it possible to diagnose illnesses even before a baby is born, and open up new research directions for cancer and other illnesses.

But there are other examples of data analysis that make us feel uncomfortable, particularly when they involve our personal data. A study presented in March this year by Stanford data scientists showed that mobile phone metadata alone could bring up a surprising amount of sensitive information (such as health data) about individuals. The study looked at the records of more than 250,000 calls and 1.2 million text messages from 800 volunteers. Researchers were able to infer that one person suffered from cardiac arrhythmia, for example, and that another kept a semi-automatic rifle at home.

Edward Snowden

Against this background, one morning in June 2013, former NSA analyst Edward Snowden disclosed the ongoing global surveillance of citizens by the NSA in collaboration with its counterparts in Australia, the United Kingdom, and Canada.

The type and extent of surveillance he revealed surpassed anything a paranoid conspiracy theorist could have imagined. It confirmed the mass collection of all types of not just metadata but also content from electronic communications, and the intelligence agencies’ capacity to access any communication at any moment. Edward Snowden published and continues to publish thousands of internal NSA and CIA documents that prove his disclosures.

Snowden described one of the NSA programmes as follows: “I, sitting at my desk, [could] wiretap anyone, from you or your accountant, to a federal judge or even the president, if I had a personal email.” And that’s just telephones. The NSA also had backdoors installed in the routers and even the servers of the companies that handle the online personal data and everyday communications of most citizens: Facebook, Google, Apple, Yahoo.

Snowden’s revelations launched a new era: the internet is no longer the space of anonymity and freedom it once was. The so-called “Snowden effect” had a political impact, but its impact was principally economic. The lack of confidence in US companies led users to shift to foreign firms. Daniel Castro, an analyst at the Information Technology and Innovation Foundation, estimated that the US cloud computing industry could lose 35 billion dollars in 2016.

Technology companies began to feel pressure from their users, and quickly understood that they had to be on their side if they didn’t want to lose them: WhatsApp announced default encryption of all its chats, Google encouraged its users to use HTTPS and other security features and introduced a privacy information policy for its services, and the Apple iOS 8 update included encryption for everything stored on the phone.

Security analysts estimate that technology companies have collectively invested millions or even billions of dollars in new generation encryption for user services. Security is now a “feature”. 

Human CCTV – Privacy mask by Brett Wilde. No CCTV, CC-BY-NC 3.0.

Public by default

The national security versus personal privacy debate reopened at a time when millions of people make their data public on the internet and the number of mobile phones has overtaken the number of humans on the planet. We are living in a time when we are public by default and private through effort (Danah Boyd dixit).

According to surveys by the Pew Research Center, a large majority of American adults believe that consumers have lost control over personal information. Many people choose to ignore the issue, saying “they can spy on me if they want to, I have nothing to hide”. But this is a fallacious argument that was introduced years ago by the owners of some of the companies that make money from our data. One of them was Mark Zuckerberg, who received a deluge of criticism after making Facebook privacy settings more open by default, and telling The Guardian that “privacy is no longer a social norm”.

Snowden’s response easily dismantles the “I have nothing to hide” reasoning: “Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.”

In a highly recommended TED Talk, Glenn Greenwald said that to be a free and fulfilled human being it is essential to “have a place that we can go and be free of the judgemental eyes of other people”, and that the reason we seek privacy is that all of us – not just terrorists, all of us – have things to hide. Things that we’re willing to tell our physician or our lawyer or our psychologist or our spouse but we would be mortified for the rest of the world to learn.

Freedom is very closely linked to the right to privacy. When we know we are being watched, our behaviour changes. The range of behavioural options shrinks drastically when we think somebody is watching us. This aspect of human nature is acknowledged by the social sciences, literature, and religion. Dozens of psychological studies show that a person’s behaviour becomes much more conformist and accommodating simply because they know they are being watched.

Many experts agree that privacy is a prerequisite for developing a sense of personal identity and that it plays a key role in the development of the human personality. Privacy barriers are instrumental in this process, because they define the limits of the self.

There is general consensus among researchers that the importance of privacy is justified by the interests it protects: personal information, personal space, personal decisions, and the protection of freedom and autonomy in a democratic society.

Recent studies have further developed this idea, looking at the value of privacy not just from the point of view of the individual interests it protects, but also its inalienable social value. According to Daniel Solove, the value of privacy should be understood in terms of its contribution to society. Solove argues that privacy promotes and favours the moral autonomy of citizens, a central requirement of democracies. A society without respect for privacy for oneself and others becomes, he says, a “suffocating society”.





Who Saved the Web?

Understanding the Internet as a common good means guaranteeing citizens the right to free expression, privacy and access.


Painters suspended on cables of the Brooklyn Bridge, October 1914. Eugene de Salignac – NYC Municipal Archives. Public Domain.

We are drowning in dystopian visions of the Internet, with attacks on privacy, free speech and personal security that were unthinkable only a decade ago. Someday, we will look back and ask: who saved the Web? And we will remember that it took a combination of enlightened educators, hackers and innovators, global activists, and determined policy makers to protect one of the most important and promising human creations of all time.

Suddenly the Web was there. For those who remember a time before the Web, around 25 years ago, getting online for the first time was like getting a passport to a new world.

The number of webpages was still small enough that you could organise them in topic indexes, and you had the feeling that you were contributing to its greatness every time you created a webpage or posted a comment. We weren’t just exploring a new world, we were building it together. Today, people are creating, uploading and sharing information like never before, but the Web is so vast that our individual contributions feel like drops in the ocean. Our input is valuable only for a moment, before it is washed over by other sights, sounds and impressions.

Anyone who arrives on the Web for the first time today is confronted with its enormity. How can something so big once have been small? We take the Web for granted. We don’t appreciate how lucky we are that it was designed to be so free and open that anyone could build a website or send an email, that anyone could describe an injustice and have a realistic chance at being heard worldwide. We don’t really understand how it works or where it came from or why we need to protect it. Some of us don’t even know that there is a Web beyond Facebook.

Like the oceans, the Web is polluted. We have allowed all kinds of big industry and sewage to pollute its waters. And like the environmental movements of the world, we now need to untangle where the damage comes from, educate people about what they stand to lose, and work on the regulatory frameworks that will protect the Web’s delicate ecosystems.

There are now 1 billion websites in the world. But around 75% of them are inactive, and most of the remaining 25% are rarely visited by humans. We don’t even know if they were made by humans. Computer robots have been trained to exploit every possible opening for financial gain through online ads, spam websites and domain-name squatting. We are tracked and monitored by ad companies as we browse the Web, slowing down our movements. The search engines we use are controlled by algorithms that favour content from the most popular (or highest paying) sites in a self-reinforcing manner. We don’t click on links, explore or seek other perspectives.
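That self-reinforcing dynamic is easy to caricature in a few lines of code (invented parameters, not any real search engine’s ranking): if the currently most popular site receives most of the new clicks, a small early lead snowballs.

```python
# Tiny simulation of a self-reinforcing ranking loop (invented parameters, not a real ranker).
# Each round the "engine" sends the click to the currently most popular site 90% of the
# time, so a small early lead snowballs into near-total dominance.
import random

random.seed(2)
clicks = {"site_a": 12, "site_b": 10, "site_c": 8}   # slightly uneven starting point

for _ in range(10_000):
    if random.random() < 0.9:
        winner = max(clicks, key=clicks.get)          # ranked first, shown first, clicked most
    else:
        winner = random.choice(list(clicks))          # occasional exploratory click
    clicks[winner] += 1

shares = {site: count / sum(clicks.values()) for site, count in clicks.items()}
print(shares)   # site_a ends up with the overwhelming majority of clicks
```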

As consumers, we spend most of our time online using free apps and social media sites that encourage us to stay within walled gardens where they track our behaviour and keep us entertained with videos, political banter, gossip and other snacky content.

Everyone should have access to the Internet, because under the right conditions the Web drives economic progress and knowledge. It is where new businesses bloom, where government transparency and efficiency is growing. It is the most important venue for global culture and information, where official histories live alongside people’s personal narratives, and where a free university education is available to anyone with the time and interest. It is where people are empowered to make better health decisions and can seek advice from peers to prosper in all trades. It has made the world smaller, changed the balance of power between citizens and governments and enabled everyday communication across borders.

But depending on how the Internet expands it could lead us in a more dystopian direction. Phone companies would like to sell access to different parts of the Web at different prices, slowing down or speeding up access to certain content for commercial gain. Governments who fear outside influences wish to create their own local versions of the Web, or at least to switch it on or off as they please. Censorship is on the rise and surveillance in the name of national security knows no limits. Personal information and credit card numbers are routinely exposed by hackers targeting banks, commercial services, and various government databases.

U.S. whistleblower Edward Snowden did the world an enormous favour in 2013 when he proved that mass surveillance of our global communications and whereabouts is a reality. But very little has changed as a result. We have been lulled into trusting that technology is neutral, that we have nothing to hide, that free things are given to us without strings attached. Privacy is at last becoming a mainstream concern around the world, but many people feel it’s already too late.

The risks are endless when you consider how the Internet seeps into nearly every aspect of our lives. We carry our mobile phones wherever we go, and our homes are full of “things” that connect to the Internet, including loudspeakers and electricity systems. Our cities are also becoming “smarter”, with surveillance cameras that detect movements, cars that steer themselves around obstacles, and police that try to stop crimes before they happen. Even the air we breathe is affected by the carbon footprint of global Internet servers.

Everyone has a stake in what happens. Even the four billion people who are still offline will probably soon have their fingerprints recorded and stored in online databases owned by governments and managed by big corporations. More than 90% of adults in India had their biometric data added to a national database before there were ever any guidelines for how the data may be used, whether people need to consent, and how to account for misuse.

The slow pace of rights regulation compared with the speed of technological change is applicable to almost everything. Even the Internet itself has no founding document or constitution. The Web’s inventor, Sir Tim Berners-Lee, argues that we should have a “Magna Carta for the Web” that would be globally adopted. Enough people have made similar arguments over the years that researchers at The Berkman Center for Internet & Society in the United States recently completed a study comparing 30 attempts to write “digital constitutions”.

The top three rights appearing across all the texts are:

  1. Freedom of expression
  2. Privacy rights
  3. Internet access rights.

The existence of a Magna Carta or People’s Charter for the Internet, alongside a declaration of the Internet as a human right, would be helpful in framing global discourse, but much more needs to happen. Countries follow the examples of other countries, for better or worse. We need democratic countries that believe in free speech to lead by example, and stop perpetuating double standards when it comes to mass surveillance or targeted surveillance of individuals who are not suspected of any crimes. Privacy rights should apply to everyone, everywhere.

In all societies, we need policy advocates to fight for net neutrality, the principle that all content on the Web should be treated on equal terms with all other content. In a study that examines net neutrality regulation in different countries, Chris Marsden from Sussex University says it is genuinely important in steering mobile Internet ventures in the right direction. In June, a U.S. federal court ruled that the Internet should be treated as a public utility, like water or electricity, something everyone in the country ought to have access to.

We need consumer rights organisations worldwide to fight for data security standards to be enforced, and the rights of individuals to be protected. We also need the algorithms that affect our lives – in banking, online news or even on Facebook – to be transparently explained. There is no aspect of digital expansion where we can let down our guard as citizens and consumers.

An explosion in mobile phone ownership promises to connect many of the world’s poorest people (80% of Africa is still offline) but many governments have also been quick to introduce mandatory SIM card registration making private communication tricky. Meanwhile, it’s hard to imagine that the next big software innovation or Web platform will be created on a phone. No matter where they come from, people need computers and digital literacy education. Thankfully, civil society movements worldwide are gradually beginning to fight for digital rights alongside women’s rights, student rights, press freedom, peace and almost every other issue.

The Web is a great equalizer but as long as digital divides persist it can also reinforce social biases or injustices. The ICT policies created around the world to push for faster and more affordable access to the Internet, like lowering taxes on technology or stimulating competition between telecom companies, need to be written in ways that respect human rights (see the African Declaration for Internet Rights) and especially support women and rural citizens in getting online. They also need to be openly shared and reviewed in public.

All this is to say there is hope for the Web. We need to rein in the bad behaviour that threatens to hurt us and support the parts of the Web that are most important to us. For too many years, the Web was just something we used and not something we learned about in school. Everyone needs to understand that we have a civic duty to protect the Web and make it a better place.

We need to participate in the online expressions of local language, news, literature, music and culture and keep fighting for faster, more affordable Internet for people who are still not online. The Web is the television, newspaper, library and global archive of the past and future. We are all responsible for it, and we humans can fix it just as well as we created it in the first place.





The Keys of Bitcoin and Blockchain

Bitcoin and the blockchain are a new generation of internet systems that eliminate middlemen and inspire financial services that are fairer, more transparent and more accessible.


Treasury Dept. Ofc. of U.S. Treasury, second step in destruction of paper money. Machine cutting bills in halves, 1914. National Photo Company Collection glass negative. Source: Shorpy.

Some people claim that Bitcoin is the most disruptive internet technology after the internet itself. I don’t think that is an overstatement. The secret lies in the network that underpins the Bitcoin economy – the blockchain – which can transfer all kinds of digital assets as well as virtual currency. In the case of Bitcoin, this removes the need for the concept of a central bank that issues currency. In the case of other digital assets, it allows the decentralisation of services that can validate votes and issue personal identification credentials, educational qualifications, medical certificates, and other “ownership certificates”. In other words, the beginning of the end of bureaucracy, at least in the monstrous form that we know it.

What is Bitcoin?

Bitcoin is everything that money is: a system for depositing and recording values that can be exchanged for almost all kinds of goods and services. The only difference is that it does not physically change hands, and it is not minted or regulated by a central bank. Rather, Bitcoin changes hands digital bit by digital bit, through internet channels, and it is created and validated by its users.

Bitcoin is a form of money, just as cash, debit cards, and electronic transfers are. While currencies such as the Euro or the Dollar are backed by the governments that issue them (fiat money), Bitcoin generates its value through a computing system that can be compared to the mining of raw materials.
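The “mining” comparison can be made concrete with a minimal proof-of-work sketch, at a toy difficulty that has nothing to do with Bitcoin’s real parameters: creating a valid block means searching for a nonce whose hash meets an agreed target, which is costly to find but trivial for everyone else to verify.

```python
# Minimal proof-of-work sketch (toy difficulty, nothing like Bitcoin's real parameters).
# "Mining" means searching for a nonce whose hash meets an agreed difficulty target.
import hashlib

def mine(block_data, difficulty=4):
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("Alice pays Bob 1 BTC")
print(nonce, digest)   # finding the nonce takes work; checking it takes one hash
```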

It is hard to believe that we can buy pizzas and book hotel rooms with something as intangible as Bitcoin, but as I said, Bitcoin is a currency like any other, at least in its fundamental aspects.

At the same time, as we know, everything is a remix, both in the physical and the digital realms, so Bitcoin is the culmination of a series of financial and digital technologies: from the “bit gold” designed by Nick Szabo to centralised payment systems like PayPal, mixed with unique mathematical components inspired by traditional elements such as bartering and the mining of precious metals.

What makes Bitcoin special is the fact that it is a living currency that is changing and evolving all the time. It merges code and knowledge, just like Wikipedia does, just like the entire internet does. But what made Bitcoin so popular and at the same time mysterious? In my view, it had to do with the alignment of three elements:

  1. A crisis. Bitcoin was born in the midst of the devastating US financial crisis in 2007, which brought to light the fragility of financial institutions as a result of widespread corruption. (The film The Big Short provides a remarkable account of this crisis).
  2. A technological opportunity. Bitcoin would have been impractical with the technology that existed fifteen years ago. But by 2008, the appropriate technologies were in place: mainly free software, but also hardware powerful enough to generate the first bitcoins and then scale up easily using highly specialised processors such as those sold by Butterfly Labs.
  3. A legend. Bitcoin was created by an inscrutable genius known as Satoshi Nakamoto, a programmer with a clear political discourse who would make a perfect hero for a Matrix-style technological thriller. A leader who does not exist, except for the robotic conversations he had with some programmers between 2008 and 2010, when he vanished from the internet. Nakamoto disappeared from Bitcoin leaving four legacies:

CC-BY BTC Keychain.

What is the blockchain?

In spite of its mathematical complexity, we can think of the blockchain as an enormous ledger that publicly records every transaction – every single one! – that takes place in the Bitcoin economy. This allows us to know the owner of each bitcoin that has been generated to date (without breaking the principle of anonymity, given that Bitcoin uses pseudonyms). This process is ongoing, and can be seen on Blockchain.info.

Why is the blockchain one of the most promising technologies today? Because it has become a decentralised, automatic, massive mechanism that generates… trust. The kind of trust that underpins all currencies and their associated economies, including Bitcoin.

Satoshi Nakamoto used the absolute trustworthiness of mathematics and cryptography to design the blockchain. And if that trust dies due to faulty software design or malicious hacking, Bitcoin will die too.
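A toy sketch of why that cryptographic trust holds, in a few lines of Python rather than Bitcoin’s actual data structures: each block commits to the hash of the previous one, so rewriting an old transaction breaks every later link and the tampering is immediately visible.

```python
# Toy hash-chained ledger (illustrative only, not Bitcoin's actual data structures).
# Each block commits to the previous block's hash, so tampering with an old entry
# invalidates every block that follows it.
import hashlib

def block_hash(previous_hash, transactions):
    return hashlib.sha256(f"{previous_hash}|{transactions}".encode()).hexdigest()

def build_chain(transaction_batches):
    chain, previous = [], "genesis"
    for transactions in transaction_batches:
        previous = block_hash(previous, transactions)
        chain.append({"transactions": transactions, "hash": previous})
    return chain

def is_valid(chain):
    previous = "genesis"
    for block in chain:
        if block["hash"] != block_hash(previous, block["transactions"]):
            return False
        previous = block["hash"]
    return True

chain = build_chain(["Alice->Bob 1", "Bob->Carol 2", "Carol->Dave 3"])
print(is_valid(chain))                       # True
chain[0]["transactions"] = "Alice->Bob 100"  # rewrite history...
print(is_valid(chain))                       # False: every later link is now broken
```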

So given that blockchain is a mechanism that validates the ownership and authenticity of virtual currencies like bitcoins, why not adapt it and use it for other digital and physical objects?

As we know, our social institutions require middlemen to validate and certify ownership, ranging from university qualifications to personal identification documents, passports, and even intellectual property such as music and all kinds of artworks, etc. Some projects that are starting to use the blockchain technology in these ways include:

The blockchain is part of a new generation of internet systems, and an increasing number of projects are using the technology to design systems that eliminate the need for middlemen: wherever there is a central trust authority, the blockchain will probably end up replacing it.

Current state of affairs

At present, Bitcoin and the blockchain are in the midst of various disagreements about their future course. Ironically, this technology that is decentralised by design relies on a centralised element: its developers. It is in the nature of free software to split in order to create new projects, but this remains to be seen in the case of these new technologies. Bitcoin is not the only cryptocurrency, after all. There are dozens of others such as Ripple, Litecoin and Dash.

Meanwhile, even if Bitcoin is still an experiment in many senses, the world’s major financial institutions, including VISA, MasterCard, Citibank, UBS, Goldman Sachs, Barclays and Banco Santander, are investing in company-owned or third-party projects based on cryptocurrencies or the blockchain technology.

These institutions already recognise the potential of cryptocurrencies, particularly the blockchain. We should not be surprised if we soon find ourselves using a financial product based on these technologies.

And what about the end user? Can I use Bitcoin and the blockchain if I don’t have programming skills or knowledge of cryptography? As it happens, it is becoming easier all the time. Twenty years ago, only programmers could create a web page with text and images. Now a child can create all kinds of internet objects, connect them, and bring them to life, even artificial life, practically without touching code. The same will happen with Bitcoin and related technologies. To this end, there are projects that seek to make Bitcoin much more accessible and reliable, including:

Whatever the future has in store for Bitcoin and the blockchain, they are certainly inspiring a new generation of financial services that are fairer, cheaper, and more transparent and accessible. The potential social impact is enormous. Stay tuned.


Technology and Inequality. The Concentration of Wealth in the Digital Economy

The digital economy is not reducing the inequality gap, and may be increasing it. Governments and citizens must prioritize social impact.


Amelita Galli-Curci seated at desk using typewriter, c. 1920. Library of Congress. Public Domain.

Optimism regarding the idea that technology can be an engine for well-being is at a low ebb. World Bank data shows that the digital economy is not reducing the inequality gap, and may even be increasing it. Although the short-term benefits of digital technology are beyond doubt – more information available at low cost and on a global scale – they are failing to bring about the expected collective returns at the aggregate level. Measures such as increased access and more training can improve the situation, but only if governments and citizens make the social impact of technology a priority.

“The future is already here — it's just not very evenly distributed.”
William Gibson

The idea of technology as a transformative force and an engine for social change has been a dominant belief in the West, at least since the Industrial Revolution. From the optimistic point of view, technology allows drones to carry humanitarian aid to areas that have been cut off by natural disasters, and digital fabrication centres like FabLabs to create wi-fi networking systems in Afghanistan and Kenya. But the pessimistic view is that this same technological revolution is threatening entire labour sectors, concentrating profits in a small number of corporations, and giving enormous power to businesses that trade in citizen data.

Last January, the World Bank – an organisation that has traditionally been aligned with liberal economic ideas, although sensitive to some of the challenges of globalisation – released a report that unexpectedly warned of the impact of technology on social inequality. Although it recognises that the digital revolution has generated short-term benefits, its annual World Development Report draws attention to the fact that most of these benefits are in the private sphere and concentrated in the hands of a few. Profits are consolidating the wellbeing of an elite made up of companies and professionals who are highly trained for this new environment, but they are not increasing the wellbeing of society as a whole.

The causes of this link between technology and inequality are:

  1. The persistence of the digital divide. Sixty percent of the world’s population is still offline and, as such, excluded from the digital economy; and
  2. The market concentration of well-established companies, creating monopolies that are often strengthened by public grants.

To correct this situation, the World Bank report proposes a series of broad strategies that go beyond the technology sector, which can be summarised as follows: increasing access (particularly in developing countries), lifelong training in skills and flexibility, and guaranteeing competition through government regulation and accountability. The fact that an organisation such as the World Bank – which has even been accused of worsening rather than reducing poverty – has issued a diagnosis of this kind indicates a significant shift in the perception of the digital economy. Nonetheless, it is worth analysing the measures proposed as a solution to the problem.

Greater access, more inclusion?

In the latest edition of the Mobile World Congress, Mark Zuckerberg criticised telecommunications companies for prioritising 5G technology instead of guaranteeing affordable Internet access all over the world. For years, Facebook has been promoting a free internet connection service that has already been tested in remote areas of Africa, Latin America, and Asia. However, some companies in India have withdrawn their support for the project on the grounds that it goes against net neutrality, given that it favours access to certain websites and applications to the detriment of their competitors and of the availability of information in general.

One Laptop per Child, Nigeria. CC-BY.

In a world in which only one in three people have internet access, the need to deal with the digital divide is beyond question. But initiatives like Facebook’s show that greater connection does not necessarily mean more inclusion in the market, or even equal opportunities. On the other hand, given that almost 20% of the world’s population has not been able to learn to read and write, it is unlikely that the expansion of technology alone will close the digital gap. Internet access needs to go hand in hand with measures that allow people to make the most of it, at the level of education and of basic social services. We should not forget that there are now more households in developing countries with a mobile phone than with access to electricity or drinking water.

Training people to be qualified for the digital economy – another of the recommendations of the World Bank – is a measure that is difficult to object to. But in developed countries, the crisis has shown that even the most highly trained people are not guaranteed inclusion in the economy. And those who do become part of it find that their educational level does not necessarily lead to the corresponding remuneration. Meanwhile, labour automation is destroying jobs that require mid-level qualifications, polarising the labour market between highly educated professionals on one hand, and those who carry out routine jobs that machines are unable to do on the other. In short, a divide in which activities with a high level of added value are controlled by a specialised elite, while an increasing part of the labour force is relegated to activities with low productivity and limited value.

Technology monopolies and the Welfare State

In addition to improving access and education, the World Bank proposes regulating competition among companies, and ensuring that public institutions are accountable. In spite of liberal theories regarding the self-regulation of the free market, cases like Microsoft, Google, and Amazon show that there is a tendency towards the creation of natural monopolies on the Internet. This concentration of power often goes hand in hand with anti-competitive practices and political influence through lobbying. Meanwhile, many governments and institutions are pinning their hopes for technological adaptation and growth on the big corporations, rather than investing in local initiatives.

While technology is certainly a tool that also makes it possible to change the status quo, commons theorist Yochai Benkler has pointed out that the flexibility inherent to the digital revolution also brings about the dispersion of power. On one hand, the internet has made it possible for individuals, companies, and small organisations to compete with more powerful rivals, and to defeat them. But that same flexibility has led to labour instability on a mass scale, weakened the welfare state, and endangered some entire economic sectors. Paradoxically, this sometimes happens under the umbrella of the “collaborative economy”.

CC-BY-SA Institute for Money, Technology and Financial Inclusion.

Measures against technological determinism

The trend described above will not inevitably become dominant in the future. Contrary to the determinist way of thinking that sees technology as a kind of force of nature that cannot be fought against, we have to bear in mind that different social actors and interest groups shape innovation, in a process of mutual influence.

The negative impact of technology on inequality can only be stemmed through analogue measures such as regulating the tax system to prevent the outflow of technology company taxes, or taxing profits from capital at a higher rate than those from labour. It is also possible to create obligatory public registers of the activities of lobbies, so that citizens can be aware of the influence that technology companies exert on institutions. At the same time, the increasing flexibility of the economy requires stronger social protection systems that do not just guarantee education in digital skills, but also provide a safety net in a volatile environment.

Citizens can demand that big online platforms open their data, or choose to use platforms designed on open principles, so that user information remains in the hands of users, even if companies can use it with the users’ permission. Interesting projects along these lines include ‘platform cooperativism’, in which users own the services so that the profits that they generate will feed into society, not private interests. All of this contributes to new ways of understanding our relationship with technology that involve rethinking the overall architecture of the system, not just dealing with the consequences. Addressing the negative effects of technology on inequality requires designing technologies and institutions that do not just prioritise macroeconomic growth, but focus on the social impact.





The Digital Citizenry Revolution in Africa

Citizens are monitoring elections, challenging dictators, claiming basic rights and building communities. African civil society takes control of digital space.


Visit of Iain Macleod, Secretary of State for the Colonies, to Northern Rhodesia (1960); demonstrations by the United National Independence Party (UNIP). The National Archives UK. Open Government Licence.

Citizens are monitoring elections, challenging dictators, claiming basic rights and building communities. They are changing the rules because they do not play along with the system. Subversion is involved, but so is tradition. They had been underestimated by the system and all the authorities, so they got used to carving out their own paths. Over the past few years, digital campaigns, web 2.0 initiatives, and an incipient cyberactivism have been changing the social and political participation scenario in Africa. Sometimes the results are a resounding success, but there have also been frustrating failures. Their main strength is their very existence and, above all, the fact that they are introducing alternatives that are closer to grassroots citizens.

“Until the lions have their own historians, the history of the hunt will always glorify the hunter.” This is a proverb popularised by Nigerian writer Chinua Achebe, who, as it happens, is quoted in the first information panel in the exhibition Making Africa. It is a true declaration of intent alerting visitors to the aims of the show. The yearning expressed by the great Nigerian intellectual is now as close as it ever has been to becoming reality. And all we have to do in order to start savouring these histories is look into certain corners of the universe that the internet offers us. It may sound strange, but the lions are already writing their own histories in the digital world.

Social media, web 2.0 tools, the communication potential of the internet, and the context of the information society have once again shaken up the foundations of the world as we knew it: many (though not all) people and communities who nobody listened to in the past can now speak out loud and clear. And this trend is sweeping through all the nooks and crannies of the African continent, which was probably the world’s most silenced and ignored region. Contrary to the claims of certain racist discourses expressed from time to time, Africa has always been part of history. But it is certainly true that history has not always been written or told by Africans. Today – actually, in recent years – an increasing number of collectives actively participate in the social and political life of virtually all African countries, and are getting the most out of the potential of ICTs.

Activists from the Benin bloggers association acting as observers in the 2016 elections. Photo: Maurice Thantan. CC-BY license.

Elections are probably the clearest example of this situation. Now, whenever elections are called anywhere in Africa, a grassroots initiative is invariably organised to monitor them. The organisers are often blogger collectives accompanying civil society organisations, always with the aim of influencing the process. In other words, of attaining elections that are transparent, peaceful, participatory, and without fraud. Citizens increasingly want to play a leading role in these processes, and they are willing to accept responsibilities, belying the idea of “slacktivism” that is often associated with cyberactivism.

One possible timeline of this phenomenon could start in the year 2008, in Kenya, where a group of bloggers tried to stem post-electoral violence by offering a mosaic of information made up of the partial information conveyed by each protagonist. That experience gave rise to Ushahidi, which turned out to be one of the greatest global crowdsourcing successes, given its subsequent spread. That timeline could then jump forward two years and over 4,500 kilometres, to the Ivory Coast in 2010, where bloggers and experienced social media users tried to encourage healthy debate during the campaign, reacted against post-electoral violence, and even played a key role in directly helping the victims of the fratricidal outbreak.

But we can perhaps pinpoint the tipping point of this phenomenon in Senegal in 2012, when bloggers faced almost a month of confrontations on the streets between police and a broad sector of society that was disappointed and angry at the authorities. They fought for transparency and for a more participatory democracy, and designed a campaign to draw attention to the possibility of electoral fraud. They implemented a device to monitor irregularities at all stages, from the ballot box to the recounting of the votes. Most importantly, their campaign was a success: they managed to involve broad sectors of civil society and a great many anonymous citizens, they achieved recognition (a posteriori) from the political actors and (immediate) attention from international media. The elections played out as they had hoped – peacefully and without any major suspicion of fraud, with a change of president thrown in.

In the wake of the experience of the Senegalese bloggers, there have been a series of similar initiatives in a growing number of countries and in an increasingly systematic way. Ghana, Nigeria, Guinea, Burkina Faso, Uganda, and Tanzania have been some of the scenes of campaigns that have tried to get social media users involved in the political debate and to take responsibility for the proper course of the elections, with “proper course” meaning the actual process, unrelated to election propaganda. The clearest example of the fact that there can no longer be elections without digital monitoring took place on 20 March this year, which was dubbed Africa’s “SuperSunday” because four presidential elections were scheduled in four countries on the same day, as well as a legislative election and a referendum for a constitutional reform. In Benin, Nigeria, the Republic of the Congo, and Senegal, voting was accompanied by 2.0 citizen monitoring.

#FastAfrica campaign.

In recent years, cyberactivists from several African countries have stood up to controversial regimes at great personal risk: in Ethiopia since 2014, in Burundi and the Democratic Republic of the Congo since March 2015, and in the Republic of the Congo over the last few months. They have accompanied grassroots opposition processes that have been milestones in Africa, such as the citizen uprising in Burkina Faso in late October 2014 that ended with the resignation of president Blaise Compaoré after twenty-seven years in power, and the resistance to a coup almost a year later. Only a month after this attempt at rebellion, the country was organising presidential elections under totally normal conditions, with the active involvement of the country’s cyberactivists. There have been other similar civic actions – without the restrictive connotations that the term may have in the West – throughout Africa.

Cyberactivists have shown that they are prepared to risk their lives and their freedom, and confirmed their conviction that information is an essential tool for consolidating democratic participation. In this sense, social media have contributed to an initial break with the past: the end of the monopoly over the control of information. Cyberactivists have carried out their work by building communication channels ranging from blogs and information platforms to hashtags and Facebook groups, for example. These channels manage to get around the rules that had been previously imposed. They are independent of the authorities and of economic interests and they break the usual control mechanisms of those in power. Even so, they manage to make an impact that counteracts that of the conventional media and, occasionally, achieve greater international impact.

It may seem paradoxical to draw attention to this international impact, as though it were necessary for salvation to come from outside of Africa. But its importance is not just a Eurocentric (and egocentric) habit; it reflects the fact that most of those cyberactivists have discovered that one of the pillars of the regimes they are struggling against is their international recognition – they legitimise themselves through their prestige and their image abroad. This explains the efforts to silence the dissident voices there, and also the insistence of the dissident voices on making themselves heard.

All of these actions, campaigns, and initiatives have continued to break down the status quo. The people behind them have gradually become social and political actors, either through the positive recognition of the authorities or through their attempts to overthrow them. And they have done so without following the “rules” of conventional organisations. Cyberactivist communities have grown quickly because they are appealing, but their membership and operating principles, their dynamics and interests are unlike those of other civil society organisations. The fact is that they have enhanced the public sphere and given a leading role to groups that had not been taken into account before, particularly urban youth, a growing sector of the population that is full of promise for the future.

Now the lions can explain their own stories on the net, and we can now read online narratives and histories that no longer glorify the hunters.





New Technologies to Face Up To Climate Change

With good management and legal and political regulation, the internet and digital culture may significantly reduce the depletion of the natural ecosystem.


Four men on bicycle built for four, 1898. George H. Van Norman, Springfield, Mass (Library of Congress). Public Domain.

Some years ago, many people saw the popularization of digital devices and the expansion of the Web as a sign of hope. It seemed as though the introduction of new communication, storage, and management systems that did not depend on physical objects would solve most of the environmental problems facing contemporary society. Now it seems naive to think that technology, on its own, can change the ethical foundations of the market and industry and steer them towards a more environmentally friendly system of commercial values. Nonetheless, we can still argue that with good management and legal and political regulation, the internet and digital culture may significantly reduce the depletion of the natural ecosystem.

Our economic system reacts in a direct, predetermined way to technologies that don’t provide quick profits, regardless of their social impact, their contribution to increasing the common body of knowledge, and their potential use for solving global problems. The logic we live by promotes innovation but only concerns itself with technical capacity and new functionalities, disregarding other aspects such as innovation to improve manufacturing processes, the quality and durability of materials, and their upgrading capacity so as to protect the natural environment. In these matters, the dynamics by which the market functions and influences social behaviour are immensely damaging to the ecosystem.

Initiatives that aim to change the materials used in the manufacture of digital devices, with the sole intention of reducing the harmful impact of non-recyclable products, do not attract investors or secure funding. Alternative energy sources that seek to end our systemic dependence on fossil fuels are bought up by the big fossil fuel lobbies which seek to hinder their implementation so as to maintain the current business model for as long as possible. Projects that try to use more environmentally friendly manufacturing processes based on reusable materials, without necessarily increasing turnover, generally cannot find funding: the idea seems to be that unless profits increase, there is no point investing.

In spite of the market’s lack of interest in projects of this kind, their enormous potential to improve living conditions on a global scale is clear. Digital technologies are a great ally for solving many of the environmental problems that we face, both at the public and private levels.

Open data as a Prerequisite for Smart Cities

In a smart city, one of the main activities of government departments is to manage and use big data to discover the real needs of the city, both in terms of its technical functioning and of the way citizens behave. An example is traffic control based on the actual behavioural parameters of vehicles, which can then be used to set variable speed limits depending on air pollution levels, or to change the direction of certain streets depending on the traffic saturation at different points in the city.
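Purely as an illustration of that kind of rule, the following Python sketch maps air-pollution readings to a variable speed limit; the thresholds, station names and readings are invented for the example and do not come from any real traffic authority.

```python
# Illustrative only: derive a variable speed limit from air-quality readings.
# Thresholds, stations and values are invented for the example.

def variable_speed_limit(no2_ug_m3: float, default_kmh: int = 80) -> int:
    """Lower the limit on the ring road as NO2 concentration rises."""
    if no2_ug_m3 > 200:
        return 40
    if no2_ug_m3 > 100:
        return 60
    return default_kmh


# Hourly readings (micrograms of NO2 per cubic metre) from hypothetical stations.
readings = {"station_north": 85.0, "station_coast": 132.5, "station_south": 210.3}

for station, no2 in readings.items():
    print(f"{station}: limit {variable_speed_limit(no2)} km/h")
```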

Artificial intelligence systems can be used to improve the behaviour of most of the technological systems that control and regulate our cities, but programming them requires open access to big data. This is the main underlying premise of open data, which is based on the idea that certain data should be available for everybody to use and modify without restrictions from copyright. Only open access and public ownership of the data generated in cities will allow city managers to develop the tools required to transform this data into initiatives intended for the good of society. Barcelona is a good example of a city that is making progress towards becoming a smart city, based on the use of open data. The website opendata.bcn.cat allows users to search various fields in an open data repository that provides very useful in-depth knowledge of the city, which can be used by potential developers of new adaptable, interactive resources.
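As a minimal sketch of that workflow, the fragment below downloads one such published dataset and aggregates it with pandas; the URL and the column names are placeholders standing in for whatever the portal actually exposes, not a real opendata.bcn.cat endpoint.

```python
# Sketch of the usual open-data workflow: fetch a published dataset, aggregate it,
# and hand the result to an application. The URL and column names are placeholders.
import pandas as pd

DATASET_URL = "https://example.org/open-data/air-quality.csv"  # placeholder URL


def pollution_by_district(csv_url: str) -> pd.Series:
    """Average the (assumed) 'no2_ug_m3' column per (assumed) 'district' column."""
    df = pd.read_csv(csv_url)
    return df.groupby("district")["no2_ug_m3"].mean().sort_values(ascending=False)


# print(pollution_by_district(DATASET_URL))  # run against a real dataset URL
```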




The smart cities concept is based on a three-part equation: sensors + data + software. First, we need sensors to gather as much information as possible (while respecting user privacy and anonymity), then this information has to be classified as accessible data, and finally we need software that can transform the data into digital tools that can be used to solve real problems. With this in mind, there are many projects that provide data visualisation and manipulation functionalities for city planning purposes. City simulation and app development support platforms like Matsim, CitySDK and the European projects Eunoia and Besos are good examples of what technology has to offer in this field. Used in conjunction with the Open Sensors Platform (OTS), they give us sufficient tools to start turning technology around and working towards a cleaner and more sustainable environment. The increasing affordability of sensors and computer hardware, including new open-source hardware initiatives, adds the final component required to complete the abovementioned equation.

Another aspect to take into account is the fact that the path towards smart cities is expected to be given a tremendous boost by the consolidation of the Internet of Things, in which physical objects are embedded into the digital fabric, with network connectivity that allows them to collect and exchange data. Once enough objects are connected to the net, the radical separation of the virtual and physical worlds would no longer make sense. Experts estimate that fifty billion objects will be connected to the Internet of Things by 2020.

In the case of the city, once vehicles are connected to the net, we will be able to use algorithms to improve the efficiency of traffic control, reduce air pollution, extend the lifespan of components that would otherwise end up as waste, and improve driving efficiency. Similarly, the connectivity of street furniture and other elements in public space will allow us to manage street lighting, watering systems, and waste collection in a much more sustainable way.

The Home: Environmentally-friendly domestic life

However, solutions cannot just be on the macro level. In fact, energy consumption, which is one of the major environmental problems, has a lot to do with our individual behaviour and habits in the home. The measures proposed by local councils, governments and international organisations will never be effective or sufficient unless citizens become active agents committed to their implementation.

The internet and digital technologies also provide accessible solutions to improve the way we manage energy use and reduce fossil fuel consumption in the home. Home automation systems that allow users to programme certain household items and tasks through centralised control and internet connectivity are becoming more affordable and accessible to a large part of the population. These smart homes allow users to configure climate control systems based on geographical location, the climate, weather forecasts, our habits, and so on. For example, your climate control system could turn off the heating if you open the windows, or turn it on a little while before you get home thanks to GPS tracking that tells it when you are due to arrive. Smart appliances also improve the efficiency of food management, for example. They can monitor the expiry dates on your groceries, let you know that you need to eat an item before it goes off, and remind you to reduce the temperature of your fridge in winter. Through these types of systems, the interconnection of physical objects managed through mobile devices opens up new horizons for efficient management of the home, and also business premises and public buildings.
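A minimal sketch of such a rule, with invented sensor readings and an invented arrival estimate (a real system would take them from the home's sensors and the phone's location), might look like this:

```python
# Toy home-automation rule: heating off when a window is open, pre-heat before arrival.
from dataclasses import dataclass


@dataclass
class HomeState:
    window_open: bool
    indoor_temp_c: float
    minutes_until_arrival: float  # e.g. estimated from the phone's location


def heating_should_run(state: HomeState, target_c: float = 21.0,
                       preheat_minutes: float = 30.0) -> bool:
    """Never heat an open or already warm house; pre-heat shortly before arrival."""
    if state.window_open or state.indoor_temp_c >= target_c:
        return False
    return state.minutes_until_arrival <= preheat_minutes


print(heating_should_run(HomeState(True, 17.0, 10)))    # False: a window is open
print(heating_should_run(HomeState(False, 17.0, 20)))   # True: cold and arriving soon
```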

Meanwhile, as user awareness grows, projects that seek to encourage more responsible manufacturing are starting to find a market niche in which to develop alternative products based on ethical and more environmentally friendly principles. Fairphone, for example, produces affordable modular smart phones with parts that can be replaced or upgraded in order to increase capacity. Moreover, the manufacturing process complies with the principles of fair trade, ensuring that the workers are not exploited and the environment is not damaged in order to obtain the raw materials. Google also has a project along these lines, Project Ara, which isn’t based on the philosophy of fair trade, but does use a modular design for smart phones that makes it possible to considerably prolong the lifetime of the device and significantly reduce the waste generated.




Conclusion: Do we really want to save the planet?

The above examples focus on the design and production of more environmentally-friendly devices and systems that add qualitative value. But it is not only up to technology companies to adopt environmental values and to respect the environment. As users and purchasers, we also have to adopt these values. We need to demand more than just efficiency and functionality from our digital devices: a product that is not environmentally friendly should not be considered high quality or desirable.

The time has come for all of us to become aware of the enormous potential of technology to contribute to building a social and economic model that is fairer and more sustainable. But before we start putting our efforts into designing a feasible and effective roadmap for the preservation of the ecosystem, we need to honestly ask ourselves the first and foremost question: do we really want to save the planet? Are we prepared to go beyond rhetoric and good intentions, and take action in our daily lives? In other words: are we willing to leave the car at home unless we really need it, to use less polluting means of transport even if it requires an effort, to go easy on climate control in enclosed spaces, to use devices until they stop working rather than constantly replacing them with newer and more fashionable models, to stop and consider whether we really need something before we buy it, to take action against excessive packaging and the use of non-reusable materials…?

We have to decide whether we want to create a technological and consumption framework that will not become the final executioner of the ecosystem as we know it, but an ally in its preservation. Because the main problem we face is the fact that our current economic system is unsustainable. The unbridled consumption of raw materials and the constant manufacture of new products are totally incompatible with respect for the natural environment. So unless the logic of the market turns around and adopts respect for the environment as a universal ethical value of the human species, the sustainability of the planet will continue to be undermined.





Neuromarketing: You Want It, You Just Don’t Know It Yet

Neuromarketing studies the way the brain works in consumer purchasing decisions, particularly the part of the brain that we cannot express rationally.


Two children looking at a window of a chocolate shop. CC-BY-NC-SA, Paul Townsend.

“A lot of times, people don’t know what they want until you show it to them.” This flippant aphorism attributed to Steve Jobs is chilling and bellyache-inducing, and it perfectly encapsulates the aim of neuromarketing. While traditional marketing supposedly seeks the best way to meet consumer needs, neuromarketing shares the same goal from the point of view that Jobs’ words suggest: it studies the way the brain works in consumer purchasing decisions, particularly the part of the brain that we cannot express rationally.

Let me start by saying that I can’t think of a single benefit that makes the idea of neuromarketing positive for a society. At best, it allows a powerful few to maximise profits by producing the ideal campaign based on the tactics of neuroscience. I just don’t know whether it is bad, disastrous, or macabre. In an era in which we are all slaves to voracious consumption and the mechanism that drives programmed obsolescence has jumped from machines to our own minds – “I don’t want this anymore, it’s old fashioned” – the last thing we need is studies that allow us to understand how to attract the gullible and give them that extra little nudge that drives them to buy the latest car, the newest television, the next smartphone. Gilles Deleuze described marketing as the new form of social control through which man is no longer confined, but in debt. Neuromarketing is a further turn of the screw, in which Deleuze’s hypothesis becomes a nightmare right out of Ray Bradbury: Live fast, die young, but waste all your money, please.

They Live, We Sleep. CC-BY, Eduard V. Kurganov.

But conspiracy theorists should not get excited just yet, given that there’s nothing new under the sun. Neuromarketing is neither magic nor a new form of mind control. It simply studies a consumer’s purchasing decisions using techniques developed years ago in the field of neuroscience. I’m sorry to tell them that neuromarketing is not a cover for a scenario like the one in the film They Live, in which Roddy Piper finds sunglasses that enable him to see the extraterrestrial reality in which we live, controlled by subliminal signs ordering us to “obey, buy, reproduce” hidden in travel ads and news reports. None of that. Neuromarketing is just the same old consumer spirit, trying out the latest technological fad. Yes, it’s scary; but as far as I’m concerned what makes it scary is the consumer aspect, not the technology.

The discipline of neuromarketing seeks to access the part of a consumer’s decision that is not conscious and cannot be expressed rationally. It tries to find out what emotions a particular ad or product triggers without asking consumers, given that to a large extent they themselves would be unable to say. In other words, neuromarketing hooks us up to the polygraph and records our bodily responses in order to detect what we feel – rather than what we think or say – about a product, brand, or service. A lie detector that can squeeze out the essence of our decisions.

This means that even if ultra-modern, high-tech tools and techniques are used to measure consumer responses and behaviour, they can’t do anything new for now. From eye-tracking to discover how we behave in response to what we are looking at (not what we are seeing, because we could be daydreaming), to measuring physiological signals such as heartbeat and skin conductivity, by way of studying the movements of our facial muscles and, particularly, recording our brain activity by means of different procedures: none of these can read our minds, let alone manipulate them. For the time being, the very idea of divining our thoughts is totally unfeasible, given that we can’t even identify an emotion using medical imaging technologies. It is true that we can measure responses such as activation of the amygdala in response to fear or hate, that the insular lobe appears to be involved in revulsion or rejection responses, and that the performance of the dopaminergic system is usually boosted in the presence of positive emotions. Nonetheless, there are no reliable systems by which a particular brain activation image can be matched to a specific emotion such as “happiness”, “sadness” or “anger”. And even if it were possible to instantly and accurately identify the emotion that somebody is experiencing, it would be useless. Just as you like coffee and I prefer peach juice, a particular product or advertising campaign will not trigger the same emotion in all human beings. Different strokes for different folks.

SMI Eye Tracking Glasses. CC-BY, SMI Eye Tracking.


But we’re talking about the present and the short-term future, of course. Where will we end up if we continue along this path? Will the “buy button” – that Holy Grail of consumer culture that enables us to understand all the factors which unfailingly lead to a purchase – cease to be a myth? The design of new tools, and the combination of existing ones that are starting to be implemented, promise a thrilling future for all those who want us to buy, buy, buy. It won’t be long before the neuromarketing experts get together with the Big Data guys and start crossing individual data with large-group trends, extracting stratified profiles of potential customers along with customised strategies.

Because the fact is that we are connected to the internet all day long, and this makes us a never-ending source of resources for companies. I’ve seen meetings at which big hospitals and pharmaceutical multinationals dealt with patient information as though they were swapping trading cards. Sure, individual confidentiality is upheld, but here’s a friendly tip: read everything you sign. We all share whatever opinions pop into our heads on Twitter, Facebook, blogs and other media, without realising that those opinions are worth money. A lot of money, if we’re talking about a lot of opinions. We have inadvertently become free consultants, telling companies about our habits, tastes and preferences. And that’s just the beginning.

If the hyperconnectivity trend continues – and we certainly don’t seem to be backing down – every step we take will soon be monitored. The bracelets we use when we go running and our super-cool smartwatches will provide real-time information on our exact location and our heart rate, for example. And as computational capacity increases even further, it’s not hard to imagine that companies will be able to use these devices to deduce our stimulation in response to a particular ad or product. The technology would simply have to detect the fact that we are looking at a screen, record the moment when a particular video or banner is displayed, monitor our heartbeat or skin conductivity – which is an indirect sign of excitement: “we get goose bumps” – and save the data. Multiply this process by a thousand, or a hundred thousand, and we can start to draw conclusions.

And wouldn’t it be fun for companies to stimulate our purchasing decisions rather than just read them? There’s no need to jump to a distant future in which cyborgs are the norm and our purchase commands go straight to whatever chips are in use. There are already a lot of studies on how non-invasive techniques such as transcranial stimulation can regulate our emotions. Activating certain electrodes, for example, can modulate the functioning of a part of the prefrontal cortex linked to the control of negative emotions. In layman’s terms, the things that made us sad or afraid will make us less sad and afraid. Wonderful, right? But because money talks, a technique with enormous potential for treating serious depression could end up being a catalyst for our future permanent, artificial happiness. For the time being, this technique requires highly controlled conditions, and the coil or the electrodes have to be positioned extremely accurately, very near to the head. But science constantly advances and who knows what the future will bring in ten or twenty years’ time. Perhaps, at the press of a little button, the passer-by who walks into a store through the security gates will instantly feel happier at the sight of the products. An idyllic future, no doubt about it.

Transcranial magnetic stimulation. Public Domain, Wikipedia.

In a world in which companies only think about increasing profits year after year, it’s hard to imagine how far professional ethics go, and the point at which we simply become indicators. In Fight Club, Tyler Durden said that advertising has us “working jobs we hate to buy shit we don’t need.” Perhaps we could start by considering this statement instead of continuing to think about buying even more shit that we need even less. And perhaps things will start working better then.





Rethinking Time in the Digital Age

Our experience of time is increasingly mediated by digital devices of all kinds and our everyday timescapes appear to be accelerating exponentially.


A large group of female war workers queue to clock in for work at a gun factory. Great Britain, 1940. © IWM (D 672)

Some concepts linked to the digital society have already become embedded in our culture, although their meaning is sometimes vague. They include notions such as “the acceleration of time”, “real time” communication, “multitasking”, and “globalised time”, which attempt to describe the situations that we are faced with in the new digital ecosystem. To counteract this (biased) account of the pace imposed by digital technologies, a few critical voices are questioning our responsibility when it comes to the way we understand, create, and socially use this new digital time. On 22 February, Judy Wajcman will talk about time in the digital age at the CCCB.

The question of the nature of time is one of the oldest debates there is. Some people believe that time is a thing that exists in measurable form, while others think it is alive, and that we are made of time. This debate is gaining new momentum in the digital society, now that the Internet, mobile devices, and other digital artefacts offer us new ways to experience and understand time.

New Timescapes

Imagine it is an ordinary weekday. You leave work mid-afternoon and walk to the bus stop, chatting on your mobile with a friend that you’ve arranged to meet. You check the time on your watch and mentally count the number of stops to work out how late you’re going to be. But you decide to double-check and open an app that calculates it for you with a processor running at 4 billion cycles per second. Near the crossing, you estimate how fast the cars are travelling on the next street, and try to guess whether you will get a green light. You look at the phone and see that your friend sent a message 3 minutes and 2 seconds ago to ask how long you will be. You start running because you know she hates waiting.

You get to the bus stop. You watch the clouds drift by, listen to the pealing of the cathedral bells, and notice the speed of your heartbeat. You look at the old man walking slowly along the footpath opposite, and at the girl on the seat beside you, immersed in a science fiction novel set in a distant future. The information panel at the bus stop informs you that your bus is due in 3 minutes and 10 seconds. You are 17 minutes and 3 seconds away from your rendezvous, 6 hours and 18 seconds from bedtime, and you have 7 hours and 23 seconds of battery left. The girl beside you is still immersed in fictional time. You get on the bus.

Living in society means constantly interacting with various indicators of time and different time scales, which mix with our own individual experience, memory, intuition, and ever-changing sense of the passing of time, on which we base our present and future plans. In her 2004 book Time, Barbara Adam, a leading theorist of contemporary approaches to the study of time, used the term “timescapes” to describe the complex mix of different biological and mechanical rhythms, with different speeds and tempos, in which we live. These timescapes now coexist with the rhythms and times of the digital ecosystem that runs parallel to the “real” world, sometimes in harmony, sometimes in confrontation. Time becomes more “present” to us; it flows, expanding and contracting, behaving in the way French philosopher Henri Bergson theorised it: as a continuous flow that has a duration, and constantly moves from the present to the past. According to Bergson, time is an indivisible whole that cannot be reduced to quantifiable discrete units. Instead, it is an internal, subjective state that often clashes with its technical representations: digital time is reproducible, reversible, and infinite, while human time is irreversible, limited, and subjective.

License: CC0. Source: negativespace.co

Technological advances have always affected the relationship between space and time in society, particularly during the industrial revolution. There is nothing new about this, but our everyday timescapes appear to be accelerating exponentially. This is due to digital technology, which plays a key role because our experience of time is increasingly mediated by digital devices of all kinds. And this mediation is neither minor nor neutral, especially as these devices function at different rhythms and synchronies, and are constantly evolving.

Our perception and use of time in digital communication depends on interfaces and software, and their ability to transmit data. The calculation speed and acceleration underlying the algorithms that we use to organise and display data limit the information we have access to, and thus our understanding of our world. In spite of their importance, we scarcely question them, partly because we lack time due to their rapid obsolescence, the rhythm they force on us, and the time we have to spend on learning to use and maintain them.

Network Time

In recent years, the Internet has become an arena where these new temporalities come together: a digital convergence of most earlier media, along with the new time conflicts that are native to the net itself. In The Rise of the Network Society (1996), Professor Manuel Castells used the concept of “timeless time” to reflect on what he saw as the waning of biological and social time in favour of homogenous, globalised time in network societies. Castells argued that temporality is transformed in the information network age, in which our experiences of time dissolve into a timeless cyberspace. Similarly, in Open Sky (1997), Paul Virilio noted that in the physical world the “when” is linked to the “where”, and suggested that we are losing our time reference because instant communication and network time are “killing” the present, separating it from its place and context, so that the concrete presence of the communicative act is no longer important.

Meanwhile, in 24/7 Time and Temporality in Network Society (2007), Robert Hassan argues that these earlier interpretations of a network society in which capitalism has imposed a global market and a global temporality fail to account for the acceleration of flexible time, in a world where networks modulate and converge in a very wide range of social experiences of time. Hassan believes that the true temporality of networks is asynchrony, because the huge ecosystem of the Internet allows each user to engage with different spaces and times independently of their “real” local time. Homogenous, globalised linear time becomes an almost infinite fragmentation of different synchronous and asynchronous time contexts that are alive on the net.

License: CC0. Source: kaboompics.com

The network society radically alters our relationship to clock-time, to local and global social time. It does not replace or reject it. It displaces it, offering new ways of controlling and experiencing time. Hassan uses social media and chat rooms as an example: there is no relationship between these communication spaces and the actual local time of the users. This is a huge challenge, given that all spaces are time-contingent, and this shapes our experience. There are timetables and schedules at the workplace, for example, some determined by the nature of the work, and some by cultural dynamics. In this sense, the Internet can be seen as an opportunity to change the “rules” that govern and control social time, and to reinvent new spaces and contexts for communication.

The Boundaries Between Work and Leisure

The boundary between work and leisure is increasingly blurred. The information and communication networks and mobile devices that are part of our everyday lives can become a means by which work expands into our leisure time. For this reason, countries like France have decided to regulate the use of e-mail outside of working hours, and German companies such as Volkswagen are restricting employee access to email accounts outside of work. Digital communication tools have changed the world of work so drastically that its boundaries have become unclear, not just in terms of working hours but also in terms of physical presence, given that there is often no longer a single, stable workplace. This change is not just quantitative, and it does not just affect work efficiency or performance (always called into question in Spain). Above all, it affects the way work is organised, and the role of the employee in this new stage. The fifteen-hour working week that the economist John M. Keynes predicted in 1930 for his grandchildren’s generation as a result of production efficiency and technological progress echoes in the background whenever we check our e-mail outside of work.

In her recent book Pressed for Time (2015), sociologist Judy Wajcman draws attention to the need to organise ourselves in order to deal with the sense of always being overwhelmed by work (by work time). We seem to transmit a positive image when we complain about how much time it takes to manage work e-mails, Wajcman says, perhaps because it suggests a big workload, but also because it indicates that we have the skills required to use digital technology for work management.

License: CC0. Source: photos.oliur.com

But technology was supposedly created to help us do our work more efficiently and productively (or so we’ve been told), so shouldn’t the fact that we feel trapped by work through digital devices be considered a collective failure to manage work and said devices, rather than a merit? The more time-pressed we are, the more we consume digital devices, as if they were a drug.

Another interesting issue that Wajcman points out is the tendency to individualise our experience of time. Monitoring our personal activities through apps and smartwatches in order to “manage” our lives and make ourselves more “efficient” makes us see time as a personal measure, as if it were a quantifiable material. Nothing could be further from the truth, given that time comes from and is used in the communal construction of activities in society.

Digital technologies are a further step in the history of the acceleration of production processes, but for the time being they have not freed us from work or decreased our work time. The planning and design of digital environments and devices to increase work efficiency should not be the only way to understand and conceptualise the temporalities of these new digital scenarios. Which is why, as Wajcman says, we need to realise that the imperative of speed is just one of the possible cultural artefacts we have constructed to help us understand this new digital world. And it is up to us to resist the prevailing narrative by exploring, understanding, and developing cultural alternatives to the temporal experiences of our digital ecosystem.





Mass Surveillance in Latin America

What is the situation like in Latin America with regard to mass surveillance?


The ABC’s main telephone switchboard at 69 Market Street (from 1932). Source: ABC Archives

In a global scenario where mass surveillance has already been exposed, especially thanks to cases such as Edward Snowden’s, we know privacy is a fundamental right we must protect. What is the situation like in Latin America with regard to mass surveillance? Alan Lazalde takes a look at some examples which show that, on top of being tracked by the United States government, some Latin American governments also commission their own surveillance. At the same time, we are witnessing the spread of activist movements, tools and technologies that try to facilitate the right to privacy and anonymity.

Cypherpunks

In 1993, Eric Hughes wrote: “Privacy is necessary for an open society in the electronic age (…) Privacy is the power to selectively reveal oneself to the world”. So begins the Cypherpunk Manifesto, a text that depicts cryptography as a tool for resistance and freedom. These words remain valid more than twenty years later. And they also remain necessary.

When Julian Assange – a recognised member of the cypherpunk community – took the world by storm with the first Wikileaks revelations in 2007, he triggered the spread of topics that had until then been alien to the mass media and their audiences. For instance, the word hacker became one of the most used by the media – albeit with the usual negative connotations. Topics such as anonymity and privacy became more frequent. In parallel, collectives like Anonymous and LulzSec helped raise awareness about the Internet’s deeper and darker layers, far from the Facebook and Google platforms – public machines of mass surveillance, as Richard Stallman likes to call them.

With these predecessors in the media, Edward Snowden came into the picture: a young computer engineer and Buddhist believer who, in mid-2012, opened the door for The Guardian and The Washington Post to the most chilling files regarding mass government surveillance.

What Snowden showed the world was a worldwide surveillance network orchestrated by the United States, a complex network consisting of tools capable of analysing, in a matter of seconds, millions of items of personal data from social networks, telephone operators, emails and more. PRISM, as we now know, is the name of the technology created to that end.

The world changed thanks to the work of people like Assange, Manning and Snowden. On the one hand, because they made it evident that we live in a surveillance state even more sophisticated than Orwell imagined. And, on the other hand, because their work made tools that facilitate the right to privacy and anonymity – the cypherpunk dream – all the more relevant, also giving visibility to non-governmental organisations that promote them. Tools such as TOR, for anonymous browsing, or technologies like Bitcoin and Blockchain, for the creation of decentralised and anonymous economies, are on the rise.

This is the global surveillance context in which countries in Latin America and worldwide find themselves.

What happens in Latin America?

For more than two decades, the United States government has tracked billions of phone calls being made from inside the country to 116 other countries. Naturally, this includes almost all of Latin America.

But, as if these efforts by the US government were not enough, some Latin American governments insist on surveilling their own citizens at home. This was proved to be the case after the recent leaking of the customer list at Hacking Team, the Italian company that owes its reputation to the development of large-scale surveillance software.

And thus the hunter became the prey. In July 2015, Wikileaks published over a million emails from Hacking Team, nearly 400 GB of confidential information including conversations with some of their clients, among them six in Latin American countries: Chile, Colombia, Ecuador, Honduras, Mexico and Panama.

The Hacking Team software, referred to as “Da Vinci”, is capable of surveilling anywhere from one to 100,000 people. By fairly simple means, it gets hold of WhatsApp conversations, Skype calls and email. It can even record keystrokes and access a computer’s webcam remotely. These functions compete in effectiveness and scale with those of PRISM.

Renata Ávila, Guatemalan activist and one of the leading experts in the field, writes:

“It is no longer possible to trust the devices we use to communicate. From phones to computers and intelligent chips, they have all been designed so that they can be subverted by intelligence agencies.”

And she adds:

“Unlike in other regions of the world, the stories of surveillance trauma are still felt in Latin America, where almost half a million people went missing (during the military dictatorships of the 80s and 90s).”

Investigators such as Paula Jaramillo, from the Chilean NGO Derechos Digitales, point out that:

“Latin American governments are yet to understand that guaranteeing the right to privacy is a requisite for the development of other fundamental rights in a democratic state.”

Interestingly enough, some of Hacking Team’s clients also signed a United Nations General Assembly resolution that “condemns mass surveillance on the Internet and demands member states to revise their legislations and to have them respond to human rights standards”.

Despite a painful history of repression, persecution and missing people, our region does not show meaningful signs of being under way to eradicate mass surveillance against citizens. “Even though Snowden has shed some light on these [surveillance] practices, in Latin America we remain in the dark”, says Ramiro Álvarez Ugarte, from the Civil Rights Association in Argentina.

Let’s take a look at some specific cases.

Colombia

Colombia stands out among Latin American countries because of PUMA, a mass surveillance infrastructure. According to a report by the Fundación Karisma, PUMA is illegal because it belongs to the Dirección de Inteligencia Policial (Pol), whose remit should be intelligence, not interception.

What power does PUMA have? It can implement mass surveillance of 3G mobile data, and it can tap into the main Internet lines to monitor voice and data communication across the whole of Colombia. The original report on surveillance in Colombia by Privacy International indicates that PUMA is used to spy on journalists, judges, opposition politicians and human rights activists.

Mexico

Mexico is a frequent Hacking Team customer. The governments of at least seven states have hired its services, as have six federal institutions, both military and related to public administration. The sum of those contracts is estimated at almost 6 million euros.

In contradiction to this, the Mexican government also presides over the Open Government Partnership, an initiative that brings together over 60 countries “to promote civic engagement, increase transparency, fight corruption and use technology as a tool for this opening-up”.

Article 7 of the Mexican Constitution is very clear:

“The freedom to spread opinions, information and ideas by whatever medium is inviolable… No law or authority can establish prior censorship, nor can they influence freedom of expression.”

However, the latest reform of the telecommunications law included policies to geolocate cellphones with no need for a judicial warrant. Given the country’s surveillance conditions, some Mexican activists have started launching platforms in favour of transparency, such as Mexicoleaks.

In 2015, Mexico hosted ISS World Latin America, a trade fair for companies that specialise in selling spying equipment to governments, the very same equipment used to spy on and persecute journalists, activists and human rights defenders. Hacking Team, by the way, was one of the event’s sponsors.

Argentina

Although Argentina has only had a few conversations with Hacking Team, we are aware of the presence of Blue Coat, a key NSA contractor in the US that very recently bought spying equipment from Germany.

On the other hand, since 2011 the Argentinian government has been promoting a system called SIBIOS, whose aim is to keep a record of the faces and fingerprints of the entire national population. According to several activists, SIBIOS is “the greatest threat to individual liberties since the return of democracy, primarily because it is invasive of our privacy and because it violates the principle of the presumption of innocence”. According to other experts, such mass biometrics is an attack on anonymity, privacy and data protection.

Some references

The Electronic Frontier Foundation is a pioneering organisation in the defence of digital rights, both in legal and in technical terms. Their work, I think, has inspired the launch of specialised organisations in Latin America. These are some of them:

All of those organisations participate very actively in projects such as the International Principles on the Application of Human Rights to Communications Surveillance, an interesting initiative that promotes 13 surveillance principles compatible with human rights.

You can also find some leading voices on digital rights on Twitter:

Cryptography and freedom

The political discourse employed by governments focuses on openness, transparency and many terms derived from the so-called Open Government. However, the evidence shows that those who govern are surveilling our communications in increasingly sophisticated ways.

The Cypherpunk Manifesto reads: “We cannot expect governments, corporations, or other large, faceless organizations to grant us privacy out of their beneficence”. Perhaps, in an open society, in any part of the world, true privacy will only come through cryptography. This is a great opportunity for digital rights activist organisations to design educational strategies, partly civil and partly technical, which can bring their impact beyond the usual circles.
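As a purely illustrative sketch of what “everyday cryptography” might look like in practice, the snippet below encrypts and decrypts a short message. It assumes the third-party Python cryptography package is installed; the message and the key handling are placeholders, not a recommendation of any particular tool.

    # Illustrative sketch: symmetric encryption with the third-party Python
    # "cryptography" package (pip install cryptography). Placeholder message
    # and key handling only, not a security recommendation.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # symmetric key; in practice it must be stored and shared safely
    cipher = Fernet(key)

    token = cipher.encrypt(b"meet at the usual place")   # ciphertext, safe to send over an untrusted channel
    print(token)

    plaintext = cipher.decrypt(token)  # only someone holding the key can recover the message
    print(plaintext)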

If 20 years ago the Internet was something almost exclusive to engineers, it is not difficult to imagine how the “cryptography” hidden behind the best tools and concepts could become part of our daily life. Maybe Bitcoin will become the first large-scale piece of technology of that nature.

So far, we know that mass surveillance is a global reality. My suggestion is that we keep a finger on our governments’ pulse and be ready to defend our fundamental rights. I think we have, and will continue to have, many tools in our favour.





The Internet in Latin America

Latin Americans make up 10% of the world's internet users. What do they use the internet for?


Students with netbooks from the program Conectar Igualdad. Argentina, 2012. Source: Flickr

An overview of the state of implementation and the use of the internet in Latin America, an area that makes up 10% of the world’s internet users. This article presents three case studies based on political, technical or civic innovation that offer a good overview of the diverse ways of thinking about and using the internet: FLOK Society in Ecuador, Rhizomatica in Mexico, and the case of Cuba: the other internet.

A Social Media Paradise

Latin Americans make up 10% of the world’s internet users. What do we use the internet for? Everything indicates that we love communicating: nobody spends as many hours on social media as we do, an average of five hours a month more than the rest of the world. We also make up 20% of Facebook’s user base and almost 40% in the case of WhatsApp. Our presence on other social networks such as Twitter, LinkedIn and YouTube has also grown exponentially over the last few years, primarily due to three factors:

  1. The ongoing improvement of internet infrastructure,
  2. the wide penetration of mobile devices in Latin America,
  3. and an enormous population of young people hungry for social content.

Latin America is a social media paradise (Brazil has been called the world’s social media capital), and the region is so dynamic that companies and businesses are starting to look at us with serious, ambitious interest.

Will we keep growing?

The internet is growing rapidly in Latin America, perhaps faster than ever before. According to the UN Economic Commission for Latin America and the Caribbean (ECLAC), the region’s internet penetration has doubled in recent years, rising from 20.7% in 2006 to 46.7% in 2013.

The bad news is that only a third of those connections are broadband: internet speeds in Latin America can be as little as one sixteenth of those in developed countries and, worse still, access can be up to twenty times more expensive.

But there are strong indications that internet coverage and speed in Latin America will continue to increase:

  1. USAN (the Union of South American Nations) is planning to build an infrastructure it calls the “digital ring”: a 10,000-kilometre fibre-optic network that will significantly improve internet access and speeds in the southern part of the Americas. USAN expects it to be ready in 2020.
  2. In Brazil, the region’s largest economy, over 100 million people – more than half of the country’s population – are connected to the net. This puts it fifth on the world ranking, after countries like China and the USA. The growth of Brazil is due to an interesting combination of governmental policies and, above all, grassroots citizen efforts. Some of the advanced policies regarding internet access and use in Brazil include its exemplary Civil Rights Framework for the Internet, which protects civil rights online, and the hundreds of independent internet service providers that function as “small producers” spread all over the country. This is an example that deserves to be reproduced throughout the region.
  3. Further north, Mexico and Costa Rica have made progress through regulatory frameworks that seek to promote competition and increase internet coverage. Mexico, in particular, faces the challenge of weakening the monopoly of Telmex, a private corporation owned by Carlos Slim that pushes up the price of accessing the net while providing low-quality service due to the lack of competition. All of this is quickly changing, to the benefit of over 50 million people.

However, it should be noted that Mexico’s telecommunications reform includes aspects that threaten internet neutrality, a basic principle that ensures a competitive balance among service providers and their users. These are serious concerns for the digital rights of Mexicans.

Three different ways of making internet

Below I describe three projects based on political, technical, or grassroots innovation that offer a good overview of the diverse forms in which we use and think about the internet in Latin America.

Ecuador: FLOK Society

Free/Libre Open Knowledge (FLOK) Society is an Ecuador-based project for the collaborative creation of public policies. It aims to develop a free knowledge society based on the connectivity of the internet and its networks.




It is profoundly influenced by two philosophies:

  1. The philosophy of free software, according to which all knowledge must be free, shared, and created as part of a community, and which is, in turn, inspired by the so-called “hacker ethic” that developed at the dawn of computing (in the early seventies) and defends the right to access all knowledge.
  2. It also draws on the idea of the Pachamama, encompassing “principles of reciprocity (randi-randi) and the organisation of community work (maki-maki)”, which generate a cultural commons in the Andes.

Guided by these principles, FLOK was founded in 2013, bringing together dozens of experts grouped around issues of strategic importance for Ecuador, as well as politicians, lawyers, academics, artists, and hackers, all seated at the same table. These meetings led to the organisation of the BuenConocer Summit, which gave rise to the collaborative book “FLOK Society: Buen Conocer”.

FLOK is a political response that is the opposite of “cognitive capitalism”:

“The Knowledge Revolution proposes innovation, science and technology as the foundations for change in the production structure, which is conceived as a distinct form of production and consumption.”

FLOK is still underway and, like free software, it generates a “code” of public policies ready to be learnt and improved anywhere in the world.

Mexico: Rhizomatica

The mission of Rhizomatica is to increase access to mobile telecommunications for the more than two billion people who cannot afford coverage, and the 700 million who have none at all. How do they do it? By using free software and hardware whenever possible for data transmission and reception, studying the relevant regulatory aspects, and training communities to preserve their communication autonomy.

The project began in Oaxaca, Mexico. Its first telecommunications station was set up in Talea, a Zapotec town with a population of 3,000 that had no access to telephone services, let alone the internet. As the coordinator of the project explains: “In Talea, the town owns the equipment. There was an assembly, the townspeople said they wanted the internet, they purchased the equipment with the community’s money, and the infrastructure belongs to everybody.”




There are some 50,000 similar communities in Mexico, and Rhizomatica wants each of them to own their infrastructure. To date, the Rhizomatica team made up of hackers and engineers has installed around twenty communication networks in Oaxacan communities.

And, as they themselves explain, in 2015 they benefited from a section of the Mexican telecommunications reform that assigns certain bands for social use, particularly in zones which lack basic telecommunications infrastructure. This means that Rhizomatica’s operations are totally legal in Mexico.

We hope that they will continue to grow and offer access to mobile communications in general, and the internet in particular, to communities that were taken off the internet map by corporate business models and government negligence.

Cuba: The other internet

The general lack of internet access in Cuba has given rise to “the other internet”, a very modest network that does not use cables, modems, or any kind of infrastructure to connect devices and people: it uses hard drives and, of course, people who carry them from one place to another. It is known as the “Paquete semanal” (The Weekly Package).

Apparently, only 5% of the population has regular access to the Internet, and only 1% in the case of broadband. WiFi is only available in public areas, and the hourly rate is equivalent to 20% of the average weekly wage.

Under these conditions, internet access seems to have remained frozen in the nineties, and it is inconceivable to watch YouTube videos on a daily basis, to leisurely browse any newspaper in the world, or to set up a technology startup, as in other parts of the world.

But this situation is changing with the gradual lifting of the US embargo that began in 2015, and the Cuban government hopes that 50% of the population will have internet access at home by 2020.

What does El Paquete Semanal include? Almost everything. Television series, soap operas, music, films, video clips, a compilation of the leading magazines around the world, software, mobile phone applications, and so on. It even includes ads for local businesses.




Where does the material come from? Somebody has to download it first. The absence of pornographic and political material suggests that the government itself is involved in its production, but we don’t have any other information. Meanwhile, it is a lucrative business that will soon be obsolete if the Cuban government fulfils its promise of connectivity.

What next?

Latin America is a region of cultural, political, and technological extremes. It is logical that there should be so much diversity: some people simply want to watch the latest episode of their favourite series, while others want to build companies using Bitcoin to develop the economies of the future, all via the Internet.

What’s next for Latin America and the internet? Judging by current data, the next five years will bring radical changes to infrastructure and local regulations, which will mean tremendous dynamism for citizens. These changes will provide huge benefits in all senses, particularly in educational and economic terms, but they are also a latent concern in relation to digital rights.

Beyond bytes, machines, and source code, the internet is above all about connecting people, at the local and global levels, through knowledge and information. To “make internet” is to “constantly learn and create from the net, on the net, and for the net”. So given the complexity of the new networks and connections that will strengthen and expand the internet in Latin America, all we have to do now is participate and get ready to be surprised.





Cory Doctorow: “Every euro that we spend spying on our kids is a euro we don’t spend on books”

We took the opportunity to interview Cory Doctorow, who was here to launch his graphic novel In Real Life as part of the Kosmopolis ongoing programme.


What are the benefits of videogames? How should new technologies and the internet be integrated into school education? What’s wrong with copyright law, and how would you change it? We took the opportunity to interview writer and digital rights activist Cory Doctorow, who was here to launch his graphic novel In Real Life as part of the Kosmopolis ongoing programme.

Doctorow considers videogames, like novels, to be a form of artistic expression, and defends their capacity to create new imaginaries. He says that he sees the internet as a tool for collaboration and organisation, and that it should be incorporated into education openly and without taboos. He criticises current copyright law in that it treats all works equally, without taking into account whether they are used for commercial, cultural, or other purposes. He proposes two very clear changes: that authors or creators of a work should have control over the copyright of their own works and the power to authorise a waiver if they choose, and that we should move towards having fewer intermediaries between authors and their audience, as this would allow for fair competition.

Doctorow is an active defender of free online culture, Creative Commons, and the liberalisation of copyright laws. As such, he has transferred the copyright of the interview to CCCBLab so that we can share the content freely.





The Maker Culture in China (III): Hands-On, for Economic and Social Progress

The spirit of learning by doing and the free experimentation that characterise maker processes are keys to renewing the idea of “made in China”.


Create Something. Source: Flickr.

Given the emphasis on innovation in China’s maker culture, and the shift towards the market as a result of its encounter with Shanzhai culture, makerspaces are not just an opportunity to boost the country’s economy but also a means of modernising its educational system, which is rooted in tradition and geared towards the “gaokao” (the university entrance exams), and thus encourages rote learning. The spirit of learning by doing and the free experimentation that characterise maker processes are keys to renewing the education system so that it becomes capable of educating a new productive class of professionals with the capacity to create and to take risks. This will lead to a leap from the idea of “made in China” (the country as a supplier of cheap labour for Western companies) to “created in China”.

Along these lines, in 2011, only a year after the creation of XinCheJian, Shanghai authorities decided to set up and finance 100 “innovation houses”, spaces in community centres set aside for teaching people to use post-digital tools. David Li, who is actively collaborating in the implementation of these centres, explains that “there are already 72 innovation houses being implemented in community centres around Shanghai. Premier Li Keqiang’s media-friendly policy in relation to makerspaces has attracted attention to the initiative, and will lead to significant growth in 2015.”

A more ambitious project is the construction of the largest makerspace in the world at China’s most prestigious engineering school, Tsinghua University in Beijing. The plan includes a 16,000 square metre building in which all students at the university will experiment with open hardware as part of their degree. Last year, Tsinghua also launched an annual Makers’ Day, an event that promotes the integration of maker culture within education thanks to the university’s reputation for excellence.

Government backing for the expansion of the maker movement has also led to Maker Faires being organised in China. An initiative linked to Make magazine, Maker Faires include workshops, exhibitions and demonstrations related to DIY making, and attract enthusiasts from around the world. The first of these events was organised in the United States in 2006, and they soon spread across the planet. China’s first Maker Faire was held in Beijing in 2012 with the support of the Communist Youth League, and later moved to Shenzhen, where the last edition attracted 30,000 visitors.




Another event with a greater media impact was the Chinese Prime Minister’s visit to Chaihuo Makerspace in Shenzhen in January this year, which put the spotlight on the important role of the movement in the country’s economic development. A few months earlier, in November 2014, the Mayor of Shenzhen Xu Qin had published the Declaration of Global Makers, which turned the city into a place that welcomes and promotes this new production culture.

Government involvement in the maker movement is not exclusive to China: the White House recently announced that it will organise its own Maker Faire, and that the Obama administration will support non-profit companies in order to boost the movement. On a more controversial level, the Defense Advanced Research Projects Agency (DARPA) is investing millions in projects that have emerged from the maker scene. This support is grounded in the idea of repositioning makers as an economic movement, a Silicon Valley-style revolution in which maker culture is seen as a new innovation process based on the implementation of technological products that contribute to economic progress. As such, it radically breaks away from the ideals that led to the founding of the first hackerspaces in Europe: the active criticism of the implications of the use of new technologies in society, ongoing experimentation free of economic interests with a view to the disruptive creation of new social processes, and citizen empowerment through collaborative learning of the material processes that shape our world, geared towards having an impact on the world and towards achieving independence from the consumer market. Maker culture does not just bring about new products, but also new forms of labour and production. Now that it has become a global phenomenon that includes China and its massive manufacturing infrastructure, this movement could lead to a new geopolitical distribution of production and economic forces.

And the same could be said within China, where the implementation of new technologies has traditionally been oriented towards economic progress and the control and surveillance of the population. One of the consequences of this has been the creation of a separate communications network that is isolated from the rest of the world and under strict government control, and where freedom of expression is only possible through subterfuge. This, in a country where civic events are looked upon with suspicion and impossible to carry out without state approval and support. And where the population is extremely proletarianised by an educational system based on the perpetuation of tradition and geared towards training specialised professionals to the detriment of critical and speculative thought. For these people who are busy with survival and working to achieve social advancement –which consists of accessing the consumer market– the maker movement can be an initial step towards opening up and citizen empowerment. XinCheJian is a place that welcomes people from all backgrounds: “Being in Shanghai has its advantages, given that everybody comes to Shanghai. XinCheJian receives many visitors, and we give a warm welcome to makers from around the world. Like the city of Shanghai itself, XinCheJian has been international from day one, and this has contributed greatly to its fun atmosphere”. Anybody who goes to this makerspace will find a big, international family, a relaxed space in which new ideas are always being shaped, beyond the constraints of necessity and conventional leisure options, and always with a hands-on attitude. When its founder is asked what has generated the atmosphere, he says: “The idea of keeping it open and fun.”

Introduction to Arduino workshop for children at XinCheJian. Photo courtesy of Sandra Álvaro and David Li.

However, a commitment to collective production should bring about a critical understanding of how these new technologies are implemented in society. Opening the black box of the devices that come to us as given and that make up our culture is a way of putting them back in context, revealing the social, economic and cultural processes that have given rise to them. It has been described as a way of “thinking with objects” by Garnet Hertz, one of the founders of Critical Making, a term that refers to a way of thinking about technology that is linked to the labs set up by Matt Ratto and to other concepts that come up when technology and engineering meet design and the humanities, such as critical engineering and disruptive, fictional design. In China, where new information and computing technologies have been implemented at a very fast pace, and unconnected to their historical development, it is difficult to trace the context of the new culture that is being generated. The country remains focused on advancing towards a new society based on innovation and economic progress, but there has not been a process of humanistic reflection on the theoretical and social effects of the implementation of these technologies. And this plunges the country into blind progress that does not know what kind of society is being created as a result.

* Interview with David Li in March 2015.





The Maker Culture in China (II): Shanzhai, Emerging Innovation in an Open Manufacturing Ecosystem

China offers an alternative for taking maker culture out of the field of entertainment and into the market: Shanzhai, an open innovation ecosystem that has emerged from Chinese manufacturing.


TV-B-Gone SHP Gen5 Assembly Line. Shanghai, 2011. Source: Flickr.

China offers an alternative for taking maker culture out of the field of entertainment and into the market. This model is the result of the encounter between makers and another open innovation process that has emerged from China’s electronics manufacturing system. David Li and his fellow members of Hacked Matter, a think tank focusing on open innovation in Shenzhen, call it “Shanzhai”.

As one of the major elements of the production system that has moved to the so-called developing world, China has developed an extensive industrial fabric around the manufacture of electronic components, mostly in the Pearl River Delta, Guangdong province. This low-cost, flexible ecosystem has remained competitive through the production of copies, remakes, and systems for sharing its designs.

The word Shanzhai comes from the Chinese characters for “mountain fortress” and was used to refer to the outlaws or bandits who opposed a corrupt government and hid in rural areas. The term has now come to refer to the cheap, low-quality copies or fakes manufactured in China. The rejection of individualism ingrained in China’s cultural tradition leaves little appreciation or understanding of intellectual property laws, a peculiarity that has for years allowed these “bandit” manufacturers to apply reverse engineering to devices invented by corporations such as Apple and Sony in order to adapt them to the demand for low-cost products. These products, mostly mobile phones, can be found in the huge department stores or “cybermarts” that offer all kinds of electronic components and gadgets under dubious brand names such as Svumsung, Blockberry, Nakia and Anically, along with other devices of questionable taste, such as phones shaped like cigarette packets or anime characters. They are also widely distributed throughout Asia, Africa, the Middle East and South America, with production reaching 200 million phones a year, accounting for a quarter of the global market. The quality of these imitations has gradually improved and, more importantly, the modification of technology to adapt the devices to small market niches on the sidelines of the big corporations has led to the emergence of innovations that have later been implemented by major brands. This is the case of the dual SIM, for example, which allows a mobile phone to operate with two cards at once: it was developed so that people who frequently commute between countries could save on roaming costs, and the feature was later implemented in BlackBerry devices.




This video introduces the second part of a workshop series organized by Hacked Matter in conjunction with the 2013 Shanghai Maker Carnival.

Remakes are not the only form of innovation in Shanzhai. This ecosystem of small electronics manufacturing companies has kept its costs low and remained competitive by creating systems that allow them to share designs. This does not just lower production costs, it also speeds up the process of creating new products, favouring bottom-up innovation. For example, Taiwanese company World Peace Industrial (WPI), an electronics component manufacturer based in Shenzhen, develops reference circuit boards known as “gongban” (public base), which provide basic functions ranging from Bluetooth connectivity to sensors for measuring the movements, heartbeat, and other vital signs of their users. These gongban are used by a huge number of manufacturers, which either integrate them directly into their products or use them as a base and modify them to create their own. The company produces around 130 circuits a year, which can be used to produce smartphones, tablets, the new smartwatch craze, smart buildings and industrial controls, and it makes money by selling the components. This maker-style approach, based on subsistence and need, has generated a flexible, accessible manufacturing network for fast production, on a small scale, with a small budget, giving makers/entrepreneurs an infrastructure that allows them to turn their projects into products.

Young entrepreneur Pan Hao saw this opportunity and in 2008 he founded Seeed Studios, an open hardware company that emerged from a cross between industrial manufacturing and the hackerspace culture. Seeed Studios offers makers the opportunity to produce and market their ideas by helping them to create quick prototypes and to manufacture small shipments. The company draws on the experience and resources generated by the manufacturing ecosystem in order to offer all the services required to launch products on a small budget, from open hardware components to crowdfunding services. Crowdfunding works as an open production system, based around a website with over 70,000 participants, in which anybody can post their ideas and those that attract the most community support are produced.

This company that began with a staff of two now has over 100 salaried employees and puts around 500 projects on the market each year, ranging from Bluetooth and wifi modules to laser scanners, generating over 10 million dollars in profit per year. Having made it possible for anybody to put a product on the market, it has now become the world’s largest provider of technology manufactured on a small scale. The company, which also has its own hackerspace called Chaihuo, is based in the Shenzhen area, known as the Silicon Valley of hardware, and works under the motto “Innovate with China”. Shenzhen is the heart of the industrial fabric of Guangdong, accommodating a third of the manufacturing companies in the area. The city was the first Special Economic Zone (SEZ), an exception to Chinese protectionism that allows foreign investment, which has made it one of the fastest-growing cities in the world, with a population of around 10 million people. This industrial centre is now the headquarters of the Maker Faire and houses several hackerspaces and other innovation companies similar to Seeed, including HAXLR8R (pronounced “hackcelerator”), a company founded in 2011 that provides support and financing for Chinese and foreign companies. Companies joining this organisation receive an incentive of 25,000 dollars and professional consulting before being invited to Shenzhen to develop their products.

This capacity to attract foreign talent and generate new companies has led the Chinese government to turn its attention to the culture of hackerspaces. Maker Culture and Made in China were two of the topics discussed at this year’s Lianghui in Beijing, the city that hosts the National People’s Congress (NPC) and the Chinese People’s Political Consultative Conference (CPPCC), the two annual meetings that set the agenda for the nation’s economy.





The Maker Culture in China (I): XinCheJian, a Chinese-style Hackerspace

XinCheJian is the first hackerspace founded in China, a space where making meets creation.


Playing Go. Shanghai, 2006. Source: Wikipedia.

China had earned a reputation as the home of fakes and copies. But following a period of fast-tracked economic and technological growth, the country is now committed to the promotion of creativity and innovation. This impulse can be seen in the flourishing “Maker” movement, supported by the government, as a result of which “Shanzhai” no longer calls to mind “a copy manufactured in Shenzhen” but rather a new kind of innovation based on open source manufacturing and ongoing “remake”.

According to the website of the first hackerspace founded in China, XinCheJian, hackerspaces are communities where people who are interested in physical computing and technology can meet and collaborate on projects. They are places where people can learn and play with technology, work in teams, participate in international competitions, and find and create opportunities for everybody.

XinCheJian was founded in 2010 by David Li along with a few of his collaborators. As Li* recounts, its origins were informal: “XinCheJian started quite innocently. I started experimenting with Arduino, 3D printing, and  similar technologies in 2008. As things started to pile up and gradually take over my house, the idea of having a dedicated space began to make sense. We started organising events to talk about makerspaces at XinDanwei, the first co-working space in China, and we attracted quite a lot of interest. I met Minlin Hsieh and Ricky Ng-Adam in 2010 and we decided that it was time to open our own. The name XinCheJian was proposed by Liu Yan, one of the co-founders of XinDanwei. The Chinese word danwei means work unit and, as a collaborative work space, XinDanwei is presumably a new type of work unit; CheJian means workshop and XinCheJian refers to a new way of making and of manufacturing. ”

Project boxes at XinCheJian. Photo courtesy of Sandra Álvaro and David Li.

These digital fabrication spaces are based around local communities. Although they share a common ethos, they adapt and grow in accordance with the characteristics of the environment in which they are based and the community that sustains them. China is an emerging country where the industrial revolution took place at a rapid pace, in parallel to the implementation of digital technology, generating a huge industrial structure focused on electronics manufacturing and inspired by a discourse according to which progress goes hand in hand with technological determinism. Against this backdrop, DIY making in China is not linked to a nostalgia for manual production or to the counterculture, as it is in other countries. In this sense, XinCheJian is a hackerspace with Chinese characteristics: “In most parts of the world, makerspaces are either about nostalgia for the manufacturing methods of the past, as in the case of the United States and Europe, or a new way of approaching the future, as in Africa. But in China, the factory of the world, manufacturing is now, it’s the present. Creating a makerspace where people do things for fun in a country where producing is the main economic activity is the principal challenge, in both the philosophical and practical sense. This is the reason for creating a Chinese-style makerspace.”

Located in the Jing’an district, one of Shanghai’s largest commercial and entertainment areas, around the corner from the huge shopping malls on Changde Road, XinCheJian occupies part of the ground floor of the XinDanwei premises. It is a nestled space, where tools and project boxes collect around the two big work tables. A monthly fee of 100 CNY, or around 15 euros (130 for access to one of the storage boxes hanging on the walls), entitles members to use the space and all the equipment in it and, more importantly, to form part of an open, active community. It consists mostly of white-collar workers with a high educational level, expats, people who are passing through, artists, engineers, designers, programmers, techies, students, and all kinds of people who are interested in making things. They use their spare time, time that is not bound to financial interests and needs, to apply their knowledge and experience to materialise ideas, which are enriched through a process of exchange and cooperation. Members work on open hardware projects such as the Quadcopter, a self-stabilising flying machine based on open, widely accessible designs; projects that are more oriented towards community needs, such as aaiiirrr.org, an open source air filter that aims to mitigate the effects of the city’s excessive air pollution; and fun projects such as Insectbots, robot kits that are easy to assemble and handle, designed for kids. Another area of experimentation is physical computing and the development of “assistive devices” that implement sensors and controllers in order to help people with specific needs to carry out everyday tasks. Lastly, research into the Internet of Things has led to the development of an air quality detector, with which the centre contributes to the international project Data Canvas. Collaborations with institutions and companies at the local and international level are also commonplace, and include, for example, the Toy Hack project, a collaboration with Nestworks to adapt toys so that children with special needs can also learn by playing. This project was a big hit on the crowdfunding television show Makers.


Demonstration of an Insectbot.

The development and maintenance of the community is one of the main objectives of this space. To this end, it organises activities for members including fun events such as Roboracing, in which participants develop robots that compete against each other in a race; educational events such as introductory and advanced workshops where kids and adults can learn to use post-digital tools such as 3D printers and Arduino as well as new programming languages; and activities that are open to the general public, such as Open Nights. On Wednesdays from 7 pm onwards, XinCheJian opens to the public and becomes an open stage where anybody can present their project, by simply signing up and announcing their project on the programme, and following a few basic rules. Practitioners, artists, and theorists from around the world who are involved in this movement have participated in the XinCheJian Open Nights.

XinCheJian is a space where making meets creation, and experimentation with matter takes place as part of a process of exchange and cross-fertilisation between different disciplines, crazy or playful ideas, and social needs. In this context, makers take innovation out of the laboratories of the big corporation and market research studies, and into an open, emerging space in which consumers are empowered by creating their own products. The result of this process of innovation is the creation of new products that often end up on the market, and new associations that sometimes turn into new companies. For example, members of XinCheJian founded the robotics company DFRobot, and other members of the group who don’t have their own startup take part in organising startup weekends, bootcamps and hackathons.

Roboracing at XinCheJian. Photo courtesy of Sandra Álvaro and David Li.

This step from maker dynamics to the market is not a new phenomenon. One of its best-known precedents is MakerBot, a project founded in 2009 by Bre Pettis, Adam Mayer, and Zach “Hoeken” Smith, with the idea of building a user-friendly open hardware printer. The project became the subject of one of the most successful crowdfunding campaigns in the history of Kickstarter, and this initial enthusiasm attracted numerous contributions, leading to the development of a product that is attractive, affordable, and easy to use. In 2012, Bre Pettis turned MakerBot into a private company, closing the designs of the new Replicator 2 and 2X models, modifying the terms of use of the Thingiverse repository, and creating a whole infrastructure for the fabrication and marketing of the product. MakerBot then merged with Stratasys, a company listed on the stock exchange, in a deal worth some 403 million dollars. After a history full of controversy and management and production problems, this device that was supposed to revolutionise open digital fabrication has become just one more consumer product within the traditional market system.

In China, maker production is getting closer to the promotion of new products and companies and becoming more accessible, thanks to the fact that the processes that make open hardware and remaking possible have spontaneously sprung up in the manufacturing ecosystem that is the factory of the world. This unique situation gives China a distinctive role in the Third Industrial Revolution that goes hand in hand with the globalisation of this movement.

* Interview with David Li in March 2015.





The Other Net Behind the Great Wall (II)

Chinanet is an autonomous internet, separated from the rest of the world and characterised by a different usage that is more about performance than participation.


The Great Wall of China at Nankou, 1920. Source: The National Archives UK.

With over 564 million users, China is home to the planet’s largest network of connected citizens. The Internet was introduced in China in 1994, along with the process of reform that began after the Cultural Revolution. The socialist market economy involves opening up the country to foreign capital and implementing new technologies. This policy has led to what Rodney Wai-chi Chu and Chung-tai Cheng have called “riding a double juggernaut”: the rapid, almost simultaneous capitalisation and cyberization of the country. This second part of the article explores the development of ‘Chinanet’, an autonomous internet, separated from the rest of the world and characterised by a different usage that is more about performance than participation.

Chinanet is a recreational space where users spend an average of 20.5 hours a week, mainly connecting from home. The use of cyber cafes, which were the most usual spaces for access and for promotion when the Internet was first introduced, has decreased since computers and service providers have become more accessible. But far from disappearing as they have in other countries, cyber cafes are still a popular meeting place for young people who go there to play online games, and they continue to account for 32.4% of connection time. As we have already mentioned, one of the most popular applications in China is instant messaging, which is used by 82.9% of the population, while email is in decline and used by only 44.5%. The next most popular services are search engines, online music (435 million users), personal blogs, online video platforms, videogames, and microblogs, with 309 million users, 65.5% of whom connect via their mobiles. These are followed by social networks, with 275 million users, and online shopping, which is experiencing strong growth of almost 25% per annum and reached 242 million users in 2013. These figures show the existence of an active digital culture in China in spite of the controls, and usage patterns that differ from those of the rest of the World Wide Web. This particular development is due to several factors, such as the rapid uptake of the Internet along with the consumer market, the country’s increasing wealth, and a discourse that revolves around technological determinism. The differences are also due to Chinese cultural heritage, which is still alive at the heart of Chinese civil society.

In the rest of the world, the Internet is not a technology but a medium that has evolved from the convergence of numerous technologies that began to emerge in the sixties, and of different concepts and theories from numerous fields. It is a software-based medium that has been shaped by market demands, but also through the contributions and uses of very diverse creators and users, bringing together engineers and technologists as well as scientists, psychologists, artists and designers. It is a space of convergence for liberals in favour of a new economy based on knowledge production; philosophers of the virtual realm and sociologists of a new public sphere; post-colonialist anthropologists on the lookout for the Other; cyberpunks fuelling the techno-utopia of an open, autonomous space that would become the seed of a new society; ‘new agers’ who believe in new transpersonal ecologies; artists developing shared authorship environments and new interactive processes; hackers ready to bend the medium and elude control by keeping it open; activists who see it as a tool for civic gatherings and citizen empowerment; and educators fighting for digital literacy and a medium accessible to all. These interdisciplinary groups have contributed to developing the software that in turn has become a techno-social device, and has shaped new forms of interaction and sociability. From the early mailing lists for non-profit projects with cultural or social aims, to open source co-creation platforms and the databases or archives that have generated collective intelligence and communal action, all of these uses have helped to mould the stigmergic, algorithmic medium in which our everyday reality is now immersed. This has made the Internet a living medium, in which there is always tension between the possibilities of a new virtual public sphere and the interests and needs of day-to-day reality.

The introduction of the Internet in China, however, did not go through this evolutionary process, and its users have found themselves in an environment that is tragically separated from the complex ecology it grew out of. If we add to this the peculiar nature of the context in which it has been implemented (a society based on social engineering, behaviour modelling, and the benevolence of the state) we have Chinanet, a space that is separate from everyday life and social life, even though it has connections to physical space, given that the use of RFID cards and other ubiquitous technology devices is commonplace and growing.

Chinese censors at work. Source: Charles Hope.

Chinese society is still influenced by Confucian ethics, in which the universalism and equality that define relations between free, autonomous individuals in Western civil society are replaced by a rejection of individualism in favour of situation-based behaviour and the acceptance of hierarchies. In addition, there are the effects of modernisation, which has increased social differences (there are an estimated 2.7 million millionaires and over 251 billionaires, calculated in US dollars, while 13% of the population lives on less than 2.25 dollars a day) and created strong competition for the scarce opportunities to climb the social ladder. There is a large population of young people, most of whom are only children without siblings, who are under a great deal of pressure to adapt and improve their place in the world, although Chinese society offers few opportunities for change and intervention. For these young people, the Internet is an escape valve: a place where they can enjoy banal entertainment, the anonymity of online forums and microblogs, and the sense of purpose and easy success that videogames offer. The Internet thus becomes a space where normal everyday life is suspended, which the sociologist David Kurt Herold, in Noise, Spectacle, Politics: Carnival in Chinese Cyberspace, compares to the concept of carnival as developed by Mikhail Bakhtin.

Carnival is a period in which normal social distinctions and rules are suspended, and everyone participates on equal terms, protected by the anonymity of masks, without fearing the consequences of their actions. It is a space where there is no difference between spectators and actors, and it is lived rather than performed. It is full of fun but also of ambivalence and parody, which are shared and transmitted and capable of producing new identities and forms of association. In authoritarian societies, control, entertainment and consumption all go hand in hand; nonetheless, wherever there are open spaces for diversion, there is the possibility of the kind of free play that Giorgio Agamben identifies with profanation. Profanation in this sense is a political action, in that it liberates a medium and returns it to common use, giving back spaces that had been confiscated by power. It frees a form of behaviour from its particular genetic ties and allows new forms of subjectivation to appear.

Similarly, the controlled recreational space of Chinanet has given rise to an active culture, a liberated form of behaviour that contains the seeds of a different kind of empowerment. New identities emerge in this culture, embodied by popular bloggers like Han Han and the Back Dorm Boys, young people who adopt new, cheerful and informal behaviour, released from the seriousness and distances of a hierarchical society, and who become hugely popular. A specific kind of language is also emerging out of microblog posts and comments and in forums. Microblogging is one of the most popular online practices in China, partly due to a peculiarity of the Chinese language, which requires roughly a third as many characters as phonetic languages to communicate the same thing. Microbloggers start up anonymous, spontaneous dialogues that lead to the development and dissemination of expressions and practices that are shared and spread virally, creating new forms of interaction and identification. Expressions that have become common include “Penzi”, to refer to a spammer (somebody who uses the medium to distribute advertising); “Zhao Chou”, a kind of troll who posts something polemic in an attempt to trigger confrontation; “Diao Yu”, to refer to a post that is fishing or trying to draw attention; and reproving terms like “Nao Can”, which means idiot, and “Diaosi”, which translates as “loser” and has become a term used to describe those who don’t share the usual definition of success. Other expressions include “Huliansu”, for somebody who posts selfies to increase his or her popularity, and “Xuanfu”, a disapproving term for somebody who boasts of wealth.




As well as a new language, Chinese internet users are developing new practices such as “E-Gao”, a kind of parody made possible by new technologies. The name is made up of the characters “e”, which means evil or devil, and “gao”, which means work, and it has been described by the China Daily as a popular subculture that deconstructs serious subjects in order to entertain people and make them laugh. It is characterised by humour, rebelliousness, subversion, spontaneity, defiance of authority, and mass participation through multimedia technology. Basically, it consists of editing and remixing well-known images, creating short videos in which the original content is distorted to comic effect. It is a comic way of expressing criticism, a playful subversion of discourses linked to authority, which triggers emotional catharsis and is widely shared and spread, allowing users to participate in shaping an institutionalised narrative.

But beyond parody and humour, Chinanet offers other ways of socialising. Non-individualistic societies have a weaker idea of authorship, so it is not unusual to find groups who use the Internet to create a common project. For example, there is an active community of users who create Chinese subtitles for popular foreign television series, making it possible for them to be screened within a very short period of time. There is also a very active, flourishing “maker” movement, which does not have its roots in the counterculture that the movement grew out of in the West, but in more pragmatic principles.

Finally, in among the humour and critical laughter, and in spite of the overt control and surveillance that makes it unthinkable for many citizens to confront the State, there is also a certain activism, which is referred to as “climbing over the wall” and is principally expressed through the use of illegal technologies and memes.

“Climbing over the wall” (fan qiang) is a reference to technological and ideological strategies used to outsmart the Great Firewall. The expression was popularised by internationally recognised Chinese artist Ai Weiwei and activist Bei Feng during a protest against the Green Dam. The Green Dam Youth Escort was a filtering software project, launched in 2009, that was to be installed on all computers sold in China. The project stirred up considerable controversy and was finally overturned due to marketing problems and copyright claims, although the software is installed on computers at some public educational centres.

There are various technological ways of outsmarting the surveillance system and crossing over into the space of the World Wide Web, such as the use of free proxies and encrypted channels for example. The most popular and most secure are VPN (Virtual Private Networks) and SSH (Secure Shell), which use proxies and encrypted channels but rely on a private virtual host or an account located outside of China. Many Internet users in China, particularly foreigners living in the country and researchers interested in the use of the Internet, habitually use these technologies to communicate with the outside world.
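As a hedged sketch of how such an encrypted SSH channel is typically used: the ssh -D option opens a local SOCKS proxy over an encrypted tunnel, and web traffic can then be routed through it. The host name, port, and URL below are invented placeholders, and the example assumes the Python requests package is installed with SOCKS support (requests[socks]).

    # Assumes an SSH dynamic tunnel is already running on this machine, e.g.:
    #   ssh -D 1080 user@server-outside-china.example.org
    # which exposes a local SOCKS5 proxy on port 1080 (host and port are placeholders).
    import requests

    proxies = {
        "http": "socks5h://127.0.0.1:1080",   # socks5h: DNS is resolved at the far end of the tunnel,
        "https": "socks5h://127.0.0.1:1080",  # which also sidesteps local DNS poisoning
    }

    # The request travels through the encrypted tunnel and exits outside the filtered network.
    response = requests.get("https://example.org", proxies=proxies, timeout=10)
    print(response.status_code)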

But “climbing over the wall” also refers to criticism of censorship and to the promotion of freedom of information, expressed through the use of political memes. The term “meme” was coined by the biologist Richard Dawkins in 1976 to describe a unit of information that can be transmitted and recombined. The term has become popular on the Internet to refer to images to which text is added, which allude to widely known facts in an indirect and critical way, and which spread virally through the Internet and become part of the shared imaginary. In China, memes have been an effective means to get around censorship, given that they are not in text format and are harder to decipher. Some of the memes that have become most popular include “dressing nudes”, in which nudes from Classical art are shown with clothes superimposed on them. This is a reference to a China Central Television broadcast of an exhibition at the National Museum of China in which the genitals of Michelangelo’s David were blurred. Another popular meme is based on images of sunflower seeds, used to mock the blacklisting of the name of the artist Ai Weiwei through a reference to his famous work at the Tate. These and other memes can be found listed on the blog www.88-bar.com. But the most popular and widely used meme is the “grass mud horse”, a creature resembling an alpaca, which, along with its enemy the “river crab”, has become one of the mythological animals populating Chinese cyberspace. The meme first appeared in 2009, and is based on a popular children’s song as a means to criticise censorship.




The song recounts how the alpacas, gentle and brave inhabitants of the desert, must defend their pastures from the river crabs. Their courage results in victory, and they manage to throw out the invading enemy. The meme is based on the approximate homophony in Mandarin between cǎonímǎ (“grass mud horse”) and a phrase that translates as “fuck your mother”, and between the words for “river crab” and “harmonious”, which began to be used as a synonym for “censorship” after it was cited as part of the strategy to create a quiescent society that grows harmoniously.

The implementation of the Internet in China reveals the effects of introducing a global technology in a differentiated local space. Unlike the case of Egypt, where the banning of Facebook by President Mubarak led to a mass protest on the streets, Chinese authorities have been able to control and limit the medium by means of complex technology and the use of a shopping mall-style development model. A delimited, controlled commercial space that citizens use without being aware of the potentiality of public space, or of the attacks on their privacy. It is a recreational space, in which in spite of the control and surveillance, net-citizens are becoming actors, cheerful, anonymous performers of new models of behaviour and new forms of subjectivation, against a backdrop in which an incipient civil society is seeking strategies for joint participation.





The Other Net Behind the Great Wall (I)

China’s cultural heritage, along with the strong control mechanisms that accompany the process of opening up, has led to the development of Chinanet.


The Great Wall of China by Herbert Ponting, 1870-1935. Source: The National Archives UK

With over 564 million users, China is home to the planet’s largest network of connected citizens. The Internet was introduced in China in 1994, along with the process of reform that began after the Cultural Revolution. The socialist market economy involves opening up the country to foreign capital and implementing new technologies. This policy has led to what Rodney Wai-chi Chu and Chung-tai Cheng have called “riding a double juggernaut”: the rapid, almost simultaneous capitalisation and cyberization of the country. This first part of the article explores how China’s cultural heritage, along with the strong control mechanisms that accompany the process of opening up, has led to the development of ‘Chinanet’, an autonomous internet, separated from the rest of the world and characterised by a different usage that is more about performance than participation.

Over 564 million users, 331 million IPv4 addresses, 13.4 million registered domain names, 2.69 million websites, and 442 million connected mobile phones make up Chinanet, according to a 2013 report by the China Internet Network Information Centre, creating a virtually autonomous internet that is also the world’s largest network of connected users.

The Internet was introduced in China in 1994, following the opening up process that Deng Xiaoping began in 1978. After the Cultural Revolution, the country found itself with a stagnant economy and a massive population, a situation that forced the rapid modernisation of the country and the opening up of the economy to foreign capital. Rural areas quickly became industrial centres, and the implementation of new technologies was promoted as the key to progress. This led to what Rodney Wai-Chi Chu and Chung-tai Cheng describe in their article “Cultural convulsions: Examining the Chineseness of cyber China” as “riding a double juggernaut”: the country’s encounter with modern capitalism and a rapid process of cyberization at almost the same time. Indeed, modernisation in China has not been a linear process, given that industrialisation and the shift towards the information society have taken place at the same time. Chu and Cheng use the example of the mobile phone: while other countries moved from domestic landlines to the ubiquitous personal communication of smartphones, in China, where limited resources made it difficult to take phone lines to remote rural areas, the mobile has been the first communication device for many citizens. This “double juggernaut”, however, has been accompanied by government attempts to safeguard traditional values and the prevailing ideology because, as Deng famously remarked, “when you open the window for fresh air, you have to expect some flies to blow in.” In order to prevent the harmful effects of these “flies”, or of any spiritual or ideological contaminating agent, the process was kept tightly controlled. More specifically, the Internet was confined within the limits of what is widely known as the Great Firewall of China, a virtual wall that separates China from the rest of the world.

1996 was the year of the Internet in China, and promotional campaigns encouraged people with slogans such as: “Buy Internet, use Internet. Get on board the ark to the next century. Win the prize of the world” and “Internet, the passport of the modern, civilised man.” But while the Internet was being presented as the hope for a new future, it was also seen by many as a hegemonic space dominated by Anglo-Saxon capitalist ideology. This is how one of the young entrepreneurs of the first internet service providers expressed it in a study carried out by Wired magazine in 1997: “Our ideal is to create an exclusively Chinese-language network. It will be a Net that has Chinese characteristics, one that is an information superhighway for the masses.” The desire to guard against the hegemony of the English-speaking world, and against the fears unleashed by the free flow of information and the freedom of association that is inherent to the medium, led to a singular development of the Internet and of its early regulations, which were based on the government’s exclusive ownership of all internet service providers, and the strict control and identification of all users. This meant that the state was in charge of the entire implementation, international standardisation, control, and development of all aspects related to the Internet. It also meant that all connections had to be channelled through the points of access run by the Ministry of Telecommunications, and that the use of any other path of access was banned. And lastly, it meant that all organisations or individual users connected to the Internet were responsible for the content that circulated online, and had to strictly respect regulations, taking particular care to avoid spreading State secrets. To this end, all users had to sign an agreement of responsible use and a service contract, providing proof of identity including residential address and workplace details. These temporary regulations for accessing the Internet evolved into the Golden Shield project, which is popularly known as the Great Firewall of China and was set in motion by the Ministry of Public Security in 1998. The project was publicly launched in 2000 at the China Security trade fair in Beijing, three years before its full implementation. It was developed in collaboration with major international corporations including Sun Microsystems, Cisco Systems, Bay Networks, Motorola, and Canadian company Nortel Networks, among others. It is now considered the largest and most sophisticated surveillance device in the world.

This complex system combines technological, ideological and political means to work on three levels: blocking information on the Internet, coercing online companies, and blocking specific regions or individual users.

The blocking of information is possible because the government owns and manages the entire internet backbone. All the information traffic between China and the rest of the world is channelled through proxies to a single router, where it is analysed and then either blocked or redirected. This means that censorship operates by blocking IPs, DNS poisoning, and maintaining a blacklist of keywords that are used to analyse data traffic.

Error screens blocking access to the requested web content. Author: Sandra Álvaro.

The blocking of IPs is based on the fact that each web page is stored on a server with a unique internet protocol address. Golden Shield lists the IP addresses of certain websites – such as Facebook, for example – and prevents people from accessing them. This has the unfortunate side effect of blocking access to any other website hosted on the same server. Meanwhile DNS poisoning, also known as DNS misdirection or URL hijacking, operates at the level of the domain name system that assigns URLs, which is also under state control. When you enter an address or URL in a browser, your request is sent to a domain name server that retrieves the IP address of the server where the page is hosted. In China, the state-controlled DNS service redirects your request to a fake address, a clone of the requested URL, before it reaches the genuine server. And what you see displayed on your browser is an error message. Lastly, the third strategy is to filter the terms entered in a search, or the content of a requested website, through what is known as URL filtering and packet filtering. In this case, the content of your searches is analysed and checked against a blacklist of keywords. If more than one of these words turns up in the search or in the requested web page, the request fails and you are unable to access the content.
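To make these mechanisms a little more tangible, the logic of IP blocking and keyword filtering can be sketched in a few lines of code. The following Python fragment is purely illustrative: the addresses, keywords and function names are invented for this example, and a real national firewall inspects traffic at the network level and on an entirely different scale.

```python
# Illustrative sketch of blacklist-based censorship (all values hypothetical).
# A real system operates on backbone traffic; this only shows the basic idea
# of IP blocking and URL/packet keyword filtering described above.

BLOCKED_IPS = {"203.0.113.7"}                         # stand-in address of a blacklisted server
BLACKLISTED_KEYWORDS = {"example-term", "another-term"}

def is_ip_blocked(server_ip: str) -> bool:
    """IP blocking: every site hosted on a blacklisted server becomes unreachable."""
    return server_ip in BLOCKED_IPS

def is_content_blocked(url: str, page_text: str) -> bool:
    """URL and packet filtering: block if blacklisted keywords appear in the request or the page."""
    haystack = (url + " " + page_text).lower()
    return any(keyword in haystack for keyword in BLACKLISTED_KEYWORDS)

def handle_request(url: str, server_ip: str, page_text: str) -> str:
    if is_ip_blocked(server_ip) or is_content_blocked(url, page_text):
        # The user only sees a generic error, not an explicit censorship notice.
        return "Error: the connection has timed out"
    return page_text
```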

This system does not operate in a transparent manner. What users see is a message informing them that the server is unavailable, or that waiting time has been exceeded, so it is difficult to know if the requested content has been blocked or if there is some other problem, such as web maintenance or excessive traffic. An estimated 18,000 websites are blocked in China, including prominent news media such as nytimes.com and elpais.com, archives such as archive.org, and social networks including Facebook, Twitter, YouTube, Vimeo and Flickr. This year Instagram joined the list following the flare-up of protests in Hong Kong, and the block on Google was extended to all its services including Gmail and Google Maps. Wikipedia pages that contain sensitive information are also blocked, as are certain search terms on Weibo, one of China’s most popular microblogging services. The database of blocked sites is constantly revised, but some online services offer updated information on blocks, such as the Wikipedia entry “Websites Blocked in China” and the website greatfire.org. China Digital Times, another media outlet that is banned in China, also maintains a list of blacklisted search terms on Weibo.

Companies with an online presence that operate in China are also subject to the administrative measures that govern information and Internet services, as set out in decree no. 292 of the State Council, which states that the Great Firewall forbids all foreign or domestic websites that do not comply with state policies, and that all content is liable to be checked, along with its political background. In this way the government promotes self-censorship, making companies that want to operate in the country responsible for the content of their websites and for monitoring all the information traffic. This is done through specific filtering systems or through employees known as “Big Mamas”. As a result, search engines that operate in China, such as Yahoo and Bing, and local search engines Baidu and Soso, filter the data themselves, blocking access to any sensitive information, which simply does not appear in the search results. In the case of online content based on non-text information, sites like Youku, the largest video hosting network in the country, use a sophisticated digital footprint system that checks whether the content has been approved or blocked previously. In the case of new content, a team of people views and classifies the material before it is uploaded.
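The article gives no technical detail about Youku’s fingerprinting system, but the general idea of checking new uploads against previously reviewed material can be sketched very roughly as follows. This is an assumption-laden simplification: every name in the fragment is invented, and real video fingerprinting relies on perceptual features that survive re-encoding, whereas the plain hash used here for brevity only catches exact copies.

```python
# Rough, hypothetical sketch of checking uploads against moderated content.
# A plain SHA-256 hash only matches exact copies, unlike the perceptual
# fingerprints real platforms use; all names here are illustrative.
import hashlib

APPROVED_FINGERPRINTS = set()   # content already cleared by reviewers
BLOCKED_FINGERPRINTS = set()    # content already rejected

def fingerprint(video_bytes: bytes) -> str:
    return hashlib.sha256(video_bytes).hexdigest()

def moderate_upload(video_bytes: bytes) -> str:
    fp = fingerprint(video_bytes)
    if fp in BLOCKED_FINGERPRINTS:
        return "rejected"
    if fp in APPROVED_FINGERPRINTS:
        return "published"
    # Unknown material goes to the human review team described above.
    return "queued for manual review"
```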

Lastly, the State owns and controls access infrastructure in the physical world, and in extreme cases it can interrupt the service to a region or even to individual users.




In addition to the fact that numerous foreign services are blocked, access to non-Chinese websites is slow and often subject to interruptions. If we add the language problem, it becomes clear that browsing beyond the domain of China is not a very attractive option for users. In fact, only six percent of the hyperlinks that appear in Chinese websites point outside of the Chinanet space. This isolation has favoured the massive growth of local services that replace inaccessible international services. Many of these Chinese versions are developed by young entrepreneurs who have spent time in Silicon Valley. These “sea turtles”, as they are popularly known, have been able to successfully adapt foreign technologies to the characteristics of the Internet in China.

Google, for example, tried to establish itself in China in 2006, accepting the government’s censorship conditions, a highly controversial decision that led the company to move to Hong Kong in 2010. That same year, the giant search engine was blocked in China, followed by all its associated services in 2014. But even before it was banned, Google had already lost the battle to its local competitor Baidu. Founded by the entrepreneur Robin Li on his return to China from Silicon Valley, Baidu was set up with an initial investment from Google, but gradually became completely independent and began to offer an increasingly large number of services. These include a wiki similar to Wikipedia (baike.baidu.com), a blog hosting service (hi.baidu.com), a question-and-answer platform (zhidao.baidu.com), platforms for sharing music (music.baidu.com) and videogames (youxi.baidu.com), and its own GIS, Baidu Map. One of the most popular services is the open discussion forum called Post Bar. Baidu is not alone, though: there are also other popular search engines in China, such as Soso.

YouTube was blocked in 2009 and was replaced by the platforms Tudou and Youku. Youku, which literally means “what’s the best, what’s the most cool”, was founded in 2006 by another “sea turtle”, Victor Koo, and has become the world’s largest video hosting platform, with 90% of its target audience in mainland China. Unlike YouTube, Youku hosts a large selection of syndicated videos from traditional media. Young people go to the platform looking for an alternative to the meagre and overly ideologised television programming and the censored films released in the country. The most-watched content is short videos such as film trailers and television series, mainly produced in Korea. User-created videos are a minority, but they have their own space on Youku Paike. Other multimedia platforms – like Hulu, for example – also have their Chinese versions, in this case iQiyi.

As for social networks, Facebook has been replaced by RenRen, one of the largest social media sites with over 31 million active users. And although Twitter is blocked, China has its own microblogging platforms, including Tencent QQ and Sina, home of the popular Weibo (‘weibo’ means microblog in Chinese), one of the most popular sites with over 50 million users. Popular blog hosting services like Blogger and WordPress are not available in China either, and the gap has been filled by similar services offered by Baidu, Tencent and Sina.

And now we come to China’s most popular service, instant messaging, which has over 461 million users. Although WhatsApp and other international services are accessible, there is a Chinese version called WeChat. WeChat is an essential mobile app for anybody who wants to be wired in China. As well as instant messaging, it provides other social media services such as a user profile and an album that can be shared with contacts, account subscriptions, a geolocation service that sends the user’s position and searches for possible contacts in the area, and entertainment features including a large selection of “stickers” and a tangible computing app called “shake” that detects the movement of the phone and sends a signal to a contact’s phone, so that both users shake their mobiles in unison. WeChat, which channels more than 77% of instant messages, is owned by Tencent, China’s largest internet company. The success of Tencent, founded by 35-year-old entrepreneur Pony Ma, is based on its accurate assessment of how citizens use the Internet in China. As well as owning the most popular service, the company also offers a mass news portal, thousands of online games, forums, blogs, microblogs, and all the services that net-citizens – mainly young people who use their smartphones to go online – demand.

On the left, international services. On the right, their Chinese substitutes. Author: Sandra Álvaro.

And lastly, e-commerce is another service that is on the rise in China. While the popular user-to-user trading platform eBay is accessible in China, it has nowhere near the number of users of its local competitor Taobao, which is part of the world’s largest online bazaar of export products, Alibaba, founded by Jack Ma. Amazon also has a Chinese competitor in DangDang. Groupon, on the other hand, does not have a competitor, but this is because China has a different model of discount shopping called tuangou or group shopping, in which several Internet users from a particular city who are interested in the same product look for a retailer who will offer them a discount for a joint purchase.

The success of these companies offers an insight into the profiles and usage habits of Chinanet users. As well as being separate from the rest of the world and under tight control, the Chinese internet is also characterised by the prevalence of young users – almost 60% are under thirty, and 80% are under forty – most of whom use their smartphones to go online (75% of users connect via mobile phone). In other countries, users tend to go online in search of information, personal promotion and contacts, and email remains among the most popular services, which means that work and leisure converge in the same medium. But in China the Internet is rarely considered to be a working tool – only 32.4% of users connect at work – and is seen more as a playground. The entertainment superhighway.





Reclaiming the City through Play

The use of new technologies in cities through play is a way of putting the use of public space back in the hands of citizens.


Boys playing “Roly Poly,” New York or New Jersey, 1890-1910. Source: The Henry Ford.

By the year 2050, an estimated three out of every four people on the planet will live in cities. Experts claim that only an appropriate use of technology will make it possible for these cities to be sustainable and ensure there are sufficient resources for everybody. What they mean by this is so-called “smart cities”. But will these supercities be more human? Will they encourage social relations among inhabitants? Will they make people feel less lonely? Or will they simply solve problems like energy efficiency and information availability? Many artists, designers, architects and theorists argue that the use of new technologies in cities through play is a way of putting the use of public space back in the hands of citizens. And of promoting socialisation, forms of behaviour based on interaction and cooperation.

Imagine that one morning, as you are walking through the square that you cross every day on your way to the tube, you discover an adult-sized seesaw. Perhaps you cast a sidelong glance and walk a little faster. But what if on your way home that evening you see a group of people standing around, watching a pair of adults having a great time as the play equipment lights up in response to their actions?

This is just what was done by the Australian design studio Eness, which set up a rather special seesaw in a city square in Melbourne: the body of the seesaw contained 33 LED strips that responded to the movement of the installation, creating light environments for the people who were using it. As the studio’s website explains, the objective was to teach participants a bit of physics and, above all, to remind them that having fun is not just for kids.




The installation also managed to trigger social interaction among citizens who did not know each other, and invited them to rediscover, through play, elements of their surroundings that are no longer public spaces or places for gathering and sharing, and have become transit zones.

For some time now, this has also been the goal of many artists, architects, designers, theorists and activists who have been exploring this idea of reclaiming public spaces in cities, and turning them back into gathering and socialisation spaces through the use and integration of technology.

And this idea has become even more important in the current context, with the popularisation of the idea of Smart Cities sheathed in digital skins, with connected buildings that save energy and share information, adapting to their inhabitants and learning from the ways in which they use the city. Many experts argue that these mega-connected cities will be the only way to manage the fact that 75% of the planet’s population will live in cities in 2050. And that technology will not be an added value, but a necessity. But will these smart cities also strengthen human relations? Can we use the huge numbers of smart sensors and connected computers that make cities smart to expand and enhance public spaces?

The importance of play

Since 2008, the installation Play Me, I’m Yours has toured to numerous cities around the world. The man behind the project is the artist Luke Jerram, who came up with the idea of setting up a piano in public areas so that anybody who felt like it could play and enjoy it. The chosen sites cease to be transit zones and become places where things happen, and the instrument becomes a wonderful catalyst for spontaneous social dynamics: the piano and its surroundings become a space for sharing, for social interaction. And the experiment does not end there, because each city that forms part of the project has a website through which people can access all kinds of material created during the experience, including photos, videos and texts.




Perhaps the people who are worst off in cities are children. They barely have areas in which to play, and the few playgrounds that exist are usually designed for very young children. This means that from the age of five or six the kids get bored, and it has become increasingly common to see them playing with their parents’ mobile phones or tablets.

A few years ago, Narcís Parés came up with an idea for encouraging group play and socialisation in parks while, at the same time, tackling one of the major public health problems of the twenty-first century: childhood obesity. This researcher at Universitat Pompeu Fabra (UPF) designed an inflatable slide like the ones that can be seen at funfairs, and enhanced it with interactive technology.

He turned the surface of the slide into a videogame screen in which kids play through their movements. A computer vision system detects their actions and reacts accordingly. The children launch themselves down the slide, over and over, in order to hit a green ball or a Martian with their body. And then they have to climb up quite a few stairs if they want to play again. “The aim of the interactive slide,” Parés explains on his website, “is to become a tool to encourage children to do more physical exercise while socialising with other children”.

Another project that seeks to give public parks back to children and at the same time promote cooperative play is Hybrid Play, by the group Lalalab, which consists of the artists Clara Boj and Diego Díaz. They came up with the idea in 2008, when they published an article entitled “Hybrid Playground: Integration of Videogame Tools and Strategies in Children’s Playgrounds”, in which they suggested that these public spaces that are supposedly designed for children could become stages for interactive play and promote collaboration. But the technology was not yet sophisticated enough, which is why they are now taking up the idea again.

As they explain on their website, “Hybrid Play is a system that seeks to rediscover urban spaces such as parks, combining traditional street play with videogames.” Hybrid Play consists of a robust device that looks like a giant clothes peg, and is placed over a piece of play equipment such as a swing, a slide, or a seesaw. It works with an Arduino-compatible circuit board that uses sensors such as accelerometers and gyroscopes to detect the children’s actions. Both the “peg” and the software that runs it are open source. What happens in the park, the children’s real actions, affects what happens in the videogame on the mobile.
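Both the hardware and the software of Hybrid Play are open source, but the snippet below is not taken from the project. It is only a minimal, hypothetical sketch of the kind of mapping involved: made-up accelerometer readings from the “peg” are turned into events in a game running on the phone.

```python
# Hypothetical sketch of turning swing-sensor readings into game events.
# Thresholds, readings and event names are invented for illustration;
# Hybrid Play's real firmware and app differ in the details.

SWING_THRESHOLD = 1.5   # acceleration (in g) above which we count a strong push

def to_game_event(acceleration_g: float):
    """Map one accelerometer reading from the peg to a game event, if any."""
    if acceleration_g > SWING_THRESHOLD:
        return "jump"    # a strong swing makes the game character jump
    if acceleration_g > 0.5:
        return "move"    # gentle movement nudges the character forward
    return None          # hardly any movement: nothing happens in the game

# A short, made-up stream of readings sent from the peg to the phone:
readings = [0.2, 0.8, 1.9, 0.3, 2.4]
events = [event for event in map(to_game_event, readings) if event]
print(events)   # ['move', 'jump', 'jump']
```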

Another example – which is not just targeted at children – is the interactive musical swings that literally took over one of the busiest spots in Montreal (Canada). The installation, called The Swings, was created by the Daily Tous Les Jours studio and consists of 21 swings that work like piano keys; each movement generates a note, but all the swings have to work together to compose a tune, obliging citizens to cooperate and adjust their actions to those of the others. Over the past few years thousands of people have had a turn on these swings; in fact, each swing has swung an average of 8,500 times a day.

Towards the hybrid city

All of these projects are based on the concept of the hybrid city, in which physical and digital space overlap. Along these lines, an increasing number of new projects aim to recover the city and to connect citizens through play and technologies, creating a kind of hybrid urbanism. Another example is the popular StreetPong in Berlin, where a group of German students set up two consoles of the famous game – in which players hit a little ball – at two of the city’s traffic lights facing each other. The idea is for strangers to play and interact while they wait for the traffic lights to turn green so that they can cross, thus also promoting civic behaviour.




Another project that seeks to reclaim public space through play and social interaction is a version of the legendary Tetris called Lummo Blocks, designed by the collective Lummo. It can be used by up to four players at a time, who have to cooperate to form Tetris pieces with their bodies and place them in the appropriate position. A similar project is Javier Lloret’s Puzzle Facade, which transforms buildings into huge Rubik’s Cubes. It transformed the façade of the headquarters of the Ars Electronica festival in Linz (Austria), for example, into an enormous cube that invited passers-by to participate in an interactive experience.

One of the artists who has specialised in this concept of play, citizen participation, and human behaviour in public space is Britain’s Chris O’Shea. In his work Hand from Above he placed a huge screen in a very busy square, the kind of place that we barely notice as we walk through it. A camera recorded pedestrians and projected the images on the screen. Every now and then, a huge hand appeared and ‘tickled’ the image of the people, shrinking or stretching them, or even making them disappear altogether. It selected pedestrians and playfully transformed them, interacting with the surroundings.

Another interesting project along these lines was the one that O’Shea produced for the BBC, the British public television network. It was called Big Screen Quiz and it worked on the basis of questions and answers. A giant screen located in a public space invited passers-by to form groups in order to play. They were asked questions and offered a choice of four possible answers, so that they had to discuss them and agree on the correct choice. They then had to position themselves in one of the four virtual squares that were projected on the floor. The area with the highest concentration of movement after a countdown was the chosen answer.

Then again, if you are tired of the stress and noise of the city and need to escape to an oasis of peace and tranquillity, your best bet is to download the mobile app Mapa mudo, by Sandra García, a search engine for spaces of silence in the city that allows you to find them, but also to add and share personal silences. Innovations in smart cities that make urban life better in all senses, particularly the human sense.





Africa Recycles and Creates Technology

In the midst of e-waste traffickers, Africans are endeavouring to find ways to breathe new life into these discarded materials.


E-recycling. Source: Wikipedia.

The perverse circle of technology closes when the countries of the North dispose of their electronic waste. They sweep it under the carpet in invisibilised places in the South, such as Africa – the very same continent from which some of the raw materials required to produce the technology were extracted in the first place. E-waste traffickers are not concerned about the impact of this waste, they simply take it far away, out of sight. But in the midst of this almost apocalyptic scenario, Africans themselves are endeavouring to turn the tables, to make the best of whatever small positive effect can come out of this technological aberration. They do so by drawing on some almost anthropological strategies such as the connection with the environment, and immunisation (at least for the time being) against frenzied technological consumption.

In mid-October, Ghanaian architect DK Osseo-Asare showed some photos in which it was possible to make out the shapes of some young people in the midst of dense black smoke, surrounded by rubbish. “All that matters is the fact that these young people are condemned to get sick and die,” he explained during a visit to Barcelona. Along with Yasmine Abbas, DK Osseo-Asare is co-organiser of the Agbogbloshie Makerspace Platform (AMP), a collaborative initiative based in the e-waste dump at Agbogbloshie near the Ghanaian capital Accra, which is considered to have the world’s worst pollution problems.

DK explains that the unorganised workers at the dump earn their own and their families’ livelihoods there. This makes it unlikely that they will stop working even if they put their lives at risk by burning the plastic covering the components they harvest, for example, and thus exposing themselves to all types of deadly pollutants. For this reason, AMP focuses on finding ways to allow them to continue to work, but to do so safely. At the same time, the initiative aims to find ways to breathe new life into these discarded materials. AMP is one of the best examples of “upcycling”, a trend that goes beyond reusing discarded materials, and instead finds a use that adds value. In other words, it is a process in which recycling does not entail a gradual loss of value, but actually increases value.

Not far from Ghana, in Lomé, the capital of neighbouring Togo, Koffi Sénamé Agbodjinou is the soul of WoeLab, an atypical fablab that aspires to light the spark for a revolution combining urbanism, technological innovation, and social change. Sénamé is convinced that technology can change the way cities are built, and at the same time transform the social relations that take place in them. But aside from complicated theories, WoeLab is the space and the ecosystem that made it possible to create the African-made 3-D printer from e-waste, called the W.Afate. This invention is the ultimate expression of what Agbodjinou calls LowHighTech, or modest cutting edge technology.




This idea of technological innovation that Agbodjinou defends is not just modest because it uses waste materials and discarded components, but because it is within everybody’s reach. Everybody can access it and, moreover, anybody can build it. The founder of WoeLab believes that the transformative power of technology must be available to everybody, if it is to exist at all: an aspiration that is, above all, genuinely African. “We don’t see recycling as an intrusion, because recycling is always on our minds; we try to fix or reuse everything before throwing it away,” Agbodjinou says. Their high tech is popular, accessible, practical and democratising, in contrast to the West’s elitist, discriminatory idea of high tech that leads to the widening of the digital divide.

Emeka Okafor is one of the best-known figures of the maker movement in Africa, which draws heavily on craftsmanship as a fundamental aspect of the creation of technology. Over time, he has become the mentor of many of the projects that are springing up along these lines. The clearest example of his work is the promotion of the Maker Faire Africa, which will celebrate its fifth edition in Johannesburg in December this year. Okafor expresses the same view as the others mentioned above when he says that, technologically speaking, the maker movement “is not a new phenomenon in Africa, it has always existed.” This promoter of innovation is referring to the tendency of the inhabitants of Africa to make their own inventions, reproducing devices that they are unable to acquire because of the whims of the market.

Okafor argues that in an environment like Africa where raw materials are scarce, recycling is one possible solution, and a highly accessible one. This explains why the continent is giving rise to projects that serve as a model for the countries of the North, and reveals the creative potential of people who are used to finding their own solutions to problems.

Upcycling, modest high tech, and the craft of technological innovation are similar, what we could call “sister” dynamics. They share the same desire – to find solutions to more or less commonplace problems; the same strategy – creativity that goes beyond formal technical knowledge; the same materials – reused discarded resources; and, above all, the same objective – to bring about social change.

As the hacker current and the maker movement are closely tied to the technological society, we tend to imagine them as Western, from the North, from so-called developed countries, even if they are actually dynamics with a critical DNA. The assumption is, unconsciously and unintentionally, another of the fruits of our Eurocentrism. Koffi Sénamé Agbodjinou makes a revealing observation: “In hacker communities I found many similarities with the tamberma constructors of north Togo. They use local materials that they have at hand, they take advantage of the know-how of the community, they seek to share knowledge, they bring together specialists from different fields, and they feel they have a duty to give back to the community.”

The achievements of hacker groups around the world are beyond question, even though it is difficult for them to get access to the most highly specialised circles. The ever-increasing numbers of people who form part of them are managing to secure their place in international conventions, conferences and fairs related to technological innovation. They sometimes still seem like rather exotic figures, although they also attract sincere appreciation. In any case, their contributions arouse considerable interest and, often, admiration. These positive outcomes (both the appreciation and the materialisations, that is, the devices they create) are promoting the proliferation of specific spaces for their work. Events linked to the maker movement and fablabs are springing up all over Africa, just as tech labs did in earlier years, and often hand in hand with, or sheltered by, these older siblings.

There is hardly a digital community worth its salt left in Africa without its own makerspace, or at least a programme of workshops or small technological maker fairs.

The W.Afate, the first 3D printer “made in Africa” using waste material, in WoeLab, Togo. Author: Nick Normal.

Over a decade ago (an eternity in technological terms) there was already some buzz about the “inventions” that African citizens came up with so that they could use mobile phones, for example, in rural environments where they had no access to electricity. Although not much time has gone by since then, it seems as if eons had passed because technology has gone through so many changes. Technological innovation requires imagination, creativity, and the will to overcome obstacles. But above all, it requires the ability to put the popular saying of “making a virtue of necessity” into practice. History has shown that all of these are plentiful in Africa, and that the only element of the equation that is missing or scarce is material resources. But this is not an insurmountable obstacle either: recycling has become the way of accessing the necessary materials.

In Africa, the spirit of innovation has arrived before the more entrepreneurial drive. The continent was the source of many of the raw materials that allowed the tech industry to grow, but the big corporations did not consider Africa an attractive place to set up a manufacturing industry in this sector. Due to its marginal nature (in the etymological sense of “being on the margins”), the maker movement and all the dynamics that go with it have managed to find their way through the cracks that the emerging digital ecosystems have opened up in the continent’s isolation. As well as its desire to build and create, the maker movement (as its manifesto declares) relies on the foundation of community and collaborative work. Given this, it would certainly be impossible to find a more appropriate and better predisposed breeding ground for makerspaces than African societies.

Beyond individual capabilities, the communities that have embraced the maker movement, the spirit of open source, the hacker philosophy, and the practice of recycling are coming together to form an increasingly dense web. They are strongly interconnected (we need look no further than the three examples given here, which collaborate and, curiously, mention each other); they “infect” other communities with which they share an interest in technologies and the desire for social change; and they are having a profound impact on young people. All of these mixes of relations, examples, models, collaborations, proposals, and materialisations form a complex network that releases creative energy, and it is difficult to gauge what its limits may be. For the time being, we can safely affirm that they are capable of generating hope in the most contaminated place on earth, and of constructing a 3D printer using electronic waste.





Privacy and Innovation: an Impossible Equation?

Respecting the legal framework and the needs and desires of citizens and consumers is a basic prerequisite for the success of any new product or initiative.


“Private Property No Thoroughfare” sign. Cambridge, USA, 1954-1959. Source: MIT-Libraries.

Lately it has become quite common to hear people say that legislation is unable to keep pace with technological innovation, and that it is thus an absurd and unnecessary impediment to innovation. Similarly, there is a widespread idea that the emphasis on data protection and privacy is an outdated obstacle to technological progress. Nothing could be further from the truth. Respecting the legal framework and the needs and desires of citizens and consumers is a basic prerequisite for the success of any new product or initiative. Finding the right balance between privacy and innovation is the last frontier of the information society.

In fact, these types of claims that cast doubt on the usefulness of regulation tend to come from people with links to the tech industry, and they are rarely gratuitous. It is legitimate to wonder, for example, whether Facebook founder Mark Zuckerberg’s claim in 2010 that ‘privacy is no longer a social norm’ was a description of reality or wishful thinking by the owner of an advertising company that works with a business model based on data mining.

In reality, figures show that the value of privacy is on the rise. Although we live in technophilic societies, cases such as the scandal over stolen celebrity photos, bullying among teenagers using material stolen from social media, and identity theft using inadequately protected personal details, to say nothing of the former CIA analyst Edward Snowden who denounced the US for mass spying, are making it increasingly clear that our society is in urgent need of an open, serious debate on the impact of new technologies on our lives and on democratic safeguards. In spite of claims that young people don’t care about privacy and happily share sensitive data on social networks, for example, the truth is that teens are much more actively engaged in managing their online privacy than many adults: the difference is that teenagers focus on concealing information from specific people close to them (parents, teachers, classmates) and are less aware of possible future implications (employability) or of government surveillance. Likewise, CIS surveys in Spain show a 5% drop in support for video surveillance from 2008 to 2011 (no later data is available), while the number of complaints to the Spanish Data Protection Agency increases every year and educational projects that raise awareness about the personal data explosion, such as Big Bang Data at the CCCB, attract thousands of people.

The issue of privacy does not seem to be going away, quite the reverse. More and more, the realisation that certain technology solutions betray the trust of users is behind high-profile, expensive failures. In February this year, for example, when it became known that the instant messaging company WhatsApp failed to properly protect chats, and that these could be accessed by third parties, two million Spaniards migrated to an alternative similar service, Telegram. In other words, bad data management and the inability to strike a balance between privacy and innovation led WhatsApp to lose a significant part of its market share in Spain. Similarly, in the last few months London and New York have had to shut down systems designed to count and track people in public spaces, when the media revealed the existence of systems integrated into litter bins and phone booths that monitored data and analysed geolocation information by tracking the MAC addresses of the mobile telephones of passers-by. Once again, the disregard for privacy led to the failure of initiatives that claim to be innovative but are unable to integrate variables such as social acceptability and consumer and citizen trust. So does this mean that privacy hinders innovation? Probably only in cases where the developers of technology solutions forget to factor in respect for the subjects of the data on which their business models depend.

Drone with GoPro digital camera. Source: Wikimedia Commons.

One of the hot issues in the innovation-regulation debate right now is unmanned aerial vehicles, popularly known as ‘drones’. According to Business Insider, the commercial market for drones will reach 98 billion dollars over the next decade, and hundreds of companies big and small are already investing in them. Nonetheless, the legal framework for their use has not yet been defined, and the recent temporary ban on their use by the Spanish Civil Aviation Authority has sparked debate on how red tape interferes with the path of innovation. But the discussion can be approached from a different angle: as a society, can we allow the widespread use of aerial vehicles that offer citizens no basic safety or privacy safeguards, when there are no accountability or control measures to regulate the use of a shared public resource such as airspace? To put it more simply: do we want devices flying around that can fall and hurt somebody, or capture images of private spaces, with no legal responsibility at all?

We can think about this in terms of another technology that brought about changes on many levels when it was introduced: the car. With the invention of the combustion engine, we were able to make vehicles that cut down travel time and reduced transport costs, but could also cause serious accidents. As such, we reached compromises: car manufacturers were forced to manufacture cars with built-in safety measures for passengers and pedestrians, maximum speed limits were set, traffic lights were put in place, and a licence plate system identifying each vehicle was introduced, along with associated insurance and civil liability mechanisms. We can’t win every battle, and today’s cars still have the capacity to go much faster than the legal limits, but no manufacturer today would dream of releasing a car without a seat belt, for example, even if it costs them more.

So regulation only hinders or blocks bad innovation: innovation that has failed to grasp the values and needs of the society that it forms part of, the expectations of its potential clients, and social consensus expressed in the form of legislation. In the world of innovation, it is quite common for technical solutions to emerge before the social, political and legal consensus that can minimise their potential negative effects. When this happens, innovative solutions face an extra challenge: to propose acceptable consensus. When drone developers can propose measures that allow citizens to exercise their rights (if a drone falls on their head, or if they are filmed in private spaces by unidentified flying objects), they will be one step closer to the legalisation and social acceptance of these devices, and to fulfilling their business potential.

The demand for a kind of innovation that strikes a balance between excellence and social, ethical and legal responsibility actually comes from the realisation that many technological innovation processes in recent years have underestimated their social impact, provoking political, technological, social and economic ‘fiascos’ that have been detrimental to everybody involved. When companies invest in new systems – whether they be drones, tracking systems integrated into litter bins and phone booths, maritime surveillance systems, body scanners, mobile apps, or ‘smart’ urban solutions – disregarding the privacy of customers, users, or citizens comes at an increasingly high price. As a result of the growing number of ‘fiascos’, more and more public and private entities are demanding that every new initiative, strategy or project must include a legal, ethical and social impact study, and institutions such as the CCCB are opening up a debate about striking the right balance between privacy and innovation.

Rather than adding a new layer of bureaucracy or a petty prerequisite, taking into account variables such as the desirability, acceptability, ethics and data management of innovative proposals is actually a way of dealing with a market error that becomes clearer and more pressing every day: the need to provide privacy-enhancing technologies that meet the needs of citizens, city councils, police forces, companies, etc. Failure to invest in research on anonymisation and encryption systems (through pseudonymisation, generalisation, randomisation or their many variants), to respect the right to access, rectify and delete data, and to take into account mechanisms that guarantee the right to be forgotten and to reconcile this right with security and transparency, means self-exclusion from the future. Anybody who fails to understand that respecting privacy gives them a competitive edge is likely to be left out of the technology scene. It would be like selling cars without seat belts, or refusing to improve airbag systems.
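As a purely illustrative example of one of the techniques listed above, pseudonymisation can be as simple in principle as replacing a direct identifier, say the MAC address of a passing phone, with a keyed hash: records can still be counted and linked to each other, but not traced back to a person without the secret key. The sketch below is a minimal illustration under those assumptions, not a complete data-protection design.

```python
# Minimal illustration of pseudonymisation with a keyed hash (HMAC-SHA256).
# The secret key must be stored apart from the data; rotating it regularly
# breaks long-term linkability. This is a sketch, not a full design: there is
# no key management and no protection against re-identification by other means.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-real-secret"   # hypothetical key, for the example only

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. a phone's MAC address) with a stable pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# The same device always yields the same pseudonym, so foot-traffic statistics
# still work, but the original MAC address is never stored.
print(pseudonymise("AA:BB:CC:DD:EE:FF"))
```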

So are innovation and privacy inextricably linked? Only insofar as innovation is inseparable from the society that it forms part of. And everything seems to suggest that present and future generations will increasingly demand a leading role in the information society, rather than being content to remain data providers with no rights and no say.





Web 2.0 Ten Years On

A decade later, we can confirm the far-reaching nature of the radical changes provoked by this phenomenon, which has two sides that merit reflection.


A crowd demonstrates in Cairo, 1951. Source: Wikimedia Commons.

In 2004, the O’Reilly Media Group organised a conference in which the term ‘Web 2.0’ became a popular buzzword to refer to a new Internet model that revolved around the relational dimension. A decade later, we can confirm the far-reaching nature of the radical changes provoked by this phenomenon, which has two sides that merit reflection.

Web 2.0: 2004-2014

It is widely accepted that the ‘Web 2.0’ concept was coined at a conference organised by the O’Reilly Media group in October 2004. Although other authors had already used the term previously, the amplification of this new stage was very closely bound up with this conference at which survivors of the dotcom collapse in the late nineties, summoned by an elite of cyber-ideologues, announced the emergence of an innovative relational model that placed users at the centre but also transformed them into a source of almost inexhaustible ‘raw material’ for the digital universe.

This development marked the start of the great call for the participation of (digital) citizens, who were offered a series of tools that enabled information-sharing, interoperability, and ongoing participation in content creation, as described in the corresponding entry in Wikipedia, which is one of the triumphant projects of the new ecosystem that started to take shape a decade ago.

In spite of the objections of the creator of the World Wide Web, Tim Berners-Lee, who described the term ‘Web 2.0’ as ‘a piece of jargon’, the advent of collaborative digital technologies has favoured a series of transformations that were unimaginable in the late twentieth century. Social networks, wikis, blogs, mashups and folksonomies are the result of cumulative changes in the way software developers and end users make use of the web, and they have allowed a flourishing of social and cultural promises with maximalist objectives at the local and global levels. The most far-reaching include providing access to all the knowledge accumulated by humankind throughout the centuries, and working towards a truly democratic culture in which social media play and will continue to play a decisive role.

Ten years after its birth – a decade that has led us into a jungle of jargon and idiolects pertaining to the Internet and the technologies derived from it – the phenomenon that we know as Web 2.0 can be considered from a more critical and complex vantage point. We can see it as a two-sided phenomenon that we must continue to reflect on, in spite of the fact that the fast pace of technological development can quickly make any theory or critique obsolete, even when they are not based on fears or preferences but simply try to return to the question of ‘meaning’. This article attempts to take stock of the positive and negative aspects of the 2.0 phenomenon. Side A of the social web is a defence of the evolutionary leap it has entailed. Side B negates and challenges its accomplishments and casts a shadow over the future of the Net.




Side A: The Common Good

Side B: The Shadow of Big Brother

The Speed of Change

Before we can come to grips with the true feats and obvious dangers of this second stage of the global use of the Net, we are already ‘included’ in a new stage (Web 3.0) that entails all the benefits and risks of the datafication of the world and promises substantial innovations in the fields of artificial intelligence and 3D technology. Controversy surrounds the naming of this new cycle: the semantic web, the Internet of things, the Age of Big Data, etc., but it involves converging phenomena that highlight accelerationist tendencies.

The speed of change is the only constant, and this makes it difficult to create legal frameworks that restrict, regulate or shape the way new technologies affect information and knowledge. Rights and laws are slow-moving, always dragging behind the technological innovation sprint.

What are we to think? How are we to act? It seems clear that we cannot move forward without seeing the Internet as part of the solution rather than just part of the problem, and trying to expand and develop its positive aspects. It may well be that the social web exalts the role of prosumers, bricoleurs and contributors, that it favours true participation, that its paradigm of co-creation enables a true democratisation of culture and of politics, and that it promotes collective intelligence and the clarification of ‘common assets’. But it is up to us citizens to actively shape the new social, political, cultural and scientific stage, which is both physical and virtual. This is the battleground for the struggle to gain power over the narratives that define our current status as a society. That is where the present-future of the world we are creating is being played out.





Digital Africa. A developing ecosystem

There is an Africa that innovates and looks to ICTs as a means to solve its problems through a citizen-led process.


Kids Hacker Camp, iHub Nairobi. Source: iHubNairobi.

There is an Africa that is in tune with technology: this is not a hypothesis, it is an affirmation. There is such a thing as a technological Africa; a digital Africa; an Africa 2.0; a connected Africa. There is an Africa that innovates and looks to ICTs as a means to solve its problems, and that is determined, in spite of its shortcomings and difficulties, to ensure that it seizes the opportunity of the information society to get on the train that it was forced to miss in the industrial revolution, for example. In short, there is an Africa that wants to take control of its future on many different levels through a citizen-led process.

A billion people live in Sub-Saharan Africa, half of them young people below the age of twenty-five. Up until the start of the 21st century, Africa has been extremely deficient in technology and infrastructure in general, but what will happen from this point onwards? A process of democratisation and increased access to technology is taking place all over the world, and Africa is no exception. Sure, it’s lagging behind other regions, and the process is slower than on other continents. But the shortfalls, problems, and obstacles that had to be negotiated in Africa are precisely why the implementation of the information society has certain specific characteristics in African countries. One of these characteristics is the fact that citizens have placed themselves at the centre of the process. Civil society and individual initiatives in Africa are much braver, more cutting-edge, and more constructive than those of governments, which remain a step behind communities of connected citizens.

Less than ten years ago, in 2005, barely two percent of Africans (including North Africans) were connected to the Internet; today, nineteen percent venture online. This percentage is clearly still much lower than the worldwide average of 40.4 %, but it’s just as clear that the gap is closing at a dizzying pace. The number of African internet users this decade has increased by more than a thousand percent. The internet penetration rate worldwide is twice that of Africa, but ten years ago it was seven times greater.

This connected Africa is a multifaceted prism. ICTs are used in the agricultural, livestock and fishing industries (not just for marketing products), and in health and education, for example, and the first tentative e-government initiatives are starting to emerge. In addition, technological innovation is becoming an economic sector in itself, one that is governed by different laws than those that regulate the exploitation of raw materials and as such opens the door to different types of commercial and financial relationships. But above all, web 2.0 tools are providing civil society with a means by which to develop their interests, promote social and political participation, denounce democratic deficits, supervise election processes and governments, draw attention to abuses: in short, to shame those in power and to become catalysts for change.

Digital Africa is a complex ecosystem, both within the 53 countries that make up the continent and among them. A huge number of actors are involved, from governments to transnational corporations; from individual internet users to civil society groups that are colonising the virtual environment; from creative entrepreneurs to companies from other sectors that seek technologies tailored to their needs. All the different initiatives are interconnected: they fuel and inspire each other and awaken new interests, and the actors interact and generate new realities. The points of contact between all these elements multiply, increasing the complexity of the ecosystem. At the same time it grows, attracting new actors and new initiatives in a dynamic of expanding connections that constantly exceed the established limits. Notwithstanding the fact that every situation is different, and that it is always dangerous to start talking about “Africa” in general terms, there are some recurring elements that appear to characterise this phenomenon:

It is an urban phenomenon, but one that is becoming less exclusively so

The difference between the rural and urban worlds is almost the norm all over Sub-Saharan Africa. Infrastructures have grown considerably in most cities, while in rural areas they remain bogged down. The urbanisation of the continent is taking place at a dizzying rate, almost as fast as the implementation of ICTs. At the moment, around 40% of the African population lives in cities, but forecasts suggest that one of every two Africans will live in urban centres by 2035, rising to almost 60% (more specifically, 58%) in 2050. But we don’t need to look that far ahead: by 2015 three African cities – Lagos, Cairo, and Kinshasa – will have a population of more than ten million, while only thirty years ago the figures were 3.5 million for Lagos and 2.7 million for Kinshasa.

In the major African cities, fifty percent of the population has internet access (while the average for the continent, as mentioned earlier, stands at 19%). And the majority of university-educated young Africans – who match the profile of the most common internet user – live in these urban centres. In spite of these differences, many initiatives are aimed at bringing internet access to rural environments. Internet activists themselves believe that expanding into the countryside is one of the basic challenges for the democratisation of the internet, and increased mobile bandwidth is turning out to be their main ally.

A hackathon at the iHub in Nairobi to find technological solutions to natural disasters and crises. Source: Erick (HASH) Hersman.

Appropriation of technology, reuse, recycling, and adaptation

The potential uses of ICTs in Africa have grown exponentially because the main actors rarely stop at the prescribed uses for pre-existing tools. Adapting technology to specific needs has become the main concern of users (at least those who are most actively involved), either by designing and making new devices, adapting existing tools or, at the very least, devising uses other than the ones they were designed for. At one end of the spectrum, there are many initiatives that try to breathe new life into electronic waste through ingenious, creative recycling, in an attempt to avoid the terrible fate of becoming the dumping ground of the information society.

Cyberactivism and technological innovation

Two particularly striking phenomena have emerged in the expansion of the African digital ecosystem. On one hand there is cyberactivism, by which civil society takes action in order to secure a greater role in social and political processes through the use of ICTs. On the other, there is technological innovation, which has found its ultimate expression in the emergence of tech labs or tech hubs: co-working spaces and business incubators, which are the ideal breeding ground for technological social entrepreneurship and developing start-ups based on social awareness. These two phenomena overlap, interact and feed off each other, in a complementary and collaborative spirit.

Transnational relations

The small scale of local communities probably helped to make it easier for users to get to know each other and develop interpersonal relationships. As the communities grew larger, these bonds made the connections grow stronger. In the current situation, these connections have led to joint initiatives: users are aware that other actors in the ecosystem (such as transnational corporations) cross borders, and that some of the challenges that they are facing involve more than just one country. So the most natural thing to do is to join forces in a virtual space without borders to meet those challenges.

The transforming spiral

As the development of the African digital ecosystem is recent, we can trace its evolution. One of the recurring triggers was a response to political crises, which led to a desire for social participation. This in turn gave way to direct solidarity, which then grew into civic activism. The most recent territory colonised by cyberactivists is environmental activism.

Users of a cybercafe in Kampala (Uganda). Source: Arne Hoel/World Bank.

A business, for better or worse

It is beyond question that technology is an important sector in the world economy, and that Africa is a rapidly developing market. On one hand, it is a territory in which transnational corporations (including telephone companies, internet providers, and hardware manufacturers) are eagerly trying to position themselves. However, the presence of companies such as Samsung, Microsoft and Google in Africa is not only linked to their desire to sell their phones, distribute their software, and get people to use their “services”. The fact that they are also setting up development labs and approaching tech hubs as innovative spaces shows that they are trying to make the most of the creative potential that African users have shown in a very short time. And that’s without counting the increasing impact of the internet on business and on the GDP in Africa, directly or indirectly. Curiously, none of the countries that are getting the most out of the virtual environment are what are considered the major economic powers, which until now have been the economies with natural resources to exploit.

Future challenges and threats

In these early stages of the implementation of ICTs in Africa, citizens have overtaken governments. Communities of activists and innovators have proven to be much quicker, more flexible and more creative than the authorities. Nonetheless, the fundamental challenge for the future is to make the internet more generally accessible. And this is not just a question of infrastructure and equipment; it is also about continuing to work on education and digital skills, and about filling the net with content that makes Africans feel at home. The current situation also carries the risk of creating a technological elite, a vanguard that could take advantage of its greater access to resources in order to accumulate power. But for now, the promoters of the major initiatives have not shown a tendency to hoard privileges. Instead, they are driven by a desire to work towards democratisation and social change. The danger is real, but the experience so far points firmly in this latter direction.





Back to the Future: The Internet of Everything

Can you imagine an alarm clock that decides to wake you up 10 minutes earlier because it knows you have a meeting first thing and there’s a traffic jam?


Autómata, José A. Madrid. Source: Flickr.

The idea has been spreading on the Internet for 15 years. Several international conferences have already been organised on the subject, and it even became one of the top hashtags at this year’s World Economic Forum in Davos. We are entering the age of the Internet of Things, a term that refers to equipping ordinary objects from our everyday lives with sensors and information transmission systems that identify them and allow them to communicate among themselves and connect to the Internet, in order to provide us with data for a wide range of purposes. Can you imagine an alarm clock that decides to wake you up 10 minutes earlier because it knows you have a meeting first thing and there’s a traffic jam that day on your route to work?

Perhaps if Marty McFly had landed his DeLorean back in 2014 he wouldn’t be surprised to encounter all kinds of everyday objects, from tables and chairs to streetlights, coffee mugs, parking spots and glasses, talking to each other, swapping data and connecting to the Internet. Perhaps in the past that he came from, they imagined that the Internet of the future would jump from one computer to another first, then into the smartphones in our pockets, before infiltrating the ordinary objects in our day-to-day surroundings that we rarely even notice. And that it would make them smart. So perhaps the star of Back to the Future wouldn’t miss a thing, other than his flying skateboard.

Experts claim that this is the year when we will see a true revolution of objects: they will begin to connect to each other and to the net, they will understand each other, share information with our mobile phones, measure environmental parameters, and keep us posted on any changes. The age of the Internet of Things has arrived, and it will apparently allow us to make better decisions, optimise services and ultimately improve our lives.

A good example of this revolution is the boom in ‘wearables’ – gadgets like bracelets and watches that quantify us as we wear them. They measure our vital signs, the number of calories we burn, how much exercise we do and the quality of our sleep. They collect the data through a series of sensors and send them to our mobile phones to assess our physical fitness, for example.

Other examples include the many companies that have equipped their fleets of vehicles with sensors and GPS to optimise routes, and the farmers who implant sensors in their cows that measure their vital signs and transmit the information to a server that the farmers can access in order to instantly determine the health of their herd, their movements and eating habits. Or fridges that can sense the products that have run out and buy them online. Or even smart toothbrushes that tell us whether our kids are neglecting any areas when they brush their teeth. And the list goes on; the imagination is the limit.

Toothbrush. Source: Kolibree.

According to technology manufacturing and consultancy firm Cisco, we are witnessing a fundamental change in our relationship to objects. A study by this US-based company shows that in 1984 there were 1000 connected devices, a number that rose to a million in 1992, and a billion devices in 2008. And according to data from IT company Sogeti’s VINT trend lab, there will be an estimated 50 billion connected devices in the world by 2020. Just think: That’s six times greater than the number of people who will be living on the planet! And each and every one of those objects will be identifiable, quantifiable and programmable. Because that’s exactly what the Internet of Things, also known by its initials IoT, is all about.

Smart objects

The concept is not new. The term ‘Internet of Things’ was coined at MIT in 1999 to refer to the idea of connecting all kinds of objects, from electrical appliances to vehicles, books, clothing and watches. All it took, in principle, was to place a sensor in each object and plug it into the Internet.

But before the idea could thrive there was a stumbling block to deal with: the technology itself, which slowed the process down until recently. When the idea first came up some fifteen years ago, sensors were large and expensive, Internet connections were costly and slow, a router was needed to establish the connection, and wi-fi and smartphones didn’t exist. Things are different now: we carry powerful tiny computers in our pockets, connectivity is ubiquitous, we can store enormous amounts of data in the cloud, and sensors are minuscule and affordable.

Although we tend to attach the label ‘smart’ to anything that goes online and provides data, the smartness of the Internet of Things will actually depend on connecting information from different sources – such as social media, traffic conditions and weather forecasts – in order to optimise existing services and create new ones. That’s where the enormous potential of this tool lies.

Imagine: streets equipped with sensors that detect the free parking spots in the city and let you know by sending you an SMS so that you can avoid driving around in circles for hours looking for somewhere to park. Or an alarm clock that decides to wake you up ten minutes earlier because it has detected a traffic jam on your way to work and recalculated how long it will take you to get there. Or cars that connect to the city’s traffic management systems to suggest the best route and warn you of dangers such as patches of ice on the road.
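To make the alarm-clock example concrete, here is a minimal sketch in Python of the kind of calculation such a device might run. It is only an illustration: the function name, the 15-minute safety margin and the traffic-delay figure are invented, standing in for whatever data a real traffic service would supply.

from datetime import datetime, timedelta

def adjusted_wake_time(meeting_time, normal_commute_min, traffic_delay_min, margin_min=15):
    # Work backwards from the first meeting: allow for the usual commute,
    # today's reported traffic delay and a small safety margin.
    total = timedelta(minutes=normal_commute_min + traffic_delay_min + margin_min)
    return meeting_time - total

# Hypothetical inputs: a 9:00 meeting, a 30-minute commute and a
# 10-minute jam reported by the traffic feed the clock subscribes to.
meeting = datetime(2014, 5, 20, 9, 0)
print(adjusted_wake_time(meeting, normal_commute_min=30, traffic_delay_min=10))
# prints 2014-05-20 08:05:00, ten minutes earlier than on a jam-free morning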

http://vimeo.com/94011734

Or smart clothing that sends an SMS to the parents if a baby has a fever, or firefighters’ uniforms that collect data on air temperature, wind velocity and gas concentration levels and transmit them to headquarters during a fire, to help them make the best possible decisions.

And the next big step after connected objects will be connected smart buildings, which will save energy and improve sustainability, adapting to their users and to the way they employ the space. And from there to the city itself: ‘smart cities’.

Meanwhile, the Internet of Things also opens up many exciting possibilities for cultural institutions. To start with, cultural centres can access visitor information in real time, allowing them to build up a detailed profile of the people who visit on different days of the week, and manage the flow of visitors through an exhibition, for example.

Centres could also offer exhibitions that adapt to each user. For example, in an exhibition about water use and our water footprint, a visitor could stand before an information panel that scans the manufacturing details on the smart label on their t-shirt and displays customised information, such as the specific amount of water that was used to make it. Or objects in an exhibition could offer visitors information about their background, their purpose, how many people have seen them, and even display short videos showing how they were produced or restored. An internet connection would allow the information to be updated in real time.

Some challenges

In order for the Internet of Things to really take off this year, we will have to overcome a series of challenges. First of all, we need to find a system for naming all of the devices. For now, every thing that connects to the net has a unique IP address that identifies it. The current system, called IPv4, is based on four numbers between 0 and 255, separated by dots, which means it supports a total of around 4.3 billion possible addresses. And that’s clearly not enough. This is why a new version known as IPv6, which uses much longer 128-bit addresses, is already being introduced. If we want to connect everything to the net, we will also need much more bandwidth.
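As a back-of-the-envelope check of those figures, the arithmetic can be reproduced in a couple of lines of Python; this is only the raw address-space calculation, not a description of how addresses are actually allocated.

# Address-space arithmetic behind the move from IPv4 to IPv6.
ipv4 = 256 ** 4      # four dot-separated numbers between 0 and 255, i.e. 2**32
ipv6 = 2 ** 128      # IPv6 widens addresses to 128 bits

print(f"IPv4: {ipv4:,} possible addresses")    # 4,294,967,296 (about 4.3 billion)
print(f"IPv6: {ipv6:.3e} possible addresses")  # roughly 3.403e+38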

Another important challenge will be computer viruses and other attacks. If one computer is infected we can lose our files, but if an entire network of objects is attacked by a virus the effects can be catastrophic. The same goes for hacking: connected devices can become a way to gain access to our networks. Early this year 750,000 spam emails were sent from routers, alarms, webcams, and… a fridge! And in late 2011, the thermostat at the US Chamber of Commerce building apparently sent e-mail addresses, minutes of meetings and documents to an IP address located in China.

But the technical side is not the only challenge. The potential of the Internet of Things lies in the enormous amount of data that it will generate, but these data will only be truly valuable if they are open and can be understood by users, so that they can be used by as many people as possible. Then again, we also have to consider our privacy. To what extent do we want our daily life to be monitored? And how much of that information should third parties such as the services sector be able to access? It won’t just be companies: other people will also be able to follow our digital trail and know what we are doing at all times. Will we feel comfortable with this level of interference in our private lives?

The connection of so many millions of objects to the Internet should lead to the growth of the knowledge economy. Experts claim that we will soon be a smart society of interconnected human beings. And it all starts today.





Internet. Visions of 1998

A report shot in 1998 in which people are asked about the Internet and its future.


To celebrate World Information Society Day we unearth a relic of the past in the form of a short report featuring early reflections on the Internet and its development. The digital revolution that we subsequently lived through can already be sensed in the comments of the experts and not-so-experts interviewed sixteen years ago, who were asked what the net is, what level of regulation the Internet would need in the future, and whether it would exist in the framework of a Virtual Republic. It is surprising to discover that neither the issues raised nor the answers given by the interviewees were too far off the mark.  Sixteen years later we can see that, back then, we were already grappling with dilemmas that are extremely topical today and are not yet fully resolved.

Along these lines, last February we carried out the Internet Universe project, which critically and creatively reflected on the Net and offered educators tools to take this information into the classroom. We are currently putting together an ‘educational suitcase’ for teachers that will be launched in July. This teaching resource includes materials from the training sessions to be used in the classroom in conjunction with the school curriculum. We also recommend the exhibition Big Bang Data, a project that explores the phenomenon of the data explosion that we are currently in the midst of.





(In)Net: in the Mirage of Social Networks

Are social networks a new tool for women’s empowerment? Or are they incapable of deactivating the mechanisms of inequality?


Women’s liberation march at Washington, 1970 | Library of Congress

Are social networks a new tool for women’s empowerment? Or are they incapable of deactivating the mechanisms of inequality? As a new means for promoting communication and dialogue, social media have entered our digital lives wrapped in an attractive package (ease of use and immediacy) that leads us to optimistically consider their potential to speed up the process of women’s empowerment. But perhaps the virtual world is only a parallel reality, without any connecting vessels to the grim physical world, and neither Twitter nor Facebook are programmed to apply pressure to the age-old patriarchal structure.

In (h)adas. Mujeres que crean, programan, prosumen, teclean, Remedios Zafra writes about the emancipatory effect on women’s lives of the advent of technologies of everyday life (from washing machines to fridges, food processors and electric irons), many of which became mass consumer goods in the fifties. There’s no doubt that these machines made life much easier for that fifty percent of the population who were denied entry into all kinds of fields, so it’s hardly surprising that they also confirmed our reign precisely in the sacred domestic domain, the only space where we were allowed to rule. Which is why Zafra describes them as “hierarchical and low-tech, not productive technologies but mediators of consumption; prosumer tools designed for non-epic tasks, for tasks from the shadows (out of sight) of daily life.”

These and other advances now give women more time to spend on other tasks, such as practicing medicine, becoming a member of parliament, or writing this article, so we should acknowledge their advantages over and above their shortcomings. They also explain why we were so ready and willing to welcome the new information technologies when they landed in our homes. In 2006, when “digital democracy” began to take off (that is, when Facebook, which had been created two years earlier, opened up to everybody, eventually reaching today’s more than 350 million users), we believed that after cultivating silence and/or frugality of expression for centuries, this new modality of democracy would be our opportunity to finally conquer our fair share of spaces of communication.

A Virtual Room of Our Own

In the last few years we have gone from low-tech to high-tech, from the prison of the home to the wide-open window of a virtual room of our own, from what Sadie Plant calls ‘technogender’ to ‘cybergender’. As a silenced sector of society, we threw ourselves into taking advantage of social networks without defences, without protection. And in 2006, Time magazine seconded our desire to participate in the festival of networks on equal terms by choosing a very special figure as its “person of the year”: “you”.

This “you” encompassed all the anonymous men and women who were on the threshold of a new era in which they would play a leading role. It was a “you” without gender, and as such it included women, who, for better or worse, had already attained a room of their own (to borrow from Virginia Woolf) and who began to throw themselves into their e-mail and Twitter accounts, Google groups… that is, a virtual room of their own. The incentive was great, and we became part of it very quickly, to the point that women are now the majority on social networking sites according to the figures released by Finances Online, which show, for example, that 76% of adult women in the United States use Facebook, compared to 66% of men.

In the world of ICTs, social networks (particularly ‘horizontal’ or contact-based networks) are certainly the ideal tool for transmitting messages and information (sometimes to the extent of going viral), and this makes them a coveted space to be conquered. And while individuals and social groups that were already empowered have simply had to adapt to the new language in order to extend their power, individuals and groups that had hitherto been silenced have seized the opportunity to reach a level of impact that is not available to them through “official” channels.

Wasn’t this the case in the so-called Arab spring uprisings, where social networks played a crucial role? Would the Jasmine Revolution in Tunisia or the Tunisian Intifada have taken place if the self-immolation of a young street vendor hadn’t spread like wildfire through social media? And would the uprising in Egypt’s Tahrir Square have been possible without the active involvement of Internet users? Would the Cuban blogger Yoani Sánchez be able to “broadcast” the injustices that take place in her country if she hadn’t been able to connect to social networking sites thanks to a wi-fi connection in a hotel in Havana?

A man during the 2011 Egyptian protests carrying a card saying “Facebook, #jan25, The Egyptian Social Network”. Source: Wikipedia.

Social networks, particularly Twitter, were also key to the success of the “indignados” movement in Spain, allowing 15-M Democracia Real Ya to convene 130,000 people with a few hashtags, for example, which is easier said than done. We can wonder if the civil rights movements in the US would have advanced much more quickly if somebody had recorded on their iPod the moment when Rosa Parks refused to give up her seat to a white man on a public bus in Alabama in 1955. The video would certainly have spread like wildfire, much faster than Scarlett Johansson’s cute poses.

A New Communications Paradigm

Although some social networks have been more successful than others, the phenomenon is still growing, hot on the heels of the development of new apps. For instance, the use of Twitter skyrocketed in 2010 as a result of the arrival of smartphones on the market, and 70 million tweets are now posted every day. This means that social media are not just highly sought-after “unofficial” channels; they are also on the rise, and this encourages women to think that they are not governed by the same laws as the outside world, and particularly to build up their hopes about the impact of participating in them.

The ample evidence of the huge impact and unstoppable rise of social media leads us to consider them a means of liberation. And to see them as an enormously useful way of supporting the empowerment process that we have been pushing for decades, and even as a tool that can speed it up considerably. Will social networks become weapons against gender inequality, just as they appear to be helping to overthrow anti-democratic regimes? And if so, how real is their impact, and how much does it affect the chain of indicators that is used in the outside world?

Social networks have a proven capacity to redefine the space of interaction, and to create virtual communities. Social media create networks by interconnecting us, and this makes them an exceptional vantage point from which to draw attention to different social issues, including women’s issues. What remains to be seen is whether they will only be able to interconnect those who are already peers –like the social network BlackPlanet, created in 1999, for example– so that in the worst case they will simply end up generating a great community of feminists that will remain separate from non-feminists.

We also have to take into account the fact that networks are no longer just spaces where people can share ideas to make the world better, or factories of idyllic love: they also have great advertising potential, and advertising is still a tremendously male-centred world. This is why Boyd and Ellison have described social networking sites as “web-based services” or, in other words, as an extension of non-virtual reality rather than a new paradigm. As Judy Wajcman says in Technofeminism, given that “the relationship between technoscience and society is currently being subjected to profound and urgent questioning”, it seems reasonable that the much more recent phenomenon of digital technology should bring about utopian and dystopian visions. In this sense, we move between “utopian optimism and pessimistic fatalism” (Wajcman again).

Networked Feminism

The first Cyberfeminist Manifesto was presented by the group VNS Matrix in the mid-nineties in Adelaide, Australia, and the first Cyberfeminist International was held at Documenta X in Kassel. Although the cyberfeminist movement may be somewhat wild or constantly in the process of being redefined, its members continue to emphatically claim that gender inequality does not exist in cyberspace, and that the Internet has the capacity to transform conventional gender roles. Could this be wishful thinking, particularly among younger women who also think that the glass ceiling is a thing of the past in the non-virtual world too?

Stickers made for the 1st Cyberfeminist International. Source: Wikimedia Commons.

In the academic world, gender studies departments are already carrying out research into the creation and transmission of feminist discourses in the new communication environment (particularly social networking sites). Most of these studies find that the horizon of expectations of online feminism is no different to that of other types of feminism, but that it takes advantage of the new possibilities to carry out virtual actions and coordinate strategies. Unfortunately, in spite of the optimism of early cyberfeminists, it seems like there is still a long way to go before the Internet gets rid of sexual difference (Judith Butler talks about “undoing gender”). To say nothing of the fact that if sexual difference is ever erased, this change would be an excuse to carry out a kind of amnesty: to erase the past, forget the centuries of shame, and throw the key of domination into the cybernetic ocean.

For the time being, it is true that gender anonymity is possible on the Internet, that it allows us to be free of the tyranny of the body, and that this leads to a blurring of the boundaries between the sexes. “I’d rather be a cyborg than a goddess,” wrote Donna J. Haraway, arguing that we who live in a postmodern world are becoming cyborgs: “creatures in a post-gender world,” as she says in Simians, Cyborgs and Women. But we are yet to see whether women’s participation on the Internet, on social networking sites, will substantially affect our hitherto silenced identities, and give us a greater presence as prescribers: will social media be the amplifier that we women have been seeking in order to find our place in this new reality?

In order for this to be so, social networks must be able to pro-actively contribute to the construction of this reality, and not just passively reflect what takes place outside of them. In some sense, they must have the capacity to establish a new precedent, to “feminise the world that flows through them.” In other words, the question remains: Have women been invited to the social media party, or are we just gatecrashers? Are we valued or put up with? And most importantly, will we be able to convey transformative messages through them? And if we are already doing so, how are the results of this empowerment being monitored and measured?

We would like to think of social media as a shared space that works according to what Saskia Sassen has called “the logic of incorporation”. We want them to become a kind of fabric woven out of open networks that can incorporate new forms of knowledge that challenge those imposed by mainstream culture. To paraphrase Sassen again and transfer her ideas to the field of gender, we want networks that are able to sidestep patriarchal distortion and foster new mechanisms, new systems of fair distribution. Having confirmed the usefulness of networks as communication channels (see for example The Freedom Train, which recently used social media to organise a protest against the Spanish government’s proposed abortion legislation), we are yet to confirm their use as an instrument of transformation.

Do social networks really boost the visibility of women’s work? Or do they trap women in a new type of invisibility, like “kitchen technology” did? These questions are not intended to be pessimistic, only to highlight the ambivalence of social media and of their possible effects. Time will tell whether they are useful for dissent or for affirmation. Meanwhile, rather than talking about ‘(h)adas’ (a play on words between the Spanish word for Fairy and Ada), we should focus on being ‘(en)redadas’ (in a net, tangled together), in the sense of being connected and caught up, rather than hooked. This does not change the fact that, every day, millions of women all over the planet, in precarious phone centres or on new-generation laptops, continue to share their hopeful voices on Twitter, Tuenti or Facebook, with the certainty that somebody on the other side is listening.

Bibliography

Donna J. Haraway: Simians, Cyborgs and Women. The Reinvention of Nature (New York: Routledge, 1991).

Sadie Plant: Zeroes + Ones: Digital Women and the New Technoculture (New York: Doubleday, 1997).

Judy Wajcman: TechnoFeminism (Oxford: Polity Press, 2004).

Remedios Zafra: Netianas. N(h)acer mujer en Internet (Madrid: Lengua de Trapo, 2005).

Remedios Zafra: Un cuarto propio conectado. (Ciber) Espacio y (auto)gestión del yo (Madrid: Fórcola, 2010).

Remedios Zafra: (h)adas. Mujeres que crean, programan, prosumen, teclean (Madrid: Páginas de Espuma, 2013).





Urban Revolution 4.0

We talk about specific examples and proposals to promote smartcitizens in the face of the concept of the smartcity.


Aerial photo of St. Louis, Missouri, U.S.A., March 3, 1968. Source: Wikipedia.

In our last article for the CCCB Lab blog, we argued that there is a need to promote smartcitizens in the face of the concept of the smartcity: in other words, rather than the centralising, exclusive, hierarchical smartcity logic, we should defend the values of smartcitizens (innovation, collective intelligence, horizontality, collaboration, P2P, copyleft) as the foundation for the urban, territorial and social changes that are taking place in the contemporary world. After that initial theoretical foray, we will now offer a more practical approach, with specific examples and proposals.

How can we go about developing urban processes and projects that meet the needs of citizens today? Now more than ever, it is necessary to subvert many of the precepts and the logic that have governed the development of cities and urban design over the past few years. We need to come up with new imaginaries based on the collaborative construction of the city from a creative perspective. And digital culture and the values that ‘smartcitizens’ stand for can play a crucial role – although not necessarily by any means or at any price: as always, it is important to bear in mind the “whys” and “hows”, to critically analyse the new models for urban management and intervention, and to think about how they can be linked to citizen intelligence, innovation, social participation and free culture. We have to ensure we begin with a solid foundation for this incipient ‘neo-urbanism’, based on equality, cooperation, territorial cohesion and on social and environmental commitment.

We have recently been witnessing the creation of the myth of the “smartcity” as the benchmark for urban innovation and digital technologies applied to the city. But smartcities have not been hailed because of a transformative power that takes into account the kinds of values we have just mentioned. Rather, the myth responds to the interests and marketing campaigns of multinational corporations. At the same time – or in response to the rise of the smartcity – we now see the emergence of “smartcitizens”, who defend the use of citizen intelligence to solve urban problems. And this is the idea behind many collaborative strategies that are springing up with the aim of empowering the common good and, as such, promoting urban innovation at the service of the interests of citizens.

As could be seen in the recent Smartcitizens exhibition in Madrid, applying citizen intelligence to urban intervention and management can generate many different types of projects and developments. This text, however, will focus only on a few that we think contribute an innovative approach to the transformation of cities, and can thus be considered to come under the umbrella of urban innovation.

Urbanism in the digital era

Nowadays, all kinds of media and devices allow us to share valuable information among different agents and sectors of our society, and to become smart, active, participatory citizens. But we are still in the very early stages of the effort required to transfer this opportunity into a real paradigm change in terms of urban design and the transformation of the city, and in some cases the powers that be and traditional professional fields tend to try and limit this power.

Sensordrone is a very small multiple sensor device that communicates with smartphones via Bluetooth. Source: SmartCitizensCC.

Now that the flow of major public investment in urban development or renewal has run dry, the urban sector has come to a worrying standstill. Unable to adapt to the new social and urban dynamics that define the change of era we are currently in the midst of, urban planners and public administrations huddle and wait for the return of the good old times. Meanwhile, the city follows its course and continues to grow through other agents, other disciplines, other gazes.

This depletion of urban practices is a double-edged sword for the city. On one hand it has brought in new disciplinary perspectives, revealing the need for a transversal culture when it comes to urban projects and processes. On the other, this exclusion (self-exclusion?) of urban professionals from the spaces where new urban changes are brewing is dangerous: it cuts them off from the possibility of an integral vision of the city (actions taken in a sectorial, partial way can generate negative or unexpected effects) and leaves them without access to tools, methodologies and types of knowledge that could make them more coherent and efficient.

Urban innovation: life beyond urban orthodoxy

Urban planning has become a wasteland, creating a gap that has allowed for the emergence of urban practices from other sectors, which is, in turn, transforming the notion of urbanism as we knew it. These new ways of intervening in the city and developing urban innovation strategies are diverse and take many different forms, as we will see below.

One of the main hubs of urban innovation involves the initiatives promoted by technological innovators, which are enabling citizens to access information, make decisions and organise collectively in order to use the city in smarter ways. We now have an enormous number of applications and platforms that make it possible to visualise and share information about your city or neighbourhood (Disqus, Livehood…), that help citizens to make informed decisions (Mindmixer, Near Interaction…), or that enable us to take an active, entrepreneurial attitude to the urban habitat (Urbapps, Social Lab…).

These types of initiatives are by no means limited to encouraging the smart use of cities: they are also helping to improve them. An example is the Peruvian digital platform Datea, which managed to redesign the route of a public transport line in Lima (El Metropolitano) thanks to the reports and problems identified by the people who took part in it.

Another important field that is playing a crucial role in urban innovation today is the cultural and creative industries (CCI). The convergence of interest in the fields of free culture and the city for research and experimentation has allowed this sector to respond in different ways to this “in-between” period that we find ourselves in. One of the examples that best reflects this idea is ZAWP (Zorrotzaurre Art Work in Progress), a project that is working towards reclaiming the abandoned industrial heritage of the Zorrotzaurre peninsula in Bilbao, Basque Country, through creative activities and cultural programmes, thus offering an alternative urban model to the special plan designed by the architect Zaha Hadid for the area. The scale of the site makes it one of the most comprehensive examples of urban intervention, along with the Esto no es un Solar programme in Zaragoza, which aims to find uses for empty sites in the city through a process of collaborative decision-making and the labour of unemployed workers. Other equally interesting but smaller-scale projects in Spain include LaFábrika-detodalavida in Extremadura and Jazar in Pamplona. All of these initiatives call into question the classic forms of city-making by applying creativity, community entrepreneurship and peer-to-peer logic.

Zona Franca Zorrotzaurre: Udaberri ZAWP 2013. Source: ZAWP Bilbao.

Another of the interesting branches sprouting from the union of urban innovation and the CCIs is the so-called Citizen Labs, which have inherited the spirit of the Living Labs conceived as contexts for learning, dissemination, experimentation and collective and collaborative creation. Even though their scope of action is not the city in the strict sense, many of them host and promote initiatives that are profoundly urban in nature. This is the case of MediaLab Prado and Intermediae in Madrid, for example, two spaces that are working with architecture collectives and providing the infrastructure necessary for the development of urban intervention projects. Newer projects working along these lines include Hirikilabs, the citizen lab run by Tabakalera in Donostia-San Sebastián, and the Open Urban Lab that is being set up in Zaragoza in conjunction with Etopia Centro de Arte y Tecnología and the Zaragoza Activa programme.

The proliferation of these types of spaces is reinforced by a favourable international context: in July last year, the Ciudadanía 2.0 project run by the Secretaría General Iberoamericana (SEGIB) in collaboration with the Citizen Participation Programme of the Panama President’s Office, the Department of Culture of the Prefeitura de São Paulo, the AECID, Unidos en Red Foundation and the CAF Development Bank of Latin America, began to promote the citizen innovation process. One of its aims is to draft a collaborative document for the joint creation of Citizen Labs in various countries in the region.

When we take all of these elements together, they create a new scenario from which we can start to collaboratively build an alternative future for collective decision-making and for the transformation of our cities. And here, the collaboration and synergies among the different stakeholders will be key: only collective intelligence, social participation, innovation and creativity, entrepreneurship, the potential of digital tools and ICTs, ecology, sustainability and a cross-disciplinary approach will allow us to consolidate the urban revolution of the twenty-first century.





IEML: A Project for a New Humanism. An interview with Pierre Lévy

Pierre Lévy talks about the creation of IEML, a tool for the augmentation of collective intelligence by means of the algorithmic medium.


Pierre Lévy is a philosopher and a pioneer in the study of the impact of the Internet on human knowledge and culture. In Collective Intelligence. Mankind’s Emerging World in Cyberspace, published in French in 1994 (English translation in 1999), he describes a kind of collective intelligence that extends everywhere and is constantly evaluated and coordinated in real time, a collective human intelligence, augmented by new information technologies and the Internet. Since then, he has been working on a major undertaking: the creation of IEML (Information Economy Meta Language), a tool for the augmentation of collective intelligence by means of the algorithmic medium. IEML, which already has its own grammar, is a metalanguage that includes the semantic dimension, making it computable. This in turn allows a reflexive representation of collective intelligence processes.

In the book Semantic Sphere I. Computation, Cognition, and Information Economy, Pierre Lévy describes IEML as a new tool that works with the ocean of data of participatory digital memory, which is common to all humanity, and systematically turns it into knowledge: a system for encoding meaning that adds transparency, interoperability and computability to the operations that take place in digital memory.

By formalising meaning, this metalanguage adds a human dimension to the analysis and exploitation of the data deluge that is the backdrop of our lives in the digital society. And it also offers a new standard for the human sciences, with the potential to accommodate maximum diversity and interoperability.

In “The Technologies of Intelligence” and “Collective Intelligence”, you argue that the Internet and related media are new intelligence technologies that augment the intellectual processes of human beings. And that they create a new space of collaboratively produced, dynamic, quantitative knowledge. What are the characteristics of this augmented collective intelligence?

The first thing to understand is that collective intelligence already exists. It is not something that has to be built. Collective intelligence exists at the level of animal societies: it exists in all animal societies, especially insect societies and mammal societies, and of course the human species is a marvellous example of collective intelligence. In addition to the means of communication used by animals, human beings also use language, technology, complex social institutions and so on, which, taken together, create culture. Bees have collective intelligence but without this cultural dimension. In addition, human beings have personal reflexive intelligence that augments the capacity of global collective intelligence. This is not true for animals but only for humans.

Now the point is to augment human collective intelligence. The main way to achieve this is by means of media and symbolic systems. Human collective intelligence is based on language and technology and we can act on these in order to augment it. The first leap forward in the augmentation of human collective intelligence was the invention of writing. Then we invented more complex, subtle and efficient media like paper, the alphabet and positional systems to represent numbers using ten numerals including zero. All of these things led to a considerable increase in collective intelligence. Then there was the invention of the printing press and electronic media. Now we are in a new stage of the augmentation of human collective intelligence: the digital or – as I call it – algorithmic stage. Our new technical structure has given us ubiquitous communication, interconnection of information, and – most importantly – automata that are able to transform symbols. With these three elements we have an extraordinary opportunity to augment human collective intelligence.

You have suggested that there are three stages in the progress of the algorithmic medium prior to the semantic sphere: the addressing of information in the memory of computers (operating systems), the addressing of computers on the Internet, and finally the Web, the addressing of all data within a global network where all information can be considered part of an interconnected whole. This externalisation of the collective human memory and intellectual processes has increased individual autonomy and the self-organisation of human communities. How has this led to a global, hypermediated public sphere and to the democratisation of knowledge?

This democratisation of knowledge is already happening. If you have ubiquitous communication, it means that you have access to any kind of information almost for free: the best example is Wikipedia. We can also speak about blogs, social media, and the growing open data movement. When you have access to all this information, when you can participate in social networks that support collaborative learning, and when you have algorithms at your fingertips that can help you to do a lot of things, there is a genuine augmentation of collective human intelligence, an augmentation that implies the democratisation of knowledge.

What role do cultural institutions play in this democratisation of knowledge?

Cultural Institutions are publishing data in an open way; they are participating in broad conversations on social media, taking advantage of the possibilities of crowdsourcing, and so on. They also have the opportunity to grow an open, bottom-up knowledge management strategy.

A Model of Collective Intelligence in the Service of Human Development (Pierre Lévy, in The Semantic Sphere, 2011). S = sign, B = being, T = thing.

We are now in the midst of what the media have branded the ‘big data’ phenomenon. Our species is producing and storing data in volumes that surpass our powers of perception and analysis. How is this phenomenon connected to the algorithmic medium?

First let’s say that what is happening now, the availability of big flows of data, is just an actualisation of the Internet’s potential. It was always there. It is just that we now have more data and more people are able to get this data and analyse it. There has been a huge increase in the amount of information generated in the period from the second half of the twentieth century to the beginning of the twenty-first century. At the beginning only a few people used the Internet, and now almost half of the human population is connected.

At first the Internet was a way to send and receive messages. We were happy because we could send messages to the whole planet and receive messages from the entire planet. But the biggest potential of the algorithmic medium is not the transmission of information: it is the automatic transformation of data (through software).

We could say that the big data available on the Internet is currently analysed, transformed and exploited by big governments, big scientific laboratories and big corporations. That’s what we call big data today. In the future there will be a democratisation of the processing of big data. It will be a new revolution. If you think about the situation of computers in the early days, only big companies, big governments and big laboratories had access to computing power. But nowadays we have the revolution of social computing and decentralized communication by means of the Internet. I look forward to the same kind of revolution regarding the processing and analysis of big data.

Communications giants like Google and Facebook are promoting the use of artificial intelligence to exploit and analyse data. This means that logic and computing tend to prevail in the way we understand reality. IEML, however, incorporates the semantic dimension. How will this new model be able to describe the way we create and transform meaning, and make it computable?

Today we have something called the “semantic web”, but it is not semantic at all! It is based on logical links between data and on algebraic models of logic. There is no model of semantics there. So in fact there is currently no model that sets out to automate the creation of semantic links in a general and universal way. IEML will enable the simulation of ecosystems of ideas based on people’s activities, and it will reflect collective intelligence. This will completely change the meaning of “big data” because we will be able to transform this data into knowledge.

We have very powerful tools at our disposal, we have enormous, almost unlimited computing power, and we have a medium where communication is ubiquitous. You can communicate everywhere, all the time, and all documents are interconnected. Now the question is: how will we use all these tools in a meaningful way to augment human collective intelligence?

This is why I have invented a language that automatically computes internal semantic relations. When you write a sentence in IEML it automatically creates the semantic network between the words in the sentence, and shows the semantic networks between the words in the dictionary. When you write a text in IEML, it creates the semantic relations between the different sentences that make up the text. Moreover, when you select a text, IEML automatically creates the semantic relations between this text and the other texts in a library. So you have a kind of automatic semantic hypertextualisation. The IEML code programs semantic networks and it can easily be manipulated by algorithms (it is a “regular language”). Plus, IEML self-translates automatically into natural languages, so that users will not be obliged to learn this code.

The most important thing is that if you categorize data in IEML it will automatically create a network of semantic relations between the data. You can have automatically-generated semantic relations inside any kind of data set. This is the point that connects IEML and Big Data.
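IEML itself is not something readers can install, but the general mechanism Lévy describes, where categorising items with a shared semantic code automatically induces links between them, can be pictured with a deliberately simple toy sketch in Python. The documents and category labels below are invented for the illustration and have nothing to do with IEML's actual notation.

from itertools import combinations

# Invented example: a few documents, each tagged with semantic categories.
catalogue = {
    "doc_a": {"water", "footprint", "consumption"},
    "doc_b": {"water", "agriculture"},
    "doc_c": {"consumption", "energy"},
}

# Any two documents that share at least one category get an automatic link,
# labelled with the categories they have in common.
links = {
    (x, y): catalogue[x] & catalogue[y]
    for x, y in combinations(catalogue, 2)
    if catalogue[x] & catalogue[y]
}

for pair, shared in links.items():
    print(pair, "linked via", sorted(shared))
# ('doc_a', 'doc_b') linked via ['water']
# ('doc_a', 'doc_c') linked via ['consumption']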

So IEML provides a system of computable metadata that makes it possible to automate semantic relationships. Do you think it could become a new common language for human sciences and contribute to their renewal and future development?

Everyone will be able to categorise data however they want. Any discipline, any culture, any theory will be able to categorise data in its own way, to allow diversity, using a single metalanguage, to ensure interoperability. This will automatically generate ecosystems of ideas that will be navigable with all their semantic relations. You will be able to compare different ecosystems of ideas according to their data and the different ways of categorising them. You will be able to choose different perspectives and approaches. For example, the same people interpreting different sets of data, or different people interpreting the same set of data. IEML ensures the interoperability of all ecosystems of ideas. On one hand you have the greatest possibility of diversity, and on the other you have computability and semantic interoperability. I think that it will be a big improvement for the human sciences because today the human sciences can use statistics, but it is a purely quantitative method. They can also use automatic reasoning, but it is a purely logical method. But with IEML we can compute using semantic relations, and it is only through semantics (in conjunction with logic and statistics) that we can understand what is happening in the human realm. We will be able to analyse and manipulate meaning, and there lies the essence of the human sciences.

Let’s talk about the current stage of development of IEML: I know it’s early days, but can you outline some of the applications or tools that may be developed with this metalanguage?

It’s still too early; perhaps the first application will be a kind of collective intelligence game in which people work together to build the best ecosystem of ideas for their own goals.

I published The Semantic Sphere in 2011. And I finished the grammar that has all the mathematical and algorithmic dimensions six months ago. I am writing a second book entitled Algorithmic Intelligence, where I explain all these things about reflexivity and intelligence. The IEML dictionary will be published (online) in the coming months. It will be the first kernel, because the dictionary has to be augmented progressively, and not just by me. I hope other people will contribute.

This IEML interlinguistic dictionary ensures that semantic networks can be translated from one natural language to another. Could you explain how it works, and how it incorporates the complexity and pragmatics of natural languages?

The basis of IEML is a simple commutative algebra (a regular language) that makes it computable. A special coding of the algebra (called Script) allows for recursivity, self-referential processes and the programming of rhizomatic graphs. The algorithmic grammar transforms the code into fractally complex networks that represent the semantic structure of texts. The dictionary, made up of terms organized according to symmetric systems of relations (paradigms), gives content to the rhizomatic graphs and creates a kind of common coordinate system of ideas. Working together, the Script, the algorithmic grammar and the dictionary create a symmetric correspondence between individual algebraic operations and different semantic networks (expressed in natural languages). The semantic sphere brings together all possible texts in the language, translated into natural languages, including the semantic relations between all the texts. On the playing field of the semantic sphere, dialogue, intersubjectivity and pragmatic complexity arise, and open games allow free regulation of the categorisation and the evaluation of data. Ultimately, all kinds of ecosystems of ideas – representing collective cognitive processes – will be cultivated in an interoperable environment.

Schema from the START – IEML / English Dictionary by Prof. Pierre Lévy FRSC CRC, University of Ottawa, 25th August 2010 (Copyright Pierre Lévy 2010, license Apache 2.0).

Since IEML automatically creates very complex graphs of semantic relations, one of the development tasks that is still pending is to transform these complex graphs into visualisations that make them usable and navigable.

How do you envisage these big graphs? Can you give us an idea of what the visualisation could look like?

The idea is to project these very complex graphs onto a 3D interactive structure. These could be spheres, for example, so you will be able to go inside the sphere corresponding to one particular idea and you will have all the other ideas of its ecosystem around you, arranged according to the different semantic relations. You will be also able to manipulate the spheres from the outside and look at them as if they were on a geographical map. And you will be able to zoom in and zoom out of fractal levels of complexity. Ecosystems of ideas will be displayed as interactive holograms in virtual reality on the Web (through tablets) and as augmented reality experienced in the 3D physical world (through Google glasses, for example).

I’m also curious about your thoughts on the social alarm generated by the Internet’s enormous capacity to retrieve data, and the potential exploitation of this data. There are social concerns about possible abuses and privacy infringement. Some big companies are starting to consider drafting codes of ethics to regulate and prevent the abuse of data. Do you think a fixed set of rules can effectively regulate the changing environment of the algorithmic medium? How can IEML contribute to improving the transparency and regulation of this medium?

IEML does not only allow transparency, it allows symmetrical transparency. Everybody participating in the semantic sphere will be transparent to others, but all the others will also be transparent to him or her. The problem with hyper-surveillance is that transparency is currently not symmetrical. What I mean is that ordinary people are transparent to big governments and big companies, but these big companies and big governments are not transparent to ordinary people. There is no symmetry. Power differences between big governments and little governments or between big companies and individuals will probably continue to exist. But we can create a new public space where this asymmetry is suspended, and where powerful players are treated exactly like ordinary players.

And to finish up, last month the CCCB Lab began a series of workshops related to the Internet Universe project, which explore the issue of education in the digital environment. As you have published numerous works on this subject, could you summarise a few key points in regard to educating ‘digital natives’ about responsibility and participation in the algorithmic medium?

People have to accept their personal and collective responsibility. Because every time we create a link, every time we “like” something, every time we create a hashtag, every time we buy a book on Amazon, and so on, we transform the relational structure of the common memory. So we have a great deal of responsibility for what happens online. Whatever is happening is the result of what all the people are doing together; the Internet is an expression of human collective intelligence.

Therefore, we also have to develop critical thinking. Everything that you find on the Internet is the expression of particular points of view that are neither neutral nor objective, but an expression of active subjectivities. Where does the money come from? Where do the ideas come from? What is the author’s pragmatic context? And so on. The more we know the answers to these questions, the greater the transparency of the source… and the more it can be trusted. This notion of making the source of information transparent is very close to the scientific mindset. Because scientific knowledge has to be able to answer questions such as: Where did the data come from? Where does the theory come from? Where do the grants come from? Transparency is the new objectivity.





A Secret History: Female Computing

Many women have actively participated in the development of computing and they deserve to be remembered.


Three members of the team that worked on the ENIAC under Adele Katz. Philadelphia, Pennsylvania, c. 1942-1945 | Wikimedia | Public Domain

We have to go way back to the mid-nineteenth century to find computing pioneer Ada Lovelace, while in the very recent past we have the feminist hacker and activist Jude Milhon, a founding member of the cypherpunk movement. Along the way, many other women have actively participated in the development of computing, from Rózsa Péter, Grace Murray Hopper and Frances Elizabeth Allen to the team of talented young women led by Adele Katz. Even the glamorous actress Hedy Lamarr played a significant role in the inexorable scientific-technological revolution that is still taking us who knows where. These women deserve to be remembered, if only to verify that these days, fortunately for all of us, progress is the fruit of a joint effort.

Eudora Welty was an American writer born in Mississippi in the early twentieth century, who wrote books such as the Pulitzer Prize winning The Optimist’s Daughter, and numerous short stories that portray Southern life. One of these, ‘Why I Live at the PO’, fascinated University of Illinois software engineer Steve Dorner so much that he used the author’s name to baptise the email client Eudora, which was used on Apple and Microsoft operating systems, stored emails in mbox format, and even developed into a webmail version.

We are so used to moving in the shadow of male giants such as Bill Gates, Steve Jobs, and Mr Facebook (that is, Mark Zuckerberg), that we can end up thinking women are barred from Silicon Valley. We could easily surmise that computers, including that wonderful invention known as the Internet, are the product of purely male minds belonging to those who –according to the lingering cliché– have greater powers of concentration and a better-developed left side of the brain, and thus better spatial and orientation skills (which is why women apparently can’t read maps). But Eudora is not the only sign of women’s contribution to that inspired alliance between software and hardware; just as there have been prominent women in the history of chemistry (Marie Curie), nuclear physics (Lise Meitner), genetics (Barbara McClintock) and electronic engineering (Edith Clarke), quite a number of women have left their mark on the gestation of digital life, even though their minority status has tended to make them invisible.

Ada Lovelace (1815-1852), the woman considered to be the first computer programmer, also had links to the world of literature, given that she was the daughter of the poet Lord Byron –the same Lord Byron who wrote that it was easy to die for a woman but very difficult to live with one. Even though computers did not exist in her lifetime or for quite some time afterwards, the mathematician Ada Lovelace focused her work on the mechanical calculator, and she is credited with writing the first encoded algorithm intended to be processed by a machine (she suggested using perforated cards). This makes her a prominent figure in the history of computing. In a fitting tribute, in 1979 the United States Department of Defence named the programming language Ada after her.

Portrait of Ada Lovelace, the first programmer in the history of computing. Artist: William Henry Mote, 1838. The Ada Picture Gallery | Public Domain

The century of computing

But along with other advances such as the elimination of pandemics (unfortunately only in the developed world) and the use of the automobile on a mass-scale (at the expense of increased pollution), the twentieth century has clearly been the century of computer engineering, which has enabled the current democratisation of the transmission of information, among other things. The first woman we find in twentieth century computing is Hungarian mathematician Rózsa Péter (1905-1977), who was one of the founders of recursion theory and ended up applying recursive functions to computers. A bit later, in the fifties, Navy officer Grace Murray Hopper (1906-1992) was a pioneer in the use of electro-mechanical computers and later developed the programming language COBOL, which was targeted at novice users.

Also around the middle of the century, a small group of young women wearing tight cardigans and pin-up dresses helped bring to life the first general-purpose electronic computer, ENIAC, which was publicly unveiled in 1946. The team was led by Adele Katz (1920-1964), who wrote the user manual, and the dynamic brigade consisted of Kay McNulty, Jean Bartik, Betty Snyder, Marlyn Wescoff, Frances Bilas and Ruth Teitelbaum, who were in their twenties at the time. Guided by Katz, whose married name was Adele Goldstine, they were the developers of the first software programmes.

And then there is the unforgettable Viennese Jewish actress Hedy Lamarr (1914-2000), who, during World War II, invented the precursor to the Wi-Fi technology that is now used in most of the many screens that we share our lives with. An exceptionally talented telecommunications engineer, Lamarr may have been one of the most beautiful of Hollywood beauties, but she was also a passionate inventor, which is why she devised, among other things, a secret communications system based on frequency hopping that was taken up by civil engineering from the eighties onwards: it became the foundation for the wireless communication that we now take for granted in our mobile phones and modems, not to mention GPS devices. Ironic, isn’t it? We women are accused of being unable to read maps, and we invented the GPS! We can take a moment to remember Lamarr each year on 9 November, which has been named Inventor’s Day in her honour. Maybe it should be Inventress Day!

It is also remarkable to learn that it was also a woman –with the burden of her right-hemisphere brain– who developed the world’s first word processor, although perhaps this shouldn’t surprise us given that women have always been attached to a sewing machine or, failing that, a typewriter. Evelyn Berezin (1925) was working at the Underwood Company when she created the first office computer in 1953, although it wasn’t until the revolutionary year of 1968 that she sketched out a programme that allowed users to store and edit texts. Meanwhile, Lynn Conway (1938), a transgender woman who is Professor Emerita of Electrical Engineering and Computer Science, focused on silicon chips and was part of the team responsible for the first superscalar computer in the sixties. And, surprising as it may seem, one of the first people to earn a PhD in Computer Science, in the mid-sixties, was a nun, Sister Mary Kenneth Keller (1914-1985), who went on to work in a laboratory that was usually reserved for men, where she contributed to developing the BASIC programming language.




Once the foundations of computing had been laid and the race to speed up and improve computers began, more women started to join the sector. IBM researcher Frances Elizabeth Allen (1932), for example, was a member of the PTRAN team and a pioneer in the field of optimising compilers, and her work was dedicated to improving their performance. Her results were so good that she was made an IBM Fellow (the first woman to receive this honour) and won the prestigious, big-money Turing Award, which is considered the ‘Nobel Prize of Computing’, and which Intel and Google fund to the tune of 250,000 dollars.

Even before the turn of the century, people began to take an interest in recovering the feats of these valiant allies of computing, and in establishing a female genealogy that can determine the extent to which women have contributed to the advances we enjoy today. For instance, the names of Adele Katz’s ‘girls’ came to light in the eighties thanks to research carried out by Kathryn Kleiman at Harvard University, in what was one of the first steps towards revealing the role of women in the history of computing.

From the 21st century to the technological heavens

Once computing had gone further than we had previously imagined possible, and computers had spread beyond borders and over the oceans, it was only a matter of time before a female ‘hacker philosopher’ was born, destined to inspire the WikiLeaks spirit and to challenge the unstoppable flow of information, and to serve as a model for fictional characters like Lisbeth Salander from the famous Millennium trilogy. Even though the world of hackers seems to be all about long-haired guys with tattoos who show their pale, haggard faces on Skype, there have also been others, including the woman known as ‘Saint Jude’. Jude Milhon (1939-2003) was a hacker and a feminist activist. We are indebted to her for the cypherpunk movement, and for her work encouraging the participation of women on the net. She was always the champion of the politically incorrect, with her war cry: ‘Girls need modems!’

From this point on, the frenetic development of computing opens up paths that we can’t even imagine yet, and visionaries like George Orwell and company are relegated to the realm of uchronia. Be that as it may, the progress we make from here on in will be a shared effort, with men and women working shoulder to shoulder, in California and also in research labs in emerging countries that were not even on the IT map as little as five years ago.

And given that new technologies don’t exist in limbo –they are not neutral, they can spread the goodness of democracy or contribute to gender equality, for example– it is worth remembering the key contributions that we’ve mentioned here, and many more that deserve to be included. From Ada Lovelace, mother of computer programming, to the cyber women who have gone on to swell the ranks of cyberfeminism (including British philosopher Sadie Plant), many bridges have been built by women in this seemingly male world of information technology.

We are talking about technology that can help to bring about dialogue between cultures, so that information channels become paths that are open to the quest for the shared humanity that Zygmunt Bauman talks about. The fact that the hands which shaped its clay are male and female, female and male, unlike the hands that created machines such as cars and aeroplanes, is an unequivocal sign that we are already living in an age that is the fruit of shared effort. And this can only be a harbinger of a better future, in which we will move forward together, towards the technological heavens, without the need to avoid hitting a glass ceiling.





After the Deluge

The last decade has seen some fundamental truths take root regarding the status of information as a raw material and its role in the world.


Herman Hollerith (1860-1929), the inventor of a machine that used punched cards to rapidly process millions of pieces of data and the founder of the Tabulating Machine Co., which later merged to become IBM. Source: Library of Congress.

The last decade has seen some fundamental truths take root, or at least become widespread in the community, regarding the status of information as a raw material and its role in the world. Namely:

  1. The amount of data that we are able to produce, transmit, and store is growing at an unprecedented rate.
  2. Given that it is increasingly cheap and easy to store data, it is worth storing it by default.
  3. This bulk information contains large pockets of valuable knowledge that can be extracted. But only if we can ‘read’ it of course, and this gets more difficult as volumes get larger. The more data we have, the more we are forced to develop new ways of interpreting it.
  4. The extreme ease with which organisations can now generate data is offset by the anxiety that comes from the sense that they are letting the value buried within it slip away. That they are unable to separate the wheat from the chaff and extract every last grain.

Science, business and government organisations have developed a substantial technological infrastructure to capture and store as much data as possible at each stage of their operations. They invest in new fields of knowledge – the new Data Science – and emergent professions ranging from ‘data scientists’ to data analysts and information visualisation experts. But the fact remains that most large organisations are constantly overwhelmed in their attempts to control the information they produce. As a US military official put it, “we are swimming in sensors and drowning in data.”

This situation is typical of the age of the Data Deluge.

The idea that we are living in the aftershock of an enormous information explosion is hardly new. The sense that the data we are able to produce keeps increasing until it reaches unmanageable levels appeared just as IT became a real industry, implemented in an ever increasing number of administrative processes. The first recorded use of the term ‘information explosion’ was in an academic journal in 1961, followed almost immediately by an IBM ad in the New York Times. Today’s slightly less dramatic but equally sensationalist version is the data ‘deluge’ or ‘tsunami’, terms that became common usage among the academic community and the financial press midway through last decade.

Probably the most technical of the many ways of expressing the vertigo produced by the data explosion is Kryder’s Law. While Moore’s Law has more or less correctly predicted the rate of increase in the processing power of computers – doubling every 18 months – Kryder’s Law describes the even faster growth in our capacity to store ever-larger amounts of digital data in a limited physical space.
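As a rough illustration of what such doubling laws imply (a sketch only: the 18-month figure is the one quoted above for Moore’s Law, while the 13-month period used for storage is simply an assumed shorter interval, chosen to show how quickly the gap opens up), a few lines of Python make the arithmetic concrete:

```python
# Exponential doubling laws, purely illustrative.
# Moore's Law is quoted above as a doubling every ~18 months; for comparison
# we assume a shorter, hypothetical 13-month doubling period for storage.

def growth_factor(years, doubling_months):
    """How many times larger a quantity becomes after `years`
    if it doubles every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

for years in (5, 10, 20):
    processing = growth_factor(years, 18)
    storage = growth_factor(years, 13)   # assumed period, not a measured one
    print(f"{years:>2} years: processing x{processing:,.0f}, storage x{storage:,.0f}")
```

Over twenty years, an 18-month doubling multiplies a quantity by roughly ten thousand, while the shorter period multiplies it by several hundred thousand – which is the vertigo described above.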

From our vantage point in 2014, a photograph of early storage hardware probably illustrates the size and nature of the deluge more effectively than any statistic. Fifty years ago, a hard drive was a huge piece of equipment the size of a small vehicle, with a memory that was able to store the equivalent of an MP3 song; a preposterously small part of the storage capacity of the mobile phone in your pocket or handbag.

The journey from paper tape and perforated cards – the first computer storage systems in the 1940s – to today’s USB flash drives and mini SD cards is another very eloquent expression of Kryder’s Law. This vertigo is best understood as the tension between a shrinking physical medium and a storage capacity that expands endlessly.

Now that our personal experience and life stories are being encoded in these magnetic and optical storage devices, their long-term preservation is an increasingly urgent matter. If our collective history is stored in the thousands of Data Centres scattered throughout the globe, these may come to play an archival rather than simply operational role. Even so, we have not come up with a strategy to ensure that this information remains accessible and useable in the future. Deals such as the one that allows the US Library of Congress to archive a copy of all the messages sent on Twitter seem to guarantee that some important data will be saved. But every time a startup dies, or a small service is taken over by an internet giant and allowed to wither away, the data that it generated and preserved can disappear forever, instantly, without trace.

The most important of the new archival institutions that have stepped in to fill this void is probably Brewster Kahle’s Internet Archive. Since 1996, the Internet Archive has systematically crawled and ‘photographed’ the Web, storing snapshots of what the Internet was like on a particular day in a particular year, defying its essentially unstable and changeable nature. This beautiful film by Jonathan Minard shows the physical infrastructure – housed in a former church – that makes the Internet Archive possible. A fire recently threatened this valuable heritage, although fortunately it only affected the area in which print books are scanned.




The Data Deluge does not show any signs of easing off, and Kryder’s Law is likely to continue to prevail. Perhaps the next step, the leap that will take us to another level, will be found in the medium that nature uses to store its data.

The European Bioinformatics Institute in Cambridge – Europe’s largest database of genome sequences – stores highly sensitive digital information that must be conserved and remain accessible for many decades. Hard drives, which require cooling and have to be replaced on a regular basis, are far from ideal for the task of storing the code of life. Hard drive technology is nowhere near as sophisticated as DNA itself.

This paradox did not go unnoticed by the zoologist and mathematician Nick Goldman, one of the ‘librarians of life’ who looks after database maintenance at the EBI: our storage media are fragile, take up space, and require maintenance, while DNA can store enormous amounts of data in a tiny space, for millions of years. In 2013, Goldman and his team announced that they had managed to transfer a modest 739 kilobytes of data to a DNA chain. Later, a computer managed to decode it and read its contents: Shakespeare’s 154 sonnets, an academic article, a photo of the researchers’ laboratory, 26 seconds of Martin Luther King’s most famous speech, and a software algorithm. It is still early days, but the EBI team has high hopes for the technique that they have developed: their long-term goal is to store the equivalent of a million CDs on a gram of DNA, for at least 10,000 years.
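To get a sense of the scale of that goal, a back-of-the-envelope calculation helps (the 700 MB figure is an assumed capacity for a standard CD; the 739 kilobytes is the demonstration mentioned above):

```python
# Back-of-the-envelope arithmetic for the EBI team's stated long-term goal:
# the equivalent of a million CDs stored on a single gram of DNA.
CD_MB = 700                       # assumed capacity of a standard CD, in MB
cds = 1_000_000

total_mb = CD_MB * cds
total_tb = total_mb / 1_000_000   # decimal units: 1 TB = 1,000,000 MB
print(f"~{total_tb:,.0f} TB per gram of DNA")          # roughly 700 TB

demo_mb = 739 / 1_000             # the 739 kB encoded in the 2013 experiment
print(f"The goal is ~{total_mb / demo_mb:,.0f} times the size of the 2013 demo")
```

In other words, the 2013 experiment encoded roughly a billionth of what the team hopes a single gram of DNA will eventually hold.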





The Power of Algorithms: How software formats culture

The only way to manage the Internet’s data is through automated processing using algorithms.


Portrait of Ada Lovelace, the first programmer in the history of computing. Author: William Henry Mote, 1838. Source: The Ada Picture Gallery.

The use of the Internet has spread further than computers and beyond the bounds of any specific discipline, and has come to permeate the texture of our reality and every aspect of our daily lives. The ways in which we relate to each other, obtain information, make decisions… that is, the ways in which we experience and learn about our surroundings, are increasingly mediated by the information systems that underlie the net. Massive amounts of information are generated by this constant interaction, and the only way to manage this data is through automated processing using algorithms. A humanist understanding of how this ‘algorithmic medium’ has evolved and how we interact with it is essential in order to ensure that citizens and institutions continue to play an active role in shaping our culture.

A young information and financial tycoon heads across Manhattan in a limousine in the first scene of David Cronenberg’s most recent film Cosmopolis, based on a novel by Don DeLillo. During the ride, Eric Packer monitors the flow of information that flashes up on screens as he leads us on a quest for a new perspective. His encounters with various characters and the sights and sounds of the city that enter through the windows offer us a glimpse into the inner workings, consequences, and gaps of what the film calls ‘cyber-capitalism’. The journey ends with a confrontation between the protagonist –whose fortune has been wiped out in the course of the day by erratic market behaviour that his algorithms were unable to predict– and his antithesis, a character who is unable to find his place in the system.

The interplay between technology and capital – the computerised processing of bulk data in order to predict and control market fluctuations – is one of the constants of capitalist speculation. In fact, 65% of Wall Street transactions are carried out by ‘algo trading’ software. In a global market where enormous amounts of data are recorded and a rapid response rate gives you an edge over the competition, algorithms play a key role in analysis and decision-making.

Similarly, algorithms have found their way into all the processes that make up our culture and our everyday lives. They are at the heart of the software we use to produce cultural objects, through programmes that are often freely available in the cloud. They also play a part in disseminating these objects through the net, and in the tools we use to search for them and retrieve them. And they are now essential for analysing and processing the bulk data generated by social media. This data is not only produced by the ever-increasing amount of information posted by users, but also by tracking their actions in a network that has become a participatory platform that grows and evolves through use.

An algorithm is a finite set of instructions applied to an input, in a finite number of steps, in order to obtain an output – a means by which to perform calculations and process data automatically.
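A minimal, textbook instance of that definition – deliberately far simpler than the platform algorithms discussed below – is Euclid’s method for finding the greatest common divisor of two numbers: a short table of instructions, a well-defined input, and a guaranteed output after a finite number of steps.

```python
def gcd(a, b):
    """Euclid's algorithm: a finite set of instructions that turns an input
    (two positive integers) into an output (their greatest common divisor)."""
    while b != 0:
        a, b = b, a % b   # each step shrinks b, so the process must terminate
    return a

print(gcd(1071, 462))     # -> 21
```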

The term ‘algorithm’ comes from the name of the 9th century Persian mathematician al-Khwarizmi, and originally referred to the set of rules used to perform arithmetic operations with Arabic numerals. The term gradually evolved to mean a set of procedures for solving problems or performing tasks. It was Charles Babbage who made the connection between algorithms and automation with his hypothesis that all the operations that play a part in an analysis could be performed by machines. The idea was that all processes could be broken down into simple operations, regardless of the problem being studied. Although Babbage designed his Difference Engine and Ada Lovelace created the first algorithm for his Analytical Engine, it was Alan Turing who put forward the definitive formalisation of the algorithm with his Universal Machine in 1937. Turing’s theoretical construct is a hypothetical device that manipulates symbols on a strip of tape according to a table of rules, and can be adapted to simulate the logic of any computer algorithm.

The advent of the Internet took this logical construct beyond the computer. The Internet protocol (1969) and the web (early 1990s) became a kind of universal container in which data could be stored, accessed and processed on any computer. These developments, along with the convergence that went hand in hand with the boom in personal computing in the eighties, meant that computation – numerical calculation – spread to all digitalised processes. Meanwhile, URLs allowed algorithms to interact and interconnect amongst themselves, eventually producing what Pierre Lévy calls the ‘algorithmic medium’: the increasingly complex framework for the automatic manipulation of symbols, which would become the medium in which human networks collaboratively create and modify our common memory.

Turing Machine. Source: Wikimedia Commons.
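Turing’s ‘table of rules’ can even be simulated in a few lines of code. The toy machine below (an illustrative sketch, not any particular historical formulation) reads a binary string from its tape, inverts every bit, and halts:

```python
# A toy Turing machine: a tape of symbols, a read/write head, a state,
# and a table of rules mapping (state, symbol) -> (write, move, next state).

def run(tape, rules, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))            # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Rules for a machine that flips each bit and halts at the first blank cell.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("1101", rules))   # -> "0010_"
```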

Algorithms play a part in all our everyday interactions with the social web. With 699 million users connecting each day, popular social networking site Facebook is working on the problem of how to display the updates of the many friends, groups and interests that its users can follow. Its answer is the algorithm known as EdgeRank, which processes data about our interests – our ‘likes’ –, the number of friends we have in common with the person posting a news item, and the comments posted on it, in order to prioritise what we see on our news feed and hide the ‘boring’ stories. The algorithm also tracks the graph of our contacts in order to suggest new friends.
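EdgeRank’s exact formula has never been published, but it is usually described as combining three ingredients: our affinity with the person posting, a weight for the type of interaction, and a decay that pushes older stories down. A minimal sketch of that kind of ranking – with invented weights and field names, emphatically not Facebook’s actual code – looks something like this:

```python
import math

# Illustrative EdgeRank-style score: affinity x interaction weight x time decay.
# The real algorithm is proprietary; every number here is an assumption.
TYPE_WEIGHT = {"comment": 4.0, "share": 6.0, "like": 1.0, "post": 2.0}

def edge_score(affinity, interaction_type, age_hours, decay=0.05):
    """Score one 'edge': a story connecting a user to a friend's activity."""
    return affinity * TYPE_WEIGHT[interaction_type] * math.exp(-decay * age_hours)

stories = [
    {"id": "a", "affinity": 0.9, "type": "comment", "age_hours": 5},
    {"id": "b", "affinity": 0.2, "type": "share",   "age_hours": 1},
    {"id": "c", "affinity": 0.6, "type": "like",    "age_hours": 30},
]
feed = sorted(stories, reverse=True,
              key=lambda s: edge_score(s["affinity"], s["type"], s["age_hours"]))
print([s["id"] for s in feed])   # higher-scoring stories appear first in the feed
```

Whatever the real weights are, the structural point stands: what we see first is the output of a scoring function, not a chronological record.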

Twitter similarly uses algorithms to suggest new accounts to follow, to create the content of the Discover tab, and to update its trending topics. In this case the complex algorithm doesn’t just work out which word is tweeted most often; it also calculates whether the use of a particular term is on the rise, whether it has been a trending topic before, and whether it is used among different networks of users or just one densely connected cluster. It does this by monitoring the ‘hashtags’ that interconnect all the tweets they appear in, which were introduced by Twitter in 2007 and have since spread throughout social media sites. It also uses its URL shortening service, t.co (Facebook has an equivalent of its own), which is applied every time we use a social button to share a URL. These shortened links do not just minimise the number of characters in a post, they also transform links into data-rich structures that can be tracked in order to find out how they are shared on the platform and to build up profiles of their users.
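The trending-topics machinery is likewise proprietary, but the behaviour described above – favouring terms whose use is suddenly rising rather than terms that are merely frequent – can be sketched by comparing a term’s share of recent tweets with its share of a historical baseline (an illustrative toy, ignoring the network-of-users dimension entirely):

```python
from collections import Counter

def trending(recent_tweets, baseline_tweets, min_count=5, top=3):
    """Toy trend detection: a term 'trends' when its share of recent tweets is
    much higher than its share of the historical baseline. Illustrative only."""
    recent = Counter(w for t in recent_tweets for w in t.lower().split())
    base = Counter(w for t in baseline_tweets for w in t.lower().split())
    n_recent, n_base = sum(recent.values()), sum(base.values())
    scores = {}
    for term, count in recent.items():
        if count < min_count:
            continue                                  # ignore rare terms
        recent_rate = count / n_recent
        base_rate = (base[term] + 1) / (n_base + 1)   # smoothing for unseen terms
        scores[term] = recent_rate / base_rate
    return sorted(scores, key=scores.get, reverse=True)[:top]
```

A word that is always popular scores low, because its baseline share is already high; a word that suddenly appears everywhere scores high – which is roughly the distinction the paragraph above describes.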

As well as social networks, the social web also includes all kinds of platforms that allow us to create and share information, including online publishing services such as blogs, recommendation systems like Digg and Reddit, and search engines. All of these platforms rely on algorithms that work with specific criteria. The search engine Google, for example, which has to work in a medium consisting of more than 60 trillion pages, in which more than 2 million searches are carried out every minute, is based on the premise that “you want the answer, not trillions of webpages.” In this scenario keyword indexing is not enough, so Google’s PageRank algorithm imitates user behaviour by monitoring the links to and from every page, and then ranks the pages, displaying the most relevant results first. The algorithm also works in conjunction with others that process our search history, our language, and our physical location in order to customise the results.
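The basic idea behind PageRank – a page is important if important pages link to it – is public and easy to reproduce in miniature. The toy implementation below uses the standard power-iteration formulation on a four-page ‘web’ (a didactic sketch, nothing like the scale or refinements of the production system):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal PageRank by power iteration.
    `links` maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                       # dangling page: spread evenly
                for p in pages:
                    new[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))    # page "c" ends up with the highest rank
```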

Algorithms also process data generated by our online actions to suggest which books we should buy on Amazon and which videos we should watch on YouTube, and to determine the advertisements we will be shown on all these platforms. Aside from these algorithms that we regularly interact with, there are others such as Eigentaste, a collaborative filtering algorithm for rapid computation of recommendations developed at UC Berkeley; an algorithm recently developed at Cornell and Carnegie Mellon Universities that reconstructs our life histories by analysing our Twitter stream; and the algorithm developed at Imperial College London to reduce Twitter spam by detecting accounts that are run by bots instead of humans. The growing presence of algorithms in our culture is reflected in the #algopop tumblr, which studies the appearance of algorithms in popular culture and everyday life.
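Eigentaste itself uses a specific statistical technique, but the family it belongs to – collaborative filtering – rests on a simple intuition: recommend to me what people whose tastes resemble mine have liked. A bare-bones, user-based version (a sketch with made-up ratings, not the Berkeley algorithm) can be written in a few lines:

```python
import math

def cosine(u, v):
    """Similarity between two users, based on the items both have rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norms = math.sqrt(sum(x * x for x in u.values())) * \
            math.sqrt(sum(x * x for x in v.values()))
    return dot / norms

def recommend(target, ratings, top=2):
    """Suggest items the target hasn't rated, weighted by user similarity."""
    scores = {}
    for other, their_ratings in ratings.items():
        if other == target:
            continue
        sim = cosine(ratings[target], their_ratings)
        for item, rating in their_ratings.items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top]

ratings = {
    "ana":   {"book1": 5, "book2": 3, "book3": 4},
    "ben":   {"book1": 4, "book3": 5, "book4": 2},
    "clara": {"book2": 5, "book4": 4, "book5": 5},
}
print(recommend("ana", ratings))   # items ana might like, based on ben and clara
```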

The examples mentioned here illustrate how information on the Internet is accessed and indexed automatically, based on data drawn from our online behaviour. Our actions generate a flow of messages that modify the inextricable mass of interconnected data, subtly changing our shared memory. Communication in the ‘algorithmic medium’ is therefore ‘stigmergic’: individuals alter the medium itself when they communicate through it. Every link that we create or share, every time we tag something, every time we like, search, buy or retweet, this information is recorded in a data structure and then processed and used to make suggestions or to inform other users. As such, algorithms help us to navigate the enormous accumulation of information on the net, taking information generated individually and processing it so that it can be consumed communally. But when algorithms manage information, they also reconstruct relationships and connections, they encourage preferences and produce encounters, and they end up shaping our contexts and our identities. Online platforms thus become automated socio-technical environments.

Charles Babbage’s Analytical Engine on display at the King George III Museum, 1844. Source: Wikimedia Commons.

The use of automation in our culture has epistemological, political and social consequences that have to be taken into account. For example, the continuous monitoring of our actions transforms our existing notions of privacy; algorithms make us participate in processes that we are not conscious of; and although they increase our access to information – such enormous amounts of it that it is no longer humanly discernible – and boost our agency and our capacity to choose, they are by no means neutral and they can also be used for the purposes of control.

Most users see the net as a broadcast medium, like traditional media, and are not aware of how information is filtered and processed by the medium. Not only are the effects of algorithms imperceptible, and often unknown because they are in the hands of commercial agencies and protected by property laws, they have also become inscrutable, because of the interrelation between complex software systems and their constant updates.

Furthermore, algorithms are not just used for data analysis, they also play a part in the decision-making process. This raises the question of whether it is justifiable to accept decisions made automatically by algorithms that do not work transparently and cannot be subject to public debate. How can we debate the neutrality of processes that are independent of the data they are applied to? Also, when algorithms analyse the data compiled from our earlier actions they are strongly dependent on the past, and this may tend to maintain existing structures and limit social mobility, hindering connections outside of existing clusters of interests and contacts.

Given that these algorithms influence the flow of information through the public sphere, we need to come up with metaphors that make these processes understandable beyond the realm of computer experts. We need to make them understandable to people in general, so that everybody can participate in discussions about what problems can be solved algorithmically and how to approach these problems. Encouraging participation is a way to ensure the ecological diversity of the medium and its connection to pragmatics.

Software studies pioneer Matthew Fuller points out that even though algorithms are the internal framework of the medium in which most intellectual work now takes place, they are rarely studied from a humanistic or critical point of view, and are generally left to technicians. In his book Behind the Blip: Essays on the Culture of Software, Fuller suggests some possible critical approaches, such as: running information systems that really reveal their functioning, structure and conditions; preserving the poetics of connection that is inherent to social software or promoting use that always exceeds the technical capacities of the system; and encouraging improbable connections that enrich the medium with new potential and broader visions that allow room for invention.

Some initiatives along these lines are already occurring in the ‘algorithmic medium’, actively contributing to the use of computing by non-experts and allowing user communities to influence its course. They include data journalism, which creates narratives based on data mining; free software, developed in collaboration with its users; crowdsourcing initiatives based on data that is obtained consciously and collaboratively by users; and the rise in the communal creation of MOOCs (massive open online courses).

On another front, cultural institutions also need to develop a presence in the virtual medium. By allowing online access to their archives, data, know-how and methodology, projects, and collaborators, they can promote new interests and connections, taking advantage of ‘stigmergy’ to boost the diversity and poetics of the medium. Similarly, workshops such as those organised as part of the CCCB’s Internet Universe project help to promote a broader understanding and awareness of this medium, and encourage greater and more effective participation.

The capacity and scope of the algorithmic environment are now being strengthened through the use of artificial intelligence technology, as illustrated by Google’s ‘Hummingbird’ semantic algorithm – which is based on natural language processing – and Mark Zuckerberg’s mission to ‘understand the world’ by analysing the language of posts shared on Facebook. It is important to encourage critical, public debate about the role of these mechanisms in shaping our culture if we want to ensure the continuing diversity and accessibility of the net.





How Polluting is the Internet?

The fact that the internet footprint is lower than that of traditional activities does not mean it is totally innocuous.


Manchester from Kersal Moor, William Wylde (1857). Source: Wikipedia.

There’s a big difference between sending a million emails and sending a million letters. Choices like buying an e-book instead of a print book, watching a film online rather than a DVD, or organising a work meeting on Skype rather than flying from Barcelona to London to meet face-to-face, all reduce environmental impact. There is no doubt that the internet makes many everyday transactions more efficient, but the fact that its footprint is lower than that of traditional activities does not mean it is totally innocuous. First, the inconvenient news: the internet consumes huge amounts of electricity, although it is difficult to measure exactly how much. The following questions and answers are intended to clear up some of the doubts around this issue.

When we talk about pollution and the internet, what are we measuring?

Jon Koomey, an expert in the environmental effects of information technology, says that the key elements we should measure are what he calls “the big three”:

Source: Jon Koomey.

  1. End-user equipment (computers, tablets, laptops, routers)
  2. Data centres (that store and host web pages)
  3. Access networks (the wiring and antennas that carry data)



OK, so what is the environmental impact of the internet?

The first thing that emerges after surveying various sources is that nobody knows for sure. In 2010, The Guardian came up with the figure of 300 million tonnes of CO2 per year, “as much as all the coal, oil and gas burned in Turkey or Poland in one year.” A controversial article entitled “Power, Pollution and the Internet” in The New York Times put the figure at 30 billion watts of electricity in 2011, “roughly equivalent to the output of 30 nuclear power plants”. And according to Gartner consultants, the internet was responsible for 2% of global emissions in 2007, outstripping the carbon footprint of the aviation industry. A more recent study by the Melbourne-based research centre CEET in 2013 estimated that the telecommunications industry as a whole emits 830 million tonnes of carbon dioxide a year, and that the energy demands of the internet could double by 2020.

The CEET reports that the internet now accounts for 1.5% to 2% of the world’s energy consumption, which means that if the internet were a country it would rank as the fifth largest for energy consumption. Jon Koomey estimates that the direct electricity use of all the elements that make up the internet is probably around 10% of total electricity consumption, but he emphasises that it’s very difficult to calculate exact figures. “You can use a computer to play videogames or write a text and not be online, and this energy use is often counted as part of the internet even though it isn’t actually the case.”

Source: Jon Koomey.

If the internet is virtual, what causes its environmental impact?

Unlike a car that emits fumes through its exhaust pipe, it is difficult to visualise the environmental impact of the net. Its carbon footprint is mainly a result of the power required to keep its infrastructure running. Mobile phone antennas, the devices we use to access the internet, and data centres all require enormous amounts of electricity. This electricity can come from renewable sources, but it often does not. For example, in its report “How Clean is Your Cloud?” Greenpeace found that 70 per cent of the 400,000 mobile phone antennas in India don’t have access to reliable electricity sources, and diesel-powered generators are used to make up for the inadequate power supply. The big data centres in Western countries also rely on back-up diesel generators that kick into action in the event of power cuts.

Why do data centres consume so much electricity?

Data centres are the second most power-hungry elements of the internet, after end-user devices. To get an idea of the energy needs of data centres, Facebook is building one in Prineville, Oregon that will consume around 78 megawatts of electricity, equivalent to the consumption of around 64,000 homes.

Prineville Data Centre. Source: Facebook.

Even so, many experts believe that data centres are the most efficient way of keeping the internet running, given that centralising servers in a single location makes it possible to harness synergies and minimise electricity use. “One of the advantages of the cloud is that it allows a much greater concentration of computing and processing power with fewer servers, which always translates into energy savings. But the only way to reduce pollution is to use renewable energy sources, as well as improving energy efficiency,” explains David Carrero Fernández-Balillo, systems expert and Country Manager of Stackscale Spain.

Facebook servers. Source: Facebook.

If data centres are more efficient, where does the environmental problem come from?

The problem comes from the energy sources that keep the data centres running. At present, most data centres work with energy companies that rely on coal or nuclear power stations to generate electricity.

The report “How Clean is Your Cloud?” found that 55.1% of the electricity used by Apple servers is generated by coal plants, as is 49.7% of the energy used by IBM servers, and 39.4% in the case of Facebook servers.  These considerable figures entail thousands of tonnes of carbon dioxide emitted into the atmosphere and the dirty air that goes with it.

Are the big corporations doing anything about it?

Quite a number of steps are being taken. In March 2013, Apple announced that all of its server centres now work with renewable energy, even though this announcement is a bit misleading as part of this target is being achieved through the purchase of carbon credits to offset the nuclear and coal-fuelled electricity used by some of its centres. The company has also built two solar energy plants in Maiden (North Carolina) to meet the needs of its server plants in the area. In addition, it is building a data centre in Prineville, Oregon, that will run on a mix of solar, hydroelectric, wind and geothermic energy.

Meanwhile, Facebook is building a data centre in Iowa that will run entirely on wind power and is expected to open in 2015. The company has set itself the target of powering its data centres with 25% renewable energy by 2015. In 2011, Facebook launched the Open Compute Project, an initiative through which it shares its data centre management information and know-how in order to encourage industry best practices and help others to maximise their efficiency.

Google has also launched several initiatives designed to boost its environmental transparency. In response to growing media interest and to lobbying by environmental organisations like Greenpeace, the company decided to invite a journalist from Wired to visit one of its centres. Other transparency measures include the creation of a section on its website dedicated to its data centres and a commitment to achieving carbon neutral data centres.

“Google’s servers refresh 20 billion pages a day, process over 100 billion search queries a month, provide email for 425 million Gmail users and process 72 hours of video uploaded per minute to YouTube. And yet we’re able to do all that work with relatively little energy, compared to other industries. Data centres are responsible for between 1.1 and 1.5 percent of global energy use (compare that to transportation at 25 percent), and Google’s data centres are less than a percent of that. It’s a testament to the almost unimaginable improvements in computing power per watt that Moore’s Law has brought us over the past decades. Searching virtually all the world’s online information for a billion users with just 0.01 percent of global energy use illustrates how much less energy it takes to move electrons (information) than atoms (physical things),” writes Urs Hölzle, senior vice president for technical infrastructure at Google, as part of a debate on the New York Times website.

All of these corporations have something in common: they invest millions of dollars in building their own infrastructure in order to minimise electricity consumption and reduce costs. The problem, Koomey says, is that many other non-technology companies also have their own infrastructure, and prefer to prioritise operational issues over efficiency. “Just adopting good practices will save at least 50% of data centre energy use, but many companies are failing to do so,” he says.

Why is Greenpeace so interested in this issue?

Greenpeace has focused on drawing attention to the many data centres that are still powered by dirty energy sources such as coal and nuclear plants. The environmental organisation’s lobbying has sparked debate between society and the companies that make up this industry.

Gary Cook, IT analyst at Greenpeace, believes that the problem lies in digital sector priorities. The industry is at a crossroads and has to choose between following in the footsteps of the polluting companies of the past, or being trailblazers of sustainability.

“Unfortunately, as Greenpeace highlighted in a report this year, ‘How Clean is Your Cloud?’ most Internet companies are choosing the quick and dirty path, expanding rapidly without considering how their energy choices impact society. As a result, they are powering the 21st-century data centres that are the engine of the Internet economy with 19th- and 20th-century coal and nuclear power,” said Cook in the debate on the New York Times website.

By way of example, he said that all the big internet corporations in the United States work with Duke Energy in North Carolina, which is not expected to generate even 4% of its electricity from renewable sources until 2030. Greenpeace acknowledges the commitments of companies like Facebook and Google, but says there is still a long way to go.

So what’s the deal? Wasn’t the internet supposed to be much more environmentally friendly than the analogue world it was replacing?

It is, and it will continue to be so. From 2000 to 2006, total internet traffic rose by 32,000,000%, while total energy consumption only increased by 200% during the same period, says Jon Koomey. “An impressive achievement.”

“And we shouldn’t forget that IT improves the efficiency of non-digital industries, which account for 90% of the world’s electricity use,” adds the Stanford scientist.

Source: Jon Koomey.

What can I do to reduce the carbon footprint of my internet use?

In this immense digital infrastructure, where an individual counts for little or nothing in the overall scheme of things, one of the best ways to improve a website’s energy efficiency is by optimising its programming and design. A page with clean code and a balanced design will load more quickly than a site full of banners, pop-ups, large photos and external programmes that slow it down.

“At 1.4 MB, today’s average page is 15 times larger than it was 10 years ago, primarily due to images (881kB) and script (225kB) (…) This average page also makes more than 100 HTTP requests,” explains James Christie in a study of sustainable design published in A List Apart.

Every visit to a website has a carbon footprint. Harvard University physicist Alex Wissner-Gross calculates that viewing a web page generates about 0.02g of CO2 per second, which goes up to about 0.03 g when viewing a website with complex images, animations or videos. “So when you are sitting in London viewing a website hosted in California, there are power plants on at least two continents actively pumping carbon dioxide into the atmosphere in order for you to watch that video or read that online newspaper…” he says. There are also simple steps you can take, as David Carrero explains, like “ensuring that the equipment we buy is appropriate to our needs but does not exceed them, in order to optimise energy use, and always switching off devices once we’re done, even the ADSL router at night.”
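Taking the per-second figures quoted above at face value, the footprint of a single site adds up quickly. The traffic volume and dwell time in the sketch below are invented for illustration; only the 0.02 and 0.03 grams per second come from the estimate cited above:

```python
# Rough carbon estimate for a website, using the per-second figures quoted above.
# Visits per day and seconds per visit are illustrative assumptions.

def daily_co2_kg(visits_per_day, seconds_per_visit, grams_per_second):
    return visits_per_day * seconds_per_visit * grams_per_second / 1000

simple = daily_co2_kg(100_000, 30, 0.02)   # 100,000 visits of 30 s, simple pages
heavy = daily_co2_kg(100_000, 30, 0.03)    # same traffic, media-heavy pages
print(f"simple pages: ~{simple:.0f} kg CO2/day; heavy pages: ~{heavy:.0f} kg CO2/day")
```

Under those assumptions, trimming a page from ‘heavy’ to ‘simple’ saves on the order of 30 kg of CO2 a day for a moderately busy site – which is why designers like Christie treat page weight as an environmental issue.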

How much energy can I save by optimising my website? The case of Meneame.net…

With an average of 350,000 to 400,000 visits a day and 40 million page views a month, Menéame is one of the Iberian Peninsula’s most visited sites. “We’ve measured our electricity use very carefully right from the start, for two reasons. Firstly because we needed to: the less you consume, the lower your electricity costs. Secondly, that’s our style. Ricardo Galli has a PhD in Computing, and one of his obsessions is energy-saving systems for loading pages,” says Benjamí Villoslada, who co-founded the social news aggregator with Galli in 2005.

They put so much effort into energy efficiency that Villoslada believes a badly designed website would consume four times as much as Menéame in order to perform the same tasks. “You save money and society as a whole saves time and money. Everybody wins by making things simpler.” The steps they take to boost efficiency include constantly optimising the website database, keeping the code that runs the page clean and orderly, and making sure that there are no unessential elements in it. “We use software to prevent bots. When we detect that an IP is making more than 40 requests per second, we block it.”
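The 40-requests-per-second rule that Villoslada mentions is a classic rate limit, and the idea is easy to sketch (a minimal sliding-window version for illustration, not Menéame’s actual code):

```python
import time
from collections import defaultdict, deque

REQUESTS_PER_SECOND = 40          # the threshold Villoslada mentions
_recent = defaultdict(deque)      # ip -> timestamps of its recent requests

def allow(ip, now=None):
    """Sliding one-second window: refuse a request if this IP has already made
    40 or more requests in the last second. A sketch, not Menéame's code."""
    now = time.monotonic() if now is None else now
    window = _recent[ip]
    while window and now - window[0] > 1.0:   # forget requests older than 1 s
        window.popleft()
    if len(window) >= REQUESTS_PER_SECOND:
        return False                          # over the limit: treat as a bot
    window.append(now)
    return True
```

Blocking bots in this way is as much an energy measure as a security one: every refused request is database work, page rendering and bandwidth that never has to happen.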

Menéame runs on Amazon Web Services servers in Ireland. “It lets us pay for the actual consumption we need at any given moment. You need to have solid knowledge to work with them, but the set-up is good. There may be some more environmentally friendly places around, but if they are badly designed you can end up using more resources than you do with an efficient site like Amazon,” adds Villoslada.

Do any data centres run on 100% renewable energy?

Iceland is positioning itself as the location of choice for companies interested in minimising the environmental impact of their data centres. Its volcanic landscape means it has plentiful, reliable renewable energy, and 72% of the electricity generated in the country comes from hydroelectric and geothermic sources.

The company Verne Global was founded in 2007 and runs two data centres in the country, with clients that include companies such as BMW, Colt and Securitas. As well as clean energy, on average Iceland generates electricity at a quarter of the cost of other industrialised countries.

In Sweden, renewable sources account for 60% of the energy generated, a fact that Facebook took into account when it decided to set up a data centre in Lulea, a town some 100 kilometres from the Arctic Circle. It is a “colossus that helps Facebook process 350 million photographs, 4.5 billion “likes” and 10 billion messages a day,” according to Bloomberg Businessweek.

In this remote town, Facebook has hydroelectric power near at hand, and its premises have also been designed to take advantage of the freezing Lapland winter air to cool its servers without the need for air conditioning systems.

Luleå FB Data Center – Cooling Fans. Source: Facebook.

What about the data centres in Spain and Catalonia?

“In Spain, over 1,800 companies are registered as ‘data processing and computing centres’, but most of them do not have their own data centres. And vice versa: many companies that own data centres are not included in this group. We do, however, have a reasonably comprehensive list of Spanish companies that offer their data centre facilities to third parties,” says Jaime Fernández Gómez, Head of Infrastructure at Acens.

“Datacenter Dynamics estimates that data centres around the world were consuming over 40 gigawatts in late 2013, a figure that is expected to go up to 50 gigawatts by late 2016. An average nuclear power station produces around 1 gigawatt. If we look at the percentage of the world’s data centres that are located in Spain, we can estimate that between 1% and 2% of this energy is consumed in data centres here,” he adds.

In January 2013, IBM announced that it would soon open a cloud centre in Cerdanyola del Vallès, in the province of Barcelona. Its first client is the Generalitat de Catalunya, which is centralising its Health, Welfare and Family data at the centre.

So should we be worried?

The internet’s energy consumption is a problem that it shares with the rest of humanity: we have not yet found a system that allows us to meet our needs without relying at least partly on fossil fuels. When we do, the internet will be a key element.

As citizens, understanding that the internet is neither infinite nor invisible is a good place to start. It isn’t enough to demand that companies use cleaner energy sources to keep their infrastructure running. Everybody can contribute by creating more lightweight website designs, not leaving their devices on standby, and buying equipment that meets but does not exceed their needs. “Computer efficiency has steadily doubled every year” over the past 60 years, says Koomey, a pattern that shows that the internet is part of the solution.





The Weight of the Cloud

In recent years a series of data centres have abandoned anonymity to become new monuments of the information age.


Inflation of a gas balloon in the gas plant at La Villette. Source: Wikimedia Commons.

The metaphor of the cloud has operated as an often overly opaque veil that separates us from the physicality of the net. In fact, one of the best-kept secrets of the Internet has been the massive infrastructures required to keep it running smoothly. But in recent years a series of data centres have abandoned anonymity to become new monuments of the information age.

Quincy, Washington

Quincy, Washington, is an agricultural city in the northwestern United States with a population of seven thousand, two supermarkets and two hardware stores. The New York Times described it as “a farming community in the middle of a desert”, and until recently its main economic activity was potato and bean harvests. But it would be no exaggeration to say that in the last few years Quincy has become one of the global capitals of the Internet. Not a capital of digital entrepreneurship, of course: Quincy has no startups and probably no coworking spaces or cafes full of people with MacBooks either. Quincy is the opposite of the flagship cities of cognitive capitalism, but online life as we know it would be unfeasible without places like this. Quincy is quite literally the Net.

Huge anonymous buildings, sometimes the size of several football fields, have been springing up in the farmland surrounding the town. Companies like Yahoo!, Dell and Microsoft have picked the area as the site for several of their data centres –the big industrial facilities that are used to store and distribute all the information contained in that simultaneously personal and communal space that we now call ‘the cloud’.

The cloud: the generic name for all the services that safeguard our photos, work documents, and e-mail messages, the space that stores viral videos and hit songs, movie blockbusters and the digital maps that we rely on to find our way around cities everywhere, the space that we interact with dozens of times a day through our smartphones, tablets or PCs. ‘The cloud’ may be one of the most misleading metaphors ever coined by marketing executives, because what lies behind it is neither lightweight nor intangible. Our insatiable thirst for data has produced an enormous heavy industry that, in many respects, is not unlike the factories of the mechanical age.

Aerial View of Quincy, WA Data Centers, including the planned Sabey Data Center. Source: PRLOG.

Quincy is one of the key ports in the routes of global information traffic. It is joined by other similarly anonymous towns such as The Dalles, Oregon; Ashburn, Virginia; and Lenoir, North Carolina, in the United States, and Saint-Ghislain, Belgium, beyond it. These landmarks in the emerging geography of data are not random. In spite of Tom Friedman’s bestseller, the world is not flat, and nor is the Internet. Financial factors –land availability and tax incentives– come into play when deciding where to build them, but most importantly, data centres require direct access to infrastructures that can offer large amounts of electricity at a low cost, as well as a cold, dry climate that makes it easy to control the temperature inside the buildings, where thousands of whirring hard drives must be kept cool. Maintaining our relentless pace of information production and consumption, and ensuring that this data is available at all times, does not come cheap.

The electricity consumed by the vast industrial infrastructure that makes up the cloud is no trivial matter. Estimates of the exact amount of energy required to keep the many data centres around the world running vary, but they account for at least 1.3% of worldwide electricity use. The industry has embarked on an ongoing race to improve the efficiency of its systems, and to use renewable energy sources –nobody is keener to reduce the power bill than companies like Google, Amazon and Facebook themselves. But at the same time, the number of new facilities around the world continues to grow. And these electricity consumption figures only refer to the power used to keep the servers running. They do not include the electricity that we use to power our routers and charge our phones and computers, or the energy required to manufacture processors, hard drives, and multi-touch screens.

It becomes harder to hold onto the illusion that communication technologies operate in the virtual realm when we begin to grasp just how far they are from being a truly clean industry. The pollution emitted by data centres –particularly by the back-up diesel generators that come on in the event of a power outage– is increasingly turning up on lists of infringements of environmental regulations due to air pollutant emissions. For example, Amazon was cited with more than 24 air quality violations in the period from 2009 to 2011.




The reason that data centres are sometimes forced to burn diesel has to do with the sacrosanct concept of ‘uptime’, one of the cornerstones of the industry, which refers to the percentage of time that a system is guaranteed to be available. Giants like Google and Facebook aspire to an uptime of 100%, and demand that it does not drop below 99.9%. In order to guarantee this level of reliability, the response capacity must have a sufficient margin to keep the service available even during peaks when demand is much higher than average. It takes many pieces working together correctly to ensure that we can upload that image to Instagram right now and not a second later, or so that a new Miley Cyrus or Lady Gaga video can go viral on social media.
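It is worth translating those uptime percentages into time, because the gap between 99.9% and 100% is what all that redundant capacity (and the diesel behind it) is buying. A quick calculation:

```python
# What an uptime guarantee means in allowed downtime per year.
HOURS_PER_YEAR = 24 * 365

for uptime in (0.999, 0.9999, 0.99999):
    downtime_minutes = HOURS_PER_YEAR * (1 - uptime) * 60
    print(f"{uptime:.3%} uptime -> about {downtime_minutes:.0f} minutes of downtime a year")
```

At 99.9%, a service may be unreachable for almost nine hours a year; each extra nine cuts that budget by a factor of ten, and each one is paid for with more generators, more batteries and more spare servers.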

Part-secret, part-monument

One of the reasons why it has been easy to maintain the illusion of the intangibility of the cloud for so long is the traditional secrecy of the data centre industry, which has carried out its work discreetly and remained in the background. While the seductive interactive potential of Web 2.0, social networks and the mobile internet has been on everybody’s lips over the last ten years, the technological architecture that makes them possible has developed in the shadows, without any desire to draw attention to itself. For many years, the big technology corporations have jealously guarded the number and location of their data centres. And this desire to remain invisible, which is usually based on the need to maintain strict security measures in the facilities and to prevent industrial espionage, is expressed in the design of their buildings: massive boxes lacking architectural personality of any kind, closed in on themselves, without identifying features or corporate logos that allow passers-by to associate the container with the contents within it. Data centres may be the definitive non-places, because they are barely even buildings in the usual sense –up to 85 per cent of the construction costs of a data centre goes towards its mechanical and electrical systems. And these are enveloped in as little architecture as possible.

Even though the internet sector has avoided talking about the industrial spaces that lurk behind its interfaces, it is still odd that there has been almost nothing architecturally remarkable about any of them. There are exceptions of course, and the most unusual of them in recent years has been Pionen, a data centre located in an old nuclear bunker in central Stockholm that began operating in 2008. Owned by the ISP Bahnhof, it was designed to resemble a set from a 1970s James Bond film, with waterfalls, greenhouses, meeting rooms suspended in space, and even German submarine engines as decoration. In 2010, Pionen hosted the WikiLeaks servers and all their secrets.

But in the last two years industry attitudes have started to show signs of change, and some experts are predicting the end of the age of anonymity. Aware of the curiosity they arouse, as the media and infrastructure tourists take an interest in their anonymous shapes, Silicon Valley giants know that these facilities are destined to become the architectural symbols of a new power; the castles of the information age.

On 6 June 2011, in his last public appearance just four months before his death, Steve Jobs showed some photos of the data centre that the company had built in Maiden, North Carolina, specifically for the launch of iCloud, the service that would store the documents of Apple users. And in December 2012, Google took the unexpected step of showing the interior of several of its data centres around the world through a series of images by the photographer Connie Zhou.

Google Data Center.

The new data centres seem destined to accept that their symbolic nature makes them more than just run-of-the-mill industrial infrastructure, as architects transform them into icons –into the physical embodiment of the Net. The windowless façade of the Telehouse West installation in East London, for example, is made up of rectangles that give it the pixelated effect of low-resolution computer images. Other data centres hide in full view, right in the centre of cities, cannibalising buildings that had originally been designed for different purposes. In Manhattan, the New York Telephone Company building, a grey, 32-storey skyscraper at the foot of the Brooklyn Bridge that can be seen from many parts of the city, is being gutted so that it can be turned into another container of servers and cooling systems.

While there is no end in sight to our growing data storage needs, the data centre industry continues to spread throughout the planet, beyond city centres and farmland. Facebook operates a data centre in Sweden, for example, located in the town of Lulea, just 100 km south of the Arctic Circle. And in 2008 Google filed a patent for a possible ‘floating data centre’ prototype that operates autonomously on the high seas, fuelled by the power of the wind and the movement of the waves. It is hard to tell whether this vision will end up becoming part of our infrastructural landscape, but we do know that the number of people required to keep each of these memory warehouses running –each one is currently staffed by 25 to 40 employees– is likely to keep getting smaller. The data centre of the future is an enormous dark warehouse where the light never comes on. The robots that look after their maintenance don’t need it.





The Technorevolution in the Classroom

Technology must play an important role in a new educational model, so as to stimulate learning and transform it into a more enriching and participatory process.


Mechanical calculating machine by George B. Grant Co. Illustration published in Scientific American, May 1877. Source: Wikipedia.

Education experts warn that we have an anachronistic educational system, created 300 years ago to meet the needs of a period that revolved around the industrial production of objects. They advocate the need for a new educational approach based on the generation of ideas, creativity and emotions, with the capacity to prepare children for the challenges of a world in which unlimited access to information has changed the way we communicate, process information, and think. Technology must play an important role in this new model, so as to stimulate learning and transform it into a more enriching, participatory and engaging process.

The idea of using the Internet in his classes had not crossed Jon Beasley-Murray’s mind until that moment. As a lecturer in Latin American literature at the University of British Columbia in Canada, he had not thought of Wikipedia as anything other than a poor source of documentation for his students. But in 2008, after being struck by his students’ passion for the Net, he decided to try a different strategy and carry out an experiment.

He asked his students to create entries on a series of South American novelists whose online presence was slight or non-existent. The challenge was to create high-quality content, because if their articles were given ‘good article’ status by Wikipedia they would automatically get an A for the course, and if an article was ‘featured’ – at that time there were only 2000 featured articles in English– they would get an A+.

The students split up into teams, started to gather information, and acquired all the traditional research skills (reading, documentation, referencing, writing and critical learning) in the process. Three of the teams wrote pieces that were classified as ‘good articles’ by Wikipedia, and one –the entry on the novel El Señor Presidente– was featured. At the end of the academic year, the students were thrilled. They had not only completed an interesting project, they had also made it ‘real’: through their work they had contributed to global knowledge, and their efforts had been recognised by thousands of Internet users.

This Canadian teacher’s initiative is an example of the new wave of educators who believe that new technologies, used wisely, can stimulate learning and transform it into a more enriching and participatory process. It is an approach that has been clear for a long time now in countries such as Finland, for example, which has the world’s best education system according to the latest PISA report. Students in this Scandinavian country are taught how to think, rather than memorise, and this entails studying subjects that students are interested in as well as using educational tools that motivate them, namely new technologies.

Students in Finnish classrooms watch videos on YouTube, use Wikipedia to research their projects, read digital comics, listen to music, and create playlists for assignments that can include designing a soundtrack to express the feelings of Silvius, the heartsick character in Shakespeare’s As You Like It, for example. The idea is to find a hook that engages the kids, and then actively involve them in the learning process.

Programmed device for the study of mathematics, part of the MatiTec program at ITESM Campus Ciudad de México. Source: Wikipedia.

But we can also find similar initiatives closer to home. There are already teachers in Catalonia and Spain who use augmented reality to show their students what a volcano is, or what a tyrannosaurus looked like; others programme small robots, create video documentaries about different types of energy and post them online, or get students involved in collective transmedia poetry projects. Other projects include Espiral in Catalonia, Novadors association in Valencia, and Aulablog, run by teachers all over Spain. The techno revolution seems to be well and truly underway in schools, and is starting to change the way we teach and learn.

“The existing educational system needs to be changed from top to bottom. It is totally anachronistic and obsolete, and it bores kids,” says Marc Prensky, the education expert and provocateur who came up with the concept of ‘digital natives’ in 2001 and unleashed a worldwide debate on education. Meanwhile, Sir Ken Robinson, one of the leading international experts in this area and the protagonist of the most-watched TED Talk, argues that our schools were designed during the industrial revolution in order to train factory workers, people who could do the same thing repeatedly for many hours in a row. Schools followed the same pattern: all the children memorised and repeated certain information over and over again.

“But now we need to educate individuals who are able to face the unknown, who can deal with unexpected situations, come up with new ideas to problems that are changing all the time. Acquiring digital literacy will allow children to become more open-minded and creative, and prepare them to deal with the challenges of an uncertain future,” says Sergi Jordà, a lecturer at Universitat Pompeu Fabra.

From the production of objects to ideas

There have been more discoveries in the last fifty years than in all of previous human history. We have made the transition from an industrial society organised around the production of objects to an information society in which the raw material and the driving force are ideas, creativity, passion, and the ability to invent new jobs that are more in tune with the era that we are living in. Moreover, scientific advances have allowed us to learn more about how our brains work. As the neuroscientist Francisco Mora explains in his book Neuroeducación. Sólo se puede aprender aquello que se ama (“Neuroeducation: we can only learn what we love”), we don’t learn by memorising or repeating things, but by doing, experimenting, and, above all, by means of emotional engagement.

Neuroscience has shown that attention plays a key role when it comes to learning. “It’s like a window that opens into the brain, through which we learn and memorise information from the world around us,” Mora says. “Without attention there is no explicit memory or knowledge.” And one way to capture the attention of students is to use something they are passionate about –digital technologies–, excellent allies when it comes to stimulating curiosity and motivation, which are basic ingredients of learning.

Even so, simply introducing digital tools into the classroom is no guarantee in itself. As high school teacher Sandro Maccarrone explains, “new technologies can be used to continue to do exactly the same thing (that we have been doing for the past 4,000 years). Or to do it worse, because the information is presented in a less focused way.” As such, we need a solid educational project that goes hand in hand with methodological change, one in which, instead of simply transmitting information to students, teachers become their guides: helping them to ask the right questions, stimulating their curiosity, accompanying them as they look for the information they need, and teaching them to work in teams – skills that they will need throughout their adult lives.

Technology in the classroom

But what are the advantages of using digital tools in the classroom? Aside from the appeal of interactive whiteboards or augmented reality, which attract students’ attention, most experts emphasise that one of the key points of new technologies in the classroom is that they make it possible to customise education, so that each child can follow his or her own process and be given tools that support autonomous learning, experimentation and research.

Another important element is an Internet connection, because it provides students with a window into the world, and much more information than they could ever get from their teachers or textbooks. Nonetheless, teachers still need to develop the critical faculties of students, teach them to distinguish between important information and simple noise.

Many digital tools promote teamwork, collaboration and interaction, and as such they strengthen students’ empathy and emotional skills, which are essential for relating with others. And they also stimulate the students’ creativity. Jordi Adell, a teacher in the education and school organisation section of the Department of Education at James I University in Castellón de la Plana, and director of its Centre d’Educació i Noves Tecnologies (CENT), draws attention to the fact that kids have mobile phones in their pockets that incorporate a video camera, still camera, voice recorder, Internet connection, and GPS. A huge number of educational activities can be carried out with mobile phones: “the truly revolutionary thing is to put technology in the hands of the kids so that they can create, construct and learn.”

“Students don’t learn from teachers or from technology. They learn from thinking about what they are doing, they learn from what they have done, from the processes. Thinking promotes learning,” adds Jennifer Groff, a researcher at the Massachusetts Institute of Technology (MIT) Media Lab.

Students at the School of Science and Technology, Singapore. Source: Wikipedia.

Some people take the idea even further and claim that technology even allows children to learn on their own. In the late 1990s, Sugata Mitra, a professor of educational technology at the University of Newcastle in the United Kingdom, began to carry out a series of experiments in which he left computers in very poor shanty towns in India without giving the inhabitants any kind of explanation, and returned a few months later. The results were always the same: the village kids had learnt to use the computer on their own, without instructions; the kids taught each other, and even managed to learn about complicated subjects like molecular biology in English, a language that was unfamiliar to them.

“If we are curious enough, we can learn by ourselves,” he says. This year, the TED Prize awarded him a million-dollar grant to continue this project. Another initiative based on a similar philosophy is the massive open online courses, or MOOCs, that emerged on the net a few years ago. These courses, which may be free or require students to pay an enrolment fee, are offered online by universities along with educational material, recorded lectures, bibliographies and podcasts, so that students can follow them and interact with fellow students all over the world. This increases access to knowledge and democratises it, so that it may end up reaching the most disadvantaged corners of the globe.

The dark side of technology

But not all education experts have such a positive view of the use of digital tools in the classroom. Many of them warn of the dangers of multitasking while children learn, for example. Some studies claim that if students’ attention is divided amongst a range of information and entertainment sources as they study, do homework or sit in class, they learn more superficially, understand and remember less, and have more difficulties when it comes to applying what they have learnt to new concepts. David Meyer, professor of psychology at the University of Michigan, warns that activities such as sending WhatsApp messages or updating Facebook appear to demand as much mental engagement as schoolwork does. “Multitasking has very high cognitive costs,” he says.

Other studies warn that digital technology has a negative impact on the behaviour of children. They say it tends to make them antisocial, prevents them from concentrating in class, and distracts them. And some teachers fear that students will get used to the copy-paste mentality and the Wikipedia philosophy of finding answers quickly and easily. And that they will become used to the overly colloquial language used on social media and be unable to change to a more formal register in their schoolwork.

“There is a lot of confusion right now,” says Lali Bosch, professor of philosophy and a member of the core research group at the Innovation and Research on Teaching Philosophy Group. “New technologies have great potential in terms of knowledge, but there is also a problem: Karl Marx showed us that transferring the qualities of people onto things creates a situation of fetishism, and digital screens can lead us into this spiral. Knowledge, which arises in response to curiosity, must be at the heart of education. Therefore, anything that arouses curiosity is welcome.”

Technology is present in all areas of our life, and just as we wouldn’t consider it reasonable to tell banks, hospitals or industry to go easy on technology, most kids don’t find it reasonable for schools to ban them from using it. They see the Internet, for example, as essential for interacting with their friends, for having fun, and for learning too.

“We need to teach children digital literacy, to teach them reading and writing, but also the language of machines, given that understanding how programming works and having basic skills will be the Latin of the 21st century,” argues Sergi Jordà from UPF, “if we want them to become adults who understand what is happening in the society they live in and to have greater control over their lives. These days, Wikileaks, Anonymous, and anything that entails some resistance to global trends involves fluency in these tools. If we teach children digital literacy, we will have freer citizens.”





The Geopolitics of the Internet

The laying of the cables that make the Internet a global network raises geopolitical issues similar to those that are at stake in the case of oil pipelines.


Main telegraph networks (1875). Source: Wikipedia.

The Internet is not an abstract entity that moves through the air. Rather, it is based on a mammoth physical infrastructure that spreads throughout the globe. The laying of the cables that make the Internet a global network raises geopolitical issues similar to those that are at stake in the case of oil pipelines. Securing a reliable source of connectivity has become as important as guaranteeing oil supply.

On 28 March 2013, the almost 3 million inhabitants of Armenia were cut off from the Internet for twelve hours. The digital blockade that the people of this country suffered on that day had nothing to do with the actions of an authoritarian government. Extreme weather conditions were not to blame either. The reason for the breakdown was much more outlandish than that. In a rural area of Georgia, 15 kilometres from the capital Tbilisi, a 75-year-old woman with a handsaw single-handedly cut the cable that supplies Armenia with most of its broadband Internet. The pensioner assured the media that she was looking for firewood and had cut the cable by mistake. “I don’t even know what Internet is,” she told AFP after the accident.

This story was picked up by media outlets around the world, drawing attention to an aspect of the Internet that is often overlooked. Even though we are on the path to unstoppable digitalisation, there is still a widespread idea that the Internet travels through the air. Everybody knows that cars are made in factories, but many people still think of the Net as something intangible and without bounds. This is an erroneous notion that terms such as ‘the Cloud’ help to perpetuate.

In reality, the inner workings of the Internet include a very important physical component that can, in extreme cases, be affected by a grandmother cutting a wire in a neighbouring country. Just as oil travels from place to place by means of huge pipelines and cargo ships, data moves through optic fibre cables that wind their way around the world to take a YouTube video to your computer screen. You can display this video because it is stored in data centres full of servers that act as hard drives. These spaces have a considerable physical presence.

Google, for example, has 13 data centres including one at Hamina in Finland, at a former paper mill that was converted into a data centre in 2009. Google has already invested 350 million euros in the project, and plans to spend a further 450 million euros to expand it. To get an idea of the magnitude of the project, this construction work will make Google the biggest foreign investor in the Nordic country. These figures reflect the mammoth size of these centres that spread over tens of thousands of square metres. And Google is not alone. Facebook, Microsoft and Amazon have similar facilities scattered around the globe.

Google Data Centers.

Visiting many of these data centres led journalist Andrew Blum to conclude that “each piece of the cloud was a real, specific place –an obvious reality that was only strange because of the instantaneity with which we constantly communicate with these places.” The writer spent several years on the physical trail of the Internet, an experience that he recounts in his book Tubes, in which he spells it out clearly: “the networks of the Internet are as fixed in real, physical places as any railroad or telephone system ever was.”

The growing use of mobile phones to access the Internet can also be misleading: data transfer through the air only makes up a tiny part of the journey, between your device and the closest antenna, which is connected to the underground and submarine cables that carry data beneath the ground and the bottom of the seas and oceans.

“We take wires completely for granted. This is most unwise. People who use the Internet (or for that matter, who make long-distance phone calls) but who don’t know about wires are just like the millions of complacent motorists who pump gasoline into their cars without ever considering where it came from or how it found its way into the corner gas station,” wrote Neal Stephenson in an article entitled Mother Earth, Mother Board, published in Wired in 1996. This 42,000-word essay, which now has cult status, took the author around the world, on the trail of the 28,000-kilometre FLAG fibre-optic cable that began operating in 1999.

During his travels as a self-appointed “hacker tourist”, the intrepid gonzo journalist interviewed the daredevil engineers responsible for implementing this ambitious major global infrastructure project.

“The FLAG system, the mother of all wires, starts at Porthcurno (England) and proceeds to Estepona (Spain); through the Strait of Gibraltar to Palermo (Sicily); across the Mediterranean to Alexandria and Port Said (Egypt); down the Gulf of Suez and the Red Sea, with a potential branching unit to Jedda, Saudi Arabia; around the Arabian Peninsula to Dubai, site of the FLAG Network Operations Center; across the Indian Ocean to Bombay; around the tip of India and across the Bay of Bengal and the Andaman Sea to Ban Pak Bara, Thailand, with a branch down to Penang, Malaysia; overland across Thailand to Songkhla; up through the South China Sea to Lan Tao Island in Hong Kong; up the coast of China to a branch in the East China Sea where one fork goes to Shanghai and the other to Koje-do Island in Korea, and finally to two separate landings in Japan – Ninomiya and Miura, which are owned by rival carriers,” writes the American author.

Like FLAG, hundreds of submarine cables are estimated to criss-cross different countries, creating an infrastructure that is vital for guaranteeing the operation of the Internet (in the USA, 95% of external communications use submarine cables). On top of this, an enormous number of cables operate without crossing borders.

Areas in which the water is less than 1000 metres deep pose the greatest difficulties when it comes to laying cables. Threats to the physical integrity of cables in these areas include ship anchors, shark bites, or being entangled in fishing nets at the bottom of the sea. Sabotage can also come into play, as it did in March this year when the Egyptian police arrested three scuba divers who cut the SEA-ME-WE 4 cable near Alexandria. Their actions slowed down the Internet connection for more than 24 hours in this Egyptian city and other areas that relied on the cable.

For these reasons, the preferred sites for landing cables are solitary tracts of land. “Cables almost never land in industrial zones, first because such areas are heavily travelled and frequently dredged, second because of pure geography. Industry likes rivers, which bring currents, which are bad for cables,” Stephenson explains.

On the corporate front, it is not unusual for telecommunication companies that compete with each other in different markets to form consortiums of a dozen companies to get a new cable going. This strategy lowers their costs and ensures the capacity required to supply the connection that their customers demand.

The geopolitics of Internet cables

As mentioned at the start of this article, the incident that left Armenia without an Internet connection drew attention to the vulnerability of the country’s infrastructure. If something as simple as a handsaw can destroy your network, you have a serious problem. In the more highly developed systems used in Europe and the United States, data have so many different ways of getting from A to B that in the case of an accident they are much more likely to find alternative paths to their destination. When we move away from Western territories and into more troubled zones, the configuration of the digital infrastructure becomes more complex, and also very interesting from the geopolitical point of view.

“Everything you read about geopolitics, about spheres of influence and national interests and so forth has a counterpoint on the Internet and how Internet structure plays out,” said James Cowie, CTO of Renesys, in a lecture at the Berkman Center at Harvard University. To illustrate his point, the networks expert discussed the way the Internet is configured in the Middle East.

As Israel is surrounded by enemy countries, for example, its connection with the rest of the world depends on submarine cables that mainly go through Cyprus, Sicily and Greece to reach Europe and the United States. Palestinian territories, Cowie says, get their connection partly from Israel and partly from European operators that land in Jordan, which is in turn connected to the rest of the world by cables that pass through Saudi Arabia and the branch of the FLAG submarine cable that lands at the port of Aqaba.

Lebanon, which has no diplomatic relationship with Egypt, depended almost exclusively on a cable from Cyprus until 2011, when the I-ME-WE cable that began operating in 2009 to connect France and India added a submarine branch from Alexandria to the coastal city of Tripoli. “They suddenly had a terabyte of capacity landing on their doorstep,” Cowie explained, which eventually took the pressure off an overloaded system.

In early November, the Internet connection was under threat because the Lebanese government owed money to the company that operates I-ME-WE. On 2 November the Finance and Telecommunications Minister issued a press release confirming that the government had paid the 3.2-million-dollar debt. The Minister also announced that if the country were disconnected from the I-ME-WE network, the government had an alternative plan to use the Alexandros cable that had recently landed in the country. Securing a reliable connection has become as important as guaranteeing energy supply by using various sources to diversify the risk.




In the case of Turkey, its geographical position as a bridge between Asia and Europe is giving it the renewed importance that made it so influential in the Ottoman Empire. As Cowie says, “the fibre cable routes between Europe and Turkey flow down the old trade roads.” The country’s diplomatic strength is being backed up by investments from telecommunication companies such as Türk Telekom and Turkcell, which are building new optic fibre cable routes towards Iraq, the Caucasus and Saudi Arabia. “They have realised that they are potentially a major exporter of Internet,” Cowie says, a way of increasing the country’s sphere of influence over its neighbours.

Above all, Cowie believes that geopolitical pragmatism prevails when it comes to planning the cable routes: “this has become a major strategic question: how to get the Internet cheaper, faster, and more secure without going through countries we don’t like.”

These same reasons led the promoters of FLAG to choose different routes in order to secure diversity, which, as Stephenson explained in his article, refers to “the principle that one should have multiple, redundant paths to make the system more robust.”

19th-century madmen

Access to high quality global communications would never have been possible without the work of the intrepid mid-19th century entrepreneurs who built the first submarine cables that cleared the way for the revolution of the telegram. “The only things that have changed since then are that the stakes have gotten smaller, the process more bureaucratised, and the personalities less interesting,” Stephenson says.

Back then, the margin of error was so great that only 5,000 of the 17,500 kilometres of cable that had been laid by 1861 actually worked. “The world has actually been wired together by digital communication systems for a century and a half. Nothing that has happened during that time compares in its impact to the first exchange of messages between Queen Victoria and President Buchanan in 1858,” Stephenson adds.

The path to the digitalisation of society will keep moving forward, but it will always be backed up by an infrastructure that is not only very real, but can be traced back to 19th century submarine cables.

The Eastern Telegraph Company network (1901). Source: A.B.C. Telegraphic Code 5th Edition, Atlantic-cable.





_Big Bang Data

_BIG BANG DATA is an exhibition about the ‘datafication’ of the world curated by José Luis de Vicente and Olga Subirós with ZZZINC activities management.


1900 US Federal Census. Source: Tony Masiello.

_BIG BANG DATA is an exhibition about the ‘datafication’ of the world curated by José Luis de Vicente and Olga Subirós with ZZZINC activities management. In the lead up to the opening in May 2014, CCCB LAB offers a window into the production process through a series of articles in which the curators offer a preview of the principal strands of content.

The production of massive volumes of data is one of the key facts of our time. As the cost of storing digital data plummeted over the past fifteen years, the number of devices and sensors that capture, produce and transmit data has grown exponentially. Some 2.5 quintillion bytes are generated every day, and as much data was produced in 2009 alone as in the entire history of humanity up until that point. The information explosion is growing unchecked, and nobody knows where it will end.

This deluge of information is preserved by the booming data centre sector – the physical, tangible reality that discreetly hides behind the misleading metaphor of ‘the Cloud’. The vast industrial infrastructure that spreads throughout the globe and stores memories, words, images and actions already represents two percent of the world’s electricity use, equivalent to the energy consumption of a country like Norway.

But Big Data is more than just an industry, and it promises to bring about a fundamental transition in the history of knowledge. Over the last decade, scientific and academic institutions, government departments, and organisations that are forging new languages for communication have had a growing sense that behind the data explosion lie new ways of solving problems and formulating questions that will lead us to radically rethink how science is practiced, how value is generated in the economy, and how we organise ourselves politically and as a society.

The data produced by sensor networks in industrial technologies and infrastructures, but also by individuals on social media, via mobile phones, credit card transactions and GPS devices, are now seen as an opportunity and a powerful tool. Numerous disciplines and fields of knowledge, from physics to urban design, historiography and healthcare, are exploring the enormous potential of analysing massive amounts of previously unavailable detailed information. Understanding these data and making them understandable to others also requires a new language, and new means of expression.

Notes on Matters Affecting the Health, Efficiency, and Hospital Administration of the British Army (1858). By Florence Nightingale. Source: Wikimedia Commons.

A data-driven culture will also be an eminently political culture. Data can be a key element in promoting a cleaner, more participatory democracy and more efficient administrative management models. However, data can also be wielded as a means to establish an unprecedented state of mass surveillance. Access to data, their value, and the right to their use are at the heart of many contemporary conflicts: from the fight for transparency to the political repercussions of the Wikileaks and Snowden cases, and the commercialisation of privacy by the Silicon Valley giants.

Is data the new oil – as technology entrepreneurs like to claim –, a source of infinite wealth? Or should it be first and foremost a tool with which to build a new political culture based on transparency and accountability? How does data force us to revise our traditional view of privacy? Rather than accepting the role of passive consumers, or resigning ourselves to being turned into products by those who sell and market our privacy, can we be active, critical and committed producers of data?

From May to October 2014 at the CCCB, BIG BANG DATA will explore the effects of the data tsunami, the technological innovations that made it possible, and its impact on science, culture, politics and society.

Open Street Map. By Eric Fischer. Source: Wikimedia Commons.

BIG BANG DATA is the first instalment of BETA, a series of projects that explore the critical spaces at the intersection between science, technological innovation, culture and society. For over five months, the exhibition room on Level 3 of the CCCB will become a “temporary institution”, a hybrid space combining the exhibition format with production and reflection processes that offer visitors the necessary tools to make sense of emerging fields that have a very direct impact on our lives today.

The exhibition will combine art and design projects, historical documentation, prototypes of new services and technologies, and interviews with experts from around the world. But in addition, the exhibition space will also host a laboratory, with resident researchers, that will become the venue for a series of participatory projects and activities throughout the five months of the Big Bang Data project.

BIG BANG DATA includes an extensive programme of workshops, ‘hackathons’, educational activities for different target audiences, and meetings of local communities. It will coincide with related projects such as Big Data Week – a network of events held in 25 cities around the world – and will be present in different forms at many of the festivals, conferences and fairs scheduled to take place in Barcelona during the same period.

Until the space opens in May 2014, CCCB LAB offers a window into the production process of BIG BANG DATA through a series of articles in which the curators expand on the core ideas that the project will revolve around.





From Smart City to SmartCitizens

The contradictions of smart cities –that great promise that would supposedly resolve the problems of the city– have ended up generating a new term: SmartCitizens.


Fragment of Phrenology Diagrams from Vaught’s Practical Character Reader (1902). Source: The Public Domain Review.

The ‘smart city’ –that great contemporary promise that would supposedly resolve the problems of the city– has a history of contradictions, which has ended up generating a new term that is heir to the concepts rooted in the knowledge society: SmartCitizens. Acknowledging this expression entails accepting that there is a need to rethink the effects of the exclusively technological solutions that were behind the initial smart city proposals. As such, the SmartCitizens concept encompasses initiatives that arise from collective intelligence and point the way to the transformation of the socio-economic structure of our cities, based on the capacity for connectivity, for sharing knowledge, and for being proactive with our environment.

Smart cities here, smart cities there, smart cities everywhere. Over the past few years there has been a smart city boom, to the extent that it seems that all cities have suddenly become ‘smart’[1] overnight. The exponential growth of the term has eclipsed other, earlier conceptualisations that were more comprehensive, such as the “sustainable city”[2], or that more adequately respond to the network era and to the new socio-economic relations stemming from it, such as the “knowledge city”[3]. While this terminological overexposure became deeply embedded in professional and institutional forums, among citizens themselves it simply generated an “enormous indifference”, as the president of the Spanish Federation of Municipalities and Provinces (FEMP) and Mayor of Santander, Iñigo de la Serna, recently admitted. This seems to be a clear indicator of how remote these ‘smart cities’ are from their inhabitants.

If we are to understand this disruptive reality, we need to question the causes and consequences of the smart city phenomenon, as a springboard from which to start to build (or to reinstate) a conceptual urban framework that genuinely responds to the needs of citizens and is geared towards the effective, comprehensive improvement of the urban habitat.

Much ado about nothing

Even though the current information society was behind the emergence of the ‘smart cities’ concept, it wasn’t until the big technology corporations (IBM, CISCO, Siemens, etc.) began to take an interest in it (and to apply their marketing strategies to it) that the term began to forge its hegemony[4]. And so it was that, in 2010, numerous prestigious international media outlets (Time, The Guardian, The Times, Financial Times, etc.) began to report on this new wave of smart cities, dedicating articles, special issues, and even entire sections to the subject. At the same time, organisations all over the world began to programme a myriad of events and congresses around the topic, monopolising and influencing professional and institutional agendas. As a result, smart cities have become the main focus of the urban debate.

But instead of shedding light on how smart cities make it possible to move forward in relation to earlier theoretical underpinnings, this whole visibilisation campaign simply generated confusion and mistrust. The fact is that this initial discourse only addressed the implementation of new technologies in our cities, and failed to show how the techno-smart city utopia would improve quality of life, or how it could be of benefit to citizens. As such, the technophile justifications pushed by the multinationals simply –but not openly– disregarded internationally accepted premises[5], and ignored the fact that technology for its own sake goes against the principles of sustainability and contributes little towards the construction of the knowledge society, other than generating some of its infrastructure.

Exhibition «SmartCitizens» in Madrid. Source: SmartCitizensCC.

Nonetheless, we can now say that smart cities have recently begun to rethink their exclusively technological orientation. For instance, it is now widely acknowledged that the usefulness of smart grids is limited unless they are formulated to include citizens – which means that it is pointless to make efforts along these lines in a context such as Spain, where the government has decided to penalise the self-consumption of electricity. In any case, this citizen-centred approach is also proving controversial, because the majority of service providers simply see citizens as potential clients of the services that they can offer in smart cities. If we think of the different stages of generating, managing and using information, we see that even the most favourable cases aim to exclude citizens from the management stage, which is the key element of the business model that the big corporations are backing.

This approach again weakens the potential envisaged in the knowledge society and the open and distributed logic of the network era, generating a hierarchical, closed structure by positioning citizens as users and consumers rather than active collaborators in the transformation of their environment.

Opening up the smart city to citizens

Predictably, this take-over of smart cities proved controversial, and led to the demand for a new term that comes from the same root but is geared towards providing a necessary counterpoint to the corporatist logic. This is where the ‘SmartCitizens’ concept comes in: a new urban subject that is penetrating deep into the smart city vortex, empowering active citizens as a key element of the knowledge society, and as the backbone that runs through new urban policies.

The SmartCitizen idea reflects the fact that when we talk about smart cities and smart citizens we are talking about more than just optimising the control, use, and efficiency of infrastructures. Smart cities are also about democratising information at all scales and in all its forms (open data, open city, open government, etc.), about the knowledge and active participation of citizens. Because these two aspects, together, will make it possible to improve our habitat and our quality of life.

Against this backdrop, the notion of SmartCitizens that is starting to take hold is one that places citizens at the centre of discussions around smart cities, and defends the principle according to which you can’t have smart cities without smart citizens. In the face of the exclusive logic imposed by economic powers, networked smart citizens are generating new practices and new imaginaries that work together for a (much-needed) rethink of the smart city. The rationale behind this reformulation needs to be cooperation among the various actors that come into play (civil society, public administration, scientific and academic entities, economic agents, etc.), and its symbol of identity must be the sharing of knowledge. And this is precisely where the idea of SmartCitizens intersects with that of the free software movement: sharing and working together to improve the efficiency of processes, unleashing the power of collective intelligence in order to reach optimal solutions.

Open source smart cities

It is true that the means that now allow us to share valuable information among the many different actors and sectors of our society –and as such to transform ourselves into smart, active, participating citizens– lay the groundwork for the development of an urban model that is more democratic, more equitable, more sustainable and more comprehensive. But the efforts required to turn this opportunity into reality are still at a very early stage, and, generally speaking, the spheres of power have no interest in making this effort, when they don’t actually hinder it. This, however, does not mean that smart citizens, along with a new redefinition of smart cities, cannot be promoted from other spheres.

Exhibition «SmartCitizens» in Madrid. Source: SmartCitizensCC.

Taking advantage of the possibilities of the Internet, citizen intelligence is springing forth in initiatives organised by technological innovators who equip citizens to access information, make decisions, and work collectively. Free software is actually a sure-fire source of tools, apps and technological solutions to support this citizen empowerment, so much so that it is already possible to find all types of open source technology throughout the entire smart city value chain[6]: from the Internet of things (sensors, hardware, software, RFID technology, etc.), to Big Data (complex data storage and processing on a mass scale), and all kinds of applications[7].

But moving beyond this highly technical approach to smart cities and ultra-connected societies, the SmartCitizen concept also defends the need to overhaul the way we conceptualise city-building technology. And in this case the yardstick is not the sophistication of the technology that generates it, but its capacity to generate community, forge networks, and set up knowledge transfer channels that favour social autonomy. This again breaks the dominant discourse, because from this perspective, an urban market garden is at least as smart as a smartphone.

In short, the idea behind the SmartCitizens concept is that the most effective urban technology is that which emerges from collective intelligence, helps to generate community, opens up channels for citizen appropriation, can be reproduced, is efficient, and is geared towards meeting the real needs of civil society. SmartCitizens reveal that the future of cities is in our hands; in the hands of an intelligent, collaborative society.

Paisaje Transversal is an office that promotes, coordinates, designs and provides consultancy services on innovative urban analysis and transformation processes, from participation to ecology and creativity, always tailored to the local reality: http://www.paisajetransversal.com/
Since 2007, Paisaje Transversal also operates as a platform for research and theory around the topics of city and territory, which provides theoretical support to the office. Its most visible side is the blog www.paisajetransversal.org, which has been one of the world’s most influential Spanish-language architecture blogs for over four years.


[1] For example, the Red de Ciudades Inteligentes (Intelligent Cities Network), which was created in Spain in 2012 as an alliance of half a dozen City Councils, now has 41 member cities and this number is still growing steadily.

[2] To find out more about the comprehensive nature of the “sustainable city” urban model, we recommend the theoretical work carried out by the Agencia de Ecologia Urbana: http://bcnecologia.net/en/conceptual-model/model-sustainable-city

[3] “(The knowledge city) is a geographical area in which –according to a plan and an overall strategy adopted jointly by society and the government– the actors involved have the common goal of building an economy based on the development of knowledge.” José Natividad González Parás.

[4] For more on smart cities, we recommend the following compilation of articles (in Spanish) by Manu Fernández (@manufernandez): http://es.scribd.com/doc/61950985/Smart-City-Tecnologias-emergentes-para-el-funcionamiento-urbano

[5] The Hype Cycle for Smart City Technologies and Solutions research report published by Gartner Inc. in 2012 defined a smart city as “an urbanized area where multiple public and private sectors cooperate to achieve sustainable outcomes through the analysis of contextual information exchanged between themselves. (…) The interaction between sector-specific and intra-sector information flows results in more resource-efficient cities that enable more sustainable citizen services and more knowledge transfer between sectors.”

[6] The smart city technological value chain was analysed by Telefónica Foundation, among others, in its report Smart Cities: Un primer paso hacia la Internet de las cosas. Available at: www.fundacion.telefonica.com/es/que_hacemos/media/publicaciones/SMART_CITIES.pdf

[7] In Spain, the CENATIC National Open Source Software Observatory offers an interesting compilation of open source technology in relation to smart cities in the article “Open Smart Cities I: La Internet de las Cosas de Código Abierto”.





Big Data and Digital Humanities: From social computing to the challenges of connected culture

Big Data applied to the field of cultural production brings the Digital Humanities up against the new challenges of a network-generated data culture.


Fragment of Radiolaria, an illustration series by Ernst Haeckel (1862). Source: Biodiversity Heritage Library / Harvard University.

Big Data is the new medium of the second decade of the twenty-first century: a new set of computing technologies that, like the ones that preceded it, is changing the way in which we access reality. Now that the Social Web has become the new laboratory for cultural production, the Digital Humanities are focusing on analysing the production and distribution of cultural products on a mass scale, in order to participate in designing and questioning the means that have made it possible. As such, their approach has shifted to looking at how culture is produced and distributed, and this brings them up against the challenges of a new connected culture.

5,264,802 text documents, 1,735,435 audio files, 1,403,785 videos, and over two billion web pages that can be accessed through the WayBack Machine make up the inventory of the Internet Archive at the time of writing. Then there are also the works of over 7,500 avant-garde artists archived as videos, pdfs, sound files, and television and radio programmes on UBUWEB, the more than 4,346,267 entries in 241 languages submitted by the 127,156 active users that make up Wikipedia, and the ongoing contributions of more than 500 million users on Twitter. And these are just a few examples of the new virtual spaces where knowledge is stored and shared: open access, collaboratively created digital archives, wikis and social networks in which all types of hybridisations coexist, and where encounters between different types of media and content take place. As a whole, they generate a complex environment that reveals our culture as a constantly evolving process.

Twitter data visualisation. Generated by Scraperwiki.

In the 1990s, computers were seen as “remediation machines”, or machines that could reproduce existing media, and the digital humanities focused on translating the documents of our cultural heritage, tributaries of print culture, into the digital medium. It was a process that reduced the documents to machine readable and operable discrete data. As Roberto A. Busa explains in the introduction to A Companion to Digital Humanities, humanities computing is the automation of every possible analysis of human expression. This automation enhanced the capabilities of documents, which gradually mutated into performative messages that replaced written documents as the principal carriers of human knowledge. Meanwhile, the processes used to analyse and reproduce the texts were also used to develop tools that would allow users to access and share this content, and this brought about a change from the paradigm of the archive to that of the platform. These twin processes transformed the way research is carried out in the humanities, and determined the content of Digital Humanities 2.0.

Scanning or transcribing documents to convert them to binary code, storing them in databases, and taking advantage of the fact that this allows users to search and retrieve information, and to add descriptors, tags and metadata, all contributed to shaping a media landscape based on interconnection and interoperability. Examples of projects along these lines include digital archives such as the Salem Witch Trials archive, directed by Benjamin Ray at the University of Virginia in 2002, which is a repository of documents relating to the Salem witch hunt. Or The Valley of the Shadow archive, put together by the Center for Digital History, also at the University of Virginia, which documents the lives of the people from Augusta County, Virginia, and Franklin County, Pennsylvania, during the American Civil War. More than archives, these projects become virtual spaces that users can navigate through, actively accessing information through a structure that connects content from different databases, stored in different formats such as image, text and sound files.

The creation of these almost ubiquitously accessible repositories required a collaborative effort that brought together professionals from different disciplines, from historians, linguists and geographers to designers and computer engineers. And the encounter between them led to the convergent practices and post-disciplinary approach that came to be known as Digital Humanities. These collaborative efforts based on the hybridisation of procedures and forms of representation eventually led to the emergence of new formats in which the information can be contextualised, ranging from interactive geographic maps to timelines.

An example of these types of new developments is the Digital Roman Forum project, carried out between 1997 and 2003 by the Cultural Virtual Reality Laboratory (CVRLab) at the University of California, Los Angeles (UCLA), which developed a new way of spatialising information. The team created a three-dimensional model of the Roman Forum that became the user interface. It includes a series of cameras, aimed at the different monuments that are reproduced in the project, allowing users to compare the historical reproduction with current images. It also provides details of the different historical documents that refer to these spaces, and that were used to produce the reproduction.

This capacity for access and linking is taken beyond the archive in projects such as Perseus and Pelagios, which allow users to freely and collectively access and contribute content. These projects use standards developed in communities of practice to interconnect content through different online resources. They thus become authentic platforms for content sharing and production rather than simple repositories. The digital library Perseus, for example, which was launched in 1985, relies on the creation of open source software that enables an extensible data operation system in a networking space, based on a two-level structure: one that is human-accessible, in which users can add content and tags, and another that incorporates machine-generated knowledge. This platform provides access to the original documents, and links them to many different types of information, such as translations and later reissues, annotated versions, maps of the spaces referred to… and makes it possible to export all of this information in XML format. Meanwhile, the Pelagios project, dedicated to the reconstruction of the Ancient World, is based on the creation of a map that links historical geospatial data to content from other online sources. When users access a point on the map-interface they are taken to a heterogeneous set of information that includes images, translations, quotes, bibliographies and other maps, all of which can be exported in several file formats such as XML, JSON, Atom and KML.
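
To make the idea of exportable, linked annotations more tangible, here is a minimal sketch of what such a record, and a script that reads it, might look like. The structure, field names and URIs are purely illustrative assumptions, not the actual Perseus or Pelagios data model.

```python
import json

# A hypothetical linked annotation: a place identified by a stable URI,
# with coordinates and pointers to related resources. Every field name
# and URI below is illustrative only, not a real project schema.
annotation = """
{
  "place": "http://example.org/places/roman-forum",
  "label": "Forum Romanum",
  "coordinates": [41.8925, 12.4853],
  "linked_resources": [
    {"type": "text",  "uri": "http://example.org/texts/description"},
    {"type": "image", "uri": "http://example.org/images/forum-plan.jpg"}
  ]
}
"""

record = json.loads(annotation)
lat, lon = record["coordinates"]
print(f'{record["label"]} ({lat}, {lon})')
for resource in record["linked_resources"]:
    print(f'  {resource["type"]}: {resource["uri"]}')
```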

Word cloud of abstracts from the most recent congress organised by the ALLC: The European Association for Digital Humanities. Generated with Processing.

These projects are examples of the computational turn that David M. Berry theorises in The Computational Turn: Thinking about Digital Humanities: “Computational techniques are not merely an instrument wielded by traditional methods; rather they have profound effects on all aspects of the disciplines. Not only do they introduce new methods, which tend to focus on the identification of novel patterns in the data against the principle of narrative and understanding, they also allow the modularisation and recombination of disciplines within the university itself.” The use of automation in conjunction with digitalisation not only boosts capabilities for analysing text documents, it also creates new capabilities for remixing and producing knowledge, and promotes the emergence of new platforms or public spheres, in which the distribution of information can no longer be considered independently of its production.

In The Digital Humanities Manifesto 2.0, written in 2009 by Jeffrey Schnapp and Todd Presner, this computational turn is described as a shift from the initial quantitative impulse of the Digital Humanities to a qualitative, interpretative, emotive and generative focus. One that takes into account the complexity and specificity of the medium, its historical context, its criticism and interpretation. This reformulation of objectives sees digital media as profoundly generative and analyses the digital native discourse and research that have grown out of these emergent public spheres, such as wikis, the blogosphere and digital libraries. It thus allows users to become actively involved in the design of the tools, the software, that have brought about this new form of knowledge production, and in the maintenance of the networks in which this culture is produced. This new type of culture is open source, targeted at many different purposes, and flows through many channels. It stems from process-based collaboration in which knowledge takes on many different forms, from image composition to musical orchestration, the critique of texts and the manufacturing of objects, in a convergence between art, science and the humanities.

The generative and communicational capabilities of new media have led to the production and distribution of cultural products on a mass scale. At this time in history, which Manovich has dubbed the “more media” age, we have to think of culture in terms of data. And not just in terms of data that is stored in digital archives in the usual way, but also that which is produced digitally in the form of metadata, tags, computerised vision, digital fingerprints, statistics, and meta-channels such as blogs and comments on social networks that make reference to other content; data that can be mined and visualised, to quote the title of the most recent work by the same author, «Software Takes Command». Data analysis, data mining, and data visualisation are now being used by scientists, businesses and governments as new ways of generating knowledge, and we can apply the same approach to culture.

The methods used in social computing –the analysis and mapping of data produced through our interactions with the environment in order to optimise the range of consumer products or the planning of our cities, for example– could be used to find new patterns in cultural production. These would not only allow us to define new categories, but also to map and monitor how and with what tools this culture is produced. The cultural analytics approach that has been used in the field of Software Studies since 2007 is one possible path in this direction. It consists of developing visualisation tools that allow researchers to analyse cultural products on a mass scale, particularly images. For example, a software programme called ImagePlot and high-resolution screens can be used to carry out projects based on the parameterisation of large sets of images in order to reveal new patterns that challenge the existing categories of cultural analysis. One of these projects, Phototrails, for example, generates visualisations that reveal visual patterns and dynamic structures in photos that are generated and shared by the users of different social networks.
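
As a rough illustration of what this kind of parameterisation involves, the sketch below computes two simple features – mean brightness and mean saturation – for a folder of images and plots them as a scatter plot. It is not the ImagePlot tool itself, only a minimal approximation of the approach, and the folder path is a placeholder.

```python
from pathlib import Path

import matplotlib.pyplot as plt
import numpy as np
from PIL import Image

def features(path):
    """Return (mean brightness, mean saturation) of an image, both on a 0-255 scale."""
    hsv = np.asarray(Image.open(path).convert("RGB").convert("HSV"), dtype=float)
    return hsv[..., 2].mean(), hsv[..., 1].mean()

# Placeholder folder containing the image collection to analyse.
folder = Path("images")
points = [features(p) for p in sorted(folder.glob("*.jpg"))]

if points:
    brightness, saturation = zip(*points)
    plt.scatter(saturation, brightness, s=8, alpha=0.5)
    plt.xlabel("mean saturation")
    plt.ylabel("mean brightness")
    plt.title("Simple feature space of an image collection")
    plt.show()
```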

Another approach can be seen in projects that analyse digital traces and monitor knowledge production and distribution processes. An example of this approach is the History Flow project by Martin Wattenberg and Fernanda Viégas –developed at the IBM Collaborative User Experience Research Group–, which visualises the history of the contributions that make up Wikipedia articles.
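
History Flow itself is a visualisation tool, but the raw material it works with – the revision history of Wikipedia articles – is publicly available through the MediaWiki API. The snippet below is a minimal sketch, not History Flow, that simply counts edits per contributor for a single article; the article title is just an example.

```python
from collections import Counter

import requests

# Fetch recent revisions of one article from the public MediaWiki API
# and count how many edits each contributor made. This is only a crude
# numerical summary of the revision data that History Flow visualises.
API = "https://en.wikipedia.org/w/api.php"
params = {
    "action": "query",
    "prop": "revisions",
    "titles": "El Señor Presidente",  # example article; any title would do
    "rvprop": "user|timestamp",
    "rvlimit": 500,
    "format": "json",
}

pages = requests.get(API, params=params, timeout=30).json()["query"]["pages"]
revisions = next(iter(pages.values())).get("revisions", [])

for user, count in Counter(r.get("user", "(hidden)") for r in revisions).most_common(10):
    print(f"{count:4d}  {user}")
```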

Big Data applied to the field of cultural production allows us to create ongoing spatial representations of how our visual culture is shaped and how knowledge is produced. This brings the Digital Humanities up against the new challenges of a network-generated data culture, challenges that link software analysis to epistemological, pedagogic and political issues and that raise many questions, such as: how data is obtained, what entities should we parameterise, at the risk of failing to include parts of reality in the representations; how we assign value to these data, considering that this has a direct effect on how the data will be visualised, and that the great rhetoric power of these graphic visualisations may potentially distort the data; how information is structured in digital environments, given that the structure itself entails a particular model of knowledge and a particular ideology; how to maintain standards that enable data interoperability, and how to go about the political task of ensuring ongoing free access to this data; what new forms of non-linear, multimedia and collaborative narrative can be developed based on this data; the pedagogical question of how to transmit an understanding of the digital code and algorithmic media to humanists whose education has been based on the division between culture and science; and, lastly, how to bring cultural institutions closer to the laboratory, not just in terms of preservation but also in the participation and maintenance of the networks that make knowledge production possible.





César Rendueles: “We think about technology as a solution to our problems when we should think about the political horizon that allows us to deal with them”

In this interview Rendueles talks about free knowledge, the differences between real and virtual life, and the role of the social sciences.


The latest book from César Rendueles, “Sociofobia: El cambio político en la era de la utopía digital” (Sociophobia: Political change in the era of digital utopia), first questions the ideological consensus surrounding the capacity of communication technologies to induce positive social dynamics. It then undertakes an analysis of capitalist society as a system that destroys community bonds, situating citizens at the centre of political demands.





The Digital Noah’s Ark

The net is taking root as the dominant paradigm for the digital repository, but can the Internet really replace archives?


Mundaneum’s Universal Bibliographical System, Mons (Wallonia).

While archives and libraries took care of housing and conserving society’s knowledge in the twentieth century, the Internet now looks set to step in as the natural successor for this role. Digital content of all types has several advantages over its analogue equivalents: more information can be stored in less space, it is easier to copy, indexing and querying are more efficient, etc. As part of this trend, the net is taking root as the dominant paradigm for the digital repository, the place where we can find whatever we seek. But can the Internet really replace archives?

Digital technologies have boosted content creation, and its production and dissemination have expanded exponentially as a result of the Internet. The net breaks down geographical barriers and allows us to access any document from any place. As such, copies lose relevance in favour of hypertext: there is no longer any need to copy information when we can link directly to the original. Everything is in “the Cloud”, a perfect metaphor for the massive digital repository that seems to have no specific form or location. And like real-life clouds, it is also changeable and ephemeral.

The Internet is a work in progress, in perpetual beta. It evolves rapidly, and in spite of its youth, its memory is poor. As users, we have adapted perfectly to this whirlwind of change. We easily discover and adapt to new services and just as easily forget the former applications, which almost immediately become memories of what seems to us like a distant past. With neither trauma nor regret, the new buries the old on a daily basis. Not many initiatives survive, and the few that do constantly evolve and are reinvented. This dynamic affects our models of information production, and we have grown accustomed to constantly using and discarding it. But what trace will we leave behind for the future? How can we escape the data deluge?

The Internet is a huge container, but it is a significant departure from the ideals of order and preservation that govern archives. For this reason, ever since the net began, different initiatives have set out to conserve its content, which would otherwise seem doomed to end up in the enormous digital black hole. But archiving the Internet is by no means easy, and it raises new dilemmas that do not have a simple or single solution.

Technical issues

When it comes to archiving content, digital technologies have one major advantage – how easily we can copy content. Compared to other formats, copying digital documents is far quicker and involves virtually no loss of quality. Nonetheless, the net introduces the problem of how to access originals and how to conserve them.

Internet access protocols only allow us to read part of the original code of the files in our computer, only client-side languages. The rest of the code is executed in remote servers and it reaches us in its final version. This means that in order to make full and accurate copies of most websites we need the collaboration of their owners so that we can access the files. Otherwise, all we can do is capture the websites using programmes that crawl the site to collect and save the URLs of its pages. But this system has serious shortcomings when it comes to saving all the contents of a website, because often the crawling software fails to find all of the pages. This doesn’t just lead to a loss of files, it also makes it necessary to check the code and correct possible errors.
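
A minimal sketch of this kind of client-side capture, assuming the third-party requests and beautifulsoup4 libraries and a hypothetical start URL: it follows links within a single domain and saves the HTML it is served and, like the crawlers described above, it will simply miss any page that cannot be reached by following links.

```python
# A minimal sketch of archiving a website by crawling it from the outside.
# It only sees the HTML delivered to the client, not the server-side code,
# and pages not reachable by following links are lost. Assumes the requests
# and beautifulsoup4 libraries; the start URL is hypothetical.
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

def crawl(start_url, max_pages=100):
    domain = urlparse(start_url).netloc
    to_visit, seen, archive = [start_url], set(), {}
    while to_visit and len(archive) < max_pages:
        url = to_visit.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue  # unreachable pages are simply lost
        archive[url] = response.text  # save the rendered HTML we were served
        for link in BeautifulSoup(response.text, "html.parser").find_all("a", href=True):
            target = urljoin(url, link["href"])
            if urlparse(target).netloc == domain:  # stay within the site
                to_visit.append(target)
    return archive

pages = crawl("https://example.org/")  # hypothetical site to archive
```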

Once we do have all the files, we must grapple with a great diversity of languages and types of documents that make preservation a complex matter. The digital world is perpetually evolving, so new machines eventually become unable to read old files. In addition, there is the short lifespan of digital media (the average lifespan of a hard drive is estimated to be around 5 years), so files and media have to be constantly updated. And in spite of all these maintenance tasks, it is sometimes necessary to preserve old hardware and software in order to enable access to obsolete documents.

Internet Archive servers.

Legal issues

Aside from the technical characteristics of the Internet that complicate access to and conservation of online content, there are also legal problems when it comes to storing them. The terms that apply to the legal deposit of analogue documents and publications are not applicable to the net, and this means that archives and libraries have to negotiate the copyright for each website individually before they can copy it. To complicate things further, the legislation regarding these matters is national, so different legal frameworks will apply depending on the website. It would be virtually impossible to manage these rights at the international level, because doing so would entail identifying and contacting all the authors of all the content on the net.

Legal problems also arise when it comes to dealing with content that may be defamatory, illegal, or considered obscene. In these cases, the material has to be reviewed and a decision made about whether it can be conserved and/or made available for consultation. Finally, the archiving process also has to take into account the presence of personal details on the net, and to abide by the relevant data protection laws.

One possible solution that is being considered by several institutions as a way of dealing with these legal issues is the creation of opaque archives that limit access and consultation, and as such reduce the legal risks (such as the economic rights of original works, or problems with defamatory or illicit content). But reducing public access to an online archive substantially reduces its advantages and benefits in the medium term.

Ethical issues

Notwithstanding the legal issues discussed above, the net is still in a very primitive stage, and there are many regulatory gaps in relation to it. The Internet is a young medium, and we are still unable to imagine the repercussions that our actions may have in the future. And as it is a mass medium, these risks increase exponentially.

One of the big questions that need to be answered is whether the Internet is actually a publishing medium, and whether its users see it as such. The net is an ephemeral medium, and it is used in many different ways. It is a multi-faceted platform that addresses both the public and the private spheres, without any clearly defined boundaries. As such, we need to think about the extent to which we can archive content that was not published with the intention of being conserved, or that its authors may want to delete at some point in the future.

These dilemmas become more pronounced when we think about archiving social networks, which are probably the most personal side of the net. Although the information in these networks is generally quite irrelevant at an individual level (given its mainly personal nature), it may be of interest when taken as a whole for helping us to understand global events – such as the 15-M movement or the Arab revolutions –, and archiving projects have already started working with these platforms. For example, the United States Library of Congress has started to archive Twitter in its Twitter record project, and is considering the legal and ethical terms of this archive.

Curatorial issues

Without question, the biggest problem we face in archiving the Internet is its magnitude. The enormous amount of content and its staggering growth make it an almost chimeric task, one that leaves us no option but to select and prioritise the material that is deemed to be most valuable for future research.

One of the main questions that we have to resolve in regard to an Internet archive is how to define what an online publication is. The diversity of the ever-increasing formats on the net makes this a difficult task, but it is of prime importance if we are to create a meaningful and genuinely useful archive.

In geographic terms, given that the Internet is a global, interconnected network, it is difficult to define borders and almost impossible to understand it from a solely national perspective. But as we have noted above, an international archiving project would be forced to deal with the political and legal frameworks of each country, and this would add further complexity to the project. One possible solution may be to create a network of national archives that can work together in order to enable global consultation.

Archiving the Net

As Yuk Hui wrote in the Archivist Manifesto, “archives are reservoirs of discourses that make possible an archaeology of knowledge.” It follows that an Internet archive should not simply store information from the net, but also order it so that we can access it in a simple, logical way. It is important to save information, but without order it will simply be an ocean of bits that we will get lost in whenever we try to navigate through it. At the same time, one of the reasons for archiving is to preserve information that future generations will be able to study with enough hindsight to understand our history as a society. This means keeping a medium and long-term perspective in mind when selecting content.

In short, it would appear that the characteristics of the net force us to choose what we want to save. But at the same time, the fact that we are dealing with information from the present means that we lack the perspective to make this choice with enough certainty. Given this dilemma, we can consider two possible types of archive: a stricter and more defined archive, that sets out certain boundaries and contours but then saves as much as possible within them in order to provide in-depth information; and another that is more photographic, that covers large areas and offers snapshots that are more superficial but allow us to make global connections.





The Internet: A History Under Construction

This chronology is simply a point of departure for an Open Map of the History of the Internet, one that welcomes all of your contributions.


Puente de los Franceses, Madrid

The history of the Internet is an open process, a story in progress. It is impossible to say exactly when it began, and it cannot be attributed to a single inventor. The Network of Networks is the result of numerous scientific and technological advances, of the development of computer languages, of the formulation of different media theories, of the creation, disappearance and resurgence of enterprises such as digital services companies that have woven the Net into what it is today.

The crucial influence of the Internet on every aspect of our lives and on the type of society we are creating makes it essential for us to continue to deepen and expand our understanding of it: the wonders that it has made possible and those that it promises, but also the evident dangers of falling into uncritical fascination, or of intellectual surrender in the face of the complexity of the factors that come together in its evolution. For this reason, we would be well advised to accept that the history of the Internet remains open in the light of new disciplines such as network theory, digital humanities, media archaeology, media ecology, etc. And to realise that its timeline is made up of familiar milestones, but also includes huge gaps in our knowledge and reflection. Hence the need to consider the Internet as an expanding universe, with its micro and macro extinctions and the birth of new species. A story that we are all creating together – a collective narrative that should aim towards achieving increasingly democratic, distributed access to knowledge, beyond the decisive influence of scientists, technocrats, governments and corporations.

Others have already tackled the challenge of a historiographical exercise of this kind, including more educational projects such as The history of the Internet in a Nutshell (Six Revisions, 2009) and When was the Internet invented? A people’s History of the Internet (The Guardian, 2009), and other approaches such as Updating the history of web technologies & browsers (Vizzuality, 2012), a visualisation of web technologies and browsers, The Big Internet Museum (2011), an online museum of Internet milestones that opens 24/7, and the Internet Hall of Fame’s Living History Timeline (Internet Society, 2012), a timeline that focuses on people who have contributed to the progress of the Net. Each of them does it differently.

And now we have started our own version, which is not by any means exhaustive but is simply a point of departure for an Open Map of the History of the Internet, and which welcomes all of your contributions.

1822 // Charles Babbage designs the first mechanical computer.

1848 // George Boole invents Boolean algebra.

1876 // Alexander Graham Bell patents the telephone.

1934 // Paul Otlet, who is considered to be one of the fathers of “documentation” and was obsessed with classifying all of the world’s knowledge, publishes Radiated Library, a book that anticipates computers and the Internet.

1937 // Alan Turing develops the theoretical concept of the modern computer, the universal machine. He is also considered to be the father of artificial intelligence.

1945 // Vannevar Bush imagines the “Memex”, a microfilm reader with a structure that is analogous to that of the World Wide Web.

1959 // Paul Baran, along with Donald Davies and Leonard Kleinrock, is one of the promoters of packet switched computer networks as a survival strategy for the North American communications network in the event of nuclear attack.

1963 // Douglas Engelbart invents and patents the first computer mouse.

1965 // Andries van Dam and Ted Nelson define the concepts of hypertext and hypermedia.

The first Interface Message Processor (IMP), used to interconnect participant networks to the ARPANET.

1968 // Work begins on ARPANET, the military research network that is the precursor of today’s Internet. Computer scientists Vinton Cerf and Robert Kahn later design the TCP/IP protocols (Transmission Control Protocol/Internet Protocol) used to interconnect its computers.

1968 // Marshall McLuhan coins the term “global village” to describe the human interconnectivity generated by electronic communications media on a global scale.

1969 // The first computer-to-computer message is transmitted over the ARPANET between UCLA and the Stanford Research Institute.

1969 // A group of employees from Bell Laboratories at AT&T start to develop UNIX, an operating system that is not a free software project but is extremely stable.

1971 // Michael Hart launches Project Gutenberg, a global effort that aims to make public domain books and documents available in a wide range of digital and eBook formats. It is sometimes considered the birth of the eBook.

1971 // Ray Tomlinson implements the first system for sending e-mail between networked computers and becomes the first person to send an e-mail.

1973 // The Xerox Alto mini computer is created. It is the first step towards the development of personal computers (PC).

1975 // MITS releases the Altair 8800, the first commercially successful microcomputer. This same year, Bill Gates and Paul Allen found the Microsoft Corporation.

1975 // The Jargon File (later published as The New Hacker’s Dictionary) begins to compile hacker slang from computer programming cultures.

1976 // Research carried out by Whitfield Diffie and Martin Hellman on a series of mathematical functions leads to the development of a new theory of cryptography that makes secure e-commerce possible.

1976 // Steve Jobs and Steve Wozniak found the Apple Computer Company in their garage. The following year they create the first personal computer with colour graphics.

1978 // The first magnetic tape is developed in the USA.

1978 // Gary Thuerk sends the first unsolicited bulk e-mail message (later known as spam) to 393 California Arpanet users.

1980 // IBM hires Paul Allen and Bill Gates to create an operating system for a new PC, and they develop DOS.

1982 // The writer William Gibson coins the term “cyberspace” in his short story “Burning Chrome” and later popularises it in his novel Neuromancer (1984).

1982 // Introduction of SMTP (Simple Mail Transfer Protocol).

1983 // Richard Stallman launches the GNU project to create a Unix-compatible operating system composed entirely of free software.

1984 // The Domain Name System (DNS) is implemented, introducing seven domains for the first time: .com, .edu, .gov, .mil, .net, .org, .int

HTML code.

1990 // A group of physicists from CERN led by Tim Berners-Lee develop the HTML markup language and, in 1991, they create the prototype of the World Wide Web, using the infrastructure of the Internet to query remote files through texts linked to other files.

1991 // Saskia Sassen writes The Global City, a book that explores how the new forms of power are based on global information architectures and symbolic production.

1991 // Linus Torvalds starts working on the Linux kernel, a milestone in the history of free and open source software.

1992 // Al Gore, elected as Bill Clinton’s vice-president and instigator of the 1991 High Performance Computing Act, promotes policies for the development of information superhighways that favour interconnection between places, people and terminals.

1993 // Tim Berners-Lee posts weekly research updates on his personal website What’s new in ‘92, creating what is often considered to be the first blog. It isn’t until 1997 that Jorn Barger officially baptises them “weblogs”.

1993 // Mosaic, the first widely adopted graphical web browser, is launched.

1994 // Tim Berners-Lee creates the World Wide Web Consortium (W3C), an international community that develops open standards to guide the long-term growth of the Web.

1994 // WebCrawler, the first full-text web search engine, is created.

1994 // Jerry Yang and David Filo, two graduate students from Stanford University, co-found Yahoo!, an Internet portal that offers a web directory and related services.

1995 // Java is created, Jeff Bezos founds Amazon, Pierre Omidyar launches eBay and Jack Smith and Sabeer Bhatia create Hotmail.

1996 // John Perry Barlow publishes the Declaration of the Independence of Cyberspace, a manifesto that defends the Internet as a new political space.

1996 // Larry Page and Sergey Brin, two Computer Science PhD students from Stanford University, create a more efficient search engine called Google, which ranks the results according to the number and importance of the pages that link to them.

1997 // RSS (Really Simple Syndication), a format for sharing content with subscribers, is launched.

1997 // The Institute of Electrical and Electronic Engineers (IEEE), chaired by Vic Hayes, creates standard 802.11, which allows the use of the microwave spectrum to transmit Internet data (Wi-Fi).

1998 // Sergey Brin and Larry Page found Google Inc.

1999 // Auto-ID Center develops the concept of the Internet of Things, which in computing terms refers to a network of interconnected objects from daily life.

1999 // Shawn Fanning, John Fanning and Sean Parker create Napster, a file-sharing system that popularises distributed peer-to-peer (P2P) architecture.

1999 // Peter Thiel and Max Levchin found PayPal.

1999 // Nokia releases the 7110, the first mobile phone that can connect to the Internet using the Wireless Application Protocol (WAP) browser, at a maximum data transfer speed of 9.6 kilobits per second. Websites adapted for mobiles immediately start to appear online.

2000 // The Wireless Ethernet Compatibility Alliance (WECA) begins using the Wi-Fi® trademark to certify Wi-Fi products that conform to interoperability standard IEEE 802.11b. It guarantees that all electronic equipment that displays the Wi-Fi logo can connect to each other wirelessly, regardless of the manufacturer.

2000 // Social movements begin to appear in defense of net neutrality as one of the fundamental principles of the Internet, based on treating all data equally.

2001 // The first third generation mobiles (3G) are tested in Japan.

2001 // Lawrence Lessig sets up Creative Commons, a non-profit organisation that works towards reducing legal barriers to creativity through legislative changes and new technologies, allowing authors the freedom to decide the conditions under which they share their work online.

2001 // Wikipedia is created.

2003 // During the Iraq War, Iraqi architect Salam Pax publishes a blog in which he describes things that are happening on the streets of Baghdad and are not being reported in the media. Media outlets such as the BBC, Reuters and The Guardian quote him as a reliable source, sparking a boom of the blog phenomenon.

2003 // Nokia releases the 6650, the first 3G compatible mobile phone.

2004 // Adam Greenfield publishes an article in which he coins the term “everyware” to refer to the way computers are integrated in people’s environments and no longer perceived as discrete objects.

2004 // In the article “The Long Tail”, published in Wired, Chris Anderson describes new types of businesses and economic models that have emerged with the Internet, such as Amazon and Netflix.

2004 // Tim O’Reilly talks about the Web 2.0 concept for the first time in a lecture.

2004 // Mark Zuckerberg creates Facebook to communicate with friends and fellow students from Harvard University.

2005 // Michel Bauwens publishes the book Peer to Peer and Human Evolution, which expands the P2P “meme” beyond technological culture.

2005 // The term “social network” comes into use with MySpace. Chad Hurley, Steve Chen and Jawed Karim create the online platform YouTube, which allows users to publish and share videos.

2006 // Henry Jenkins publishes Convergence Culture: Where old and new media collide, in which he describes the way technological, industrial, cultural and social relationships have changed in the media flow of our culture.

2006 // Twitter is created.

2006 // The website WikiLeaks is set up, although it does not start operating until 2007-2008.

2007 // Rudolf Giffinger publishes Smart Cities: Ranking of European Medium-Sized Cities, in which he puts forward six criteria for classifying smart cities: smart economy, smart mobility, smart environment, smart people, smart living and smart governance.

2007 // The iPhone is launched and smartphones become widespread.

2008 // Clay Shirky publishes Here Comes Everybody, in which he looks at collaborative crowdsourcing and explores how new digital media have influenced people’s ability to organise themselves collaboratively without the need for traditional infrastructures.

2008 // Dave Cormier and Bryan Alexander coin the term MOOC (Massive Open Online Course) when enrolment figures for the course Connectivism and Connective Knowledge (CCK08) hit almost 2,300 students.

2010 // Tablets that can connect to the Internet (the iPad, Samsung Galaxy Tab, e-readers and similar devices) are launched on the market.

2011 // Greenpeace publishes the report “How Dirty is your Data”, which condemns the environmental footprint of large Telecommunications and Internet corporations.





The Digital Black Hole (DBH)

It is estimated that around four fifths of the information and knowledge generated on the Internet since its creation has disappeared.


Giant Ring of Black Holes. Source: Smithsonian Institution.

Apart from rare exceptions, highlighted above all for their media impact, the dominant history of the Internet is a narrative sequence of overwhelming advances and spectacular successes. It has ceaselessly conquered spaces of all kinds, physical and spiritual, material and digital, connecting living beings and machines, so that we are no longer capable of even imagining whether, in this digital expansion, there can ever be any turning back. If we were to consider this question with regard to the Internet, the first thing we would say is that we are talking about a young, almost carefree cyberspace, which still has a long way to go before we are presented with that remote possibility of its “deflation”.

However, processes of expansion travel very rocky paths: not all moves are advances. Sometimes truly catastrophic regressions occur, even if we do not see them or never get the full story. As if they were copies of the black holes that we barely know, these digital black holes explain to us, from the other side of the mirror, the dynamics of the Net: what we do on and with it, what it does to us and the consequences. Digital black holes, as happens in the universe that we do know, chisel out the landscape of cyberspace; they determine its dynamics and establish the rules of the game from the side of the “dark matter”, which we do not see even though we intuit that it exists, although we do not pay much attention to it either.

Digital black holes (DBHs) do all this and much more. In some cases, you, dear Internet User, are simply yet another victim of the voracious appetite of the DBHs fed by reasons of force majeure that are irremediable and overwhelming, like a tsunami. In others, it is precisely you who causes its appearance and you who hands over to it all the sustenance you are capable of providing it: hopes, visions, money, effort, intelligence, plans, resources of all kinds and, even, to say it with a commonplace that is currently so in vogue, those dreams that you had believed were finally going to come true. You hand it all over, and what’s more, you don’t even look back, perhaps dazed by the apparent simplicity and inevitability with which the disaster has occurred, in the majority of cases without leaving any trace or memory. At the end of the day, we don’t all have the mass media following our every step.

When the Net is examined from this perspective, a panorama appears to us that is much more useful for understanding a phenomenon as extraordinary as the Internet and the numerous ways in which we operate in the digital universe. When observing the Net through a “deep telescope” to unravel its most discreet structures, the first thing to surprise us is that never before had the History of the Internet (of the digital universe) been tackled from the viewpoint of the extinction of knowledge and information as a result of the upheavals that, to a greater or lesser degree, have constantly shaken cyberspace since its creation. According to the highest of the different measurements that have been made in this respect, it is estimated that around four fifths of the information and knowledge generated on the Internet since its creation has disappeared. The DBH has taken charge of alleviating such a burden and erasing it from the virtual map, almost without leaving a trace.

The reasons for these cataclysms are many and of very diverse origin. The same thing occurs with their dimensions and reach. Sometimes they are authentic “major extinctions”, at other times what disappears are technological structures of different dimensions, or groupings of very specific info-systems or of some of the species that populate them, such as organisations that depend upon these regions. What is moving in the depths of this agitated scenario are powerful events, frequently difficult to foresee or distort, sometimes with the appearance of events of force majeure, at other times promoted by internet users themselves, where a preponderant role is played, above all, by what we could call the four horsemen of the digital apocalypse, constantly launched into a gallop from the first day that what we today call the Internet started to be shaped, and that constantly, bit by bit, are transforming what, to our eyes, looks like a relatively manageable digital universe:

  1. the dynamics of the connected populations. From its creation, the Internet has been doubling its population approximately every 12-16 months. To put it another way, at any time, those of us connected form part of a population of which 30% has coexisted for less than 24 months on the Net and 70% have been moving around its ins and outs for more than two years. And logically, at any time, the composition of this population is changing, even if we do not notice it, or we have no way of noticing it. And their activity varies between the absolutely unpredictable and the approximately foreseeable, which is an extended arc packed with surprises.
  2. the evolution of information technologies. This is an everyday experience that everyone explains according to the place they occupy, but that affects everyone equally because it shapes the technological substrate of the Net: from distribution lists, to websites, blogs, mobile apps, innovations that appear on the horizon…
  3. the constant expansion of virtualisation processes: previously it was a case of life in computers, now we are heading towards machines and physical spaces (and sidereal space).
  4. as a corollary of the previous three, the overwhelming acceleration of the speed of information and knowledge turnover, in many cases reaching 0: creation and consumption seem to form part of the same act. Therefore, the validity of the ephemeral (disposable information) is reinforced with respect to the long-lasting (information likely to be converted into knowledge).

The result of the combined and unequal action of these horsemen is that we are constantly pushing forward the frontiers of the virtual space and opening up new territories without knowing their relief, their structures, their functions and, above all, the implications of occupying them without the knowledge that we would use, for example, in similar actions in the physical world. Furthermore, each new creation/invasion of these spaces usually corresponds with changes in population dynamics, accompanied by leaps in technological evolution and the dissemination of innovative processes of virtualisation. Therefore, the information turnover speed accelerates, which exponentially increases the need for new knowledge to negotiate the emerging challenges. However, it is not easy to solve this dilemma, because to do so it would be necessary to get around the digital black hole. And it is quite apparent that this mission is not at all easy for us.

Image of the AltaVista search engine.

Where and how can we check that the DBH has sunk its teeth in, with ill-fated consequences? In this respect, we have access to a few methodologies, some emerging, others consolidated, that allow a kind of reverse engineering to be carried out to detect its presence and the reasons behind it. In fact, we at Lab-RSI have developed a growing typology with a high predictive value for detecting projects destined to be swallowed up, sooner or later, by the DBH. But we are now going to use a simple classification of three kinds of extinctions, according to criteria of intensity and density, whose outstanding features are easily recognisable by any Internet user:

The major extinctions. Changes in the dominant technological structures that bring with them colossal losses of information and knowledge, of experiences and relations, of virtual organisations and structures. For example, the leap from BBS and organised virtual communities through the collective distribution of e-mails towards the first platform technologies (citizen networks, CompuServe, AOL, APC, Servicom, etc.) that even offered direct access to the Net; the transfer from these platforms towards the Web, which meant an almost immediate loss of enormous amounts of information and knowledge that either were not converted into the new format in time or were simply abandoned in the face of the arduous task represented by their “transfer”. The bursting of the technology bubble, the massive liquidation of the so-called dot-com companies, also considerably fattened the DBHs. Hardly any traces remain of those magnificent investments by telecoms operators in “reference” portals, propped up by grand media outlets created in the blink of an eye and wrapped up in companies generously paid for based on dreams that were barely vapours in the digital cave…

Medium-sized extinctions. Suddenly, without knowing very well why, broad sectors of activity on the Web disappear, without leaving a trace. Some technological modalities or ways of doing things change, cultural norms appear that were hardly glimpsed as they reared their heads, and a disproportionate importance is acquired by the narratives of marketing agencies or the mass media, which transmit partial, segmented, self-interested views, focusing on a few select brands, despite referring to a universe where billions of people coexist. This reductionist analysis, which strengthens the quality and capacity of certain focal points of the Net, such as Silicon Valley, certain universities and their supposed reputation in “online learning”, the “refreshing” tendencies in company creation, fashions in e-commerce, etc., fool thousands and even millions of inattentive Internet users who conceive and execute their initiatives in the very jaws of the DBH without even noticing the risk.

Everyday extinctions. Each day, the digital black hole acts like a giant rubbish tip for initiatives, projects, processes of all kinds, resources and inestimable volumes of information and knowledge of all sorts and organised in numerous ways. All this is caused by a mixture of ingredients that have been active almost since the very start of the Internet: poorly conceived projects, distorted view of the technological requirements for acting on the Internet, imprecise or oversized objectives, virtual structures that do not correspond with the resources available or with the goals aimed to be reached, expectations fabricated in the smoke factories that proliferate everywhere, following of tendencies that supposedly promise eternal riches, etc. In each one of these bites, the DBH destroys, in the solitary discretion of cyberspace, files, databases, websites, lists of all kinds, projects, expectations, information flows and, of course, unexplored possibilities, and nobody knows whether they will ever form part of “the priorities” again.

Each of these extinctions, whether due to force majeure or perpetrated by humans, bears considerable loads of mistaken or non-substantiated decisions, taken at times of change (all times are times of change on the Internet thanks precisely to the four horsemen of the digital apocalypse) that were perceived at times as the confirmation of rites established in preceding activities, or as the prologue of fascinating adventures, but that would never have been undertaken in the physical world in these circumstances. For this reason, the digital black hole is a very rich concept that allows us to obtain unrivalled knowledge on what we do or should do on the Net, on the formulation of our plans and projects, on the knowledge necessary to be able to execute them and, in short, to prevent our actions becoming a new helping of food for a digital black hole that, let’s not forget, only has to await us and suck in what we generously offer it.





The Latest Post-Digital Revolution: The Internet of Things, Big Data and Ubiquity

We build bridges that bind the virtual world closer to the physical world, so that information is not only accessible from anywhere but also in everything.


Photogram of Metropolis, Fritz Lang (1927).

“Gestural interfaces that can be used to access, connect and process data captured in real time; shopping malls that recognise us when we walk in, and where polite virtual agents address us from interactive screens, remind us of our recent purchases and offer a selection of products tailored to our needs and tastes; the capacity to locate and track the movements of any person through the city… and even to predict the future.” This is how engineers at MIT Media Lab, Microsoft Research, and Austin-based Milkshake Media described the world circa 2045 when Steven Spielberg asked for their advice while preparing the screen version of the famous Philip K. Dick short story. Our reality is still nowhere near the massive, seamless network that structures and brings to life the world of Minority Report, but it would appear that this world made up of always-connected smart objects – or something very similar to it – is inevitable. As Adam Greenfield explains in his book Everyware: The Dawning of the Age of Ubiquitous Computing, computer ubiquity, in its numerous forms – augmented reality, wearable computing, tangible interfaces, locative media, near-field communication – is evolving every day, building bridges that bind the virtual world, or “dataspace”, closer and closer to the physical world, so that information is not only accessible from anywhere but also in everything.

See, for example, the recently opened Burberry flagship store at 121 Regent Street in London, a spectacle of consumption that merges all the information from this clothing company’s website with the physical space. It is an augmented reality project in which information spreads throughout the architectural space by means of interactive screens that share information in real time through hyperspace: from watching a catwalk show or the launch of one of the products sold at the store, to the planet-wide sharing of cultural events programmed there.

Another example that connects information to context can be found in the numerous sensor networks that collect information in our environment, for purposes ranging from improving sporting performance to preventing damage from tsunamis, volcanoes and radiation leaks, or improving road traffic flow and safety.

Image taken from Murmurs of Earth by Brian Gardiner.

Concussion Detector is a wearable sensor that measures the impact of blows to the head suffered by athletes during games. The data recorded is sent to the coaches, who are equipped with an iPad where they can check it against the impact history of the players to help them make appropriate decisions about whether the player should stay in the game. As well as improving player safety, this project, developed at Cagan Stadium in Stanford, is also a massive data-capturing initiative that aims to improve diagnostic capacity in general.
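
As a rough illustration of the kind of decision support described here (the thresholds, readings and logic below are invented, not the actual Concussion Detector system): compare a player’s latest impact against their own history and flag clear outliers for review.

```python
# A toy sketch of impact-history decision support: flag a player for review
# when the latest head impact is severe in absolute terms or unusually hard
# compared with that player's own history. All numbers are hypothetical.
from statistics import mean, pstdev

def should_review(history, latest, hard_limit=80.0, sigma=2.0):
    """history and latest are peak accelerations in g."""
    if latest >= hard_limit:            # obviously severe impact
        return True
    if len(history) < 5:                # not enough data to compare against
        return False
    baseline, spread = mean(history), pstdev(history)
    return latest > baseline + sigma * spread  # unusually hard hit for this player

impacts = [22.0, 31.5, 18.2, 27.9, 24.4]   # hypothetical past readings
print(should_review(impacts, 63.0))        # True: well above this player's baseline
```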

Image taken from Murmurs of Earth by Brian Gardiner.

Another connected sensor project, in this case related to the development of smart cities, is the Parking Spot Finder, a sensor network that aims to improve traffic flow and clear up congestion in streets in the city centre. To do this, it detects whether parking spots are occupied and sends the information to smartphone users. The database is also used to adjust the prices of parking meters based on demand.
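
The logic can be illustrated with a small hypothetical sketch (it is not WorldSensing’s or the project’s actual code): occupancy reports from the sensors are aggregated per street, and the meter price is nudged up or down depending on demand.

```python
# A rough sketch of demand-based parking pricing: sensors report whether each
# spot is occupied, the occupancy rate per street is computed, and the hourly
# meter price is adjusted. All names and numbers are hypothetical.
def occupancy_rate(reports):
    """reports: list of (spot_id, occupied) pairs for one street."""
    if not reports:
        return 0.0
    return sum(1 for _, occupied in reports if occupied) / len(reports)

def adjusted_price(base_price, rate, low=0.4, high=0.8, step=0.25):
    """Raise the price when a street is nearly full, lower it when it is empty."""
    if rate > high:
        return base_price + step
    if rate < low:
        return max(0.5, base_price - step)  # keep a minimum price of 0.50
    return base_price

street_reports = [("A1", True), ("A2", True), ("A3", False), ("A4", True)]
rate = occupancy_rate(street_reports)   # 0.75
print(adjusted_price(2.0, rate))        # 2.0: occupancy is within the target band
```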

All of these sensor systems collect petabytes of data that are sent to the “cloud”, where they interact with other data sets and are processed in real time, in order to produce knowledge that is distributed through the net. A state of affairs in which collective intelligence linked to the Internet pervades the environment thanks to its latest evolution: the Data Web.

Web 3.0 or the Data Web is an evolution of Web 2.0, the Social Internet understood as a platform. It is a network in which software is offered as a service in order to connect users to each other. This Web, whose value lies in the contributions and uses of net users, is the start of collective intelligence. In order for the Web to be able to offer answers and create knowledge based on the information provided by users on a massive scale, this information must be in a form that can be handled, understood and worked with in real time. This is what the Data Web does. This new development is based on a series of standards and languages that make it possible to assign metadata to Internet content. These metadata, or data on data, are machine-readable and add information that enables all web traffic to be identified, located and monitored. The result is a system of related databases, in which different subsystems can be used to track all information related to a particular object, and to generate relevant responses. When these data do not just derive from our interactions on the Internet, but also from the network sensors that are spread throughout the physical environment – producing the data flood that characterises the Big Data phenomenon –, and when they also leave the limited frame of screens and become accessible in physical space through different types of augmented reality, then we have the Internet of Things, or, as Tim O’Reilly calls it, Web Squared, or the Web encountering the world.
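
A small sketch of what this “data on data” looks like in practice, using JSON-LD (one such metadata standard) embedded in an invented web page, and a script that extracts it so that software can relate the described object to other data sets.

```python
# A sketch of machine-readable metadata on the web: a page embeds a JSON-LD
# block describing the object it is about, and software extracts that block
# to link the object with other databases. The page content is invented.
import json
from html.parser import HTMLParser

PAGE = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Book",
 "name": "Software Takes Command", "author": "Lev Manovich"}
</script>
</head><body>...</body></html>
"""

class JSONLDExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_jsonld = False
        self.blocks = []
    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self.in_jsonld = True
    def handle_endtag(self, tag):
        if tag == "script":
            self.in_jsonld = False
    def handle_data(self, data):
        if self.in_jsonld and data.strip():
            self.blocks.append(json.loads(data))

parser = JSONLDExtractor()
parser.feed(PAGE)
print(parser.blocks[0]["author"])  # "Lev Manovich"
```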

This encounter with the world, in which information materialises in our everyday surroundings through the dissemination of smart objects, leads us into the realm of Ubiquitous Computing.

Ubiquitous computing was described by Mark Weiser at the Computer Science Laboratory in Xerox PARC in 1988 as a “calm technology” that disappears into the background, allowing users to focus on the tasks they are carrying out rather than on the computer. Unlike virtual reality, which creates a disconnected world inside the screen, ubiquitous computing is an “embodied virtuality”. Dataspace materialises in the world through the distribution of small interconnected computers, creating a system that is embedded in the world, making computing an integral, invisible part of everyday life in physical space. The project that Weiser and his colleagues were working on in this sense consisted of a set of devices – tabs, pads and boards – that worked at different scales and could identify users and share and access different blocks of information from various physical locations. For example, a phone call could automatically be forwarded to wherever the intended recipient happened to be. Or the agenda agreed on by a group of people at a meeting could be physically displayed to the group and then transferred to the personal diaries of each person involved. The result was a type of technology that was as intuitive and unconscious as reading, that moved out of the user interface and created a responsive space in which things could be done. A space in which the virtual nature of computer-readable data, and all the ways in which this data can be modified, processed and analysed, spread through physical space in a pervasive way (widespread dissemination).

There are still obstacles to achieving the pervasive space that characterises ubiquitous computing, such as: the diversity of existing operating systems and programming languages, which hinders communication among computers; the lack of design standards that would enable the homogenisation of the systems involved; the existence of gaps in the universal distribution of ultra broadband, which is necessary for the flow of these data; and the lack of real demand from the general public. But even so, the “intelligent dust”, as Derrick de Kerckhove calls it, of this Augmented Mind is starting to spread throughout our environment. Aside from the sensor networks and augmented reality systems mentioned above, which we can access from our smartphones through applications such as Layar, we are also starting to see systems that identify users, allowing their actions to be automated. Commonplace examples include different types of cards with RFID chips, such as the transport cards used in some countries – Oyster in London and Navigo in Paris – and Teletac, which is used in Spain to pay motorway tolls. There is also NFC, or near-field communication, a short-range technology that lets a mobile phone transmit stored information, such as credit card numbers or booked ticket codes, to nearby devices, so that the person carrying the telephone can make payments or enter shows. All of these applications provide contextualised information on demand, everywhere and in many situations, making it easier to interact with the information overload that characterises our society. They record data about our identity, location and interactions, turning them into new subsystems of data that can then be used by other systems. The fact that the system needs to identify all the objects and persons involved in order to be able to react to them means that any augmented or “pervasive” space is also a monitored space.

Collective intelligence increases our awareness of our surroundings and of our potential options for interacting with them. But the pervasiveness and evanescence of ubiquitous technology make it an unconscious mediation, a highly relational and complex system that is based on internal operations and interrelations with others, and that is imperceptible to the user. It is a system that can restructure the way in which we perceive and relate to the world, and also our consciousness of ourselves and of others. Without our being aware of our involvement in it, or of the magnitude of its connections, or even sometimes of its very presence.

In this way, ubiquitous technology becomes an “apparatus” as defined by Giorgio Agamben, based on his interpretation of Foucault’s use of the term. An apparatus is anything that has in some sense the capacity to capture, orient, determine, intercept, model, control or secure the gestures, behaviours, opinions or discourses of living beings. An apparatus must bring about subjectification processes that allow the individuals involved to interact with them. This means that they can be “profaned”, returned to the process of “humanisation”. Or, in other words, to the set of cultural practices and relations that have produced them, where they can be appropriated by human beings who are active and aware of their environment. The imperceptible nature of the fuzzy system of ubiquitous technology makes it impossible to profane, so that it becomes a strategic system of control at the service of a vague and imperceptible power.

Big data and the systems that materialise information in our environment would seem to have the power to make us happier, helping us to plan our cities and carry out our life plans. But we should stop and ask ourselves whether our cities and our environment in general really need to be “smart”. The qualities that make us engage with our environment are not its functionality and efficiency, but its aesthetic, historical and cultural aspects. The “embodied virtuality” of our post-digital world must be developed in conjunction with aesthetic strategies that allow us to visualise and understand the data flows that surround us, as well as the systems of smart objects that drive them. By doing so, we will not only be able to limit these systems to the areas of our lives in which they can be truly useful, we will also be able to appropriate them, leading to significant relationships. Collective intelligence and its ability to spread throughout our environment should increase our ability to act performatively in the world, making us conscious of the systems of human and non-human agents and relationships that make up our reality at any given moment. It should not become an imperceptible system that can diminish our capacity for agency and lessen our control over how we present ourselves in the world.





What is Insect Media?

Insects are a reference of network culture, from talk of hive minds and distributed networks to algorithms that function like ant colonies.


Them! (1954)

They are everywhere and they can be perceived as quite the alien intelligence; six-legged, with their numerous eyes, capacities of motion and sensation so different from our own. No wonder science fiction has been inspired by insects; so have other fields, like robotics and network design. Insects are more than creepy-crawly bugs; they are also a central reference point of so much of network culture, from talk of hive minds and distributed networks to algorithms that function like ant colonies; some refer to our cognitive capitalist practices as “pollen society”.

It is in this context that I talk of insect media — the entanglement of notions borrowed from biology with high technology; understanding media culture through seemingly simple forms of life that nonetheless outperform humans in so many ways. The idea of insect media is then a theoretical as well as a cultural-historical exercise to understand the long exchange between discourses of technology and discourses of biology.

Science fiction is a good example. If you want to be futuristic, you no longer fantasize in terms of humans and animals, or androids and humanoids, but preferably in terms of insects and other non-human animals. This is the lesson one gets even from a glimpse of recent science-fiction discourse, such as Ian McDonald’s The Dervish House. The novel’s nanotechnological future Istanbul is pitched as the 21st-century version of the Silk Road node, defined by its booming cluster of nanotech businesses and tech companies. The Cronenbergian 1980s fantasies of the human-insect hybrid (as in The Fly) are superseded by fictional versions of spider robotics and the insectoid drones that are part of the security and surveillance regimes of an Istanbul plagued by various suicide sects.

First edition of Gläserne Bienen (1957).

It’s not that swarms themselves are new. Their history goes back to early twentieth-century research into “superorganisms” and the collective mind that, for instance, ant nests seemed to exhibit. This is the pre-digital version of what we are now being offered as the internet society: a world of connectivity and emergent intelligence. Indeed, a whole longer history of cultural techniques of swarming is to be considered in relation to the actual technological development in smart robotics. The fiction of Ernst Jünger is again emblematic of a certain epistemological framing of media “development”, as already in his Gläserne Bienen (The Glass Bees, 1957) he pitched a future of nanorobotics. In Jünger’s vision, this was tied to a becoming-obsolescent of the actual animal worlds, which corresponds to the analyses of such writers as Akira Mizuta Lippit concerning the dual bind of modern urban technological landscapes and animals: the disappearance of rodents and other non-humans is paralleled by the animalisation of media, which seems clear from even a cursory representational analysis of early media, so fascinated by the agility of animal bodies as well as the animation worlds of rodents and animal farms.

To follow Lippit’s ideas, the intertwining of animals and technology was an inherent part of the modernization and emergence of technical media at the end of the nineteenth century. The disappearance of animals from urban cultures of technical media was paralleled by the appearance of animals in various discourses, from media (e.g. cinema) to modern subjectivity (e.g. psychoanalysis). As Lippit notes, from metonymies of nature, animals became embedded in the new industrial environment, where the idioms and histories of numerous technological innovations from the steam engine to quantum mechanics bear the traces of an incorporated animality. James Watt and later Henry Ford, Thomas Edison, Alexander Graham Bell, Walt Disney, and Erwin Schrödinger, among other key figures in the industrial and aesthetic shifts of the late nineteenth and early twentieth centuries, found uses for animal spirits in developing their respective machines, creating in the process a series of fantastic hybrids.

There is a whole media zoology. This term is used in a parallel sense to that of a “zootechnical” approach to elaborate the wider entanglement of communication practices in relation to animal research – and in addition, as we will see below, to a wider media ecological stance. Media zoology refers to this cultural historical situation in which we design and understand high tech media culture through animal worlds.

But of course there is more to this grounding of media zoology than looking at media through its content and what is on the screen. Indeed, the worlds of such fiction as The Dervish House remind us that media as technologies – as abstract, yet embodied, as concrete, but massively distributed in the wireless and network age – work much more efficiently when they are not modelled on the human form. This is why marine biologists turned US military and security advisors make international news talking about octopuses. I am referring to the University of Arizona marine ecologist Rafe Sagarin, who advises on learning decentralised organisational methods from the tentacled marine creatures. The handbook for such experimental ideas is of course the media philosopher Vilem Flusser’s Vampyroteuthis Infernalis: A Treatise, with a Report by the Institut Scientifique de Recherche Paranaturaliste.

In terms of media theory, this relates to a bypassing of some of the traditions of the theory of technology from Ernst Kapp to Marshall McLuhan. Many of them have suggested that we always design media as models of the human and as extensions of Man. Instead, as Siegfried Zielinski, for instance, has also argued, media are much weirder in their relations to the world.

Hence, in this spirit of media archaeology and even media variantology, there is another tradition of media theory that we can excavate. This alternative takes animals aboard in various forms and starts the theorisation of media from a different set of affordances. Indeed, when talking of media ecology, one should not forget the early writings of Harold Innis, which feature such literally ecological themes as rivers, fur and beavers, as well as the more conceptual takes on “medium” that track its history in between biology and technical media. Reading through early ethological and biological literature from the 19th and early 20th centuries, one discovers a range of attempts to pitch a cross-disciplinary approach to, for instance, insect lives, which resembles almost an attempt to understand the tiny animals through “cultural” techniques: the life of insects is one of dance, acoustics, communication and housing, as for instance in J.H. Fabre’s Social Life in the Insect World.

Such dilemmas of methodological and theoretical interest are expressed by William Morton Wheeler, a pioneer in the study of the social life of animals and a writer on “emergence” well before it was captured as part of complexity theory. In the early 20th century, Wheeler lamented the restriction of the notion of the social and its scope: “Unfortunately, also, the science of comparative sociology has remained undeveloped. It has, in fact, fallen between two stools, because the sociologists have left the study of the animal and plant societies to the biologists and the latter have been much less interested in these societies as such than in the structure or individual activities of their members.” What if his willingness to expand the social sciences to animal and plant lives could be taken as a further step to suggest the same for our media-related investigations? If so much of the early research into animal psychology and formations of the social can be seen, anachronistically, as a tongue-in-cheek mapping of the cultural techniques of non-human life, perhaps we can more seriously suggest an extended mediatic approach to animals but also other elements of the non-human ecology – organic and non-organic?

Insect Media: An Archaeology of Animals and Technology. Jussi Parikka (2010).

Indeed, we need approaches such as “insect media” that rigorously track the cultural, historical and mediatic contexts in which technological media culture develops. This is even more urgent now, in the midst of the current ecocatastrophe: electronic waste is one of the growing problems, and all of our high-tech electronic gadgets carry with them toxic material. Cloud computing demands huge amounts of energy. Media technologies are themselves embedded in ecological consequences, not just animal metaphors. Perhaps ecology and animals are better and more ethical ways to understand technical media culture?





#Machine controls the power

Thoughts about 'Citizenship, Internet and Democracy'.









Evgeny Morozov: “A lot of people overestimate the democratic potential of the Internet”





Leila Nachawati: “There is a gap between what people want and what the media covers”





Nicholas Mirzoeff: “The Internet doesn’t give us democracy, but it is potentially democratic”





Geert Lovink: “Facebook is not interested in offering a rich social life, but in its own profit”





Smart Citizens in the Data Metropolis

How are ‘smart cities’ prefigured? What forms and dynamics will cultural institutions and citizen action take within them?


Blade Runner (1982).

With the Smart City Expo taking place in Barcelona over the next few days, it is a good opportunity to pose some questions about how these ‘smart cities’ are prefigured, and about the forms and dynamics that cultural institutions and citizen action will take within them. New opportunities, new challenges.

Suddenly, one of the classic media artworks has become an apt metaphor. In Jeffrey Shaw’s 1989 work The Legible City, users hop on a stationary bicycle and navigate their way through the streets of a city of words, a 3D simulation made up out of meanings. A city of data. Twenty-three years later, the essence of smart cities is not too distant from Shaw’s scenarios.

To start with, contemporary cities are going through a change that may seem obvious but is fundamental: it will no longer be necessary to go and look for information in one of the temples of the past, or to absorb it through the media. Data flows through closed and open networks, on earth and in the skies. Data from sensor networks or M2M (machine-to-machine) connections, information provided by users of social networks, and crowdsourcing will come down from the cloud and enhance physical urban space. The great challenge of how to turn all that data into meaningful, useful information still remains, but the data and the devices that ‘sense’ it and put it into circulation already exist and are already connected.

Increasingly, companies that design user experiences are starting to develop physical interfaces or mobile apps that integrate data and city life. Urbanscale, Adam Greenfield's company, is a good example, with its UrbanFlow service that augments the city's screens (at bus and train stations, in streets, etc.) with "designed and situated" information that allows citizens to find what they are looking for, plan routes, and even participate in civic life.

For further examples we need look no further than Barcelona, where the company WorldSensing has set up sensors to capture traffic data that can then help drivers find a parking space using a mobile app. Along similar lines, the Barcelona-led European project iCity seeks to open up urban infrastructures that interested agents can enhance with open data in order to offer services of public interest that improve urban life: a parking meter that offers information on the air quality at its location, an app that lets you know whether the public swimming pool or park is packed to overflowing, a ticket vending machine for public transport that offers you the chance to take part in a popular consultation as well as selling you your weekly ticket.




Apocalyptic and integrated, utopia and dystopia. As usual, our ideas about the future are shaped by strongly conflicting visions. There are those who believe that smart cities imply the rise of an Orwellian society, where technology will be exclusively in the hands of monopolies and authoritarian governments and will only be used to monitor and control citizens. Security and privacy are still a problematic frontier. On the other hand, more optimistic observers believe that technology and data will open the doors to transparency, civic participation and the emancipation of sections of society that were previously excluded. They also defend the sustainable city, in which the community itself, by means of its access to open data, reduces its energy use or adopts more responsible forms of behaviour. The Tidy Street project in Brighton is a great example of a citizen initiative to self-regulate electricity consumption.

This is why the smart city meme must go far beyond resource optimisation and hi-tech efficiency projects. While corporations such as IBM offer city councils smart-city-in-a-box solutions that require large technological investments, even though there is no conclusive proof that a system that works in one city will work in a different one, research suggests that smart cities will not exist unless citizens are at the centre of the equation.

This year, the Institute for the Future and the Rockefeller Foundation released "A Planet of Civic Laboratories", a report that suggests that in order for cities to be truly smart, data must generate inclusion and development. Top-down solutions proposed by big technology companies are not enough. According to the report, in today's cities there is a growing and opposing force of entrepreneurs, hackers and "citizen hacktivists" who are pursuing a different vision of the future city. Their pitch is that urban data, in the form of information, can promote cities that are more democratic, more inclusive and more resilient.

These do-it-yourself (DIY) urbanists use open-source technologies and cooperative structures for citizen-driven initiatives, strengthening social commitment and ensuring that technological progress remains in line with civic interests. Along these lines, projects such as Smart Citizen (a kit containing sensors for measuring environmental indicators and connecting via the online platform Cosm), by Barcelona FabLab, and DCDCity, incubated at MediaLab Prado, nourish the smart city concept from the other side: open source, do-it-yourself philosophy and citizen participation. Given this situation, what is the role of schools and cultural centres? How can these projects be integrated into the cultural agenda and the educational curriculum? How will 'smart citizens' be educated?

In the future, successful cities will almost certainly have to integrate these two models. Ideal solutions combine large-scale platforms with citizen-led innovation. This integration is already taking place to a certain extent, but public administrations need to shape and encourage it as part of an agenda of openness, transparency and inclusion.

Cities are like living organisms with a spirit that extends way beyond the technological network and infrastructure. Human communities are what make and sustain a city’s specific DNA, and it is these particularities – sometimes whimsical or even inexplicable – that must be taken into account when designing innovation in a city context.

It is also worth imagining what new infrastructures will reconfigure the landscape of hyperconnected postmodern cities, guiding the information that flows through them in real time, from traffic control to Facebook ‘likes’, air pollution levels, breakdowns on train networks, and road problems reported through services like FixMyStreet.

What form will the new, hyperconnected flâneur take, now that our right to lose ourselves in the city or discover unexpected spots while looking for a late-night pharmacy is no longer taken for granted? Perhaps one possible role of cultural institutions will be to imagine new urban experiences that enrich physical space with a certain poetry, to return some serendipity to the street experience, or to help us resignify data or rediscover furtive space. To finish, a curious anecdote that illustrates the contrasts of this zeitgeist: on the way to the airport, the taxi driver said to me, ironically: "New taxi drivers don't know their way around the city any more. They go wherever the GPS tells them. Do you know what I call this thing? An Idiot-Guide."





James Curran: “The Internet has been an opportunity for major conglomerates to extend their dominance”





Apps in the digital ecosystem. Interview with Mara Balestrini





Inside Digital Life. Interview with Jussi Parikka

Jussi Parikka is a writer, media theorist and Reader in Media & Design at Winchester School of Art (University of Southampton). He is also Adjunct Professor of Digital Culture Theory at the University of Turku, Finland. Parikka attended the I+C+i workshop on Our life in the network to speak about digital humanities, networks as living systems, defenders and critics of technology, digital viruses vs. physical viruses, and other issues related to media ecology. Juan Insua, Head of CCCB LAB, interviewed him during his stay in Barcelona. Here is the first part of the interview; we will publish the second part soon.








Collective Individuation: A New Theoretical Foundation for Social Networks

Despite their increasing ubiquity, there is no fundamental philosophical theory of social networking, and we believe this has restricted the technical development of social networking to very narrow use-cases. We propose to develop a theoretical discourse on the new generation of social networks and to develop software prototypes for an alternative. Our project centres on the question: what is collective individuation and what is its relation to collective intelligence?

Current social networking websites and network science are based on individuals as the basic analytic unit, with social relationships as simple "ties" between individuals. In contrast, this project approaches even individual humans as fundamentally shaped by their collective social relationships, building on Simondon's insight that individuation is always simultaneously psychological and collective. Our proposal should enable new kinds of social imagination and social structure by redesigning the concept of the 'social' in the time of Facebook.

Facebook and the problem of individuation

The Origin of Social Networks: Moreno and Saint Simon

One of the emerging research areas of web science and network analysis is the attempt to analyze social networks in terms of network theory as it directly descends from sociology. Under the traditional sociological use of graph theory, individuals are considered nodes and their social relationships are mapped to edges. In its entirety, a network can be seen as the representation of certain social relations. Such a conceptualization seems to be the foundation of a new discipline mediating between computer science, sociology and cultural studies. Our questions start here: where did this entire conception come from? What legitimates it? What are the consequences of such a conceptualization?
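
To make this convention concrete, here is a minimal sketch of the graph-theoretic representation described above (individuals as nodes, social relationships as edges), using the Python networkx library; the names and ties are invented purely for illustration and are not drawn from Moreno's data.

```python
# A minimal sketch of the sociometric convention described above:
# individuals become nodes and social relationships become edges.
# Names and ties are invented purely for illustration.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Ana", "Berta"),   # each pair is one social "tie"
    ("Berta", "Carla"),
    ("Carla", "Ana"),
    ("Carla", "Dora"),
])

# The "sociometric chart" is then simply the structure of this graph:
# who is connected to whom, and how central each person is.
print(nx.degree_centrality(G))            # relative connectedness of each node
print(list(nx.connected_components(G)))   # the groupings the chart makes visible
```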

J. L. Moreno (1889-1974), a psychologist and the founder of sociometry, was one of the first sociologists to demonstrate the value of graph-theoretic approaches to social relationships. The most often quoted example is Moreno's work at the New York State Training School for Girls in Hudson, where runaway cases were more than fourteen times the norm. Moreno identified this as a consequence of the particular network of social relationships amongst the girls in the school, via the creation of sociological surveys that helped him "map the network", and he then used the self-same model to propose another allocation plan that successfully reduced the number of runaways. The belief in the representation of social relations by 'charting' prompted Moreno to write that 'as the pattern of the social universe is not visible to us, it is made visible through charting. Therefore the sociometric chart is the more useful the more accurately and realistically it portrays the relations discovered.' [1] But one should be careful: in doing this, the charting is no longer a mere representation of social relationships; these maps of social relationships are also to be used to realize what Moreno called social planning, meaning the reorganization of "organic" social relationships with the help of planned and technologically embodied social networks. At this point we can identify a question which has not yet been tackled significantly by research, and which Moreno already proposed in 1941: the superimposition of technical social networks upon pre-existing social networks 'produces a situation that takes society unaware and removes it more and more from human control' [2]. This loss of control is the central problem of technical social networks today, and in order to address this phenomenon, we propose to question some of the presuppositions that have been hidden in the historical development of social network analysis.

Despite its explicit mapping of social relationships, social network analysis is actually an extreme expression of social atomism. This proposition has to be understood sociologically and philosophically. The presupposition of social networks is that individuals constitute the network, and hence individuals, which in traditional sociology (if we count Actor Network Theory as an alternative) tend to be humans, are the basic unchanging units of the social networks. If there is any collectivity, it is considered to be created primarily by the sum of the individuals and their social relationships, embodied as connected individuals in the network's graph. This view is at odds with what has been widely understood in anthropology: namely, that there is a meaning of a society, community, or some other collectivity that is more than merely the sum of individuals and their relationships. Historically, it can be noted that the development of collectives, which originally existed in the form of families, clans, tribes and so on, even pre-dates the notion of the autonomous individual [3].

The reemergence of sociometry should be attributed to the proliferation of technical networks, and here we must recognize that today it is no longer only human relations that are mapped in sociometry but virtually anything that can be digitalized, or more precisely anything that can be represented as data. The arrival of a network society supported by technological infrastructure further reinforces the concept of sociometry, as well as the spirit of industrialization that one can trace back to Saint Simon. The French sociologist Pierre Musso shows that Saint-Simon was the first philosopher who fully conceptualized the idea of networks via his understanding of physiology, which he then used to analyze vastly different domains, albeit more imaginatively than concretely, as was done later by Moreno.[4] Saint Simon indeed envisioned networks as including communication, transportation and the like, holding the idea of the network as both his primary concept and his tool for social transformation. Saint Simon believed that through industrialization it would be possible to create a socialist state by reallocating wealth and resources from the rich to the poor, from the talented to the less talented, just as an organism attains its inner equilibrium by unblocking all of its circulations.

Today we know from history that Saint Simon's sociology was blind to the question of class, which was later analyzed by Karl Marx in Das Kapital. Marx's vision of society is often distorted into social planning, that is, more or less, the codification of collectives in the Soviet fashion. Moreno criticized this distorted figure of Marx and proposed that the 'next social revolution will be of the "sociometric" type. The revolutions of the socialistic-marxistic type are outmoded; they failed to meet with the sociodynamics of the world situation'. Moreno's announcement may be demonstrated today by Facebook, as some of the pop writers on technology would say, but what Moreno actually meant by it has to be discussed further, especially the concept of spontaneity. Nor can Saint Simon's distinctly old-fashioned industrial vision be sustained, since it is obvious that socialism does not come naturally through industrialization; what is new is the imagination of a new democratic society (one could probably call it Facebook democracy) made frictionless through the mediation of networks. It is the same for Moreno: the sociometric revolution never gets rid of its own shadow.

Alienation and Disindividuation

The graphical portrayal of social networks as nodes and lines reinforces the perception of Moreno and Saint Simon that social relations always exist in the form of ties from one atomic unit to another. This image, with its obvious bias towards vision, has become the central paradigm for understanding society and technological systems. Yet any image is also a mediation between subject and object that preconfigures, or pre-programs, a certain intuition onto the world [5]. One can imagine that the very image of a social network as merely lines and dots constrains innovation, since it cannot grasp how to graphically represent any collectivity beyond the primacy of the individual, but always takes the collective as only a consequence or byproduct of the map of interconnected atoms. This is something Moreno forgot, or could not see in his time: the materialization of social relations, not in the figure of charts on paper, but as controllable data stored on computers, which mediates the actions of users. What Moreno called a sociometric revolution is the postulation that through certain sociometric planning, the spontaneity of human interactions can be enhanced. Moreno gained this insight from his long-time work on psychodrama, on the basis of which he criticized psychoanalysts, especially Freud, for not being able to 'act out'. But we want to point out, firstly, that seeing each individual as a social atom already implies an extreme form of individualism that intrinsically dismisses the position of the collective; and secondly, that today, when the sociometric vision is materialized in social networking websites, what is at stake is precisely Moreno's own faith in spontaneity and the question of individuation.

Social networking sites like Facebook stay within this paradigm by providing only digital representations of social relations that pre-exist in a richer social space, while allowing new associations based on different discovery algorithms to emerge. Facebook's very existence relies largely on the presupposition of individualism, as the primary unit on Facebook is always the individual's profile. One can recall the original idea of Facebook as it was shown in the film: the young Mark Zuckerberg created Facebook as a tool to express his sexual desire, that is to say, a libidinal economy that is intrinsically individualistic. This exploitation of the libidinal economy is not new; in past decades we already witnessed the exploitation of libidinal energy in consumerism [6]. At the turn of the 20th century, the father of public relations, Edward Bernays, adopted psychoanalysis in his marketing techniques and integrated the economy of commodities with the libidinal economy. Those cynics who are so used to laughing at psychoanalytical readings of advertisements may have to step back when they discover that Bernays was in fact the nephew of Sigmund Freud.

Bernays employed psychoanalysts to participate in designing marketing strategies. One of the well-known examples is his promotion of the tobacco business: at that time, not many women smoked in the United States. Bernays hired female movie stars to smoke in public, creating a circuit of libidinal economy that has to be completed through the act of smoking, which is also to say buying cigarettes. Today it is no longer simply cigarettes, but any commodity whatsoever. We see a circuit completed on the bodies of human beings: firstly through bodily exploitation, and secondly by controlling the desire of the workers in order to speed up the circulation process. On Facebook, it seems as if users have their own will to execute actions, but in such a technological system their vision and actions have to adopt the configurations and functions of the system. On other sites such as Google+, group profiles or anonymous profiles are actively discouraged. One cannot deny that these social networks are able to bring people together and form groups whose activity ranges from shopping to protests. Yet we have to be careful here, as these groups are positive externalities in economic terms. These social networking websites support only a few collective actions; they are instead optimized for individuals to map their own networks of friends, leaving individuals to comment on each other's posts and click on very basic individual operations such as 'Like' and 'Want', which are now increasingly scattered throughout the entire Web.

When users are considered as social atoms that can then be superimposed onto a technological network, the spontaneity and innovation within the collective is handed over to the control of the networks, which is mainly driven by intensive marketing and consumerism aimed at individuals [7]. Social networks have obviously become both an apparatus to express and an apparatus to control the desire of their users. The subject is an atom, and within the social networks subjectivation becomes an engineering process subjected to careful monitoring and control, which theorists like François Perroux [8] have thought of as a source of a new kind of alienation. This is not entirely dissimilar to the alienation Marx described in Das Kapital, produced by having human workers adapt to the rhythm of the machines, so that the worker loses control of his vital energy and ultimately his time to reflect and to act. When Marx describes the vital forces of the collective, he uses the German word Naturwüchsigkeit, which can be literally translated into English as nature-growth-ness, and which is similar to what Moreno calls spontaneity [9]. If an existential critique can be introduced here, we can say that the time, and hence the attention, of each social atom is chopped into smaller pieces and dispersed across the networks by status updates, interactions, advertisements, and the like. This form of collective is exactly what Martin Heidegger would call 'das Man', the 'they' who exhausts one's time without giving meaning to one's own existence. In fact, Bernard Stiegler would hold that these constructed social atoms are not really 'individuals' but disindividuals, as they seem to have lost their ability to act out and to relate except within the apparatus of an atomistic social network [10].

Social Engineering and Technical Engineering

Moreno's sociometry, as a response to both Marx's economic materialism and Freud's psychological materialism, encounters its own impasse today; Moreno and Saint-Simon did not take digital networks and telecommunication into account in their theories, yet technological materialism is currently tied to this new digital economic, psychological and technological network.[11] Society is mediated by data. Sites like Facebook use graphs of personal connections to predict and hence 'recommend' products, and so produce desires in the individual, showing that the autonomous individual is in fact shaped not only by their relationships in the network, but by the existence of the network itself. While the Internet is a distributed and decentralized network, industrialization reverses this principle: simply maintaining a social graph the size of Facebook's for analysis requires immense centralization. At the same time it creates a technical reality, with the deception of being an unmodifiable default. Yet we have to ask: is Facebook a social collectivity, or the false image of one? Going beyond the social graph, we need to grasp other possibilities for 'social networks'.

The social engineering of Facebook is supported by its multiple features, ranging from sharing and 'Like' functions to privacy settings. Here we see the unification of social engineering and technical engineering, which also poses a great challenge to the humanities. It will be necessary to look at how these realities are created and accepted: for example, if one tries to leave, one loses everything, including one's social relations, profile data and the possibility of communicating with friends. Even when one uses social networking sites, individuals and their expressions are conditioned by the capacities permitted by the features of the website, and there is little to no privacy. One cannot choose to be anonymous; on the other hand, the verification of identities becomes more and more important to industry.

There are also political considerations: in China, for example, social networks require users to prove their identities by showing their identity cards, and this may be a response to the fact that the question of anonymity is seemingly ever more important for democracy and transparency, as has been shown by Wikileaks. There is even a demand for anonymity: the Japanese Ni Channel (2ch), which operates entirely on the basis of anonymity, has become one of the most popular social networking websites in Japan. These features would obviously be vital to those in the Middle East, London, Spain, and #OccupyWallSt. If subjectivation within social networks is an engineering process, what is necessary is to produce a new type of thinking and a new form of social networks. Some of this thinking can be seen in various slogans: data portability, privacy and personal possession of data. These slogans are natural responses to the monstrous ability of social networks to create "walled gardens" out of personal data. Though these slogans are important in the fight against the dictatorship of Facebook, they still lack an overall reevaluation of Facebook and a vision of an alternative social network that is not merely an immediate response.

Project, projection and collective individuation

Simondon and Collective Individuation

Hence we propose to rethink from the perspective of the collective, as a remedy to the individualistic approach of the current social networks. Sociometry demands a mapping that is becoming more and more precise, and that reflects the probabilities of connections, interactions and marketing, that is, an individuation that leads to disindividuals. Can we think of a new kind of individuation that cannot be reduced to statistics, and whose power works only in ambiguity instead of precision? The French philosopher Gilbert Simondon proposed in his book L'individuation psychique et collective a model of individuation which can be therapeutic in the current technological catastrophe.[12]

Simondon suggests that individuation is always both psychical and collective. What Simondon means by psychical individuation can be considered the psychology of individuals, for example in situations of anxiety, grief, anger, etc. But Simondon also emphasizes collective individuation. For Simondon, individuals and groups are not opposed to each other in the sense that, while in the group, one loses his or her singularity, as in the Soviet type of collectivism. Instead, the individual and the group constitute a constant process of individuation. Psychical individuation, for Simondon, is more an individualization, which is also the condition of individuation, while collective individuation is the one that brings the individual into constant transformation. Hence one can understand that nature is in fact not in opposition to human beings, but is rather the primary phase of being; human beings and the technical milieu they create constitute the second phase of being, which is, if we may say so, the technical individuation proposed by Bernard Stiegler.

Simondon hence rejected American microsociology and psychology, which indirectly includes Moreno's sociometry (considered a type of microsociology), as substantialist. The substantialist approach towards individuals and groups easily ignores the dynamics of the social, and sees the individual and the collective as an interiority and an exteriority that have to be separated. This approach falls prey to the extremes of psychologism and sociologism. Simondon instead proposes to think of individuation as a necessary dynamic between individuals and groups. He distinguishes the 'in-group' and the 'out-group', and suggests thinking of the 'in-group' as an intermediary between individual beings and the 'out-group'. Simondon considers individuation as a process of crystallization. Consider a supersaturated solution undergoing crystallization: by absorbing energy, each individual ion transforms itself according to its relations with others, that is, with its milieu. What Simondon is trying to provoke is a way to bypass the question of the social as a question of form, which is also a question of norms, and to consider it instead as a dynamics of energy distribution. Crystallization is a process that finally gives a form, e.g. the identity of a specific crystal, but it is at the same time a process that depends less on the form (one can always identify forms) than on the redistribution of energy and matter. One may sense some similarity between Moreno and Simondon in this respect, namely the spontaneity of in-group and out-group; and it is for this reason that we believe that Moreno's sociometric technique, though it can be used today to analyse social networks like Facebook and Twitter, also poses a tremendous danger of social engineering that falls back into psychologism and sociologism.

Projects as the Basic Unit of Group

One may want to ask: isn't what we have seen on Facebook already a psychic and collective individuation? It is true that a Simondonian philosophy can be a tool to analyze social relations, but one must go beyond the limit of thought as merely a tool of analysis and recognize that it is also a tool for transformation. As we have seen, Facebook individuates primarily atomistic individuals; we propose to start from the collective instead, in order to redesign the relation between the individual and the collective. Instead of starting from the social atom, we must find out how a collective social network shapes the individual, and take this phenomenon as primary. This social network will be one that enables collective individuation and also acts as a remedy to the industrial intoxication and exploitation of libidinal energy.

Hence we want to reflect on the question of the group, and we want to propose that what distinguishes a collective from an individual is the question of a common project pertaining to groups. Take for example Ushahidi, a website that provided mapping capabilities after the earthquake in Haiti in 2010 in order to help the country recover from the crisis. Using a web-based platform, Ushahidi enabled both locals and overseas volunteers to collect SMS messages with a special code to map the crisis, in order to help save people who might otherwise have been lost. After the earthquake and tsunami in Japan in 2011, engineers in Japan developed a map of the tsunami damage and the emergencies that needed to be taken care of by analyzing tweets and other social media. The dynamics of these projects go far beyond simply posting status updates; they allow people to work together dynamically on common goals. It is the moment of the formation of projects that allows individuals to individuate themselves through the collective, and so gives meaning to the individual. On Facebook, one can establish a group, a page or an event, which seems to allow a common project to appear, but Facebook does not provide the tools for collective individuation based on collaboration.

Passing from a philosophical model to its realization in a technical system, we propose that the social networking site should exist as a set of tools to enable the collective creation and administration of a project. Collective intelligence is activated insofar as the group successfully uses its human and technical abilities to accomplish its goals. A user must always belong to a project, without which he or she will not be able to fully utilize the features, and projects are defined by groups. This is a first attempt to tackle the individualism of the current paradigm of social networks. Each project is defined by a goal and by requirements for its fulfilment, collectively initiated and updated by members of the group. Tasks are assigned to users either as individuals or as subgroups, and the progress of the tasks is monitored and indicated. The collective, however, should be dynamic rather than static: groups can be merged to form larger projects, and a project can also be split into smaller collectives. Groups can discover each other and communicate to seek out possibilities for collaboration and information sharing.
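
As a rough illustration of the design just outlined (projects defined by groups, tasks assigned to individuals or subgroups, groups that can merge into larger collectives), the basic data model could be sketched as follows. This is only a schematic reading of the proposal, not the project's actual implementation; all class and field names are our own invention.

```python
# A schematic sketch of the data model outlined above: the group and
# its common project, not the individual profile, are the primary units.
# All class and field names are illustrative, not the project's actual code.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    description: str
    assigned_to: List[str]              # individual members or a subgroup
    done: bool = False

@dataclass
class Project:
    goal: str                           # collectively defined and updated
    requirements: List[str] = field(default_factory=list)
    tasks: List[Task] = field(default_factory=list)

    def progress(self) -> float:
        """Monitored and indicated progress of the project's tasks."""
        if not self.tasks:
            return 0.0
        return sum(t.done for t in self.tasks) / len(self.tasks)

@dataclass
class Group:
    members: List[str]
    project: Project                    # a user always belongs to a project

    def merge(self, other: "Group") -> "Group":
        """Groups can merge to form a larger collective around a larger project."""
        merged = Project(
            goal=f"{self.project.goal} + {other.project.goal}",
            requirements=self.project.requirements + other.project.requirements,
            tasks=self.project.tasks + other.project.tasks,
        )
        return Group(members=self.members + other.members, project=merged)
```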

Case Studies and a Possible Framework

In our project 'Social Web', we look at some of the current models, including Wikipedia, some open source platforms, and alternative social networking projects like Lorea, Federated General Assembly, Crabgrass and Diaspora, as well as unusual social networking websites such as Ni Channel and NicoNico Douga in Japan. Some of these already demonstrate the value of groups and projects, for example the encyclopedia project of Wikipedia, and Lorea and Crabgrass, which create alternative social networks that favour groups and common working spaces. We also recognize that, though each of them has some of the collaborative features necessary for a new kind of social network, they do not really place the idea of individuation at the core of their designs. Besides returning to the primacy of groups and emphasizing group management, we also suggest some other technical features for such a vision of a collective social network:

  1. The network primarily exists as directed social communication aimed at a project, and to enable this, various natural-language forums such as discussions and wikis should be added. However, unlike traditional social networks, the purpose of the social networking site will be to help users store and refine data, with the data being stored in an open format such as RDF (a minimal sketch of this idea follows the list below). Users and groups have permission to manage the data of the project and to retrieve it using tagging and search. Mapping should be employed as one possible, and easily interpretable, way to understand collective data collection.
  2. Anonymity can be allowed by collective projects under certain conditions (for example, if the group is wholly anonymous, or if the group decides to open itself to anonymity). In Ni Channel, for example, one of the reasons the inventor wanted it to be anonymous was so that there would be no segregation that might harm the formation of collectives. [13] Personal data should be accessible only to the collective, and not even to those who run the server. Concerning the security of the networks, data on the servers will be encrypted by implementing a public key infrastructure, with the group being defined by shared public keys; hence the ISP and system administrators will not be able to access the data on the server. Secondly, the data will be stored distributed across multiple servers in order to minimize the consequences of attacks.
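
A minimal sketch of point 1 above, assuming the Python rdflib library, a made-up namespace and invented resources: project data is kept as open RDF triples that members can tag, query and export. This is an illustration of the idea, not the project's actual schema.

```python
# A minimal sketch of point 1 above: project data kept as open RDF
# triples that members can tag, query and export. The namespace,
# resources and tags are invented; this is not the project's actual schema.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/socialweb/")
g = Graph()
g.bind("ex", EX)

# Two data items collected by a group for its project, each tagged.
g.add((EX.waterReport, EX.partOf, EX.reliefProject))
g.add((EX.waterReport, EX.tag, Literal("water")))
g.add((EX.roadReport, EX.partOf, EX.reliefProject))
g.add((EX.roadReport, EX.tag, Literal("infrastructure")))

# Retrieval by tag: every item the collective has tagged "water".
for item, _, _ in g.triples((None, EX.tag, Literal("water"))):
    print(item)

# Because the store is plain RDF, it can be exported in an open format.
print(g.serialize(format="turtle"))
```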

Conclusions & future work

The above outline is an introductory philosophical framework for a funded project on the 'social web'. Facebook, to us, represents an industrialization of social relationships so extreme that it transforms the 'social' into a totally 'atomic' individualism. Saint Simon's vision of socialism, based on a belief in the common good and the well-being of individuals through the building of networks, is deemed a failure, but the relation between network and society takes an even more aggressive form in the time of ubiquitous metadata. Moreno's sociometric technique probably finds its best companion today in Facebook and other social networking apparatuses, but celebrating the reemergence of the sociometric technique is simply to remain blind to the danger posed by the presuppositions of such a theory and by the technological development itself. We propose that social computing today must go beyond the traditional digital humanities, which propose to analyze social transformation by taking technologies into account; it will be more fruitful to follow what Stiegler calls pharmacology, which is to say that technology is both good and bad at the same time, and that it is necessary to develop a therapeutic approach against the toxicity it generates, which in our case is Facebook(s).

Collective individuation proposes that another social network is possible, and that it is necessary to consider an economy which is far more than marketing, click rates, numbers of users, and so on. For us, a project is also a projection, that is, the anticipation of a common future for the group. By tying groups to projects, we want to propose that individuation is always temporal and existential, rather than merely social and psychological: projecting a common will onto a project produces a co-individuation of groups and individuals. The project is under development, but we hope the above outline shows both the problem of current social networks and the limits of the digital humanities (especially those that embrace sociometry) in understanding social computing; it is clear that a new approach to software development is possible, and urgent.


[1] J. L. Moreno, Who Shall Survive? Foundations of Sociometry, Group Psychotherapy and Sociodrama, Beacon House, Beacon, N.Y., 1978.

[2] J. L. Moreno, "Foundations of Sociometry: An Introduction", Sociometry, American Sociological Association, Vol. 4, No. 1 (Feb. 1941), pp. 15-35.

[3] Such a view of individualism is also naturalized in economic studies since Adam Smith, who saw division of labour as a natural development and the exchange between individuals as the origin of economic life.

[4] P. Musso, "Aux origines du concept moderne : corps et réseau dans la philosophie de Saint Simon", Quaderni, No. 3, Winter 87/88, pp. 11-29. doi: 10.3406/quad.1987.2037.

[5] One can also speak of the Weltbild as deployed by Heidegger, where Heidegger showed that an image is not simply a representation of the world, but also that the world can be controlled and manipulated as an image.

[6] Bernard Stiegler, For a New Critique of Political Economy, Polity, London, 2010

[7] After the Like button, Facebook announced in September 2011 that it would introduce a Want button designed for marketing, http://www.auctionbytes.com/cab/abn/y11/m09/i23/s01

[8] The French economist François Perroux took up the question of industry and social transformation from Saint-Simon and developed a vision of collective creation in which humans and machines act on each other and, through the standardization of objects, human beings can renew their way of life and produce a system of 'auto collective creation'. Notably, Perroux was also influenced by Schumpeter, especially the concept of creative destruction.

[9] Hence one should recognize the problematic nature of Moreno's critique of Marx, and one may be able to develop a new relation between Moreno and Marx.

[10] B. Stiegler, États de choc : Bêtise et savoir au XXIe siècle, Mille et une nuits, 2012, pp. 102-105, where he proposes three types of disindividuation: firstly, the regression to the purely social, the purely social being the animal form of life; secondly, the deskilling process brought about by technologies, for example when craftsmen had to enter factories and give up their own skills and way of life; thirdly, the process of 'bracketing' the previous individuation, which produces a 'quantum jump' and exceeds the threshold of psychical transformation. According to Stiegler, these three types of disindividuation cannot be separated.

[11] J. L. Moreno, The Future of Man's World, New York, Beacon House, Psychodrama Monographs, 1947.

[12] Gilbert Simondon, L'individuation psychique et collective, à la lumière des notions de Forme, Information, Potentiel et Métastabilité, Paris, Éditions Aubier, 1989 and 2007.

[13] Satoshi Hamano, Architecture no seitaikei: Johokankyo wa ikani sekkei saretekita ka (The Ecology of Architecture), Chinese translation, Taiwan, 2011.





Using McLuhan

CCCB (C) Gloria Solsona, 2011.

In this brief text I will take up some thoughts on current interpretations of the work of Marshall McLuhan that I touched on in an earlier post on my blog Hipermediaciones. The starting question might be: how do we read McLuhan? What are we making his texts say? I think there are several ways to approach the explosive and diverse work of the Canadian author. Some readings are based on very specific disciplinary fields: there are readers from media studies, others from the education sphere, and there are even people who come to McLuhan from marketing or publishing. Each of them builds a made-to-measure McLuhan, and often finds in his texts interesting answers and new challenges. McLuhan boasted of this very thing, as one of the lecturers at McLuhan Galaxy Barcelona 2011 reminded us. McLuhan wanted us to think.

Other possible readings of McLuhan travel across the disciplinary spectrum. One of the most widespread ways of interpreting him focuses on the predictive character of his ideas and aphorisms. During McLuhan Galaxy Barcelona 2011, and again and again at conferences, panel discussions and paper sessions, McLuhan's predictive ability was present in the talks. I myself have been actively involved in this game, in which McLuhan is read with one eye on the page and the other on the reality of the twenty-first century (see this post at Digitalismo, 2010). McLuhan died in 1980 but is said to have anticipated the web, Facebook, the school crisis and, if we keep going down this track, even the victory of Barça over Manchester on the night of 28 May 2011…

Was McLuhan our Nostradamus? I must confess that after the three-day full immersion of McLuhan Galaxy Barcelona 2011, we left saturated with McLuhan predictions. I think we are reducing the complexity of his thought and contributions to the sideshow of a magician with a crystal ball. McLuhan was an acute analyst of the reality of the postwar years, when television was the new medium that broke all the moulds and the global agenda jumped from the barricades of Paris to footsteps on the Sea of Tranquility on the lunar surface. That was the reality that McLuhan analyzed and that inspired him, not our reality of social networking, convergence and disintermediation.

Faced with a stagnant academic world torn between the European critical schools ("we must denounce the bourgeois ideology of the media") and U.S. empirical studies ("we must study the effects of TV on children"), McLuhan broke through with a sideways and irreverent glance, impossible to fit into the academic parameters of his time. McLuhan did not stop at the observation of pure empirical data but had the necessary clarity to frame each element within a larger process. In other words, McLuhan identified and synthesized patterns brilliantly in his famous aphorisms. With "the medium is the message" McLuhan was identifying a pattern and, for the same price, proposing a new way to interpret the media and culture.

Rather than reading McLuhan with a microscope to identify predictions, I think we will pay him a better tribute by trying to reconstruct his view of things, discovering the cognitive and discursive "protocols" that lead scientists to break the mold and create a different view of reality. In summary: it is better to interpret the media of today with McLuhan's eyes than to try to construct a story in which McLuhan predicted our media reality 50 years ago. The reading I propose should be devoted less to finding predictions and more to identifying patterns in his discourse. What I propose, in other words, is a McLuhan reading of McLuhan.





From bits to atoms, from click to touch

Siftables, or the "domino computer" concept, was created by David Merrill and Jeevan Kalanithi at MIT.

A mother travels by subway with her small daughter, who pulls at her bag incessantly. Resigned, she unzips the bag and pulls out an iPad. The kid picks it up as if she had been born with the device in her hands and starts to slide her fingers across the screen. The ability to enter cyberspace by touch, to expand pictures using the thumb and middle finger in a spontaneous gesture, seems to be the strange result of all this. There's the girl, now immersed in a world of bits that flows out of cyberspace through the peephole of a rectangular screen into the physical world composed of atoms, where others look at her curiously.

Concepts such as multi-touch and gestures such as the "two-finger zoom" have entered the daily lives of users with the speed of a meteorite, and though still at an early stage, they raise a series of questions that in the last decade have changed the direction of research in the field of human-computer interaction. These are the first steps towards the massification of interfaces that aim to be increasingly tangible, a concept that could emancipate us from the tyranny of the screen, mouse and cursor, and reconcile bits and atoms in integrated technologies.

From GUI to TUI

Terms like Tangible Interaction (TI) and Tangible User Interfaces (TUI) were coined in 1997 by Hiroshi Ishii at the MIT Media Lab, although research and implementations linked to these concepts had already existed since the early nineties. Tangible user interfaces (TUI) combine control and representation in a single physical object or space. With direct manipulation in graphical user interfaces (GUI), people interact with digital information through its graphical representations (icons, windows, menus) using pointing devices (mouse, keyboard, joystick), while TUI emphasizes tangibility and materiality, the physical embodiment of data, bodily interaction and the incorporation of these systems into real physical spaces.

"Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms" is the title of the seminal paper in which Ishii suggests that current interfaces open up an unbridgeable gap between cyberspace and the world of atoms. In his opinion, the ultimate goal of tangible interfaces is to connect digital information with the everyday material objects and architectural spaces that surround us. The idea is to give physical form to digital information and bits so that we can manipulate them with our hands, and to allow ambient awareness of information at the periphery of consciousness.

While recognizing that the GUI was crucial to the democratization of access to information, the researcher drew inspiration from the scientific instruments found in museums (rich in both aesthetics and performance) which once served to measure the passage of time, predict the movements of the planets or draw geometric shapes, and used them to reflect on the future of human-computer interaction. What have we lost by concentrating all our operations with information on personal computers, leaving aside the sense of touch, peripheral perception and tangible objects? Today, much of our interaction with others and with cyberspace is confined to traditional GUIs, entrenched in boxes on desktops, laptops and small screens that demand our attention and propose forms of interaction that separate us from the world of atoms.

Ishii used the abacus as a metaphor for what tangible interaction should ultimately be. Unlike personal computers or pocket calculators, devices whose input and output components (keyboard and screen) are different and integrated only through an arbitrary convention, on the abacus the components of input and output coincide, and operations are performed by direct manipulation of the results. In this sense, TUI seeks to augment the real physical world by coupling digital information to everyday physical objects that users can manipulate, an approach different from Augmented Reality.

Within the scope of tangible interfaces there are different concepts, ranging from the famous interactive tables to hybrid objects and interactive environments. Here are some of the most significant.

The Marble Answering Machine: created by Durrell Bishop in 1992, this is one of the prototype precursors of what later became known as TI. Voice messages are represented by marbles that the user must pick up and put into a slot in order to listen to them. Just by looking at the repository, you know, without having to think, whether you have messages or not, and whether there are many or few.

Live Wire: a forerunner of ambient devices, this is a sculptural instrument created by Natalie Jeremijenko at Xerox PARC, consisting of a plastic cord hanging from a small motor mounted on the ceiling. The motor is connected to the company's Ethernet network, so each packet that flows through it causes a movement in the cord. The bits that travel through the computers' wires are made tangible through the movement of the cord, indicating to workers whether the network is saturated (busy) or not. The sculpture appeals to users' peripheral perception.

Reactable (2003): a collaborative musical table conceived within the Music Technology Group at Universitat Pompeu Fabra. It was first presented at a concert in 2005, but achieved mass popularity as millions of people watched the demonstration on YouTube and Björk then used it on her world tour in 2007. Of all the interfaces that make up the TUI spectrum, the so-called tabletops are the most popular. Among the pioneers we can recall the Sensetable or the Audiopad, both developed at the MIT Media Lab in the late nineties. Regardless of the various technologies used (optical, electromagnetic or ultrasonic detection systems, etc.), interactive tables typically combine the tracking of control objects on their surface with projection onto large screens. This type of device allows users to access information and work with it to create, either individually or collectively. Microsoft's Surface (2007) and Mitsubishi's DiamondTouch are other famous interactive tables.

Siftables (2008): pieces of plastic the size of cookies, developed at MIT, that communicate with each other and with the computer, allowing very specific interactions. They have motion and tilt sensors (tri-axial accelerometers), proximity sensors (infrared), flash memory, a mini processor, Bluetooth and OLED touchscreen displays. They are used to play and make calculations, among other activities. Siftables probably carry the genes of the project "Intelligent Modeling Physical Systems", created by architect John Hamilton Frazer in the 1970s, which consists of intelligent cubes that recognize each other by proximity and allow ideas to be prototyped in 3D.

Intelligent environments with interactive walls, ceilings and floors, hybrid objects and tables that react to gestures and promote collaboration all mean rethinking the ways we interact with digital information and its counterpart in the world of atoms. The perception of space, the sense of touch and peripheral awareness are some of the cognitive abilities that these technologies attempt to engage in order to overcome the limitations of the GUI.

References

Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. In CHI '97: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. H. Ishii and B. Ullmer (1997)

Interactive Surfaces and Tangibles. Tap. Slide. Swipe. Shake. Sergi Jordà, Carles F. Julià, and Daniel Gallardo (2010)





Charles Leadbeater and “cloud culture”




Throughout the four cycles of I+C+i, a picture has been drawn of the conflicts that affect present cultural practice: the notion of collective creation, the call for participation, the crisis of intellectual property, the emergence of the prosumer, new formats, media convergence and the new economy. The last session of this year (14 December) is devoted to the distributed economy and will close with a talk by Charles Leadbeater on Cloud Culture, a topic that synthesizes all the concepts discussed during I+C+i.

Charles Leadbeater is a financial journalist turned consultant on economic innovation. His latest book, We Think: The Power of Mass Creativity (2008), explores how the Internet is changing our world, creating a new culture in which more and more people can participate, share and collaborate with ideas and information. The conflict between the rise of mass collaboration and attempts to maintain top-down control will be one of the battles of our time. Leadbeater has become an expert in the rise of the knowledge economy and the Internet, and has extensively studied what he calls "The Pro-Am Revolution": the Professional-Amateurs (the contributors that Bruce Sterling defined), who show how independent online work may have a great impact on politics, culture, the economy and development.

This year Counterpoint (the British Council's think tank) commissioned him to write a paper on Cloud Culture, which he will speak about in this session. According to Leadbeater, we have entered a new era of the Internet where, thanks to cloud computing, a new cloud culture is emerging.

In the world of cloud computing, our data (mail, documents, images, songs…) is stored remotely on a digital cloud which can be accessed from any device at any time (think about the way we may access our e-mail from a mobile device). In this context, sharing information and programs makes a lot of sense, at least in theory, as it paves the way for innovation and creativity (free software platforms have been created in just this way).

Like clouds in the sky, there are many types of digital clouds: commercial ones (Google or Amazon), social ones (Wikipedia) and public ones (the World Digital Library). They can be open (Twitter or WikiLeaks) or closed (governments). They can even be permanent (scientific clouds, such as those created around CERN) or transitional and emergent (such as viral marketing campaigns). The proliferation of digital clouds will change culture, creativity and the relationship between them: data storage in the clouds, the expansion of broadband, new search technologies and access through new devices should make culture more accessible and connected, increasing people's expressive capacity in a large Pro-Am cloud.

But this cloud culture faces three major threats, to which solutions will be proposed during the session. The first is the control exercised by the powers that have dominated and prescribed the discourse (governments, companies, etc.). These institutions see how the cloud allows a greater diversity of cultural expression, one that is decentralized, plural and collaborative, where the boundary between amateur and professional is blurred, as is the one between consumer and producer. The second threat comes from copyright owners, who see the Internet not as a technology of freedom but as one of cultural destruction of their business model, and who react by increasing regulation and making it difficult to share. And the last is what he calls the cloud capitalists, the new media moguls (Facebook, Google, Apple, Twitter…), who, under the premise that they are service providers, organize the cloud to their own economic advantage.

Leadbeater draws a map of the situation and warns that the importance of the cloud lies in its richness and diversity; such is the true potential of a new global cultural commons, which will only remain open if it resists the threats to close and control it.

Recommended reading and visual material

Cloud Culture, The Promise and Threat

Cloud Culture: The Future of Global Cultural Relations

The Pro-Am Revolution

Charles Leadbeater at TED





Cartographies of Complexity

Jorge Macchi. Doppelgänger. 2005.

It is undeniable (McLuhan dixit) that new media are reformatting our cognition and, subsequently, our relationships. Social networking services are a literal example of this. In our era, building other kinds of connections probably proceeds via software design and, by taking these new environments into consideration, a critical interaction is opening up new literary horizons.

In "Complex Networks", Ricard Solé presents recent scientific discoveries in network science. The study of network architecture centres on the analysis of interactions between elements rather than on the parts in isolation. The research shows an underlying universal regularity in seemingly different areas: economics, biology, language, electrical systems and social interactions, producing a revolution in the cartography of complexity. In Solé's words, when we combine elements, many phenomena emerge that did not occur at a lower scale.

Architectures of interaction are characterized by the structure of the "small-world network", that is, a certain degree of randomization in the connections within a system creates shortcuts between its components. One of the experiments supporting this hypothesis is the 'six degrees of separation' study, which shows how close we are to any other person in the world. The ease with which information spreads in "small worlds" also produces a collapse when one of the connecting nodes is damaged. Such damage acts like a cancer at the cellular level of the body, and is mirrored in computer viruses. Unlike highly hierarchical systems, these are interdependent systems with a less specialized distribution of functions, which constitutes, on one side, their strength and, on the other, their fragility. See global warming, or the global economic crisis.
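
The small-world effect described above can be reproduced with a standard toy model. The sketch below uses the Watts-Strogatz model from the Python networkx library; the parameters are arbitrary and only meant to show how a small amount of random rewiring shortens paths while local clustering barely changes.

```python
# A toy reproduction of the small-world effect described above, using
# the standard Watts-Strogatz model from networkx. Parameters are
# arbitrary and purely illustrative.
import networkx as nx

n, k = 1000, 10  # 1000 nodes, each initially tied to its 10 nearest neighbours

regular = nx.watts_strogatz_graph(n, k, p=0.0)                # no random rewiring
small_world = nx.connected_watts_strogatz_graph(n, k, p=0.1)  # 10% of ties rewired

# A little randomization creates "shortcuts": the average number of steps
# between any two nodes drops sharply, while local clustering
# (tight-knit neighbourhoods) barely changes.
print(nx.average_shortest_path_length(regular))       # long paths
print(nx.average_shortest_path_length(small_world))   # much shorter paths
print(nx.average_clustering(regular), nx.average_clustering(small_world))
```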

Relating this theoretical frame to social networking services:

"What's up?" is a typical question when we phone someone in our inner circle. Although the answer is usually "having a coffee" or just "cooking", sharing this information makes us feel connected. Both Facebook and Twitter have been designed to fulfil this communication function, which we can call phatic. While FB is widely known, the second perhaps requires a short introduction. Twitter is a social networking service that enables its users to send and read messages limited to 140 characters. The tweet updates the current state of the user in a succinct way. One of the differences between the two services lies in the fact that on FB it is necessary to give "friendship" consent to add the other person to our group of contacts, while Twitter's default options allow one to "follow" and be "followed" by anyone. The basic broadcast pattern of this kind of platform is many-to-many.

The features of Twitter enable users to reach high numbers of followers and therefore to increase their spreading capacity. The "whales" (users who have more than 2,000 followers) are, in network terms, hubs: nodes with a high degree of connectivity. These nodes move most of the information and, at the same time, are the ones that usually propagate the most viruses, a characteristic of both social networks and cellular systems. A transmission failure in one of the hubs, or in a cell, would produce a 'cancer' or a network collapse.
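
In network terms, the "whales" can be read directly off a follower graph: hubs are simply the nodes whose in-degree (follower count) crosses some threshold, and removing one fragments the flow of information. A rough sketch with toy data and an invented threshold (the article's real-world figure is 2,000 followers):

```python
# A rough sketch of the "whales as hubs" idea: in a directed follower
# graph, hubs are nodes whose in-degree (follower count) crosses a
# threshold, and removing one fragments the flow of information.
# The toy edges and the threshold of 3 are purely illustrative.
import networkx as nx

follows = nx.DiGraph()
follows.add_edges_from([
    ("u1", "whale"), ("u2", "whale"), ("u3", "whale"), ("u4", "whale"),
    ("u2", "u3"), ("whale", "u5"), ("u5", "u6"),
])  # an edge a -> b means "a follows b"

THRESHOLD = 3
hubs = [node for node, followers in follows.in_degree() if followers >= THRESHOLD]
print(hubs)  # ['whale']

# Damage to a hub: remove it and count how many disconnected islands
# of users remain.
damaged = follows.copy()
damaged.remove_nodes_from(hubs)
print(nx.number_weakly_connected_components(damaged))  # 4 islands instead of 1
```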

According to Mike Murphy, Vice President of Sales at FB, only 10% of Twitter's content is produced by its users. Although it depends on who we follow, that is, on the community of interests we are part of, most messages are usually links or retweets: resent messages. Whatever the intention of the user might be, the immense spreading capacity of tweets has made them a perfect breeding ground for viral marketing, making it difficult to distinguish between publishing and publicity.

The potential of these interfaces to expand the concept of literature is being explored. Twitter’s many-to-many structure favours collective creation. The writer Cristina Rivera Garza has been one of the first to use the platform as a literary creation tool. The “tuitnovela” – Spanish for Tweet-novel – is based on collective writing through tweets in the same “timeline”, the section of the interface reserved for a particular conversation between users. For Cristina Rivera Garza, what matters in these kinds of works is the “plural production of a structure”, so these works rest on the Bakhtinian concepts of polyphony and juxtaposition.

An exquisite corpse? Partly. The possibility of group writing in real time and the brevity of the space – much as with SMS – incline these texts towards aphorism and recall the poetic genre of the haiku. According to the theorist Josefina Ludmer, post-autonomous writings are those that, outside the confines of the literary, are devoted to the production of the present. Ludmer tries to situate authors whose works “don’t admit a literary reading; in other words, we don’t know or don’t care if they are literature. It doesn’t matter if they are fiction or real”. They place themselves locally in everyday life to “make present”, and that is precisely their meaning. These authors assert the economic character of the cultural, and vice versa. In the words of Cristina Rivera Garza, these writings confirm that “reality is already in itself a fiction, and fiction is our daily life”.

One example of the creative opportunities of Twitter explored by the author is the #cuentuitos. The hash sign before the name indicates that it is a hashtag: a tag that groups all the comments on the same subject. The #cuentuitos are short stories written through the aforementioned social networking service and therefore limited to 140 characters. Cristina Rivera Garza was the curator of the first Twitter life-writing event. As a theoretical manifesto, it is worth reading “La producción de presente”.
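The grouping that a hashtag performs is simple enough to sketch (a Python illustration; the sample tweets are invented): every message containing the same tag is collected under that subject.

```python
import re
from collections import defaultdict

# Invented sample tweets, for illustration only.
tweets = [
    "Ella cerró el libro y el libro la cerró a ella. #cuentuitos",
    "140 caracteres bastan para un final abierto. #cuentuitos",
    "Coffee, rain, deadline. #everyday",
]

# Group each message under every hashtag it contains.
by_tag = defaultdict(list)
for tweet in tweets:
    for tag in re.findall(r"#\w+", tweet):
        by_tag[tag].append(tweet)

print(len(by_tag["#cuentuitos"]))  # 2: comments gathered under the same subject
```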

Regardless of the orientation and quality of the works, most literary initiatives that incorporate 2.0 dynamics share the concepts of connection and collectivity. The socialization of the reading experience, polyphonic creation and supranationality (or globalization, in its negative sense) are common parts of their identity. Even though some of these writings have dissociated themselves from literary intention – and perhaps that could be an advantage – we may ask whether, faced with the profusion and specialization of online social networking and publishing services, the “production of virtual present” is equivalent to communication (artistic or otherwise). Given the interest of these initiatives, it is worth striving to ensure that the global tweet does not silence singular voices. Finding strategies to inhabit creative ecosystems in a small world involves, according to Solé, collaboration.





Questions on augmented reality

Image created by Mac Funamizu, reproduced with Creative Commons Attribution 3.0 license.

Certain terms in the cultural sphere are beginning to be repeated like mantras. At CCCB Lab we have seen, for example, how the concept of innovation is applied indiscriminately across different areas.

Something similar happens with the concept of augmented reality. What are we talking about when we talk about augmented reality? A schematic summary tells us that augmented reality is a layer of real-time mixed reality: information generated by 2.0 devices superimposed on a physical environment.

In other words: virtual elements are added – via a device such as a camera or phone – to what we know as the outside world and the information it generates, in order to amplify our perception of reality.
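As a purely illustrative sketch (in Python with the OpenCV library, neither of which is mentioned in the text), the superimposition can be reduced to drawing a virtual element over each frame captured by the device’s camera:

```python
import cv2

# Open the device's default camera.
capture = cv2.VideoCapture(0)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    # The "virtual layer": a label drawn on top of the physical scene.
    cv2.putText(frame, "Augmented label", (30, 50),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("augmented view", frame)
    # Press "q" to quit.
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```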

This has generated numerous debates about the meaning of reality and its limits, just as happened with the emergence of the concept of virtual reality. But what about applications?

At CCCB Lab we will follow the development of augmented reality, focusing on its possible applications in the field of exhibitions and in artistic practices in general, without neglecting the substantive discussions on the “amplification of reality” which – from genetics to astrophysics – is changing our conception of what surrounds us.