The success of podcasts and audiobooks has led to an enormous growth in audio fictions. But this newfound popularity for sound does not only affect literary or audiovisual creation. The use of sound has much to do with a new form of interaction with technology that blurs the boundaries between the physical and the digital.
With over six million downloads since it premiered in 2016, El gran apagón (The Great Power Cut) is very probably the most-listened-to fictional podcast in Spanish of all time. And the two seasons broadcast to date of Guerra 3 (War 3), premiered two years later by the same creators, scriptwriter José A. Pérez Ledo and director Ana Alonso, have already earned in excess of one million downloads. While the first of these two stories conjures up the consequences of an intense solar storm that leaves the entire planet without electricity, the second reconstructs the media, political and military movements that lead to the Third World War.
Perhaps because these excellent tales coincide in being both global and conspiracist, I have suddenly become aware that it is no accident that the boom in podcasts and audiobooks – the context framing the audio fictions phenomenon – has coincided with the growing omnipresence of digital voices in our everyday lives. Chatbots, voice search interfaces, personal assistants, conversational devices, wireless headphones and voice recognition systems are multiplying. The major technology companies plotted one big audio fiction over a decade ago, and it has gradually become a ubiquitous reality.
We are experiencing an authentic creative boom in audio formats, one that has begun to establish a new canon for art and entertainment: fictional works and documentaries built on sophisticated soundscapes that enter – via our eardrums – directly into our imagination. But artistic and corporate narratives get mixed up. And the deeper reasons for the expansion of the podcast universe are so complex that, at times, they too seem to be fiction.
Although audio interfaces already existed long beforehand, 2011 was the year in which the revolution we are now experiencing started brewing. On 14 October, Siri was launched as part of the iPhone 4S. That same year, Spotify reached the United States and Amazon began work on the so-called “Project D”, with the aim of building Echo and Alexa, whose first version hit the market in 2014. Then Facebook acquired WhatsApp, and the first seasons of two audio series that marked a before and after were broadcast: Serial and Homecoming. The film Her, by Spike Jonze, which imagines a love story between a user and his virtual assistant, had premiered a few months earlier.
Meanwhile, in Asia and Africa, millions of people were discovering daily connection to the Internet directly via their smartphones, without passing through a computer first. In many languages – including Chinese – it was and continues to be much easier to record a voice file than to compose messages on a keyboard, so the oral option quickly conquered linguistic and geographical territories to the detriment of writing. At a time when Western personal assistants cost between one hundred and two hundred dollars, on 5 July 2017 Alibaba launched its Tmall Genie for 73.42 dollars. Xiaomi, China's leading producer of low-cost smartphones, began its own revolution at the same time with the Mi AI speaker, the centre of a smart-home ecosystem of devices connected via sensors.
In AI Superpowers, an essential essay for understanding the technological and geopolitical tensions of our moment in history, Chinese-American computer scientist, investor and communicator Kai-Fu Lee (who, by the way, designed Sphinx, the first speaker-independent continuous speech recognition system) affirms that the AI revolution is invading us in four successive waves.
The first two, internet AI and business AI, have already reformatted reality. The fourth is still to arrive: it will mean the automation of thousands of tasks and processes. But what interests me for this essay is the third one: “Perception AI is now digitizing our physical world, learning to recognize our faces, understand our requests, and ‘see’ the world around us.” This is a radical transformation of interactions between humans and machines, blurring the lines between the digital and the physical worlds.
The creative boom in audio fictions and podcasts cannot be understood without that absolutely favourable corporate context. This is not a matter of rejecting any language or channel, because business interests always lie behind the growth or decline of each of them: the new technological reality challenges us to become ever more aware and more critical as consumers, viewers or readers. It is not delusional to think that, because the touch of keyboards and screens remains cold, the technologists have found in the ear a strategy for injecting warmth into our relationships with our everyday machinery. And that series for listening to on Spotify or Storytel, or the audio documentaries of the BBC, are a direct consequence of that master plan.
There are millions of solitudes currently in lockdown in every corner of planet Earth. If Google was already, prior to the pandemic, the place where we searched for the answers to all our questions, in recent months our technological dependence has only increased. Siri and Alexa also offer advice and information on COVID-19 and its consequences. Their voices are not yet as sensual as that of Scarlett Johansson in Her, but give it time. On 14 February of this year, Valentine’s Day, millions of people all over the world told Alexa that they loved her. The “I love you” function of the Amazon voice assistant, which is only available in English, enables her to vary her answers to declarations of love from her owners. She can even respond “I love you too”. For many people that will be a relief; some will see it as charming; but I must confess that, for my part, I find it just a little scary.