Recent months have seen a turnaround in the public role of the major media, both traditional and technological. The big platforms have launched a battle against disinformation on several fronts, one that came to a head during the US elections. The major US television networks and newspapers are now aware that we are living in post-truth times, and social networks have assumed the role of editors of information and, with it, of reality. Faced with this paradigm shift, it is worth reflecting on the line that separates editing from censorship, and asking who should be responsible for the ethical regulation of information.
On 5 November, during Donald Trump’s unhinged press conference, in which he complained that the election was being stolen from him, something historic happened: three major television networks (ABC, CBS and NBC) cut away from the live broadcast. In other words, they refused to keep airing fake news. On the same day, Twitter and YouTube suspended the account and channel of Steve Bannon, the controversial conspiracy theorist, for using them to suggest that the director of the FBI be beheaded and his head put on a pike. In the preceding weeks, the main social networks had proceeded in similar fashion: both Facebook and Twitter deleted content from Trump’s accounts and those of his campaign, deeming it fraudulent or an incitement to hatred.
This is a huge shift in the public role of the major platforms. In recent years they have been accused, on solid grounds, of having changed the world for the worse. Well-founded studies already show that Trump and Bolsonaro would not have come to power without them. And they have decided to act accordingly.
If you have a Twitter account, you will have noticed. You can no longer directly retweet a tweet that contains a link; instead, you are offered the option of opening the article to read it first, or of quoting it in your own tweet. The retweet, that is, now comprises two actions. After so many years of automatic, instinctive, almost animal likes and retweets, Silicon Valley engineers have made “friction” fashionable. Increasing friction in the design of interactive devices and social networks means turning what was once a single click into two or three. This slower process aims to keep news from being shared compulsively and to invite whoever shares it to reflect on what is being spread.
At the same time, that friction runs against the very spirit of social networks over their fifteen years of existence. Throughout that time, what they did, precisely, was smooth away anything that kept users from navigating, surfing, sliding across their interfaces with the greatest possible ease and immediacy. The weight of Trump’s victory, and guilt over the manipulation enabled by those all-too-well-oiled mechanisms, were decisive in the radical shift that occurred around the 2020 elections. It was an opportunity for symbolic reparation, and a litmus test: as Kevin Roose wrote in The New York Times, they had to make their tools and systems worse in order to make democracy a little better.
TikTok has its own security centre to combat disinformation. Facebook has created an external verification centre, of which the Spanish website maldita.es forms part, along with dozens of media outlets from around the world that make up the International Fact-Checking Network. By the summer, all the major social networks had taken measures to circulate accurate information about COVID-19, to the detriment of fake news and conspiracy theories. The US elections were the second phase of the same battle against disinformation (the two being interconnected, since the US government itself contributed to spreading hoaxes about the pandemic). Facebook even approved a viral misinformation protocol.
These measures update strategies that some platforms have developed in recent years to reinforce the common good and clean up their image, such as Facebook’s emergency response features (whether for an earthquake or a terrorist attack) or, along the same lines, Google Person Finder. Now, however, they are applied not to the outside world or to natural disasters, but to the subtler, pixelated disasters brought about by social networks themselves (or by sinister characters such as Dominic Cummings, the architect of Brexit, or Steve Bannon, former vice president of Cambridge Analytica and adviser to Donald Trump, among other misdeeds).
During the recent elections, the big US television networks and newspapers became aware that we are living in post-truth times and that the traditional rules of journalism no longer apply. In the world before Trump’s presidency, television channels would have kept broadcasting a president’s delusions. In the world it bequeaths to us, they will cease to do so. Social networks themselves, which bear much of the responsibility for this paradigm shift, have reacted with similar decisions, deleting or tagging what certain opinion leaders say on their platforms. In this way, they have assumed the role of editors of information and, with it, of reality. Editing means controlling, changing, directing, prescribing and, sometimes, also censoring. Is what the main traditional and technological media in the United States are doing censorship?
The initiative of Pedro Sánchez’s government in Spain, which allows the creation of governmental mechanisms for monitoring information and disinformation, has been described by legal experts as a likely source of legal uncertainty. Its raison d’être is the new international context: the imitation of Bannon’s strategies by far-right parties around the world, including Vox, and the proven interference of foreign governments in democratic elections, which have thereby ceased to be entirely sovereign. Beyond the criticisms it has drawn on grounds of press freedom and freedom of expression (we should remember that the “Gag Law” is still in force in Spain), it is worth reflecting on whether the ethical regulation of information should be delegated to big media and big platforms, or whether that role should fall to public bodies. Does it make sense for Facebook or Twitter to decide what is true and what is not? Should the European Union or the UN do it instead? As Juvenal asked, and Alan Moore after him: who watches the watchmen?