How could technology encourage new forms of exhibition that break with the “broadcast paradigm”? How can we give visitors the opportunity to participate in the creation of discourse and in new physical and cognitive experiences?
This post intends to rethink augmented exhibitions and technology, drawing on some current lines of research in human-computer interaction and their possible applications in the museum context.
Emotions play a vital role in decision making, perception and learning, and shape the mechanisms of rational thought. At least, that is what Rosalind Picard, who directs the Affective Computing Research Group at MIT, suggests: “If we want computers to be really smart and to interact naturally with us, we must give them the ability to recognize, understand and even to have and express emotions.” Affective computing first aims to get computers to recognize the user’s emotional state and act accordingly. It is “implicit interaction” (not necessarily voluntary) with technology, based on the user’s physiological information.
Changes in palm sweating, heart rate or facial muscle tension can let computers gather information about the user and adapt their behavior in real time. What we call physiological computing thus aims to create interfaces that adapt to each user and know something about the context in which the interaction occurs. In our imaginary exhibition, for example, visitors could carry an automated guide device capable of adjusting to their mental and emotional states. The device would know whether the user is bored, excited or overwhelmed and modify the route and the information accordingly, providing a unique and enjoyable experience. Through this gadget, visitors could also leave real-time comments on other users’ devices, generating debate and interaction.
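The adaptation loop such a guide would run can be sketched in a few lines. This is a minimal, hypothetical illustration: the signal names, thresholds and content adjustments are invented for the example, not taken from any real sensor API.

```python
# Hypothetical physiological-computing loop for the imagined exhibition guide.
# Thresholds and state labels are illustrative assumptions only.

def classify_state(skin_conductance_uS, heart_rate_bpm):
    """Very crude rule-based estimate of the visitor's state."""
    if skin_conductance_uS > 8.0 and heart_rate_bpm > 100:
        return "overwhelmed"
    if skin_conductance_uS < 2.0 and heart_rate_bpm < 70:
        return "bored"
    return "engaged"

def adapt_route(state):
    """Map the inferred state to a change in the guided tour."""
    return {
        "overwhelmed": "shorten explanations, suggest a rest area",
        "bored": "skip ahead, offer an interactive piece",
        "engaged": "keep the current pace and depth",
    }[state]

# Example: a visitor with high arousal gets a calmer route.
print(adapt_route(classify_state(9.5, 110)))
```

A real system would of course replace the hand-written rules with a classifier trained per user, since baseline physiology varies widely between individuals.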
BCI: Brain-computer interfaces
While we are in the field of physiological computing, BCIs deserve special mention. Here, a user’s brain activity is used to operate technological devices. The electrical signals are captured by sensors (the Emotiv headset is one of the most used in artistic projects), preprocessed and classified in order to send commands to an external medium, whether a computer or other hardware. Brain-computer interfaces gained popularity and media exposure in October 2007, when a Japanese research group presented a BCI system that let users control the behavior of an avatar in the virtual world Second Life. Many studies have produced interesting results along this line of HCI, which could also open a world of sensations in museum exhibitions: stories that adapt to our mental states, digital paintings modified by meditation, concentration or excitement, and even bands that play thanks to the joint brainpower of their users, as in the Multimodal Brain Orchestra.
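The capture-preprocess-classify pipeline mentioned above can be illustrated with a toy example. The sketch below, which assumes nothing about any real Emotiv SDK, measures signal power in the alpha (8–12 Hz) and beta (13–30 Hz) EEG bands with a naive DFT and uses their ratio as a stand-in for a “relaxed vs. concentrated” classifier; the band limits and decision rule are simplifications for illustration.

```python
import math

def band_power(samples, fs, f_lo, f_hi):
    """Sum of squared DFT magnitudes for frequency bins inside [f_lo, f_hi].

    A naive O(n^2) DFT is fine for this small illustrative window.
    """
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            power += re * re + im * im
    return power

def classify(samples, fs=128):
    """Label one second of signal by its dominant band (illustrative rule)."""
    alpha = band_power(samples, fs, 8, 12)   # associated with relaxation
    beta = band_power(samples, fs, 13, 30)   # associated with active focus
    return "relaxed" if alpha > beta else "concentrated"

# Synthetic one-second signals sampled at 128 Hz:
relaxed_signal = [math.sin(2 * math.pi * 10 * i / 128) for i in range(128)]
focused_signal = [math.sin(2 * math.pi * 20 * i / 128) for i in range(128)]
```

In a real BCI the raw signal would first be filtered for artifacts (eye blinks, muscle tension) and the classifier trained on labeled sessions, but the overall shape of the pipeline is the same.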
Tangible User Interfaces
We talked about TUIs in another post. The need for tangible interfaces does not only come from a desire to move beyond graphical user interfaces. The computer was created for computation and typical office work, hence its appearance; yet it is now used for almost every activity that links us to the world of bits. Operating through a keyboard, mouse and screen reproduces a cognitive scheme that perpetuates the dualism separating mind from body. As theories of embodied cognition suggest, many cognitive processes, thinking included, occur in physical and sensory interaction with the environment.
What would the experience be like if, at an Ernest Hemingway exhibition, one could pick up a shell resting on a table and listen to passages from The Old Man and the Sea (nothing that cannot be done with an Arduino!)? What if we approached the typewriter and it sensed our presence and our mood, writing a poem on its keyboard as if driven by all the poets who are already gone? The Digital Graffiti Wall illustrates another possibility for collaboration using TUIs.
Virtual reality (VR)
Virtual reality is very expensive and, in most cases, lets us see ourselves represented as an avatar with which we can hardly interact. Exoskeletons and other haptic feedback systems are still experimental, expensive, and require custom technical development. But who could take away the pleasure of feeling “present” in a 3D representation of the “Aleph” Borges imagined? VR research is investigating the concept of “presence”: the psychology behind the phenomenon by which the brain accepts as real a virtual space we consciously know is not. With the Beaming project at the Universitat de Barcelona, visitors can participate in plays happening at a remote site and enter the psychedelic dream of the Beat Generation.
Augmented reality (AR)
As this post explains, augmented reality is a set of devices that add virtual information to existing physical information, i.e., overlay computer data on the real world. AR research explores the application of computer-generated imagery to real-time video streams as a way to “augment” the real world. The most developed examples require projectors placed at head height, a virtual retinal display for improved viewing, and controlled environments built with sensors and actuators. There are also popular creators of mobile AR applications, such as Layar, which are widely used and far less expensive. As part of an exhibition, users could enter a space where objects, approached with a camera phone, tell stories found on the Internet and break the univocal discourse in favor of collective intelligence.
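The core computation behind Layar-style geo-AR browsers is simple enough to sketch: given the phone’s position and compass heading, decide where on screen a point of interest (POI) should be drawn. The flat-earth approximation, the 60-degree field of view and the screen width below are simplifying assumptions for illustration, not values from any real AR SDK.

```python
import math

def bearing_deg(lat, lon, poi_lat, poi_lon):
    """Approximate compass bearing (degrees) from the device to the POI.

    Uses a flat-earth approximation, acceptable for nearby exhibition POIs.
    """
    dx = (poi_lon - lon) * math.cos(math.radians(lat))  # east-west component
    dy = poi_lat - lat                                   # north-south component
    return math.degrees(math.atan2(dx, dy)) % 360

def screen_x(heading_deg, poi_bearing_deg, screen_width=640, fov_deg=60):
    """Horizontal pixel position for the POI marker, or None if off-screen."""
    # Signed angular offset of the POI relative to where the camera points,
    # normalized into (-180, 180].
    offset = (poi_bearing_deg - heading_deg + 180) % 360 - 180
    if abs(offset) > fov_deg / 2:
        return None  # outside the camera's field of view
    return round(screen_width / 2 + offset / (fov_deg / 2) * (screen_width / 2))

# A POI straight ahead lands in the middle of the screen:
print(screen_x(heading_deg=0, poi_bearing_deg=0))  # center pixel
```

A full AR browser would add distance-based marker scaling and vertical placement from the phone’s tilt, but the horizontal placement above is the essence of overlaying data on the camera view.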
New technologies offer many possibilities for imagining new forms of participation in the narrative and aesthetic construction of exhibitions. Used creatively, these feats of human-computer interaction let us experience exhibitions and modify them in real time. The inclusion of user-generated content, another key point for the museum of the future, deserves a post of its own.