A mother travels by subway with her small daughter, who tugs at her bag incessantly. Resigned, she unzips it and pulls out an iPad. The girl picks it up as if she had been born with the device in her hands and begins sliding her fingers across the screen. The ability to enter cyberspace by touch, to enlarge images with a spontaneous pinch of thumb and forefinger, seems strangely innate. There she is, immersed in a world of bits that flows from cyberspace, through the rectangular peephole of the screen, into the physical world of atoms, where others watch her curiously.
Concepts such as multi-touch and gestures such as the two-finger zoom have entered users' daily lives with the speed of a meteorite, and though still at an early stage, they raise a series of questions that over the last decade have changed the direction of research in human-computer interaction. These are the first steps toward the massification of interfaces that aim to be increasingly tangible, a concept that could emancipate us from the tyranny of the screen, mouse and cursor, and reconcile bits and atoms in integrated technologies.
From GUI to TUI
Terms such as Tangible Interaction (TI) and Tangible User Interfaces (TUI) were coined in 1997 by Hiroshi Ishii at the MIT Media Lab, although research and implementations linked to these concepts had existed since the early nineties. Tangible user interfaces combine control and representation in a single physical object or space. With the direct manipulation of a graphical user interface (GUI), people interact with digital information through its graphical representations (icons, windows, menus) using pointing devices (mouse, keyboard, joystick), whereas TUI emphasizes tangibility and materiality: the physical embodiment of data, bodily interaction and the incorporation of these systems into real physical spaces.
“Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms” is the title of the seminal paper in which Ishii argues that current interfaces open an unbridgeable gap between cyberspace and the world of atoms. In his view, the ultimate goal of tangible interfaces is to connect digital information with the everyday material objects and architectural spaces that surround us. The idea is to give physical form to digital information and bits so that we can manipulate them with our hands, and to allow ambient awareness of information at the periphery of consciousness.
While recognizing that the GUI was crucial to democratizing access to information, the researcher drew inspiration from museum scientific instruments (rich in both aesthetics and performance) that once served to measure the passage of time, predict the movements of the planets or draw geometric shapes, in order to reflect on the future of human-computer interaction. What have we lost by concentrating all our operations with information in personal computers, leaving aside the sense of touch, peripheral perception and tangible objects? Today much of our interaction with others and with cyberspace is confined to traditional GUIs, entrenched in boxes on desktops, laptops and small screens that demand our attention and propose modes of interaction that separate us from the world of atoms.
Ishii used the abacus as a metaphor for what tangible interaction should ultimately be. Unlike personal computers and pocket calculators, devices whose separate input and output components (keyboard and screen) are integrated through an arbitrary convention, the abacus makes input and output coincide: operations are performed by direct manipulation of the results themselves. In this sense, TUIs seek to augment the real physical world by coupling digital information to the everyday physical objects that users can manipulate, an approach distinct from Augmented Reality.
Within the scope of tangible interfaces there are different concepts, ranging from the famous interactive tables to hybrid objects and interactive environments. Here are some of the most significant examples.
The Marble Answering Machine: created by Durrell Bishop in 1992, it is one of the prototypes that prefigured what later became known as TI. Voice messages are represented by marbles that the user must pick up and place into a slot in order to listen to them. A glance at the tray is enough to know, without having to think, whether there are messages or not, and whether there are many or few.
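The principle that the marble is at once the control and the representation of a message can be sketched in a few lines of Python (a toy model; the class and method names are my own, not part of Bishop's design):

```python
# Toy model of the Marble Answering Machine: each marble embodies one
# voice message, and the tray's contents *are* the message count.

class MarbleAnsweringMachine:
    def __init__(self):
        self.tray = []        # marbles waiting to be heard
        self._next_id = 0

    def record(self, audio):
        """An incoming call drops a new marble into the tray."""
        marble = {"id": self._next_id, "audio": audio}
        self._next_id += 1
        self.tray.append(marble)
        return marble

    def glance(self):
        """Peripheral awareness: a look at the tray reveals how many messages wait."""
        return len(self.tray)

    def play(self, marble):
        """Placing a marble in the playback slot plays (and consumes) its message."""
        self.tray.remove(marble)
        return marble["audio"]
```

The point of the model is that there is no separate display: picking up the marble is the control, and its presence in the tray is the representation.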
Live Wire: a forerunner of ambient devices, it is a sculptural instrument created by Natalie Jeremijenko at Xerox PARC, consisting of a plastic cord hanging from a small motor mounted on the ceiling. The motor is connected to the company's Ethernet network, so each packet that flows through the network causes a movement in the cord. The bits travelling through the computers' wires are made tangible in the cord's motion, indicating to workers whether the network is saturated (busy) or not. The sculpture appeals to users' peripheral perception.
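The mapping from network traffic to motion can be sketched as follows (a minimal illustration; the function name, the 1000-packet ceiling and the linear mapping are my assumptions, not PARC's implementation):

```python
# Illustrative sketch of a Live Wire-style ambient mapping:
# packet traffic in, motor intensity out.

def cord_activity(packets_per_second, max_rate=1000):
    """Map observed packet traffic to a motor duty cycle in [0.0, 1.0].

    A busy network makes the cord whir and dance; a quiet one leaves it
    nearly still, so workers read the load peripherally, without ever
    looking at a screen.
    """
    return min(packets_per_second / max_rate, 1.0)
```

The design choice worth noting is that the output is continuous and ambient rather than discrete and attention-demanding: there is no alert to dismiss, only a presence in the room.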
Reactable (2003): a collaborative musical table conceived within the Music Technology Group at Universitat Pompeu Fabra. It was first presented in concert in 2005, but achieved mass popularity when millions of people watched its demonstration videos on YouTube and Björk used it on her 2007 world tour. Of all the interfaces that make up the spectrum of TUI, these so-called tabletops are the most popular. Among the pioneers we can recall Sensetable and Audiopad, both developed at the MIT Media Lab in the early 2000s. Regardless of the particular technology used (optical, electromagnetic or ultrasonic detection systems, among others), interactive tables typically combine the tracking of control objects on their surface with images projected onto it. This type of device lets users retrieve information and work with it creatively, either individually or collectively. Microsoft's Surface (2007) and Mitsubishi's DiamondTouch are other famous interactive tables.
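How a tracked object on such a table might drive sound can be sketched in a toy mapping (an illustration only; the reacTIVision/TUIO tracking layer is omitted, and this particular position-and-rotation mapping is an assumption, not the Reactable's actual synthesis engine):

```python
# Toy Reactable-style mapping: the tracked state of a physical puck
# (position on the table, rotation angle) becomes synth parameters.
import math

def object_to_params(x, y, angle):
    """Tracked fiducial state -> synth parameters.

    x, y in [0, 1] (normalized position on the table), angle in radians.
    Distance from the table centre controls volume; rotating the physical
    puck sweeps the oscillator frequency across three octaves from 110 Hz.
    """
    distance = math.hypot(x - 0.5, y - 0.5)
    volume = max(0.0, 1.0 - distance / 0.5)   # loudest at the centre
    turns = (angle % (2 * math.pi)) / (2 * math.pi)
    freq = 110.0 * 2 ** (turns * 3)           # 3-octave sweep per rotation
    return {"volume": volume, "freq": freq}
```

Here the puck itself is the knob: moving and rotating the object on the surface is the only "UI", and the projection under it closes the feedback loop.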
Siftables (2008): plastic pieces the size of cookies, developed at MIT, that communicate with each other and with a computer, enabling very specific interactions. They carry motion and tilt sensors (tri-axial accelerometers), infrared proximity sensors, flash memory, a small processor, Bluetooth and OLED touchscreens. They are used to play, make calculations and carry out other activities. Siftables probably carry the genes of the “Intelligent Physical Modelling Systems” project created by architect John Hamilton Frazer in the 1970s, which consisted of intelligent cubes that recognize one another by proximity and allow ideas to be prototyped in 3D.
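The core Siftables idea, tiles that sense their neighbours so that physical arrangement composes meaning, can be sketched as follows (a hypothetical model; the classes and the digit-reading rule are mine, not the Siftables/Sifteo SDK):

```python
# Sketch of neighbour-aware tiles: placing tiles side by side links them
# (in the real hardware, via infrared proximity sensing), and a row of
# digit tiles reads as a number.

class Tile:
    def __init__(self, symbol):
        self.symbol = symbol
        self.right = None          # neighbour sensed on the right edge

    def attach_right(self, other):
        """Placing a tile against this one's right edge links the two."""
        self.right = other

def read_row(leftmost):
    """Walk the chain of adjacent tiles and read their symbols in order."""
    symbols, tile = [], leftmost
    while tile is not None:
        symbols.append(tile.symbol)
        tile = tile.right
    return "".join(symbols)
```

For example, sliding a "4" tile next to a "2" tile makes the pair read as "42": the computation is triggered by rearranging objects, not by typing.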
Intelligent environments with interactive walls, ceilings and floors, hybrid objects, and tables that react to gestures and promote collaboration all mean rethinking the ways we interact with digital information and its counterpart in the world of atoms. The perception of space, the sense of touch and peripheral awareness are some of the cognitive abilities these technologies try to engage in order to overcome the limitations of the GUI.
H. Ishii and B. Ullmer (1997). “Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms”. In CHI ’97: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
S. Jordà, C. F. Julià and D. Gallardo (2010). “Interactive Surfaces and Tangibles. Tap. Slide. Swipe. Shake.”