I first came across Dourish's work through his writings on
Our notion of what a computer is, what it does, and how it works hasn't changed for decades.
We're still living with the legacy of a trade-off made fifty years ago: computer processing time used to be enormously expensive, so it was worth making humans transform their data and instructions into formal, rigid input languages that optimised for the machine's experience rather than for the human experience. At the time, most computers were used for military or business calculations, and no one minded too much.
We now have the odd contradiction that our machines have more power than we're able to leverage – 95% of the time they're doing basic tasks at low computational capacity while we perch in front of them, slowly deciding what we want to do next.
We're stuck in the historical paradigm of 'desktop computing' – the idea that a computer is a static workstation we plop in the corner of the room and go to in order to execute specific tasks that occupy the whole of our attention.
The dream of Ubiquitous Computing tries to subvert this notion. Ubicomp is a paradigm of computing where our machines are embedded in everything around us. The point is to get us up and moving around in the world, bringing the computational power with us.
This was what we were promised with the 'internet of things,' which has so far turned out to be the internet of impractical, invasive surveillance objects.
Research institutes like
Dourish proposes the concept of Embodied Interaction:
"Embodied Interaction is interaction with computer systems that occupy our world, a world of physical and social reality, and that exploit this fact in how they interact with us." (3)
Traditionally, computational systems are thought of as procedures – step-by-step models of sequential behaviour. The last two decades have seen us turn towards interaction. We are instead paying attention to the interplay of different components. An ecosystem of interlinked elements instead of a rote sequence of tasks.
The system focuses on many diverse elements with specific roles rather than generalised monolithic processes.
Dourish presents four historical phases of HCI: electrical, symbolic, textual, and graphical.
Tangible Computing is when we “distribute computation across a variety of devices, which are spread throughout the physical environment and are sensitive to their location and their proximity to other devices.” (15)
This is the same dream as Ubiquitous Computing and the Internet of Things – baking computational logic into everyday objects.
Another way to think about this is creating environments where the physical objects in the room act as the interfaces, rather than graphical interfaces and mice. This is the
“Mice provide only simple information about movement in two dimensions, while in the everyday world we can manipulate many objects at once, using both hands and three dimensions to arrange the environment for our purposes and the activities at hand.” (16)
Social Computing is focused on “incorporating social understandings into the design of interaction itself” (16). It treats interfaces as conversations, and draws on social science and anthropological theory – trying to recreate social relations and social meaning in the computer interface.
Both Tangible Computing and Social Computing draw on our familiarity with the everyday world – they're "more than simply the metaphorical approach used in traditional User Interface Design."
Rather than focusing on imitating the physical world of objects in computers, they focus on bringing computing into our social, embodied experience of the world. “They share an understanding that you cannot separate the individual from the world in which that individual lives and acts.” (17-18)
In many ways the Human-Computer Interaction community is stuck in the world of Logical Positivism and Cartesian Dualism.
It's a view that “makes a strong separation between, on the one hand, the mind as the seat of consciousness and rational decision making, with an abstract model of the world that can be operated upon to form plans of action; and, on the other, the objective, external world as a largely stable collection of objects and events to be observed and manipulated according to the internal mental states of the individual” (18)
Dourish argues Embodiment is central to this approach. "Interaction is intimately connected with the settings in which it occurs" – in recent years interaction designers have realised the value of anthropological Ethnography for understanding the environment and context of interactions.
Early work in the field tried to create abstract models of the people they were designing for – hypothetical users – rather than exploring interaction design with real people in real contexts.
Human-Computer Interaction is not the only field recently captivated by Embodiment. Across disciplines, more consideration and attention is being paid to Phenomenology.
Dourish argues that over the last three decades, the way we interact with computers has barely changed at all – we use the same physical inputs of mice, keyboards, and screens, and the same digital patterns of dialogue boxes, windows, and files. We still need to go to a desk and input with both hands.
This is what
Mark Weiser's dream of Ubiquitous Computing in the 1990s tried to draw focus away from the computing device itself and spread computational logic out into our existing environments.
Weiser wanted "computationally enhanced walls, floors, pens, and desks, in which the power of computation could be seamlessly integrated into the objects and activities of everyday life" (29). The goal was to make computers invisible - so pervasive they disappeared into the wallpaper.
Xerox PARC developed a strategy known as computation by the inch, foot, and yard.
How is this different to the multiple LCD screens we already have in our microwaves, washing machines, Nintendo Switches, smartphones, and smartwatches? Ubiquitous computing has already happened to some degree. Smartphones have brought computing into the world.
The key difference between that dream and our present reality is that they imagined information moving freely between the devices. Movement is the difference.
Making devices at a wide variety of sizes was not the point of Ubiquitous Computing. It was figuring out how they would operate as part of a holistic system, and fit into the everyday world of activities and interactions. Interoperability was key.
While Xerox PARC was developing Ubicomp, EuroPARC (a satellite research institute) in Cambridge was exploring how to combine the affordances of physical paper and digital documents into one medium. Moving between the two meant you lost a bit of each in the translation process.
Pierre Wellner developed a "Digital Desk" where a camera positioned above a physical desktop recorded what was on it – it was able to read documents and make calculations based on them. It could also project digital documents down onto the surface and track the user's hand movements.
The two "killer design features" of the Digital Desk were support for manipulation, and the way its electronic and physical worlds were integrated. Interactions with objects were direct interactions with real-world objects, not the imitations we perform in current graphical interfaces.
Rather than being limited to the inputs of a keyboard and one mouse, you had access to two hands and ten fingers, which allowed for more complex inputs.
A document could exist both physically and digitally at the same time. Printers and cameras allowed documents to move between the two worlds.
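The Digital Desk's read-then-project loop can be sketched in a few lines. This is a toy model, not Wellner's implementation: the camera feed is stubbed as a list of OCR'd paper regions, and "projection" is just the string the projector would beam onto the desk. All class and method names here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class PaperRegion:
    """A rectangle of the physical desktop, with text the camera has OCR'd."""
    x: int
    y: int
    text: str

class DigitalDeskSketch:
    """Toy model of the Digital Desk: physical paper is the input device."""

    def __init__(self, regions):
        self.regions = regions  # stand-in for the overhead camera's view
        self.tally = []

    def point_at(self, x, y):
        """Simulate the user tapping a printed number with a finger."""
        for region in self.regions:
            if region.x == x and region.y == y and region.text.isdigit():
                self.tally.append(int(region.text))

    def project_total(self):
        """Return the text the projector would display on the desk surface."""
        return f"TOTAL: {sum(self.tally)}"

# Two printed numbers lie on the desk; the user points at each in turn.
desk = DigitalDeskSketch([PaperRegion(0, 0, "12"), PaperRegion(1, 0, "30")])
desk.point_at(0, 0)
desk.point_at(1, 0)
print(desk.project_total())
```

The point of the sketch is the direction of travel: the paper stays where it is, and computation comes to it, rather than the document being dragged into the machine.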
Mark Weiser wanted a computationally augmented reality, as opposed to the Virtual Reality crowd, who dreamed of replacing reality.
Virtual Reality gained popularity in the 1990s as the technological capacity for data gloves and head trackers arrived around the same time as the idea of Cyberspace. Howard Rheingold wrote a comprehensive history of it.
Ubiquitous Computing and Virtual Reality have fundamentally different approaches to the relationship between people, computers, and the world - it’s the difference between making the world invisible and making the computer invisible. VR is all computer. Ubicomp is all world.
Ubiquitous Computing is “a technology of context; where traditional interactive systems focus on what the user does, ubiquitous computing technologies allow the system to explore who the user is, when and where they are acting, and so on.” (39)
Ubicomp prototypes focus on being reactive – automatically switching modes based on the location of people and things.
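That reactive, context-driven behaviour can be sketched as a function of who is nearby and when, rather than of explicit commands. This is a minimal illustration; the mode names and distance/time thresholds are invented, not taken from any real ubicomp system.

```python
def choose_mode(user_distance_m, time_hour):
    """Pick a display mode from context (where the user is, what time it is)
    instead of waiting for an explicit instruction. Thresholds are arbitrary."""
    if user_distance_m > 3.0:
        return "ambient"      # nobody close by: show glanceable information
    if time_hour >= 22 or time_hour < 7:
        return "dimmed"       # someone is near, but it's late at night
    return "interactive"      # someone is near during the day: full interface

# The system answers "who/where/when", not "what did the user click".
print(choose_mode(5.0, 12))   # far away at noon
print(choose_mode(1.0, 23))   # up close at night
```

The shift Dourish describes is visible in the signature: the inputs are facts about the user's situation, not events from a keyboard or mouse.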
Hiroshi Ishii at MIT's Media Lab leads the Tangible Bits research.
So far, the Cultural Narratives we tell about The Computer Revolution focus on turning the physical world into virtual representations. Cash into Bitcoin. Paperless offices. Books into eBooks. We're still caught in the dream of Cyberspace where the laws of physics do not apply.
The Tangible Bits research challenges the assumption that turning atoms into bits is a universal good: while "digital and physical media might be informationally equivalent, they are not interactionally equivalent" (44).
The goal of the Tangible Bits research is to put physicality back into digital experiences and support natural interaction in the real world.
Thinking in 'inputs' and 'outputs' is unhelpful for Tangible Computing. In our everyday environment, these are coupled. They're interconnected and coordinated - movement in a space affects a display on a wall. Moving an object changes information. It's a Feedback Loop.
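One way to see the coupling is a tangible token whose position is simultaneously the input (it sets a value) and the anchor of the output (the projected label follows it). A minimal sketch, with the token, the volume mapping, and the label format all invented for illustration:

```python
class TangibleToken:
    """Input and output are coupled in one object: moving the token both
    sets a parameter (input) and relocates the projected label that
    displays it (output). There is no separate 'screen' to look at."""

    def __init__(self):
        self.position = 0.0  # normalised position on the table, 0.0–1.0

    def move_to(self, position):
        """Physically sliding the token closes the feedback loop:
        the display updates at the token's new location."""
        self.position = position
        return self.render()

    def render(self):
        volume = round(self.position * 100)  # position maps to a parameter
        return f"label at {self.position}: volume {volume}%"

token = TangibleToken()
print(token.move_to(0.5))
```

Contrast this with a GUI slider, where the input device (mouse), the control (widget), and the feedback (a number elsewhere on screen) are three separate things.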
Tangible Computing differs from Ubiquitous Computing – it doesn't want computation to disappear into the world, but to be present and deeply integrated into artefacts.
Traditional interfaces put only one element in focus at a time – one cursor, one window, doing one task. In the embodied real world we achieve things with multiple limbs coordinated together. Think about the number of physical touch points when someone plays a piano: feet, fingers, arms, eyes, ears.
“Not only is there not a single point of interaction, there is not even a single device that is the object of interaction.” (51)
Social Computing is the application of Cultural Anthropology and Sociology to designing interactive systems. Computers are obviously integrated into the larger social fabric of our lives and civic structures - social sciences help us explore those relationships.
Anthropologists believe we need to do more than simply describe what the members of a culture do. Through Thick Description and Deep Hanging Out, we need to find out what they experience while doing it, why they do it, and how it fits into the fabric of their daily lives.