The interaction between humans and computers, also called human-machine interaction (HMI) or human-computer interaction (HCI), has changed considerably over the past decades. Virtual reality (VR) and augmented reality (AR) have received revived interest due to the development of devices like the Oculus Rift and Microsoft's HoloLens. Given this, HCI will probably change even more radically in the coming years.

Short history

HCI has been a topic of active research for decades, and researchers and artists have invented some highly exotic technologies. In Char Davies' art project Osmose, for instance, the user navigates a virtual world by breathing and moving her body.

The Osmose suit (shown plain and wired): the vest is used to measure the breathing of the user

Obviously, not every invention made it to the consumer market, but most technologies we use today were invented long before they became mainstream. There are, for instance, striking similarities between Google Glass and the EyeTap developed by Steve Mann in the 1980s and 1990s.

EyeTap vs. Google Glass: development of the EyeTap since 1980

We have come a long way since the interaction with punched cards in the early days. In the 1960s user interaction happened mostly via the command-line interface (CLI), and although the mouse was invented as early as 1964, it only became mainstream with the advent of the graphical user interface (GUI) in the early 1980s. GUIs also made it more apparent that HCI is actually a two-way communication: the computer receives its input via the GUI and gives its output, or feedback, back via the GUI.

The first mouse, as invented by Douglas Engelbart

NUI and gestures

Speech control became consumer-ready in the 1990s, though it was very expensive back then. What is interesting about speech control is that it was the first appearance of a natural user interface (NUI). NUI roughly means that the interface is so natural that the user hardly notices it. Another example of NUI is touchscreen interaction, though we have to distinguish between using touch events as a replacement for mouse clicks, such as tapping on a button element in the GUI, and gestures, for instance a pinch gesture to scale a picture. The latter is NUI; the former is a touch-controlled GUI.
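The pinch gesture mentioned above can be reduced to a small computation: the scale factor is the ratio between the current distance of the two fingers and their distance when the gesture started. A minimal sketch in TypeScript (the names and the plain `{x, y}` point shape are illustrative assumptions, not any browser's gesture API):

```typescript
// Illustrative sketch of pinch-to-scale, assuming two touch points
// are given as simple {x, y} coordinates.
interface Point { x: number; y: number; }

function distance(a: Point, b: Point): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// The scale factor of a pinch is the ratio between the fingers'
// current distance and their distance at gesture start.
function pinchScale(start: [Point, Point], current: [Point, Point]): number {
  return distance(current[0], current[1]) / distance(start[0], start[1]);
}

// Example: the fingers move apart from 100px to 200px,
// so the picture should double in size.
const start: [Point, Point] = [{ x: 0, y: 0 }, { x: 100, y: 0 }];
const current: [Point, Point] = [{ x: -50, y: 0 }, { x: 150, y: 0 }];
console.log(pinchScale(start, current)); // 2
```

In a real touch-controlled page the two points would come from the `touches` list of `touchstart` and `touchmove` events; the core ratio computation stays the same.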

Instead of making gestures on a touchscreen, you can also perform them in the air in front of a camera or a controller such as the Leap Motion. Gestures can also be made while wearing a data glove.

Data glove

Interaction with brainwaves

Wearables such as smartwatches are usually a mix of a remote control and an extra monitor for a mobile device. As a remote control you can send instructions as on a regular touchscreen, though the Apple Watch, for instance, also has a classic rotary button for interaction. Wearables can also communicate other types of data that come passively from the human to the computer, like heart rate, skin temperature and blood oxygen level, with probably a lot more to come as more types of sensors become smaller and cheaper.

Google Glass is a wearable that can be controlled by voice and, with extra hardware, even by brainwaves. By using a "telekinetic" headband that has sensors for different areas of the brain, brainwaves are turned from passive data into an actuator. Typical fields of application are medical aids for people with disabilities.

Google Glass with telekinetic headband: a headband with three sensors on the skull and one that clips onto the user's ear

AR and VR

With AR, a digital overlay is superimposed on the real world, whereas with VR the real world is completely replaced by a virtual (3D) world. Google Glass and HoloLens are examples of AR devices; the Oculus Rift and Google Cardboard are examples of VR devices.

Google Glass renders a small display in front of your right eye, and the position of this display relative to your eye doesn't change when you move your head. HoloLens, on the other hand, actually 'reads' the objects in the real world and is able to render digital layers on top of them. If you move your head, you see both the real-world object and the rendered layer from a different angle.
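The difference between the two devices can be sketched in one dimension: a head-fixed display keeps the same angular offset from your view direction no matter where you look, while a world-anchored overlay sits at a fixed angle in the world, so its apparent position shifts opposite to your head rotation. The function names and the simple yaw-only model below are illustrative assumptions, not either device's actual API:

```typescript
// Sketch of head-fixed (Glass-style) vs. world-anchored (HoloLens-style)
// rendering, reduced to horizontal angles in degrees.

// A head-fixed element keeps the same angular offset from the view
// direction, so head rotation has no effect on where it appears.
function headFixedAngle(offsetDeg: number, _headYawDeg: number): number {
  return offsetDeg; // independent of head yaw
}

// A world-anchored object has a fixed yaw in the world; its apparent
// angle in the view shifts opposite to the head rotation.
function worldAnchoredAngle(objectYawDeg: number, headYawDeg: number): number {
  return objectYawDeg - headYawDeg;
}

// Turning the head 30 degrees to the right:
console.log(headFixedAngle(10, 30));     // 10: the display stays put
console.log(worldAnchoredAngle(10, 30)); // -20: the overlay appears to move left
```

The second behaviour is what makes HoloLens overlays feel attached to real-world objects rather than to the screen.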

HoloLens rendering interfaces on real-world objects

AR is very suitable for creating a reality user interface (RUI), also called a reality-based interface (RBI). In an RBI, real-world objects become actuators: for instance, a light switch becomes a button that can be triggered with a certain gesture. An older and more familiar example of an RBI is a 3D scene rendered on top of a marker; when you rotate the marker in the real world, the 3D scene rotates accordingly. Instead of a marker you can also use other real-world entities; Layar, for instance, makes use of the GPS data of a mobile device.
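The marker example comes down to applying the marker's detected rotation to the rendered scene. As a rough sketch (a 2D rotation of a single scene point stands in for the full 3D case, and the function name is an illustrative assumption):

```typescript
// Sketch of marker-based RBI: rotating the physical marker by some angle
// rotates the rendered scene by the same angle. Here a 2D rotation of one
// scene point around the marker centre stands in for the full 3D transform.
function rotatePoint(p: { x: number; y: number }, angleRad: number) {
  const c = Math.cos(angleRad);
  const s = Math.sin(angleRad);
  return { x: c * p.x - s * p.y, y: s * p.x + c * p.y };
}

// Rotating the marker a quarter turn moves a scene point at (1, 0)
// to approximately (0, 1).
const p = rotatePoint({ x: 1, y: 0 }, Math.PI / 2);
console.log(p.x.toFixed(3), p.y.toFixed(3)); // "0.000" "1.000"
```

In a real AR library the marker tracker would supply a full pose (rotation and translation) each frame, but the principle is the same: the real-world object drives the virtual scene's transform.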

VR is commonly used for immersive experiences such as games, but it can also be used to experience historical or future scenes, like buildings that have been designed but not yet built.

AR Basketball app: an example of an RBI, where a marker is used to control a 3D scene

Researching VR for web

In the near future we will be looking at two VR devices, the Oculus Rift and Google Cardboard, and we will share the results with you in the coming blog posts.