Tamara Chehayeb Makarem and Jenny Gaudion of Scott Logic will be speaking at a Girl Geek Dinner event on 20th July in Bristol, UK on the complementary topics of ‘Body as Interface’ and ‘Interpreting the Body’. They spoke to InfoQ about the different ways the body can act as an interface to technology systems, how our thinking about user experience needs to change, and how body-based data can be interpreted.
Thanks for talking to InfoQ. Could you please briefly introduce yourselves?
Tamara: I’m a User Experience Designer at Scott Logic in London, where I work with clients, primarily in financial services, on projects ranging from trading applications to data analytics platforms, financial management tools and intranets. Prior to Scott Logic, I worked in Lebanon and in New York, designing cross-platform applications for clients in e-commerce, healthcare and retail. I hold a Master of Fine Arts (MFA) in Design and Technology from Parsons, The New School, and a Bachelor of Fine Arts (BFA) in Graphic Design from the American University of Beirut.
Jenny: Hello! I’m a software developer at Scott Logic in Bristol. My background is in C#, SQL and the Microsoft tech stack, but this year I’ve been focusing on front-end development using Angular and React.
Tamara, your topic is Body as Interface and you’ve written a blog post on it. What do you mean when you talk about the body as interface – aren’t we always using parts of our body to interact with technology (my fingers are typing this question)?
Absolutely; we constantly use our body as an interface. I think there are two broad categories in which we do that: the first is within a controlled environment, and the second is when devices become part of, or an extension of, the body. The first category confines us to a location. The interaction is curated and the space is designed to allow us to use our body as an interface. Think about when you clap your hands and sound-activated lights come on, or when you walk towards an automatic door and it opens. We find many examples of this type of interactivity in art installations and video games.
In the second category, we move freely but we are confined by having to wear or carry a device such as a Bluetooth headset, Google Glass or Fitbit. These devices try to blend in with the body and become less intrusive. They can monitor gestural interactions and eye movement. Some even leverage biometric data such as heart rate, blood pressure or bodily fluids to trigger an interaction.
The interfaces we currently use are rather unintuitive because they are not designed to adapt to the user’s natural behaviour; rather, they expect the user to adapt to them. Similarly, our experiences with technology can impose artificial constraints on how we design. We think in terms of screens because we are used to designing for screen-based interfaces. We need to question whether this is the best way for users to interact in given situations. Designers need to anticipate user needs, not only cater to their existing preferences.
Going beyond the ‘simple’ interfaces we are used to, what needs to be taken into account when looking at the body as the interface to a device; what are the limitations and what is the potential?
One limitation we face is the level of accuracy the technology has in measuring and interpreting the data it collects. Technology needs to be able to interpret user behaviour. For example, a Fitbit should ideally distinguish between being shaken and the user walking, but there’s still a margin of error. Parameters in the environment can change in ways the design can’t control. One of the challenges I faced when designing an installation that tracked colour using computer vision was that the lighting in the exhibition space changed through the course of the day. This limitation is not exclusive to examples that use the body as interface. Recently, Tesla Motors’ Autopilot failed to detect the white side of a tractor-trailer against a “brightly lit sky”, a technology failure that resulted in a fatality.
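To make the lighting problem concrete, here is a minimal sketch of naive colour tracking with a webcam, assuming OpenCV; the HSV thresholds are illustrative values rather than those used in the installation. Thresholds tuned under one lighting condition stop matching once the ambient light shifts, which is exactly the failure mode described above.

```python
# A minimal colour-tracking sketch (illustrative only; OpenCV and the threshold
# values are assumptions, not details of the installation described above).
import cv2
import numpy as np

# Thresholds tuned under one lighting condition; if ambient light changes,
# pixel values drift outside this range and tracking silently degrades.
LOWER_RED = np.array([0, 120, 70])
UPPER_RED = np.array([10, 255, 255])

capture = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = capture.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)
    moments = cv2.moments(mask)
    if moments["m00"] > 0:  # centroid of the matched colour, if any
        cx = int(moments["m10"] / moments["m00"])
        cy = int(moments["m01"] / moments["m00"])
        cv2.circle(frame, (cx, cy), 8, (0, 255, 0), -1)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()
```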
Also, we’ve not yet achieved a solution that allows us to present the body as a non-contingent interface independent of any device or location. We are either confined to a location where the technology has been set up, or we have to wear a device that allows us to bring the technology with us.
In terms of potential, designing with the mindset of the body as an interface could fundamentally change the technology we use and how we interact with it. It helps us move away from viewing things in terms of the interfaces we are familiar with. For instance, we were able to provide an alternative to the mouse by introducing touch screens. We then moved from touch screens to more gestural interfaces with the Kinect and virtual reality goggles. We need to build devices that give users greater autonomy to determine where they go with the design. Thinking of the body as an interface and designing with that mindset lends itself to a more experimental and iterative approach to design.
It appears that much of the work done in this area is in the interaction with art and in gaming – where are the practical business applications, and how will this change the way we work?
We’ve already seen some of its practical applications in strengthening security with facial recognition and touch identification. Similar applications can be used in the financial services sector. For example, you can use biometric data to measure the stress levels of a trader. You can track metrics like temperature, heart rate, blood pressure or breathing rate, to stop traders trading when they’re more likely to make a decision based on emotion. Emotion sensors can allow users to better control their behaviour in emotionally charged situations.
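As a purely hypothetical sketch of that idea, the snippet below gates trading on a simple stress score computed from deviations against the trader’s own resting baseline; the metrics, weights and threshold are illustrative assumptions, not a real trading-control system.

```python
# Hypothetical stress gate: weights, thresholds and field names are illustrative.
from dataclasses import dataclass

@dataclass
class BiometricSample:
    heart_rate_bpm: float
    breathing_rate_bpm: float
    skin_temp_c: float

def stress_score(sample: BiometricSample, baseline: BiometricSample) -> float:
    """Weighted sum of upward deviations from the trader's resting baseline."""
    return (
        0.5 * max(0.0, (sample.heart_rate_bpm - baseline.heart_rate_bpm) / baseline.heart_rate_bpm)
        + 0.3 * max(0.0, (sample.breathing_rate_bpm - baseline.breathing_rate_bpm) / baseline.breathing_rate_bpm)
        + 0.2 * max(0.0, sample.skin_temp_c - baseline.skin_temp_c)
    )

def trading_allowed(sample: BiometricSample, baseline: BiometricSample, threshold: float = 0.25) -> bool:
    return stress_score(sample, baseline) < threshold

baseline = BiometricSample(heart_rate_bpm=62, breathing_rate_bpm=14, skin_temp_c=33.0)
current = BiometricSample(heart_rate_bpm=95, breathing_rate_bpm=22, skin_temp_c=33.6)
print(trading_allowed(current, baseline))  # False: elevated readings pause trading
```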
These sensors can also be used to control the environment around the user. Insights about how colour affects a person’s mood can be used to alter the user’s behaviour. For example, the lighting in casinos is set at a level to encourage people to continue gambling, and the warm colours in fast food restaurants boost our appetite. We can apply similar insights to the workplace. For example, the lighting and colours in your office could change to help you relax. This could boost employee productivity and improve decision-making.
Taking your second aspect – what changes when the device becomes part of the body? Are we talking about wearable devices?
With wearable devices, the user’s interaction is more passive than with non-wearable devices. You don’t have to change the way you would normally behave. Smartwatches track how many steps you take and your sleep patterns without you having to adapt your behaviour. Some devices are even embedded within the body. For example, the Southpaw is a compass designed to be implanted under the skin, and it is activated when the user faces north.
The challenge designers face is finding the balance between making technology accessible, anytime, anywhere, and making it unintrusive. Wearable and embedded devices bring us closer to meeting this challenge.
From a UX perspective, what needs to be taken into account when designing for this type of device?
When designing for wearable devices and interactions where the body is used as an interface, there are a few things to consider:
We must design for the action the user is undertaking, conscious that it may change. The user’s behaviour drives the design and sets its context. Will the user be walking, running, jumping or sleeping? We can’t always predict their behaviour, so we need to create a flexible design that caters to different scenarios. We’re used to creating one design that adapts to different screen sizes. In the future, interface design will mean creating dynamic designs that constantly change to adapt to the user’s behaviour.
The environment the action will be undertaken in must also be considered. You don’t always have control over the number of people in the space, or the lighting. You need to test the design in the actual space within which it’ll be used to reduce the error margin. The bodies of users will differ and their unique attributes can alter the requirements. With screen-based interfaces, you might consider the size of the thumb, or adapt the design for colour-blind individuals. When the entire body is used as an interface, the scope of interaction is wider, so there’s more to take into account. For example, when the Apple Watch was first released, its heart rate sensor couldn’t get a reliable reading for people who had tattoos, because the ink pattern and saturation blocked light from the sensor.
You talk about “designing experiences” – what are some examples of new or different experiences that these interfaces will enable?
I think one of the main fields where using the body as an interface will create alternative experiences is in healthcare. Designing with that mindset allows us to cater for people with disabilities. Think about how Stephen Hawking writes using a sensor that detects small movements of his cheek. It’s particularly interesting when we look at how it’s used to enhance our senses or restore those that have been lost. For example, a vest fitted with vibration motors that allows hearing-impaired individuals to interpret vibrations as sound has been developed as a form of sensory substitution.
Jennifer, your area of focus is Interpreting the Body – what does that mean, and what type of information can we elicit from body activity?
Interpreting the Body is a discussion of the various ways we can use the body to interact with technology. It’s actually the second talk in a series that begins with Tamara's talk, Body as Interface. Traditionally, technology has had precisely defined inputs, like using a keyboard to type in a command, or using a mouse to click on a specific item displayed in the user interface. Now there are many companies looking into alternative ways to interact. We’ve already seen voice commands come a long way and now they’re in widespread use, such as with Siri and Ok Google. Some of the options for communication include interpreting gestures, eye position, facial expressions, body chemicals and many more.
In order to interpret an interaction or action, what will our technology need to do, and how will it be accomplished?
To communicate effectively with technology, it will need to be able to interpret our actions with a very high degree of accuracy, and cope with a vast range of subtle differences. As a software developer, I’ve always had to worry more about how best to output information to a user to communicate effectively, rather than about how the software receives input from the user. Machine learning is crucial as there’s such a vast range of ways in which people can perform the same action. For example, to interpret a gesture involving moving a hand, the software will need to be able to recognise various hand shapes and ranges of movement; to interpret a voice command it needs to recognise what a word or phrase sounds like in different pitches, tones and accents. These systems are underpinned by vast banks of data and are improving as the available data grows, and as they calibrate to an individual user.
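A minimal sketch of that pattern, assuming scikit-learn and synthetic stand-ins for hand-landmark features: rather than hard-coding one exact input, the system learns a gesture label from many varied examples and is checked against held-out data, mirroring how such systems improve as more data becomes available.

```python
# Illustrative gesture classifier on synthetic data; scikit-learn and the
# made-up "landmark" features are assumptions, not a specific product's approach.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthetic_gesture(label: int, n: int = 200) -> np.ndarray:
    # Stand-in for real hand-landmark coordinates: one cluster per gesture,
    # with noise mimicking variation between people and attempts.
    centre = rng.normal(size=10) * (label + 1)
    return centre + rng.normal(scale=0.5, size=(n, 10))

X = np.vstack([synthetic_gesture(label) for label in range(3)])
y = np.repeat([0, 1, 2], 200)  # e.g. swipe, pinch, wave

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```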
You’re presenting this talk at a Girl Geek Dinner – please tell us a bit about this community.
Tamara: Girl Geek Dinners (GGD) is an organisation founded in London by Sarah Lamb to promote women in information technology, a field where women are still under-represented. It now runs in different countries including the US, Canada, Australia, and New Zealand. During the event, one or more featured speakers present their talks and attendees get a chance to network. I’ll be presenting my talk, Body as Interface, at the Bristol GGD on 20th July, and I’m very happy to be part of the GGD community. I strongly support the effort GGD invests in getting more women interested in STEM and building successful careers in tech.
Jenny: Girl Geek Dinners are a great way to meet fellow women in technology, which is important as we can be sparsely distributed among technology firms; at conferences in particular, the percentage of female attendees and speakers is very low. Talks cover a wide range of topics, so anyone who’s interested can come along (men are welcome as guests of female members) and it can be a great way for people to find out more about working in STEM-based industries. I joined the group in Bristol with the encouragement of Scott Logic, which is a keen supporter of community tech groups, and am really glad I did. The talks are great and the format really relaxed, so there’s plenty of time to ask questions and network. Software developers are particularly well represented in the Bristol group, so it’s a great way to find a mentor.
Thank you both for taking the time to talk to InfoQ.