
Consumer electronics and the future of engaging the senses: an ergonomist’s perspective.

By PDD

on November 29, 2011

Since reading Bret Victor’s provocative post on the future of interaction design, I’ve been mulling it over. Bret critiques the current trend for ‘pictures under glass’ interaction, which places a disproportionate focus on visual stimulation while ignoring the wonderful capability of the human hand to feel and manipulate things.

Featured image: multi-touch gestural interaction, via Apple.

It is true that current state-of-the-art devices (think iPhone, iPad) do not take into account the full capabilities and strengths of the human sensorimotor system. Gestural interfaces, which require the user to interact by sliding a pointed finger over a flat, glossy surface (finger-as-mouse), do not make the most of our wildly complex and highly honed senses. Traditionally we recognise five senses, but humans are now considered to have at least five additional senses that should be taken into account when designing products for people.

The human senses. Copyright: PDD.

These inputs are constantly processed, interpreted and integrated by the brain through sensory integration: the neurological process that organises sensory information from the body and the environment, allowing us to interact effectively.

Different parts of the body carry different weightings in terms of sensory and motor capability and neurological activity. Penfield’s homunculus gives an interesting picture of how your brain sees your body from the inside: it represents each body part according to how much of the somatosensory and motor cortices innervate it, and consequently how sensitive each region is.

Penfield’s sensory and motor homunculi, via Webster’s Online Dictionary.

So, the hands are clearly very important. We can see this in the amazing ability of blind people to read Braille, and in the incredible manual skills of a concert pianist. But although the fingers play a key part in current interaction thinking, given the thousands of sensory receptors and the sophisticated motor coordination in our hands and fingers, are we being overly simplistic and failing to fully exploit these capabilities in the latest consumer electronics we create?

So, what’s wrong?

Although they have brought breakthroughs in intuitiveness, ease of learning and overall experience, many gestural interfaces still fall far short of perfect interaction.

Can you remember the days before the touch-screen, when you could type a text message at record speed while walking down the street, without even looking at the keypad? After migrating to an iPhone I noticed that the number of text messages I was sending dropped dramatically, owing to the sheer frustration and concentration required to use the tiny, closely spaced virtual keypad with my relatively cumbersome finger-pads. There are whole websites dedicated to iPhone typing errors – a clear-cut indication of an interaction that is not working 100%.
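There is a classic way to put rough numbers on this fat-finger problem: Fitts’s law, the standard HCI model, predicts that the time to acquire a target grows as the target shrinks or sits further away. A minimal sketch – the constants a and b here are illustrative, not measured; in practice they are fitted from user trials:

```ts
// Fitts's law: MT = a + b * log2(D / W + 1)
// D = distance to the target, W = target width along the axis of motion.
// Smaller, more distant targets take measurably longer to hit.
function fittsMovementTimeMs(
  distancePx: number,
  widthPx: number,
  a = 100, // illustrative intercept (ms) -- fitted per device in practice
  b = 150  // illustrative slope (ms per bit of difficulty)
): number {
  const indexOfDifficulty = Math.log2(distancePx / widthPx + 1);
  return a + b * indexOfDifficulty;
}

// Same reach, different key sizes: the cramped virtual key costs more time.
console.log(fittsMovementTimeMs(80, 6));  // narrow virtual key -> ~676 ms
console.log(fittsMovementTimeMs(80, 20)); // wider physical key -> ~448 ms
```

And that is before accounting for the missing tactile landmarks: on a physical keypad, the ridges between keys let the fingers self-correct without the eyes.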

Then there’s navigating the iPad by dragging and poking it with your finger. It reminds me of children drawing in a sandpit or finger painting – the early stages of learning fine motor skills, the step before those skills are refined enough to use a pencil – a tool – and accomplish precise, skilled movement. The finger is not designed to select tiny hyperlinks or to perform press-and-hold or double-click interactions. These interactions are neither smooth nor, often, intuitive, and there are even concerns about the risk of iPad-induced injury through the non-neutral hand, wrist and neck postures adopted (but that’s another post altogether…). Then there’s ‘smear-screen’ – the inevitable conglomeration of multiple users’ grime, finger-painted magically across the glass.

Image via Amazon.
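Incidentally, the press-and-hold gesture mentioned above only ‘exists’ as a timing convention in software – nothing in the fingertip tells you when a tap has turned into a hold. A sketch of how such a gesture is typically detected with pointer events (the 500 ms threshold is an assumption; platforms tune it differently):

```ts
// Sketch: detecting press-and-hold with pointer events.
// The gesture is purely a software timer; the finger gets no physical
// cue that the threshold has been crossed.
const HOLD_THRESHOLD_MS = 500; // assumed value -- platforms tune this

function attachLongPress(el: HTMLElement, onHold: () => void): void {
  let timer: number | undefined;

  el.addEventListener("pointerdown", () => {
    timer = window.setTimeout(onHold, HOLD_THRESHOLD_MS);
  });

  // Lifting early, or drifting off the target, silently cancels the hold.
  for (const evt of ["pointerup", "pointerleave", "pointercancel"]) {
    el.addEventListener(evt, () => window.clearTimeout(timer));
  }
}
```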

Finally, take another current trend – the capacitive button, the new must-have industrial design feature. Why press when you could just touch, eh? Well… pressing is actually quite satisfying. You get immediate proprioceptive and tactile feedback from pressing: that small physical movement, exciting nerve endings in the fingertip, reassures the sensory cortex that the button-press has registered, giving the user a sense of confidence and control. The feedback from a capacitive button is largely visual, not tactile – you just see a little LED flash underneath your finger. What effect does this have on our visual attention? Should the eyes not be looking at the interface’s response, rather than at the button itself? Even when capacitive buttons are reinforced with haptic feedback, the interaction and its response time still feel unnatural. You wait: “did it feel me? Did it respond?”

Image of Toggle remote control via Tuvie.
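For what it’s worth, this is roughly what ‘reinforcing’ a capacitive button looks like in code – and it exposes the structural problem: the tactile cue can only be synthesised after the touch has been sensed and dispatched, so the confirmation always trails the finger’s action, unlike a physical button where movement and feedback are a single event. A sketch using the web Vibration API (not supported on all devices; the pulse length is an assumption):

```ts
// Sketch: a capacitive "button" that fakes the missing tactile feedback.
// The vibration fires only after the touch event has been sensed,
// so the haptic confirmation always lags the press itself.
function makeCapacitiveButton(el: HTMLElement, onPress: () => void): void {
  el.addEventListener("pointerdown", () => {
    if ("vibrate" in navigator) {
      navigator.vibrate(10); // 10 ms pulse -- duration is an assumption
    }
    el.classList.add("lit"); // the flashing-LED equivalent: visual feedback
    onPress();
  });
  el.addEventListener("pointerup", () => el.classList.remove("lit"));
}
```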

Why does it matter?

I’m not trying to be an ergonomics killjoy… but I do wonder whether untapped opportunities for real innovation lie in making our interactions with tablets, mobiles and other devices capitalise on our immense physical abilities, coordination and dexterity. By focussing on human capabilities and sensory integration, we can design interfaces and experiences that bring out the best in us – and, on the flipside, avoid interfaces that are unnatural, frustrating or even a risk to our physical wellbeing.

I came across an interesting exercise from the Design by Fire conference, where participants were asked to reconstruct the human body to fit the gestural interfaces it is required to interact with.

Image of participants’ creations via For Inspiration Only.

But wouldn’t it be easier to create products to fit people?

What will the future be?

So, in 20 years, how will we be interfacing with and controlling our devices and environments? How do we design more pleasurable and natural product and user interfaces to work in harmony with the human?

Will it simply be an enhanced physical interaction involving more proprioceptive and tactile feedback to the user as well as coordination between multiple fingers/hands? Will other input tools come into existence to enable the hand to make more precise and finely controlled manipulations?

Will we need an input device at all? Perhaps not. Take a look at the invisible mouse – the ‘Mouseless’ project – developed by Pranav Mistry at MIT.

Will other senses play more of a part in our future interactions? Think, for example, of technological developments in gaze tracking and voice input. Siri on the iPhone 4S has already made a giant leap in improving the latter.

Image via Apple.
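Voice as an input channel is already scriptable today, at least in browsers that expose the (vendor-prefixed) Web Speech API. A minimal sketch, assuming such a browser:

```ts
// Sketch: voice input via the Web Speech API.
// The constructor is vendor-prefixed and not universally available,
// so feature-detect before relying on it.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

if (SpeechRecognitionImpl) {
  const recogniser = new SpeechRecognitionImpl();
  recogniser.lang = "en-GB";
  recogniser.onresult = (event: any) => {
    const transcript = event.results[0][0].transcript;
    console.log(`Heard: ${transcript}`); // hand off to a command handler here
  };
  recogniser.start(); // begins listening via the microphone
}
```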

Or could we even interact, in future, through thought alone – interfacing through the power of our brainwaves? There has already been some interesting progress in this area, both in brainwave-controlled computers, games and apps, and for other purposes, including Japanese fashion wear!
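The consumer EEG toys behind that fashion wear give a hint of the mechanics: the headset typically reduces raw EEG to a single 0–100 ‘attention’ score, and the application simply triggers when it crosses a threshold. A purely hypothetical sketch – readAttentionLevel() stands in for whatever a real headset SDK would provide:

```ts
// Hypothetical sketch of a brainwave-triggered control, modelled on how
// consumer EEG toys behave: raw EEG is reduced to a 0-100 "attention"
// score, and the app thresholds it.
// readAttentionLevel() is a stand-in, not a real SDK call.
declare function readAttentionLevel(): Promise<number>; // 0-100, hypothetical

const ATTENTION_THRESHOLD = 70; // assumed trigger level

async function pollBrainwaves(onTrigger: () => void): Promise<void> {
  for (;;) {
    const attention = await readAttentionLevel();
    if (attention > ATTENTION_THRESHOLD) {
      onTrigger(); // e.g. perk up the brainwave-controlled cat ears
    }
    await new Promise((resolve) => setTimeout(resolve, 500)); // ~2 Hz polling
  }
}
```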

These are just a few ideas… It would be interesting to hear yours.