UX Challenges in Touch Interfaces
As mobile devices increasingly take the place of the laptop or home computer for basic apps and web access, developers are struggling to let go of the mouse as the primary interface device.
On Hover
Think about web sites. So many sites rely on the mouse hovering over an element, such as the site navigation, to initiate some interaction. With a finger, you can't hover. You can't interact with the device without touching it, and then it registers, to use the parlance of the mouse-centric world, as a click. When we flick our fingers we're just clicking and dragging. We can even double-click with a double-tap. We cannot, however, hover. Granted, some advanced interfaces mimic the hover with a tap-and-hold approach, but that's cumbersome for many, especially since hover-friendly interface elements were developed partly for their immediacy. Tap-and-hold also collides with the one-button mouse convention (thanks, Apple), where a click-and-hold is analogous to a right-click on other platforms. Platforms with more mouse buttons. So how do you know whether the browser or application interface will register a tap-and-hold as a hover or as a right-click without testing it? And what do you do when you find it's different for each?
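About the only way to find out is to instrument an element and watch what actually fires. Here is a minimal diagnostic sketch in JavaScript; the element id test-target is a hypothetical placeholder, and the order (and presence) of events will vary by platform and browser, which is the point.

    var target = document.getElementById('test-target');
    var events = ['mouseover', 'mousedown', 'contextmenu', 'click', 'touchstart', 'touchend'];
    for (var i = 0; i < events.length; i++) {
      target.addEventListener(events[i], function (e) {
        console.log('fired: ' + e.type);  // compare this log across devices
      }, false);
    }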
On a mobile phone, maybe not a big deal. Although my mobile browser can display fly-out menus (even if there is no convenient way to initiate them), I probably won't be doing too much of that given how awkward it is. But now that tablet devices are falling off trucks, things are changing. As Apple pushes the iPad as a web browsing device, users are faced with web sites that were never designed for interaction from a user who is essentially using a broken mouse. Some menus don't work without a hover (a click on the top menu item results in nothing, because it's not a link), and Flash and JavaScript-powered elements (what you might blithely call HTML5) that rely on a hover (like those awful ad banners that, on one stray swipe of the mouse, take over my screen and start blaring video) become useless.
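One common workaround, sketched below, is to intercept the first tap on a top-level menu item and use it to open the submenu, letting a second tap follow the link. The markup assumed here (a list item with class has-submenu containing a link and a nested ul) is hypothetical, as is the CSS rule that would make ul.open visible.

    var links = document.querySelectorAll('li.has-submenu > a');
    for (var i = 0; i < links.length; i++) {
      links[i].addEventListener('touchend', function (event) {
        var submenu = this.parentNode.querySelector('ul');
        if (submenu && !submenu.classList.contains('open')) {
          event.preventDefault();         // swallow the first tap
          submenu.classList.add('open');  // reveal the submenu instead
        }                                 // a second tap falls through to the link
      }, false);
    }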
If you are one of those web developers who thinks it's a fun discovery experience to camouflage links in the content as regular text (no underline and no color change), relying solely on the user's mouse to discover links by accident, your days are numbered. That unfortunate interface witticism will now render entire blocks of content flat, making it unlikely users will ever discover your hyperlinks, costing you clicks and possibly even users who see no depth to your content.
Meatsticks Are the New Mouse
I'm also faced with the dilemma that, even with my dainty girlish hands, my fingers are far too fat to fit within the few pixels occupied by the tip of a mouse pointer. All those years of design predicated on tiny buttons and links with no dead space between them (thank you, jerks, you should have listened to me a decade ago) mean I spend a lot of time pecking the stop button after a mis-tap-click-punch.
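If you want a rough sense of how hostile a page is to fingers, a quick audit script can flag undersized targets. A sketch follows; the 44-pixel threshold echoes Apple's oft-cited guidance, and both the number and the element list are assumptions you should adjust.

    var MIN = 44;  // minimum comfortable tap target, in CSS pixels (an assumption)
    var targets = document.querySelectorAll('a, button, input, select');
    for (var i = 0; i < targets.length; i++) {
      var box = targets[i].getBoundingClientRect();
      if (box.width > 0 && (box.width < MIN || box.height < MIN)) {
        console.warn('Tiny tap target: ' +
          Math.round(box.width) + 'x' + Math.round(box.height), targets[i]);
      }
    }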
What happened to copying and pasting? If I have trimmed my fingernails, I can't even get my selection started in the correct spot of the paragraph. Once I have manipulated the selection down to just the characters I want, how do I maintain that selection and choose to copy? Some devices immediately pop up a menu which, if missed, may release your selection.
I'm interested in interfaces that don't leave my hand blocking my view as I perform tap-and-hold maneuvers, flick to scroll, or read with my hand the way novice users do with their mouse. I'm not sure we know yet how to do that.
What We've Got Now
How web sites can leverage multi-touch is still mostly unexplored. Flash 10.1 may now support it, but the browsers don't. Even then, the average web developer has only personal experience to guide him or her. This leaves the door wide open for wildly divergent implementations depending on the platform, application wrapper, browser, and developer. Add the ability to hack some of those behaviors, and it gets even more complicated.
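To illustrate the mess: about the best a developer can do today is feature-detect and watch what arrives. A minimal sketch, assuming a browser that implements the touch events model (and noting that some browsers expose touch events without a touch screen):

    if ('ontouchstart' in window) {
      document.addEventListener('touchstart', function (event) {
        // event.touches lists every active finger, so its length is a
        // crude indicator of whether multi-touch is actually reaching the page
        console.log(event.touches.length + ' finger(s) down');
      }, false);
    } else {
      console.log('No touch events here; mouse events only.');
    }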
The current solution for web developers is just to make a mobile version of the web site. When dealing with lilliputian screens, that may even make sense: reducing overall bandwidth by not sending as many images or as much CSS, removing interface elements that require mouse hovering, or removing layout restrictions (probably wrongly) created for desktop screens (display size, font sizes, and so on). The simple argument there is to design for mobile devices first (which you should do anyway, as I outline in Luke Wroblewski on Mobile First). Sadly, that doesn't always account for the interface intricacies of touch screens.
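In code, mobile-first means the lean experience is the default and heavier assets are layered on only when the environment warrants it. A sketch, where the script path and the 768-pixel breakpoint are placeholder assumptions:

    function loadEnhancements() {
      var script = document.createElement('script');
      script.src = '/js/desktop-extras.js';  // hypothetical path
      document.getElementsByTagName('head')[0].appendChild(script);
    }

    // matchMedia may not exist in older browsers, hence the guard; without
    // it, the page simply stays in its lean, mobile-first form
    if (window.matchMedia && window.matchMedia('(min-width: 768px)').matches) {
      loadEnhancements();
    }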
None of this is really new (we've had touchscreens in one form or another for a long time now), but their ubiquity in people's pockets means the problem now lands on average developers (web and otherwise) who are new to it, where it was previously a concern only for those trained in building for touch screens. So few developers are trained in user interfaces at all, let alone touch interfaces, that it may be some time before things catch up. This can only lead to confusion as users visit one web site and get one experience, get a different experience on another site, and yet another experience using an app.
When the Screen Taps Back
Many touchscreen devices don't offer tactile response. Sure, your device can mimic simple tactile responses for a ball-in-a-maze game, but that doesn't come close to letting you feel whether the on-screen keyboard registered your tap on the correct key. The need for some sort of feedback is a well-understood concept; it's why the power window switch in your car clicks when you use it, and why your gas cap clicks when you have put it on all the way. These minor items act as cues, and without them we'd have to watch the window to know it's going down, or just keep turning the gas cap until our fingers are raw.
Haptic technology (using vibrations to convey information through the sense of touch) is the next step for robust touch interfaces. Some devices already use vibrations to confirm user interaction. The step after that is focusing the vibration on specific locations on the screen, giving users immediate feedback not only that contact with the screen registered, but where it registered, and even how it registered (was something highlighted, did you just scroll, did you launch an app?).
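Some of this is starting to surface in the web stack. A minimal sketch using the Vibration API (navigator.vibrate), which many devices and browsers don't expose, hence the guard; it can confirm that a tap registered, though it cannot yet target a specific spot on the screen:

    document.addEventListener('touchend', function () {
      if (navigator.vibrate) {
        navigator.vibrate(20);  // a 20 ms pulse as tactile acknowledgement
      }
    }, false);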
In time, this could give people with physical impairments more control over a device, letting them tweak its touch sensitivity while haptic feedback conveys whether that customization is calibrated properly. It could potentially even help blind users, whether through braille feedback (probably pretty far off) or other touch-based cues about information on the screen.