The Engaging User Experience & the Natural User Interface




My definition of the natural user interface (NUI) is really quite simple: providing more natural, more human-like interactions with computer systems through touch, gesture and voice recognition. In many use cases, like interactive digital signage, these NUI interactions provide a much more engaging user experience.

NUI: Where We Have Been
But before I jump right in to where we are and where we are going, it makes sense to talk a little bit about the past and where we have been. You see, I have some grey hair. I have been doing “this” for a long time. I am a software developer by trade, and although I don’t get to write production code anymore, I have lived the transformation from green-screen computer systems to the magical devices we have today. I was there when the first graphical user interface (GUI) operating systems appeared in the ’80s. And at that time, I was expected to do the most unnatural thing in the world: remove my right hand from the keyboard to grasp that mouse thing. I’m still a “machine gun” of a typist (forced to learn typing at an all-boys Catholic high school), but I can tell you the mouse was frustrating for me and for many others at the time. We were told the mouse was a more natural way to use a computer. Sound familiar? Cut to today: mouse interaction is commonplace and part of our culture. It’s a ubiquitous way to use a computer.

Engaging interactive digital signage driven by gesture

I was also there in late 2006 when Microsoft first introduced touch capability in Windows and provided a beautiful application programming interface (API) for developers to leverage when building touch-enabled applications. That is when my company began building those first generations of NUI applications. There were two main barriers, though, that delayed the NUI revolution:

  1. Although Windows was touch-capable, it was not designed for touch. Even though we could build beautiful touch-enabled applications, the foundation of the operating system was just too hard to use with touch. Hitting the little Start button with your finger was hard enough; hitting the little red X in the top right of a Windows app was nearly impossible.
  2. Multi-touch-enabled display screens were just coming on the market; they were very expensive and didn’t have the beautiful touch fidelity we experience in the screens of today.

It wasn’t until January 27, 2010, over three years later, that touch became the norm. Do you remember that date? That is the day Steve Jobs walked on stage and held the iPad above his head, grasped with two hands like he was presenting the Lion King to the animal kingdom of Africa. And suddenly, using a browser solely by touch became a norm of computing. Who would have thought those folks at Apple would do such a fantastic job with a touch-enabled Safari browser and the iPad OS in general? I would call that the start of the NUI revolution. And that NUI revolution has challenged us with a new component of the way we design good software: user interaction design.

Engaging User Experiences, User Interaction Design and NUI
In 1998, I worked on a server product team at Microsoft, but next door to me were the Microsoft Research guys who were tackling the first voice recognition systems. Realize this was 15 years ago. They were targeting those first systems not only for Windows, but for embedded systems for the automotive industry that are now prevalent in products from companies like Ford Motor Company. What they could never overcome in their 1998 prototypes was the kid in the back seat shouting commands when the system was only supposed to be listening to the parents in the front seat. Today, we can overcome those challenges. With 3D cameras like the $250 Kinect for Windows device, we can engineer software that only listens to the person being tracked and ignores (if applicable) the voices and ambient noise of everyone else. Additionally, in the right environments and with powerful hardware, we can make voice recognition “bulletproof” in almost any language, all at the same time.
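
To make that idea concrete, here is a minimal sketch of gating speech on the person being tracked. It is plain Python rather than any actual camera SDK, and every name and number in it is illustrative; the point is simply that a recognized phrase is accepted only when the microphone array’s sound-source angle lines up with where the tracked person is standing.

```python
import math
from dataclasses import dataclass

# Illustrative sketch only: not the actual Kinect SDK. Assumes the sensor
# reports a sound-source angle from its microphone array and 3D joint
# positions for the tracked person in the same coordinate space.

@dataclass
class TrackedPerson:
    person_id: int
    head_x: float  # meters, left/right offset from the sensor
    head_z: float  # meters, distance from the sensor

@dataclass
class VoiceCommand:
    text: str
    source_angle_deg: float  # direction the microphone array heard the speech from
    confidence: float        # recognizer confidence, 0.0 to 1.0

def person_angle_deg(person: TrackedPerson) -> float:
    """Angle of the tracked person's head relative to the sensor's forward axis."""
    return math.degrees(math.atan2(person.head_x, person.head_z))

def accept_command(command: VoiceCommand,
                   engaged: TrackedPerson,
                   angle_tolerance_deg: float = 10.0,
                   min_confidence: float = 0.7) -> bool:
    """Accept speech only if it came from the direction of the engaged person."""
    if command.confidence < min_confidence:
        return False
    return abs(command.source_angle_deg - person_angle_deg(engaged)) <= angle_tolerance_deg

# A shopper standing roughly in front of the sensor is listened to;
# a voice shouted from 40 degrees off to the side is ignored.
shopper = TrackedPerson(person_id=1, head_x=0.1, head_z=1.8)
print(accept_command(VoiceCommand("show flights", 2.0, 0.9), shopper))   # True
print(accept_command(VoiceCommand("show flights", 40.0, 0.9), shopper))  # False
```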

Additionally, this generation of 3D cameras, like the Kinect, PrimeSense, SoftKinetic, Panasonic and Leap Motion devices, allows skeletal tracking and depth recognition. What does this mean for interactive digital signage? Well, it means you can “wave at it” instead of having to touch it to engage. 3D cameras like the Kinect allow for precise tracking of full-body movement and facial expressions, along with voice recognition. Coming in 2014 is the Kinect 2.0 device, which tracks with such precision that it can measure your heartbeat by reading the pulse in your face.
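
To give a feel for what “wave at it” looks like in software, here is a toy sketch that assumes only that skeletal tracking hands you hand and elbow positions each frame; the joint inputs and thresholds are made up, not tied to any specific SDK. A wave is simply the raised hand swinging from one side of the elbow to the other a few times within a short window.

```python
from collections import deque

# Illustrative sketch only: assumes skeletal tracking supplies hand and elbow
# positions (in meters) once per frame; joint names and thresholds are made up.

class WaveDetector:
    def __init__(self, window_frames: int = 45, min_swings: int = 3):
        self.window = deque(maxlen=window_frames)  # which side of the elbow the hand was on
        self.min_swings = min_swings               # side changes required to call it a wave

    def update(self, hand_x: float, hand_y: float,
               elbow_x: float, elbow_y: float) -> bool:
        """Feed one frame of joint positions; returns True once a wave is detected."""
        if hand_y <= elbow_y:       # the hand must be raised above the elbow
            self.window.clear()
            return False
        self.window.append(1 if hand_x > elbow_x else -1)
        samples = list(self.window)
        swings = sum(1 for a, b in zip(samples, samples[1:]) if a != b)
        return swings >= self.min_swings

# Simulate a raised hand swinging from side to side at roughly 30 frames per second.
detector = WaveDetector()
engaged = False
for frame in range(60):
    hand_x = 0.3 if (frame // 8) % 2 == 0 else -0.3  # alternate sides every 8 frames
    engaged = detector.update(hand_x, hand_y=1.4, elbow_x=0.0, elbow_y=1.2) or engaged
print("wave detected:", engaged)  # True
```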

I predict that within a few years, gesturing at computers is going to become the norm just like touching computers has become the norm. We are starting to see more and more gesture-capable interactive digital signage appear all over the world. The obvious use cases for gesture are environments where bacteria or pathogens make touching a screen unsafe; public places like airports, subways and bus stations are prime candidates. City governments around the world have already begun legislating the elimination of touchscreens from public places, paving the way for gesture.

My definition of NUI is touch, voice and gesture. Within a decade we’ll have neural interfaces too, but thinking at interactive digital signage is way too futuristic for this discussion.

NUI – the natural user interface

Pushing the Laws of Privacy
These 3D cameras and the software we build for them are going to push the limits of privacy law. Sure, gesturing at computers is a viable way to engage with interactive digital signage. But these 3D cameras are also looking at you. And because they are getting so good, demographic identification is becoming pretty rock solid. We can identify age, race and gender pretty easily with these amazing devices. Facial recognition, identification and authorization are also very realistic with 3D cameras. Under much of the world’s current privacy law, “opting in” is necessary for demographic identification; the world’s largest country really doesn’t have any privacy laws. Rhetorical question: Do advertisers want to know who is looking at their ads, and for how long, so that they can target the right demographic more appropriately? Let me give you three not-so-futuristic use cases to think about.

Use Case 1: Would you agree to be tracked by these 3D cameras in a department store like Macy’s? Tracked throughout the store so we know what you look at, what you purchase and what you don’t? Upsold based on a detailed loyalty profile built from real-time big data? Probably not. But my wife would, if it meant she was going to get a 40% off coupon. Realize, if you are reading this you are probably not the consumer. You don’t represent the consumer demographic. You are either the technology elite or an interactive digital signage expert.

Use Case 2: How many times have you stood there staring at the departing-flights board in the airport? How much of your time has been wasted looking up at that board waiting to find out your gate number or boarding time? It would make total sense to be able to walk up to it and touch it, quickly scrolling through hundreds of flights to find yours. My home airport is LAX, and if you have been to LAX you know touching one of those screens is just not safe. It’s a disgusting place. So gesturing through the hundreds of flights would make a lot of sense. Even speaking terms like “San Francisco” or “Flight 1214” or “United” and having the system quickly find the appropriate flights would make sense. That is, it would make sense if it weren’t LAX and you didn’t have 20 other people shoulder to shoulder staring at the same screen. Multiple, smaller screens side by side in all that wall real estate in the airport would accommodate gesture and voice in this use case a lot better.
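
The speech side of that interaction is not exotic, either. Here is an illustrative sketch, with made-up flight data rather than a real airport feed, of how a recognized phrase like “San Francisco,” “Flight 1214” or “United” could filter a departures list:

```python
from dataclasses import dataclass

# Illustrative sketch only: made-up flights and field names, not a real airport feed.

@dataclass
class Flight:
    airline: str
    number: str
    destination: str
    gate: str
    boards_at: str

FLIGHTS = [
    Flight("United", "UA1214", "San Francisco", "71A", "14:05"),
    Flight("United", "UA212", "Newark", "70B", "14:40"),
    Flight("Delta", "DL463", "Atlanta", "53", "15:10"),
]

def match_spoken_phrase(phrase: str, flights: list) -> list:
    """Return flights whose airline, number or destination contains the spoken phrase."""
    needle = phrase.lower().replace("flight ", "").strip()
    return [
        f for f in flights
        if needle in f.airline.lower()
        or needle in f.number.lower()
        or needle in f.destination.lower()
    ]

# "Flight 1214", "San Francisco" and "United" all narrow the board immediately.
for f in match_spoken_phrase("Flight 1214", FLIGHTS):
    print(f"{f.airline} {f.number} to {f.destination}: gate {f.gate}, boards {f.boards_at}")
```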

Use Case 3: Consider the use case above in this environment: airline clubs at airports. In the club you have a much smaller set of frequent, savvy travelers. What if I, as a United 1K frequent traveler, profiled myself on the United web site and opted in to facial recognition in the United Clubs at airports around the world? On the United site, the web application takes my picture, that picture is run through facial recognition algorithms, and the resulting profile is stored in United’s cloud services. Then, when I walk into a United Club at LAX, I simply stand in front of the departing-flights screen, it identifies me and gives me my flight information. Futuristic? Not so much. We have already built it.
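
Under the covers, that opted-in experience is an enrollment-and-match problem. The sketch below is purely illustrative, with toy numbers and hypothetical member IDs, and assumes the face embeddings come from whatever facial recognition service is used; it just shows how the person standing at the screen gets matched against profiles captured at opt-in:

```python
import math
from typing import Optional

# Illustrative sketch only: toy 4-number "embeddings" and hypothetical member IDs.
# In practice the embedding would come from whatever facial recognition service
# the airline uses; this just shows the enroll-then-match shape of the problem.

ENROLLED = {
    # member_id -> face embedding captured when the member opted in on the web site
    "UA-1K-0042": [0.11, 0.80, 0.35, 0.47],
    "UA-1K-0097": [0.62, 0.10, 0.71, 0.31],
}

def cosine_similarity(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(live_embedding, threshold: float = 0.9) -> Optional[str]:
    """Return the enrolled member whose embedding best matches, if it is close enough."""
    best_id, best_score = None, 0.0
    for member_id, enrolled in ENROLLED.items():
        score = cosine_similarity(live_embedding, enrolled)
        if score > best_score:
            best_id, best_score = member_id, score
    return best_id if best_score >= threshold else None

# The camera at the departures screen produces an embedding close to an enrolled one.
print(identify([0.12, 0.79, 0.36, 0.45]))  # "UA-1K-0042"
print(identify([0.90, 0.90, 0.05, 0.05]))  # None: no confident match, stay anonymous
```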

Gesture and voice recognition in departing flights information

Summary
I’m excited to come to DSE again to talk about these exciting and engaging technologies, and to speculate about the future of NUI and engaging interactive digital signage. But I’m not just going to talk about it: at DSE 2014 I will demo many of these technologies live on stage. I hope to see you there, and I’m looking forward to discussing these exciting and revolutionary times.


Author Tim Huckaby, founder & chairman of InterKnowlogy, will be presenting Seminar 5, entitled "The Engaging User Experience & The Natural User Interface," at Digital Signage Expo on Wednesday, February 12, 9:00-10:00am. DSE 2014 will be held at the Sands Expo & Convention Center in Las Vegas, February 12-13. For more information about DSE, or to register for this or any other educational seminar or workshop and learn about digital signage, go to www.dse2014.com.



Tim Huckaby is chairman and founder of Actus Interactive Software and InterKnowlogy, experts in the natural user interface and engaging user experiences in software. Tim has worked on and with product teams at Microsoft for many years, has authored books and several publications, and is a frequent conference speaker. You can reach him at TimHuck@InterKnowlogy.com.
