The Next Big Thing: Gesture in Wearable Technologies

Smartphones have dramatically changed the way consumers interact with technology. What kind of new application will impact our daily life this year?

“Wearable technologies,” also known as body-borne computers or wearable computers, are gaining momentum. According to recent research from IMS Research, a subsidiary of IHS, the market for wearable wireless devices is expected to reach at least $6 billion in revenue within the next four years. Wearable technologies are miniature electronic devices worn by a user under, with or on top of clothing. Demand for real-time data, including personal health information, is driving a market expected to grow from 14 million devices in 2012 to as many as 171 million in 2016. In terms of form factor, wearable technologies can be grouped into three categories: wrist wear, eyewear and head-mounted displays (HMDs). Other styles of wearable technology exist for specific domains, but these three form factors adapt most easily to a wide range of end users, from industrial to consumer.

Types of Wearable Technology
Currently, some types of wearable technology stem from the healthcare and fitness sectors. The demand for real-time data creates the need for sensors that transmit vital signs or track performance in simple, convenient products. Wearable devices typically integrate multiple passive or smart sensors (gyroscope, accelerometer, temperature, pressure, 9-axis motion and global positioning system (GPS) sensors), aggregated through a sensor hub, to capture the user's environment. These sensors can be used to deliver friendlier user interfaces (UIs) and user experiences (UXes) alongside other control methods. As the popularity of wearable devices grows, they will eventually incorporate smart biosensors that understand, and even predict, a user's feelings, decision-making processes and vital signs.
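As a concrete illustration of combining two of these sensors, the sketch below fuses a gyroscope rate with an accelerometer tilt reading using a complementary filter, a common trick in sensor hubs. The function name, weights and sample values are illustrative assumptions, not from any particular device.

```python
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse a gyro rate (deg/s) with an accelerometer tilt estimate.

    The gyro integrates smoothly but drifts over time; the accelerometer
    is noisy but drift-free.  Blending the two with weight `alpha` yields
    a stable tilt angle.
    """
    gyro_angle = angle + gyro_rate * dt                       # short-term: integrate gyro
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))  # long-term gravity reference
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# Example: the device is held still and level, but a drifting gyro keeps
# reporting 1 deg/s while the accelerometer correctly reads 0 deg of tilt.
angle = 0.0
for _ in range(100):
    angle = complementary_filter(angle, gyro_rate=1.0, accel_x=0.0, accel_z=1.0, dt=0.01)
# The accelerometer term keeps the estimate bounded instead of drifting to 1 deg.
```

Pure gyro integration over the same second would report a full degree of (nonexistent) rotation; the fused estimate stays well below that.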

Figure 1: Wearable glasses

Thrill-seekers who enjoy active sports such as skiing, biking, surfing, skydiving and paragliding always want to record unique, dynamic scenes from those activities. Wearable devices in this domain allow more room for UI/UX in terms of form factor, but because the activities themselves are so dynamic, voice commands and basic gestures are preferred over embedded control devices. For casual travelers, video recording or still-image capture is a must-have feature for recording scenic views and personal moments, and more frequent UI/UX such as location information and language translation is requested. Gesture will be widely used across ecosystem and platform applications, along with voice commands and even natural-language processing (semantic translation beyond limited token-word recognition).

Speech-recognition technology has improved remarkably with advanced algorithms and noise-reduction techniques such as beamforming, and its reliability continues to improve with artificial-intelligence servers and smart semantic analysis and parsing. In wearables, voice command is one of the mandatory control methods. Today it is typically limited to key token-word recognition computed entirely on the client, but it will quickly evolve toward more personal natural-language processing backed by cloud-server connectivity. Voice and gesture will go hand-in-hand in most wearable devices, since a microphone and camera are default peripherals.
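The split between client-side token words and cloud-based natural language could be sketched as a simple router: known command words are handled on-device, and anything else is deferred to a server. The command vocabulary and function name here are hypothetical.

```python
# Hypothetical on-device command vocabulary for a wearable voice UI.
LOCAL_COMMANDS = {"record", "stop", "photo", "volume up", "volume down"}

def route_utterance(text):
    """Handle known token words on-device; defer free-form speech to the cloud."""
    phrase = text.strip().lower()
    if phrase in LOCAL_COMMANDS:
        return ("local", phrase)   # fast, offline token-word match
    return ("cloud", phrase)       # full natural-language processing on a server

print(route_utterance("Record"))                            # -> ('local', 'record')
print(route_utterance("show me the nearest coffee shop"))   # -> ('cloud', ...)
```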

The most common form factors include:

  • Wrist wear

Wrist wear is a watch-like form factor; with added functionality and connectivity, it is often simply called a smart watch. Wrist-wear interaction is mostly based on touch and buttons. Because of its form factor and limited display and interaction options, wrist wear delivers a low level of computing and UI/UX among wearables. Gesture could be considered for specific use cases but would be optional.

  • Eyewear

Eyewear is a glasses-like form factor that includes a see-through display interface and a camera, with either a single or dual display. Eyewear is the hottest application in wearables, driven by major players because its use cases fit well into daily life, and it delivers mid-level computing and UI/UX. There are several views of how eyewear will appeal to users: as a fashion and style statement (a portable, personal teleconference system, or a ubiquitous small office available at any time), as a new form factor of digital camera, as a sporty gadget, or even as an always-on personal security device for identification and protection.

  • Head-mounted display (HMD)

HMDs fall into two subdomains: industrial/military and consumer. The industrial use case is more flexible in form factor and UI/UX; its specified purpose matters more than comfort. An industrial HMD may include multiple displays (see-through, blocked, or both combined) and multiple UI/UX methods to process urgent, critical information. Voice command is mandatory for reliable, fast processing of user input. For gesture in industrial HMDs, accuracy and fast response are critical, so a 3D-sensor-based gesture solution is likely to be adopted at the expense of form factor, power budget and cost. Consumer HMD use cases mainly involve watching video on a large virtual screen with privacy and portability. A consumer HMD therefore shares eyewear's limitations of style, size and comfort, though with somewhat more UI/UX flexibility. Gesture could be adopted alongside voice command, but its usage would be limited to basic movie controls until the use case extends to a pilot-cockpit-like experience with total immersion in virtual reality, such as immersive gaming, virtual shopping and virtual fitting. HMDs can deliver a high level of computing and UI/UX among wearable devices.

Gesture Recognition in Wearable Technology
No matter the form factor, wearable technology must be easy to wear and stylish. Embedded designers need to keep style, design, weight and size in mind, because all are key to a successful product. As a result, eyewear is the most challenging application from an engineering point of view: because of its form-factor limitations, even minimized input controls such as buttons and a track pad would block wide adoption among general users. Here, gesture recognition can serve as the primary UI or UX without extra embedded control devices. Gesture recognition can eliminate plastic buttons and track pads entirely, allowing a skin-based graphical UI/UX on the display with, at most, a touch- and/or slide-type control on a metallic side frame. No buttons are needed.

Figure 2: User interface (UI) button selection with AR

The most important factors for wearables' success in a mass-volume market like smartphones are form factor, design and comfort. This raises many engineering challenges that must be solved or mitigated. Given these constraints, gesture-based UI/UX can replace conventional buttons and peripheral-based control methods.

Most wearable-technology use cases combine a live view, either directly through glass or through a camera (technically a video plane), with a graphic layer that shows additional information on a see-through display. Here, augmented reality (AR) technology can deliver a new experience by merging the real world with virtual graphic objects. Virtual objects in AR can be touched, moved or otherwise manipulated with hand gestures. Figure 2 shows how virtual buttons in AR are displayed and controlled by hand gestures, and Figure 3 shows how a basic swiping gesture controls basic UI/UX.
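Selecting an AR virtual button with a tracked fingertip reduces to a hit test of the fingertip position against each button's screen region. The sketch below assumes a hypothetical button layout in normalized display coordinates; the names and rectangles are illustrative.

```python
# Hypothetical AR button layout: each button is an axis-aligned rectangle
# (x0, y0, x1, y1) in normalized display coordinates, 0..1 on each axis.
BUTTONS = {
    "capture": (0.05, 0.05, 0.25, 0.20),
    "zoom":    (0.75, 0.05, 0.95, 0.20),
}

def hit_test(finger_x, finger_y):
    """Return the virtual button under the tracked fingertip, or None."""
    for name, (x0, y0, x1, y1) in BUTTONS.items():
        if x0 <= finger_x <= x1 and y0 <= finger_y <= y1:
            return name
    return None

print(hit_test(0.10, 0.10))   # -> 'capture'
print(hit_test(0.50, 0.50))   # -> None (no button in the center of the view)
```

In a real system the fingertip coordinate would come from the gesture pipeline, and a dwell time or pinch would confirm the selection to avoid false triggers.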

Challenges for Embedded Designs
Wearable technology has been around for a few years, but it gained traction only recently, once processors offered enough computing power, wireless connections became more prevalent and materials became cost-effective enough to keep the bill of materials (BOM) low. There are several challenges, however, that designers need to consider when developing a wearable device.

Figure 3: Swiping (photos, charts, volume up, etc.)
Figure 4: Focus area selection
  • Camera’s field of view (FOV) and focus

For the active-sports fan or fitness enthusiast, gesture recognition must share the embedded camera originally targeted at video recording or still-image capture. The camera's field of view and focus are therefore configured for general use, not for gesture, which poses a big challenge in wearable technology. The spatial region available for gesture in wearables is much more limited than in general gesture applications on televisions, game consoles, tablets and smartphones, because of the camera's viewing angle and short covered depth range. The camera's focus should change dynamically when a gesture activation or a hand is detected within the viewing angle.
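The usable gesture volume can be modeled as the intersection of the camera's angular field of view and a short depth band at roughly arm's length. The sketch below checks whether a detected hand falls inside that volume; the FOV and depth limits are illustrative assumptions, not taken from any specific camera.

```python
def in_gesture_region(x_angle_deg, depth_m, fov_deg=70.0,
                      min_depth=0.2, max_depth=0.7):
    """Rough check that a detected hand sits inside the camera's usable
    gesture volume: within the horizontal field of view and inside the
    short depth band an arm's-length gesture occupies.

    x_angle_deg: horizontal angle of the hand off the optical axis.
    depth_m:     estimated distance of the hand from the camera.
    """
    within_fov = abs(x_angle_deg) <= fov_deg / 2.0
    within_depth = min_depth <= depth_m <= max_depth
    return within_fov and within_depth

print(in_gesture_region(0.0, 0.4))    # hand centered at 40 cm -> True
print(in_gesture_region(50.0, 0.4))   # outside the 70-degree FOV -> False
print(in_gesture_region(0.0, 1.5))    # too far for an arm's-length gesture -> False
```

A positive result here is also the natural trigger for the dynamic focus change the text describes: switch the camera from its general-purpose focus to the near-field gesture range.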

  • Video/image stabilization

Gesture algorithms are very sensitive to motion, and in many cases motion changes across frames are recognized as an activation gesture. Because wearable technology is always moving, whether moderately or dynamically, image stabilization is a problem specific to wearables on top of the challenges gesture technology already faces with a fixed camera. Delivering stabilized video and images is the starting point before the main gesture algorithms kick in.
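One simple way to separate head shake from true hand motion is to estimate the global (camera) displacement as the median movement of tracked points, then subtract it, so only the hand's own motion remains. This is a minimal sketch of that idea; a production system would track real image features (e.g., with optical flow) rather than hand-picked points.

```python
def global_offset(prev_pts, curr_pts):
    """Estimate camera shake as the median displacement of tracked points.
    The median is robust to a minority of points (the hand) that move
    independently of the camera."""
    dx = sorted(c[0] - p[0] for p, c in zip(prev_pts, curr_pts))
    dy = sorted(c[1] - p[1] for p, c in zip(prev_pts, curr_pts))
    mid = len(dx) // 2
    return dx[mid], dy[mid]

def stabilize(curr_pts, offset):
    """Remove the global offset so only true object motion remains."""
    ox, oy = offset
    return [(x - ox, y - oy) for x, y in curr_pts]

# Example: a head shake moves every point by (5, -3); the hand (last point)
# additionally moves 4 pixels to the right on its own.
prev = [(0, 0), (10, 0), (0, 10), (10, 10), (20, 20)]
curr = [(x + 5, y - 3) for x, y in prev]
curr[-1] = (curr[-1][0] + 4, curr[-1][1])

offset = global_offset(prev, curr)        # recovers the (5, -3) shake
stable = stabilize(curr, offset)          # background points now match prev;
                                          # only the hand shows residual motion
```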

  • Camera facing a user from behind

Until now, gesture technology has focused on the use case of a camera facing the user from the front. Gesture in wearables has exactly the opposite geometry: the camera faces away from the user and sees the user's hands from behind. Many existing approaches that analyze the user's hand or body skeleton therefore won't fit wearables without modifying existing gesture-analysis methods or developing new algorithms targeting wearable use cases.

  • Palm-based gesture versus finger-based gesture

Palm-based and finger-based gestures each have advantages and disadvantages, so the two approaches are widely used but typically selected exclusively of one another, depending on the characteristics of the underlying technology driving the gesture UI/UX. In wearables, a new consideration arises from the camera's uniquely limited field of view: a finger-based gesture may be preferred because a palm can block more of the user's viewing angle than a finger. A palm-based gesture might also limit the flexibility of a UI/UX layout designed specifically for wearable gesture, because of possible false selections or interference with AR-associated virtual buttons.

Figure 5: Finger-based gesture
  • Waving gesture

Hand tracking is usually triggered by a user's activation motion, such as waving, especially with 2D sensor input. This is unfriendly and inconvenient when the activation gesture fails to trigger tracking. In wearables, knowing the likely region of hand movement and approximately estimating the shape and size of the user's hand within that region could allow the activation gesture to be removed altogether, detecting the user's hand position and shape much as a 3D-sensor-based gesture system does with depth information. Once hand tracking is enabled, the gesture system can map the user's hand position onto the UI/UX.
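For systems that do rely on a waving activation, the gesture is essentially an oscillation: the hand must reverse direction several times with enough travel per swing. A minimal classifier over a short trace of hand x-positions might look like this; the thresholds are illustrative assumptions.

```python
def is_wave(x_positions, min_reversals=3, min_swing=5.0):
    """Classify a short trace of hand x-positions (pixels) as a wave:
    the hand must reverse direction `min_reversals` times, and each
    counted swing must travel at least `min_swing` pixels (small jitter
    between frames is ignored)."""
    reversals = 0
    prev_dir = 0
    for a, b in zip(x_positions, x_positions[1:]):
        step = b - a
        if abs(step) < min_swing:
            continue                       # ignore sub-threshold jitter
        direction = 1 if step > 0 else -1
        if prev_dir and direction != prev_dir:
            reversals += 1                 # hand changed direction
        prev_dir = direction
    return reversals >= min_reversals

print(is_wave([0, 20, 0, 20, 0]))    # back-and-forth motion -> True
print(is_wave([0, 20, 40, 60]))      # steady sweep, no reversals -> False
```

Replacing this trigger with region- and shape-based hand detection, as the text suggests, removes the failure mode where a genuine wave goes unrecognized and the UI never activates.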

  • Gesture + sensors

Wearables integrate various sensors, such as a gyroscope, accelerometer, temperature and pressure sensors and GPS, and some of these can improve the gesture experience and its reliability. For example, a proximity sensor can trigger the hand-tracking system without an activation gesture by detecting a hand near the device, or at least wake the gesture software engine from sleep mode. A gyroscope or accelerometer can detect head motion such as nodding, enabling a secondary clicking event alongside hand-tracking gestures.
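The proximity-sensor wake-up described above amounts to a small state machine: a near reading wakes the gesture engine, and a sustained far reading puts it back to sleep to save power. The states, threshold and function name below are illustrative.

```python
# Hypothetical wake logic for the gesture engine on a pair of smart glasses.
SLEEP, TRACKING = "sleep", "tracking"

def next_state(state, proximity_cm, wake_threshold_cm=30.0):
    """Advance the gesture engine's power state from one proximity reading."""
    if state == SLEEP and proximity_cm < wake_threshold_cm:
        return TRACKING   # hand raised near the device: start hand tracking
    if state == TRACKING and proximity_cm >= wake_threshold_cm:
        return SLEEP      # hand withdrawn: put the gesture engine to sleep
    return state          # no transition

state = SLEEP
state = next_state(state, 10.0)    # hand approaches -> 'tracking'
state = next_state(state, 50.0)    # hand withdrawn -> 'sleep'
```

A real implementation would debounce the transitions (e.g., require a few consecutive readings) so a hand passing briefly through the sensor's range does not toggle the engine.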

Wearables have long been used in military and some industrial applications, and recent technology has solved many of their engineering problems. For example, innovative display and low-power technologies allow more compact and comfortable wearable applications, opening the door to unlimited possibilities in the consumer market. Technology innovation can deliver a total-immersion experience to wearable users and could open the door to the next mobile application after the smartphone and tablet, with more natural and intuitive UI/UX built on gesture, voice and even emotion/mind detection.


Dong-Ik Ko is a principal system engineer and technical lead in the vision business unit (3D vision, gesture, wearables and robotics) at Texas Instruments Incorporated. He has more than 19 years of industry and academic research experience in embedded system design and optimization methods, and has published 23 IEEE papers, articles and U.S. patents. He holds a doctorate in computer engineering from the University of Maryland, College Park.
