What is Gesture Recognition in Simple Terms
Gesture recognition is a technology that feeds real-time data to a computer so it can execute the commands the user intends.
Users do not need to press keys or tap a touch screen to perform a specific action.
The device's motion sensor can perceive and interpret the person's movements as the primary source of data input.
Banuba's Touchless Interface System is a good example. It is a prototype touchless interface built on our hand tracking and gesture detection technology.
Users can interact with a screen without touching it.
All they need to do is make specific hand gestures, which allow them to:
- browse items
- select a particular option
- take and confirm actions
- tap and hold virtual AR buttons.
How Hand Motion Detection Works
Let's now discuss how a gesture recognition system works technically, taking Banuba's technology as an example.
We've developed our hand tracking solution on neural networks: a stack of two deep-learning models.
The first is a neural network that is trained and configured to identify hands in images taken with 2D mobile device cameras.
The second is a network that precisely detects 11-21 hand key points in real time and then tracks the hand.
Both networks are trained on manually annotated datasets. At runtime, the input frames are scaled down and sent to the first network, which detects and localizes a user’s hand.
If the hand is detected, the Hand Gesture Model is activated to predict key points and track the position and attitude of the hand.
Additional regression-based models may be applied to predict gestures such as fists or palms. The detected gestures can then be used as part of the user interface in mobile applications.
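The two-stage flow above can be sketched in Python. In the real system each stage is a trained neural network; here `detect_hand`, `predict_keypoints`, and `classify_gesture` are hypothetical stand-ins that only mimic the data flow: downscale the frame, find the hand, predict key points, then classify the gesture.

```python
from typing import List, Optional, Tuple

Frame = List[List[int]]        # toy grayscale frame standing in for a camera image
Point = Tuple[float, float]
Box = Tuple[int, int, int, int]

def downscale(frame: Frame, factor: int = 2) -> Frame:
    # The runtime first scales the input frame down before detection.
    return [row[::factor] for row in frame[::factor]]

def detect_hand(frame: Frame) -> Optional[Box]:
    # Stand-in for the detection network: returns a bounding box
    # (x0, y0, x1, y1) around nonzero "hand" pixels, or None.
    coords = [(x, y) for y, row in enumerate(frame)
              for x, v in enumerate(row) if v > 0]
    if not coords:
        return None
    xs, ys = zip(*coords)
    return (min(xs), min(ys), max(xs), max(ys))

def predict_keypoints(box: Box) -> List[Point]:
    # Stand-in for the key-point network: the real model regresses
    # 11-21 hand landmarks; here we just return the box corners and centre.
    x0, y0, x1, y1 = box
    return [(x0, y0), (x1, y0), (x0, y1), (x1, y1),
            ((x0 + x1) / 2, (y0 + y1) / 2)]

def classify_gesture(keypoints: List[Point]) -> str:
    # Stand-in for the regression-based gesture model: a wide point
    # cloud reads as "palm", a compact one as "fist".
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    spread = (max(xs) - min(xs)) * (max(ys) - min(ys))
    return "palm" if spread >= 9 else "fist"

def process_frame(frame: Frame) -> Optional[str]:
    small = downscale(frame)
    box = detect_hand(small)
    if box is None:
        return None            # no hand: skip the heavier models entirely
    return classify_gesture(predict_keypoints(box))
```

Note the gating: the key-point and gesture models only run when the detector actually finds a hand, which is what keeps the pipeline cheap on frames without hands.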
Here is a step-by-step description of how Banuba's gesture recognition algorithm works.
Step 1: Detecting a hand
In the first stage, the neural network responsible for identifying hands in images is used to detect a user’s hand in a region of interest.
The Encoder neural network is activated to predict key points if the hand is detected.
Step 2: Detecting a position
A separate algorithm detects the position and attitude of the hand, and regression models are used to predict hand gestures, e.g., a fist or palm.
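As an illustration of this step, a hand's 2D position and rotation can be estimated directly from the predicted key points. The key-point convention below (index 0 as the wrist, last index as a fingertip) is an assumption for the sketch, not Banuba's actual landmark layout.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def hand_pose(keypoints: List[Point]) -> Tuple[Point, float]:
    # Position: the centroid of all key points.
    cx = sum(p[0] for p in keypoints) / len(keypoints)
    cy = sum(p[1] for p in keypoints) / len(keypoints)
    # Attitude: the angle of the wrist-to-fingertip vector, in degrees.
    # Assumed convention: keypoints[0] is the wrist, keypoints[-1] a fingertip.
    (wx, wy), (tx, ty) = keypoints[0], keypoints[-1]
    angle = math.degrees(math.atan2(ty - wy, tx - wx))
    return (cx, cy), angle
```

Tracking the centroid across frames gives the hand's trajectory; tracking the angle gives its rotation.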
Step 3: Executing commands
Finally, the detected gestures are used as application control elements.
The hand gestures performed by a person are compared to the gesture library stored on the device; once a match is found, the computer executes the command mapped to that specific gesture.
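That matching step can be pictured as a lookup table from gesture labels to commands. The gesture and command names below are hypothetical; as the article notes, any command could be bound to any gesture.

```python
from typing import Optional

# Hypothetical gesture library: detected gesture label -> application command.
GESTURE_LIBRARY = {
    "palm": "pause",
    "fist": "select",
    "swipe_left": "previous_item",
    "swipe_right": "next_item",
}

def execute(detected_gesture: str) -> Optional[str]:
    # Only a match in the stored library triggers a command;
    # unrecognized gestures are simply ignored.
    return GESTURE_LIBRARY.get(detected_gesture)
```

In a real application the returned command would dispatch to UI handlers instead of being returned as a string.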
Our hand tracking system can recognize a wide range of such hand gestures.
4 Use Cases of Hand Gesture Recognition
There was a time when many of us struggled to get a vending machine to accept a rumpled dollar bill just to buy a bottle of mineral water. It was a long and unpleasant experience. And was it convenient to take your gloves off and put them back on while digging for coins in your wallet to pay for coffee on a cold winter day?
Well, those days have passed thanks to intelligent vending machines.
These types of devices are a great example of where hand gesture software is being used.
Controlling such a device doesn't require touching its screen or buttons. All users have to do is show the specific hand gestures we've discussed in the previous chapter, or wave their hand left, right, up, or down.
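A wave to the left, right, up, or down can be inferred from the tracked hand centre over recent frames. This is a minimal sketch assuming normalized image coordinates with y increasing downward; the travel threshold and direction names are illustrative, not Banuba's actual parameters.

```python
from typing import List, Optional

def swipe_direction(xs: List[float], ys: List[float],
                    min_travel: float = 0.2) -> Optional[str]:
    # xs, ys: normalized hand-centre coordinates over recent frames.
    # A wave is read as the dominant axis of travel, if large enough.
    dx, dy = xs[-1] - xs[0], ys[-1] - ys[0]
    if max(abs(dx), abs(dy)) < min_travel:
        return None                       # too little movement to count
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    # Image y grows downward, so decreasing y means waving upward.
    return "down" if dy > 0 else "up"
```

Feeding this the centroid from each processed frame turns raw hand tracking into the left/right/up/down commands the vending machine responds to.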
Payment can be made by simply waving a smartphone in front of the reader, solving the problems described at the beginning of this chapter.
This solution is especially useful in today's times when customers often don't want to touch the screens of the devices that might have been used by other individuals before.
Example: Banuba Touchless Vending Machine Prototype
Our gesture recognition system can classify different types of gestures based on our internally trained hand gesture model.
Any command can be assigned to these hand signs, and the hand tracking and gesture recognition algorithm can be taught to identify any number of gestures within a couple of weeks to satisfy your requirements.
The system can also recognize a person's presence, gestures, and face at a distance of up to 2 meters under real-world conditions such as varied lighting, face occlusion, and different skin tones, enabling comfortable remote interaction with the screen.
Apart from vending machines, the technology is applied elsewhere too. Take the example of a smart home, where hand recognition software tracks a hand’s 3D position, rotation, and gesture.
This lets smart home owners operate the lights without ever touching a switch: a wave of the hand in front of it is enough.
Another example is the smart TV. Here, the human face and natural hand gestures are the main means of interacting with the system.
Face recognition identifies the user, while gesture recognition controls the TV itself, for example, changing channels and adjusting the volume.
Virtual In-Store Displays
Last but not least, hand gesture software is used in commercial in-store displays which can be found in shopping malls to attract more visitor traffic.
The retail business is being increasingly digitized. This includes an introduction of multiple smart devices working together on a single IoT platform to deliver hyper-personalized, adaptive, and context-specific experiences.
While much of the technology is meant to be invisible to the consumer, shoppers will have the opportunity to interact digitally within the physical store environment, to find the information they are interested in and sometimes simply for entertainment.
Hand tracking and gesture recognition can change the way your business operates.
Vending machines, smart homes, shopping malls, smart TVs, virtual try-ons, and other niches are already using hand gesture software to thrive.