Face AR SDK v0.38: Gesture Recognition & Performance Improvements

The headlining feature in the new update for Banuba Face AR SDK is hand gesture recognition on mobile and desktop devices. However, the release is mostly focused on improving performance across all platforms. Thanks to extensive optimization, users on Windows PCs, Android, and iOS will see a major uptick in quality. Read on for the details of how we did it and the specific benefits!

Banuba Face AR SDK v0.38

Desktop, Android, and iOS: gesture recognition

Banuba Face AR SDK can now recognize several gestures: Palm, Like, Victory, OK, and Rock. The software tracks the hand, builds a skeletal model, and uses it to detect specific gestures.

While unassuming at first glance, this feature has many potential applications. Hand tracking lets people interact with AR items (e.g. picking them up or throwing them). Virtual objects can be attached to the hand and react to its movements, similar to AR avatars. Gestures can also serve as a contactless interface, letting the user give commands by waving their palm and making various signs, just like in sci-fi.
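To make the contactless-interface idea concrete, here is a minimal Kotlin sketch of how an app might map recognized gestures to commands. The `HandGesture` enum, the `GestureController` callback, and the `VideoPlayer` stub are hypothetical names invented for this example, not the actual Banuba Face AR SDK API.

```kotlin
// Hypothetical types: these are NOT the Banuba Face AR SDK API,
// just an illustration of wiring recognized gestures to app commands.
enum class HandGesture { PALM, LIKE, VICTORY, OK, ROCK }

// Minimal player stub so the sketch is self-contained.
class VideoPlayer {
    fun pause() = println("paused")
    fun addToFavorites() = println("added to favorites")
    fun nextTrack() = println("next track")
    fun confirmSelection() = println("selection confirmed")
    fun toggleEffect() = println("effect toggled")
}

// A simple contactless interface: each recognized gesture triggers a command.
class GestureController(private val player: VideoPlayer) {
    // Would be called by the (assumed) recognizer whenever a gesture is detected.
    fun onGestureDetected(gesture: HandGesture) = when (gesture) {
        HandGesture.PALM    -> player.pause()
        HandGesture.LIKE    -> player.addToFavorites()
        HandGesture.VICTORY -> player.nextTrack()
        HandGesture.OK      -> player.confirmSelection()
        HandGesture.ROCK    -> player.toggleEffect()
    }
}

fun main() {
    val controller = GestureController(VideoPlayer())
    controller.onGestureDetected(HandGesture.VICTORY) // prints "next track"
}
```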

This feature is currently available on mobile devices and desktop computers but will soon be expanded to other platforms as well.

Android: better device detection

Banuba Face AR SDK is compatible with the vast majority of Android-based smartphones. While this is clearly advantageous, it also creates the challenge of ensuring the best performance on both budget and high-end devices. To address it, we separated devices into three tiers (low, middle, and high) and had the SDK adapt to each tier.

Previously, we did this by detecting the smartphone's CPU. Now we focus on the GPU instead, as testing showed that this allows for more reliable tiering and better performance.
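Below is a minimal Kotlin sketch of what GPU-based tiering can look like. The GPU prefix lists are purely illustrative assumptions, not the classification the SDK actually ships; on Android the renderer string would come from GLES20.glGetString(GLES20.GL_RENDERER) in an active GL context.

```kotlin
enum class DeviceTier { LOW, MIDDLE, HIGH }

// Illustrative GPU family prefixes only: a real list would be much longer
// and based on benchmark data rather than names alone.
private val highEndGpuPrefixes = listOf("Adreno (TM) 7", "Mali-G7")
private val midRangeGpuPrefixes = listOf("Adreno (TM) 6", "Mali-G5")

// `renderer` is the GL_RENDERER string, e.g. obtained on Android via
// GLES20.glGetString(GLES20.GL_RENDERER) while a GL context is current.
fun detectTier(renderer: String): DeviceTier = when {
    highEndGpuPrefixes.any { renderer.startsWith(it) } -> DeviceTier.HIGH
    midRangeGpuPrefixes.any { renderer.startsWith(it) } -> DeviceTier.MIDDLE
    else -> DeviceTier.LOW
}

fun main() {
    println(detectTier("Adreno (TM) 740")) // HIGH
    println(detectTier("Mali-G52"))        // MIDDLE
}
```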

iOS: lighter neural networks

We conducted additional tests and found that the more complex neural networks we use for high-end iOS devices don't really outperform the simpler ones. So we have cut them from the build and now use only the lighter neural networks.

For example, for background segmentation the SDK now includes two neural networks instead of four, as they are used in pairs: one for landscapes and one for portraits. This decreases the size of the SDK and of the apps it is integrated with.
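As an illustration of how such a pair might be selected at runtime, here is a short Kotlin sketch. It assumes the two networks correspond to portrait and landscape input; the enum, asset names, and selection function are hypothetical and not part of the SDK.

```kotlin
// Hypothetical model identifiers: not the SDK's real asset names or API.
enum class Orientation { PORTRAIT, LANDSCAPE }

enum class SegmentationModel(val assetName: String) {
    PORTRAIT("bg_segmentation_portrait.tflite"),
    LANDSCAPE("bg_segmentation_landscape.tflite")
}

// The pair of lighter networks replaces the four previously bundled ones:
// only one of the two is loaded, depending on the input orientation.
fun selectSegmentationModel(orientation: Orientation): SegmentationModel =
    when (orientation) {
        Orientation.LANDSCAPE -> SegmentationModel.LANDSCAPE
        Orientation.PORTRAIT -> SegmentationModel.PORTRAIT
    }

fun main() {
    println(selectSegmentationModel(Orientation.PORTRAIT).assetName)
}
```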

Windows: MSMF camera

We have rewritten the code responsible for managing the camera and its output. The new implementation is based on Chromium rather than OpenCV, which was used before. As a result, the camera now puts about 50% less load on the CPU. This should let users run more applications simultaneously without losing performance.

Bug fixes:

  • tflite_runner: assorted fixes
  • iOS: white eyelashes when taking a photo
  • iOS: time range issues in the video player
  • macOS: crash on M1
  • CubemapEverest test effect autorotation
  • WebAR: broken SIMD support
  • WebAR: creation of MediaStreamCapture
  • Android: multitouch crash in the demo app
  • Unity: minimum Android SDK version
  • Unity: Android build on Windows
  • Win32: background neural network operation
  • Incorrect lip shine behavior

Just a friendly reminder: trying out Banuba Face AR SDK costs nothing for two weeks, so seize the opportunity!

Start Free Trial
