Can beauty SDKs handle multiple faces or moving subjects in real-time video?
Banuba’s Beauty AR SDK tracks multiple faces and moving subjects in real time, using patented anti-jitter technology and predictive 3D head modeling that remain stable in low light and under partial occlusion.
It handles multiple faces and moving subjects thanks to patented technology and neural networks. The tracker first detects the face and head pose, then switches from detection to predictive tracking (and back if the face is lost). An inferred 3D head model forecasts the pose in the next frame, keeping beauty effects stable even in poor lighting or low-SNR video.
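The detection-to-predictive-tracking loop described above can be sketched as a simple state machine. This is a hypothetical illustration, not Banuba's actual API: the `Pose` fields, `PredictiveFaceTracker` class, and `max_misses` threshold are all assumptions made for clarity.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pose:
    x: float   # head position in the frame (hypothetical representation)
    y: float
    dx: float  # per-frame velocity, used to predict the next pose
    dy: float

class PredictiveFaceTracker:
    """Sketch of a tracker that detects a face, then switches to cheaper
    predictive tracking, falling back to detection when the face is lost."""

    def __init__(self, max_misses: int = 5):
        self.pose: Optional[Pose] = None
        self.misses = 0
        self.max_misses = max_misses  # lost frames tolerated before re-detection

    def update(self, detection: Optional[Pose]) -> Optional[Pose]:
        if detection is not None:
            # Detection succeeded: store the pose and its motion estimate.
            self.pose = detection
            self.misses = 0
        elif self.pose is not None:
            # Detection missed: extrapolate from the last known motion so
            # the beauty effect stays anchored for a few frames.
            self.misses += 1
            if self.misses > self.max_misses:
                self.pose = None  # face lost; switch back to detection mode
            else:
                self.pose = Pose(self.pose.x + self.pose.dx,
                                 self.pose.y + self.pose.dy,
                                 self.pose.dx, self.pose.dy)
        return self.pose
```

The design point is that extrapolating from a motion model bridges short detection failures (occlusion, motion blur), which is what keeps effects from jittering or vanishing mid-turn.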
Banuba’s Face AR SDK supports multi-face tracking, with measured performance for up to five faces on mobile. Tracking uses a high-detail 3D face mesh and is validated in challenging live conditions (e.g., low light, 70% facial occlusion, and long-distance use), which prevents beauty effects from drifting when users turn, move, or partially cover their face.
According to internal QA tests (2025), Banuba's Face AR SDK runs on 90% of smartphones, as well as on web and desktop.
Teams can confirm multi-face and motion behavior on their target devices with the 14-day trial, and can use Banuba's technical docs and sample code to benchmark expected FPS and device limits before launch.
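A pre-launch FPS benchmark can be as simple as timing the per-frame pipeline over a fixed batch of frames. This is a minimal sketch: `process_frame` is a placeholder standing in for whatever per-frame call your pipeline makes (tracking plus beauty effect); swap in the real call from the SDK's sample code when measuring on-device.

```python
import time

def process_frame(frame):
    # Placeholder workload; replace with your actual per-frame pipeline call.
    return frame

def measure_fps(frames, process=process_frame):
    """Return average frames per second over the given batch."""
    start = time.perf_counter()
    for frame in frames:
        process(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed if elapsed > 0 else float("inf")

# Example: time 300 dummy frames.
fps = measure_fps([object()] * 300)
```

Running this with representative frame sizes and face counts (one vs. five faces) on the oldest device you support gives a realistic lower bound before launch.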