In a post published last week on its AI blog, Google explained the technology behind the radar sensors included in last year’s Pixel 4 smartphone.
Though the hardware behind what Google calls Project Soli (a 60GHz radar and antenna receivers covering a combined 180-degree field of view) was revealed at the Pixel 4’s launch last year, the company’s engineers had not discussed the AI models and algorithms that power the motion gesture system in any detail until now.
Soli’s AI models are trained to detect and recognize motion gestures with low latency. Google acknowledges that the technology is still in its early days (the Pixel 4 and Pixel 4 XL are the first and only consumer devices to feature it so far), but says it could lead to new forms of context and gesture awareness on a variety of devices and potentially pave the way for a better experience for users with disabilities.
Soli’s radar and antenna receivers record an object’s positional information, such as range and velocity, by measuring the electromagnetic waves reflected back to the antennas. This data is then fed into Soli’s machine learning models for what Google refers to as “sub-millimeter” gesture classification, in which subtle shifts in an object’s position are measured over time and compared to distinguish one motion pattern from another.
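To make the idea concrete, the sketch below shows how a frequency-modulated radar can, in principle, recover range and velocity from reflected chirps via a range-Doppler transform. It is a toy simulation only: the chirp parameters, target values, and processing steps are illustrative assumptions and not Soli’s actual specifications or pipeline.

```python
# Toy range-Doppler estimation for an FMCW radar. All parameters (60 GHz
# carrier, chirp slope, target range/velocity) are illustrative assumptions.
import numpy as np

c = 3e8                      # speed of light (m/s)
fc = 60e9                    # carrier frequency (Hz), 60 GHz band
wavelength = c / fc
bandwidth = 4e9              # chirp bandwidth (Hz), assumed
chirp_time = 100e-6          # chirp duration (s), assumed
slope = bandwidth / chirp_time
n_samples = 256              # ADC samples per chirp
n_chirps = 100               # chirps per frame
fs = n_samples / chirp_time  # ADC sample rate

# Simulated target: a hand at 0.30 m moving toward the sensor at 0.5 m/s.
target_range = 0.30
target_velocity = -0.5

t = np.arange(n_samples) / fs
frame = np.zeros((n_chirps, n_samples), dtype=complex)
for k in range(n_chirps):
    r = target_range + target_velocity * k * chirp_time  # range at this chirp
    beat_freq = 2 * slope * r / c                         # range -> beat frequency
    doppler_phase = 4 * np.pi * r / wavelength            # range -> carrier phase
    frame[k] = np.exp(1j * (2 * np.pi * beat_freq * t + doppler_phase))

# Range FFT along samples, Doppler FFT along chirps -> range-Doppler map.
range_fft = np.fft.fft(frame, axis=1)
rd_map = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)

# The peak of the map gives the estimated range and radial velocity.
d_idx, r_idx = np.unravel_index(np.argmax(np.abs(rd_map)), rd_map.shape)
est_range = r_idx * c / (2 * slope * chirp_time)   # range-bin resolution: c / (2B)
doppler_freq = (d_idx - n_chirps // 2) / (n_chirps * chirp_time)
est_velocity = doppler_freq * wavelength / 2

print(f"estimated range:    {est_range:.3f} m")
print(f"estimated velocity: {est_velocity:.3f} m/s")
```

Features like these range and velocity estimates, tracked over time, are the kind of signal a gesture classifier can work from.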
Developing the machine learning models presented Google with several challenges. For starters, even simple gestures like swipes are performed differently by different users. Second, over the course of a day there may be many extraneous motions within the sensor’s range that look similar to gestures. And finally, whenever the phone moves, the whole world appears to be moving from the sensor’s point of view.
To solve these challenges, Google’s engineers designed custom machine learning algorithms that are optimized for low-latency detection of in-air gestures from the radar signals.
The machine learning models consist of neural networks trained on millions of gestures recorded from thousands of Google volunteers, mixed with hundreds of hours of background radar recordings of other volunteers performing generic motions within range of Soli’s sensors.
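The sketch below illustrates the general shape of such a setup: gesture examples are mixed with background recordings of everyday motion as a “no gesture” class, and a small classifier is trained that can be run on each incoming frame for low latency. The feature size, class names, and tiny softmax model are assumptions for illustration, not Google’s actual Soli architecture.

```python
# Minimal sketch: mix gesture clips with background clips and train a small
# per-frame classifier. Data is synthetic; the model is a stand-in.
import numpy as np

rng = np.random.default_rng(0)
n_features = 64          # e.g. a flattened range-Doppler frame (assumed size)
classes = ["background", "swipe_left", "swipe_right"]

def synthetic_clip(label, n=200):
    """Stand-in for recorded radar frames of one class."""
    base = rng.normal(0.0, 1.0, size=(n, n_features))
    base[:, label * 8:(label * 8) + 8] += 2.0   # crude class-dependent pattern
    return base

# Mix positive gesture examples with background/no-gesture examples so the
# model learns to reject everyday motion near the sensor.
X = np.vstack([synthetic_clip(i) for i in range(len(classes))])
y = np.repeat(np.arange(len(classes)), 200)

# Tiny softmax classifier trained with gradient descent.
W = np.zeros((n_features, len(classes)))
b = np.zeros(len(classes))
for _ in range(300):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    probs[np.arange(len(y)), y] -= 1.0           # gradient of cross-entropy
    W -= 0.01 * (X.T @ probs) / len(y)
    b -= 0.01 * probs.mean(axis=0)

# Low-latency use: classify each frame as it arrives from the sensor.
frame = synthetic_clip(1, n=1)
pred = np.argmax(frame @ W + b, axis=1)[0]
print("predicted:", classes[pred])
```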
“Remarkably, we developed algorithms that specifically do not require forming a well-defined image of a target’s spatial structure, in contrast to an optical imaging sensor, for example. Therefore, no distinguishable images of a person’s body or face are generated or used for Motion Sense presence or gesture detection,” wrote engineers Jaime Lien and Nicholas Gillian in the post. “We are excited to continue researching and developing Soli to enable new radar-based sensing and perception capabilities,” they added later.
In addition to powering Motion Sense, Soli’s technology is used to alert and prepare the phone when a user reaches for it to use Face Unlock, the biometric authentication feature that also debuted on the Pixel 4.
Soli debuted on the Pixel 4 last fall as the technology behind the Pixel’s Motion Sense features, supporting a handful of gestures, including a swipe to skip songs or silence an alarm or call, and the ability to wake the screen when you reach for your phone. Because Soli and Motion Sense can be updated via software, Google also recently added a gesture to pause music, and more updates will presumably follow.
Sources: VentureBeat, Google AI Blog