New research suggests that a smartphone app could replace the array of systems and technologies currently needed to perform motion capture, the process of converting body movements into computer-generated imagery.
Dubbed MobilePoser, the app uses data from sensors already built into a variety of consumer devices, including smartphones, earbuds and smartwatches, and combines this information with artificial intelligence (AI) to track a person's whole-body pose and position in space.
Motion capture is often used in the film and video game industries to capture the movements of actors and transform them into computer-generated characters displayed on screen. Perhaps the most famous example of this process is Andy Serkis' performance as Gollum in the Lord of the Rings trilogy. But motion capture typically requires a dedicated room, expensive equipment, large cameras and an array of sensors, including a "motion capture suit."
Scientists say a setup like this can cost more than $100,000. Alternative products, such as the discontinued Microsoft Kinect, relied on fixed cameras to observe body movements; while cheaper, they require the action to take place within the camera's field of view, making them impractical on the go.
Instead, these technologies could be replaced with a single smartphone app, scientists said in a new study published Oct. 15 and presented at the 2024 ACM Symposium on User Interface Software and Technology.
Related: Playing with Fire: How VR is being used to train the next generation of firefighters
"MobilePoser achieves high accuracy through a combination of machine learning and advanced physics-based optimization," study author Karan Ahuja, a professor of computer science at Northwestern University, said in a statement. "This opens the door to new immersive experiences in gaming, fitness and indoor navigation without the need for specialized equipment."
The team relied on inertial measurement units (IMUs), systems already built into smartphones that use a combination of sensors, such as accelerometers, gyroscopes and magnetometers, to measure the body's position, orientation and movement.
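To give a sense of what an IMU actually measures, the following minimal sketch fuses simulated accelerometer and gyroscope readings into a single orientation estimate using a basic complementary filter. This is purely illustrative and is not the MobilePoser code; the sample rate and rotation values are invented for the example.

```python
# Illustrative only: fuse gyroscope integration (smooth but drifting) with an
# accelerometer tilt estimate (noisy but drift-free) to track pitch in radians.
import math

def complementary_filter(pitch_prev, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Blend the two sensor estimates; alpha weights the gyroscope term."""
    pitch_gyro = pitch_prev + gyro_rate * dt      # integrate angular velocity
    pitch_accel = math.atan2(accel_x, accel_z)    # tilt inferred from gravity
    return alpha * pitch_gyro + (1 - alpha) * pitch_accel

# Hypothetical example: a phone tilting forward at ~0.5 rad/s, sampled at 100 Hz.
pitch = 0.0
for _ in range(100):
    pitch = complementary_filter(pitch, gyro_rate=0.5,
                                 accel_x=math.sin(pitch), accel_z=math.cos(pitch),
                                 dt=0.01)
print(f"Estimated pitch after 1 second: {pitch:.2f} rad")
```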
However, the fidelity of these sensors is typically too low for accurate motion capture, so the researchers enhanced them with a multi-stage machine learning algorithm. They trained their AI on a publicly available dataset of synthesized IMU measurements generated from high-quality motion capture data, achieving a tracking error of just 3 to 4 inches (8 to 10 centimeters). A physics-based optimizer then refines the predicted movements so they match real body motion and rules out physically impossible ones, such as a joint bending backwards or the head rotating 360 degrees.
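One way to picture the plausibility constraints a physics-based optimizer enforces is as joint-angle limits that a raw network prediction must be projected onto. The sketch below shows that idea in the simplest possible form; the joint names and limits are invented for illustration and are not taken from the MobilePoser model.

```python
# Hypothetical example of constraining predicted joint angles to anatomically
# possible ranges, so e.g. a knee cannot bend backwards.
JOINT_LIMITS_DEG = {
    "knee_flexion": (0.0, 150.0),   # knees do not hyperextend backwards
    "elbow_flexion": (0.0, 145.0),
    "head_yaw": (-80.0, 80.0),      # the head cannot spin 360 degrees
}

def constrain_pose(predicted_angles: dict) -> dict:
    """Clamp a network's raw joint-angle predictions to valid ranges."""
    constrained = {}
    for joint, angle in predicted_angles.items():
        lo, hi = JOINT_LIMITS_DEG.get(joint, (-180.0, 180.0))
        constrained[joint] = max(lo, min(hi, angle))
    return constrained

# A raw prediction with an impossible backwards knee bend gets corrected.
print(constrain_pose({"knee_flexion": -25.0, "head_yaw": 200.0}))
# -> {'knee_flexion': 0.0, 'head_yaw': 80.0}
```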
"Accuracy improves when you wear multiple devices, such as a smartwatch on your wrist and a smartphone in your pocket," Ahuja said. "But an important part of the system is that it's adaptive. Even if you don't have your smartwatch one day and only have your phone, it can adapt and figure out your full-body pose."
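A simple way to think about this kind of adaptivity is a model input that reserves a slot for each possible device and masks whichever ones are absent. The sketch below illustrates that idea under assumed slot names and feature sizes; it mirrors the behavior described in the quote, not the actual MobilePoser implementation.

```python
# Hypothetical sketch: build a fixed-size feature vector from whichever
# devices are present, zero-filling and flagging the missing ones so the same
# pose model can run on any subset of devices.
from typing import Optional

DEVICE_SLOTS = ["phone_pocket", "watch_wrist", "earbuds_head"]
FEATURES_PER_DEVICE = 9  # e.g. 3-axis accel + 3-axis gyro + 3-axis magnetometer

def build_input(readings: dict) -> list:
    """Assemble one flat feature vector, masking absent devices."""
    features = []
    for slot in DEVICE_SLOTS:
        data: Optional[list] = readings.get(slot)
        if data is None:
            features += [0.0] * FEATURES_PER_DEVICE + [0.0]  # zeros + "absent" flag
        else:
            features += data + [1.0]                          # data + "present" flag
    return features

# Phone only: the watch and earbud slots are masked out.
vec = build_input({"phone_pocket": [0.1] * FEATURES_PER_DEVICE})
print(len(vec), vec[:11])
```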
The technology could have applications not only in health and fitness but also in entertainment, such as more immersive games, the scientists said. The team has released the AI models and related data at the heart of the app so that other researchers can build on their work.