The AI-powered app captures full-body motion using just your smartphone—no suits, special cameras, or equipment required.

(Image credit: gorodenkoff/Getty Images)

New research suggests that a mobile app could stand in for the expensive systems and equipment currently needed for motion capture, the process that converts physical movements into computer images.

The app, called MobilePoser, uses data from sensors already built into various consumer devices such as smartphones, headphones and smartwatches and combines that information with artificial intelligence (AI) to track a person's full-body posture and location in space.

Motion capture is often used in the film and video game industries to record the movements of actors and transform them into computer-generated characters that appear on screen. One of the most famous examples of this process is Andy Serkis' performance as Gollum in the Lord of the Rings trilogy. However, motion capture typically requires specialized rooms, expensive equipment, bulky cameras, and multiple sensors, including “motion capture suits.”

(Video: "MobilePoser: Whole-Body Pose Estimation and 3D Human Translation Using Inertial Measurement Units in Mobile Consumer Devices")

Such systems can cost more than $100,000 to operate, scientists say. Alternatives like the discontinued Microsoft Kinect, which used stationary cameras to track body movements, are more affordable but ineffective for mobile use because actions must occur within the camera's field of view.

Instead, these technologies could be replaced with a single mobile app, according to new research presented Oct. 15 at the 2024 ACM Symposium on User Interface Software and Technology.

MobilePoser achieves high accuracy through machine learning and advanced physics optimization, said study author Karan Ahuja, a professor of computer science at Northwestern University. This could open up new possibilities for immersive experiences in gaming, fitness, and indoor navigation without the need for specialized hardware.

The team relied on inertial measurement units (IMUs), which are already integrated into smartphones. An IMU combines sensors such as accelerometers, gyroscopes and magnetometers to measure the body's position, orientation and motion.
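To illustrate why an IMU needs multiple sensors, here is a minimal sketch of a classic complementary filter, a standard technique (not MobilePoser's actual pipeline) for fusing two of those signals: the gyroscope's angular rate, which is accurate short-term but drifts, and the accelerometer's tilt reading, which is noisy but drift-free. The function name and blend weight are illustrative assumptions.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer estimates of a tilt angle (degrees).

    The gyroscope term integrates angular rate (precise over short intervals,
    but it drifts); the accelerometer term supplies an absolute tilt reference
    (noisy, but drift-free). alpha weights the two sources.
    """
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Example: device starts level; the gyro reports 10 deg/s and the
# accelerometer reads a 1-degree tilt after a 0.1-second step.
angle = complementary_filter(angle=0.0, gyro_rate=10.0,
                             accel_angle=1.0, dt=0.1)
print(angle)  # 1.0 — both sensors agree on a 1-degree tilt
```

Real pose-estimation systems use more sophisticated fusion (e.g. Kalman filtering), but the principle is the same: no single consumer sensor is trustworthy on its own.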

However, the accuracy of the sensors is often insufficient to reliably capture motion, so the researchers added a multi-stage machine learning algorithm. They trained the AI on a publicly available dataset of synthesized IMU measurements derived from high-quality motion capture data. The result was a tracking error of only 3 to 4 inches (8 to 10 centimeters). A physics-based optimizer refines the predicted movements to ensure they match real body movements and prevents impossible actions, such as bending the joints backward or turning the head 360 degrees.
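The constraint step described above can be sketched as a simple projection of predicted joint angles back into anatomically plausible ranges. This is a minimal illustration of the idea, not the researchers' optimizer; the joint names and limit values are hypothetical.

```python
# Hypothetical joint limits in degrees (illustrative values, not from the study)
JOINT_LIMITS = {
    "knee": (0.0, 150.0),       # knees should not bend backward
    "neck_yaw": (-80.0, 80.0),  # the head cannot turn 360 degrees
}

def constrain_pose(pose):
    """Clamp each predicted joint angle into its anatomical range."""
    return {joint: min(max(angle, JOINT_LIMITS[joint][0]), JOINT_LIMITS[joint][1])
            for joint, angle in pose.items()}

# A physically impossible prediction gets projected to the nearest valid pose.
predicted = {"knee": -12.0, "neck_yaw": 200.0}
print(constrain_pose(predicted))  # {'knee': 0.0, 'neck_yaw': 80.0}
```

A full physics-based optimizer would also enforce dynamics (momentum, ground contact) rather than clamping each joint independently, but hard range limits capture the "no backward knees" idea in its simplest form.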

“The accuracy is better when a person has more than one device, such as a smartwatch on their wrist and a smartphone in their pocket,” Ahuja said. “But the key aspect of the system is its adaptability. Even if you don’t have a watch on one day and only have a phone, it can adapt to detect your full-body posture.”

The scientists believe the technology could be used in entertainment, such as creating more immersive games, as well as in health and fitness. The team has published the AI models and accompanying data behind the app so that other researchers can build on the work.

Source: www.livescience.com
