Multimodal AI could supercharge smart glasses. (Image credit: Meta/Ray-Ban)
Smart glasses may not have taken off yet, but the integration of artificial intelligence (AI) could be a major factor in creating truly breakthrough wearable technology.
In the U.S. and Canada, Ray-Ban Meta glasses have begun using multimodal AI through the Meta AI virtual assistant software. With multimodal AI, that is, generative AI that can handle requests combining multiple types of data (such as audio and images), the device can respond more effectively to requests based on where the user is looking.
“Let’s say you’re on a trip and trying to read a menu in French. Your smart glasses can use the built-in camera and Meta AI to translate the text, giving you the information you need without requiring you to take out your phone or look at the screen,” Meta explained in an April 23 statement.
First, the device takes a picture of what its wearer sees; the AI then taps into cloud processing to answer a spoken query such as "What plant am I looking at?"
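That flow amounts to a simple capture-then-query pipeline: grab a camera frame, ship it to a cloud-hosted model together with the transcribed voice request, and read the answer back. The sketch below illustrates the idea in Python; the endpoint URL, the JSON fields, and the answer_visual_query helper are all hypothetical, since Meta has not published the API the glasses actually use.

```python
import base64
import requests

# Hypothetical endpoint standing in for a cloud-hosted multimodal model;
# this is an illustration, not Meta's actual service.
MODEL_ENDPOINT = "https://example.com/v1/multimodal-query"

def answer_visual_query(image_path: str, spoken_query: str) -> str:
    """Send a camera frame plus a transcribed voice query to a
    multimodal model and return its text answer."""
    # Step 1: the glasses capture a still image of what the wearer sees.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    # Step 2: the image and the spoken request are sent together for
    # cloud-side processing, since a model this large cannot run on-device.
    response = requests.post(
        MODEL_ENDPOINT,
        json={"image": image_b64, "prompt": spoken_query},
        timeout=30,
    )
    response.raise_for_status()

    # Step 3: the model's text answer comes back and would be read
    # aloud through the glasses' speakers.
    return response.json()["answer"]

if __name__ == "__main__":
    print(answer_visual_query("frame.jpg", "What plant am I looking at?"))
```

The key design point the example captures is the round trip: the glasses themselves only handle capture and playback, while all of the multimodal reasoning happens server-side.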
Meta first began testing multimodal AI on the Ray-Ban Meta smart glasses through an early-access program launched in December 2023.
When a reporter for The Verge tested the device's AI features, it was mostly accurate when asked to identify a car model, and it could also describe the breed and characteristics of a cat in an image captured by the camera. However, the AI had trouble recognizing the species of the reporter's plants and misidentified a groundhog in a neighbor's backyard.
Source: www.livescience.com