Users of Apple’s Vision Pro headset can build a digital avatar for more realistic video chats while their face is partially hidden by the device.
The tech giant said it uses “advanced machine learning” to accurately simulate a user’s face and hand movements during FaceTime calls.
To generate the incredibly lifelike 3D avatars, users have their faces scanned by the headset’s front-facing cameras. In a video introducing the feature, Apple said it would let viewers see a user’s “eyes, hands, and true expressions” during video calls.
Apple unveiled the headset, its first significant new product in eight years, on Monday at its Worldwide Developers Conference. The much-anticipated device received generally favourable reviews despite its premium price.
Mike Rockwell, who leads Apple’s AR/VR project, said during the conference that videoconferencing was one of the “most difficult challenges” the team had to solve when building Vision Pro, because users are constantly wearing the headset. In contrast to some of Meta’s earlier attempts at virtual-reality avatars, the avatars for Apple’s augmented-reality headset appear hyper-realistic in the example shown in the video.
Meta’s chief executive, Mark Zuckerberg, was widely ridiculed after caustic memes circulated in response to a selfie of his early metaverse avatar. Zuckerberg posted a photo of the avatar in front of the Eiffel Tower on Facebook last year.
Because of its simplistic graphics, the image was quickly seized on by social media users. The CEO later appeared to respond to the criticism by posting an improved avatar on Instagram.