Meta, the company that owns Facebook, is betting heavily on artificial intelligence as one of the main pillars of its Metaverse. As its CEO, Mark Zuckerberg, and several of the company's executives explained today during an online event, "AI is the key to unlocking many of the advances" that the company's great future project requires.
Among these advances, two stand out, both related to the use of language: the first is the dream of a universal translator that lets users of the Metaverse communicate without the barriers that exist outside of it; the second is the creation of virtual worlds through simple voice commands. Let's look at what each of them involves:
The dream of a universal translator
"The ability to communicate with anyone in any language is a superpower that humanity has always dreamed of… and artificial intelligence is going to make it possible for us to see it," Zuckerberg said.
"Removing language barriers would be a significant achievement that would make it possible for billions of people to access information online in their favorite language."
Meta believes that speakers of the most popular languages (such as English, Mandarin Chinese, or Spanish) are already adequately supported by currently available translation tools.
But the 20% of the world's population that does not speak any of these languages is in a very different situation: fragmented across multiple minority languages, none of which has a large enough catalog of texts to train AI models on.
To address this, Meta AI's translation research will pursue two projects:
The first is called 'No Language Left Behind': it seeks to develop AI models that need less training data than current ones to 'learn' to translate.
The second has been christened 'Universal Speech Translator': it aims to develop systems that translate directly from spoken language to spoken language, without the usual intermediate layer of text transcription (see the conceptual sketch below).
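To make the difference between the two approaches concrete, here is a minimal conceptual sketch (our own illustration, not Meta's code): the classic cascaded pipeline transcribes the audio to text, translates that text, and then synthesizes speech, while a direct speech-to-speech system skips the text step entirely. Every function name below is a hypothetical placeholder.

```python
# Conceptual sketch (hypothetical, not Meta's code): cascaded speech translation
# versus the direct speech-to-speech approach Universal Speech Translator targets.

def cascaded_translation(audio_in: bytes) -> bytes:
    """Classic pipeline: speech -> text -> translated text -> speech."""
    text = speech_to_text(audio_in)          # ASR step (intermediate transcription)
    translated = translate_text(text)        # text-to-text machine translation
    return text_to_speech(translated)        # speech synthesis in the target language

def direct_translation(audio_in: bytes) -> bytes:
    """Direct approach: a single model maps source speech to target speech."""
    return speech_to_speech_model(audio_in)  # no intermediate text representation

# The stubs below only make the sketch runnable; in a real system each one
# would be a neural model trained on large amounts of paired data.
def speech_to_text(audio: bytes) -> str:
    return "vamos a la playa"

def translate_text(text: str) -> str:
    return "let's go to the beach"

def text_to_speech(text: str) -> bytes:
    return text.encode("utf-8")

def speech_to_speech_model(audio: bytes) -> bytes:
    return b"<translated audio>"

if __name__ == "__main__":
    sample = b"<source audio>"
    print(cascaded_translation(sample))
    print(direct_translation(sample))
```

The appeal of the direct approach is precisely what the second stub suggests: fewer stages means fewer places where errors can accumulate, and no dependency on a written form of the source language.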
"Imagine a market where speakers of different languages can communicate with each other in real time using a phone, watch, or [smart] glasses," Meta AI's blog proposes…
…however, Meta should learn to walk before it runs: its flagship product, Facebook, still has serious problems detecting 'hate speech' in any language other than English, as the leak of the 'Facebook Papers' already made clear.
Create worlds with your voice
We mentioned earlier the possibility of creating virtual worlds by mere description. In his presentation, Zuckerberg created live ("all generated by artificial intelligence," he specified) a cartoon-style 3D landscape with water and sand after giving the voice command "let's go to the beach" to his 'Builder Bot'.
The software is also capable of adding highly specific details (such as the type of clouds known as altocumulus, or different kinds of objects), as well as generating ambient music from descriptions (such as "tropical music"), judging by what we were able to see today.
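As a purely illustrative toy (not Builder Bot's actual API, whose internals Meta has not published), the sketch below shows the general idea: a voice command, once transcribed, is mapped to elements of the scene being built. The keyword tables and the Scene class are invented for this example.

```python
# Toy illustration (entirely hypothetical): mapping transcribed voice commands
# to the elements of a generated scene, in the spirit of the Builder Bot demo.

from dataclasses import dataclass, field

SCENE_PRESETS = {
    "beach": ["water", "sand", "sky"],
    "forest": ["trees", "grass", "sky"],
}

DETAIL_KEYWORDS = {
    "altocumulus": "altocumulus clouds",
    "tropical music": "tropical ambient music",
}

@dataclass
class Scene:
    elements: list = field(default_factory=list)

    def apply_command(self, command: str) -> None:
        """Add scene elements matching keywords found in the command."""
        text = command.lower()
        for preset, items in SCENE_PRESETS.items():
            if preset in text:
                self.elements.extend(items)
        for keyword, detail in DETAIL_KEYWORDS.items():
            if keyword in text:
                self.elements.append(detail)

scene = Scene()
scene.apply_command("let's go to the beach")
scene.apply_command("add some altocumulus clouds and tropical music")
print(scene.elements)
# ['water', 'sand', 'sky', 'altocumulus clouds', 'tropical ambient music']
```

The real system, of course, generates the 3D assets themselves with AI rather than looking them up in a table; this sketch only conveys the command-to-scene mapping that the demo made visible.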