The GPT-3 language model has been one of the latest great examples of how artificial intelligence is evolving today. The use and creation of tools like this is becoming a fundamental part of technological development, and in this vein Google has taken another giant step by showing the world its new language model: PaLM.
Through a post on its official blog, the company has revealed a bit of what its LLM (Large Language Model) is capable of, and some of the examples are most interesting.
PaLM aims to surpass GPT-3, and in many respects it has succeeded
PaLM stands for 'Pathways Language Model', a language model with up to 540 billion parameters, exceeding GPT-3's 175 billion and marking a milestone in language understanding through artificial intelligence. In a rigorous paper, the Google Research team explains the many capabilities of PaLM, a language model that surprises with the precision of its answers and its 'reasoning'.
According to Google, PaLM was trained on a combination of English and multilingual datasets made up of 'high quality' documents, including books, Wikipedia, conversations, and GitHub code. In addition, the company says it created a 'lossless' vocabulary for PaLM: all whitespace is preserved (important for code), Unicode characters not found in the vocabulary are split into bytes, and numbers are divided into individual tokens, one for each digit.
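To get an intuition for those tokenization choices, here is a minimal toy sketch of the two behaviors described above: splitting numbers into per-digit tokens and decomposing out-of-vocabulary characters into UTF-8 bytes so nothing is lost. This is purely illustrative code, not Google's actual SentencePiece setup; the function name and character-level vocabulary are our own simplification (real vocabularies contain subwords, not single characters).

```python
def tokenize(text, vocab):
    """Toy tokenizer illustrating PaLM-style choices (illustrative only):
    one token per digit, and a UTF-8 byte fallback for characters
    outside the vocabulary, so the encoding is 'lossless'."""
    tokens = []
    for ch in text:
        if ch.isdigit():
            # Numbers are split into individual digit tokens.
            tokens.append(ch)
        elif ch in vocab:
            # In-vocabulary characters (including whitespace) pass through.
            tokens.append(ch)
        else:
            # Out-of-vocabulary characters fall back to their UTF-8 bytes,
            # so any input can be represented and later reconstructed.
            tokens.extend(f"<0x{b:02X}>" for b in ch.encode("utf-8"))
    return tokens

vocab = set("abcdefghijklmnopqrstuvwxyz ")
print(tokenize("pi is 314", vocab))
# → ['p', 'i', ' ', 'i', 's', ' ', '3', '1', '4']
print(tokenize("é", vocab))
# → ['<0xC3>', '<0xA9>']  (the two UTF-8 bytes of 'é')
```

Note how "314" never becomes a single opaque token: keeping one token per digit is what lets a model manipulate numbers it has never seen verbatim in training.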
This AI surprises in its ability to deduce natural language
This language model has proven quite consistent in its answers, performing well in natural language inference, common sense reasoning, and in-context reading comprehension. Some rather surprising examples are collected on the blog, such as the one shown below.
As an example of what this AI is capable of, Google asks it a series of questions, from choosing between two options to something as complex as identifying which movie a string of emojis describes. We also see how it finds synonyms and even 'reasons' over multiple premises.
PaLM also handles simple mathematical problems well, even explaining how it arrived at the stated solution. The AI managed to solve 58% of the problems in GSM8K (Grade School Math), surpassing GPT-3, which solved 55% of them.
Its capacity for conceptual understanding and 'reading between the lines' is quite amazing: just look at how it even explains jokes submitted by the Google team.
Although it is logical to assume that its creators have showcased the best answers here, omitting the many failures this AI may produce, it is still surprising to see how far an artificial intelligence can be trained and how tremendously useful it can become in the future.