Google Subsidiary DeepMind Trains 280-Billion-Parameter AI Model Gopher

AI was long dominated by images. AI language models are the next frontier for artificial intelligence.

A language model is a statistical tool used to predict words: it looks for patterns in human language and can, for example, predict the next spoken word in an audio recording. Images dominated the world of AI for years, and language models are now the next frontier for artificial intelligence. Because language is complex, building capable language models remains difficult. Google subsidiary DeepMind recently unveiled Gopher, a 280-billion-parameter natural language processing (NLP) model. Gopher is based on the transformer architecture and was trained on a 10.5 TB corpus called MassiveText; it surpassed the previous state of the art on 100 of 124 evaluation tasks.
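The core idea of "a statistical tool used to predict words" can be illustrated with a minimal sketch. The toy bigram model below is not how Gopher works (Gopher is a large transformer), but it shows the same underlying task: estimate which word is most likely to follow the current one from counts over a corpus. The tiny corpus here is a made-up example.

```python
from collections import Counter, defaultdict

# Toy corpus; real models like Gopher train on terabytes of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count word-pair frequencies: P(next | current) is proportional
# to how often `next` follows `current` in the corpus.
bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word`, or None."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice; "mat"/"fish" once each
```

Large transformer models replace these raw counts with learned parameters (280 billion of them, in Gopher's case), but the training objective is still next-word prediction.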

Google researchers have developed and benchmarked techniques for training AI language models containing over a trillion parameters. They reported that their 1.6-trillion-parameter model trained up to 4 times faster than their previously largest model, T5-XXL. As part of a broader AI research effort, the DeepMind team trained Gopher alongside numerous smaller models to explore the strengths and weaknesses of large language models (LLMs). The team evaluated Gopher on a wide range of NLP benchmarks, including Massive Multitask Language Understanding (MMLU) and BIG-bench, and compared it to models such as GPT-3, an autoregressive language model with 175 billion parameters. Shortly after the release of Gopher, Google also introduced the Generalist Language Model (GLaM), a trillion-weight language model that uses sparsity. Other notable language models include Jurassic-1, released by AI21 Labs with 178 billion parameters, and the Megatron-Turing Natural Language Generation (MT-NLG) model from Microsoft and NVIDIA, with 530 billion parameters. Earlier in the year, Google had also introduced the Switch Transformer, a technique for training language models with over a trillion parameters.
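The "sparsity" behind models like GLaM and the Switch Transformer refers to mixture-of-experts routing: each token activates only a small subset of the model's parameters, so a trillion-weight model does far less compute per token than a dense one. The sketch below is a simplified illustration of top-k expert routing; all sizes, weights, and the `sparse_forward` helper are hypothetical and far smaller than anything in the real systems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes; real sparse models use thousands of experts
# and billions of weights per expert layer.
d_model, n_experts, top_k = 8, 4, 1

# Router weights and one weight matrix per expert (randomly initialized).
router = rng.normal(size=(d_model, n_experts))
experts = rng.normal(size=(n_experts, d_model, d_model))

def sparse_forward(x):
    """Route token vector x to its top-k experts; only those experts run."""
    logits = x @ router
    chosen = np.argsort(logits)[-top_k:]      # indices of the top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                  # softmax over the chosen experts
    # Only the selected experts' parameters touch this token.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.normal(size=d_model)
out = sparse_forward(token)
print(out.shape)  # (8,)
```

With `top_k = 1` (as in the Switch Transformer), each token uses only one expert's weights, which is how parameter count can grow to a trillion while per-token compute stays roughly constant.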

DeepMind stated that in exploring and building new language models, it trained a series of transformer language models of different sizes, ranging from 44 million to 280 billion parameters. The researchers investigated the strengths and weaknesses of these different-sized models and highlighted the areas where increasing a model's scale continues to boost performance, such as reading comprehension and fact-checking. Through this research, the DeepMind team found that Gopher exceeded all existing language models on a number of key tasks, including the Massive Multitask Language Understanding (MMLU) benchmark. Qualitative analysis also showed that Gopher performs well in dialogue interaction: it can discuss cell biology and provide a correct citation without any dialogue-specific fine-tuning.
