Google introduces a new 27-billion-parameter model to its Gemma LLM family

At its Google I/O conference, the company made a multitude of announcements. While the headline news concerned the large-scale deployment of generative artificial intelligence within its search engine, the firm has also been working in parallel on improving its Gemma family of large language models. A second generation, Gemma 2, now joins the existing first-generation models, which are available in 2-billion- and 7-billion-parameter versions.

“With 27 billion parameters, Gemma 2 offers performance comparable to that of Llama 3 70B at half the size,” boasts Google. The model family has also been optimized to reduce deployment costs. “The 27B model is optimized to run on NVIDIA GPUs or can run efficiently on a single TPU host in Vertex AI, making deployment more accessible and cost-effective for a wider range of users.”


Gemma 2 neck and neck with Llama 3

To illustrate its models' performance, Google relied on Hugging Face's Open LLM Leaderboard. The published chart shows Gemma 2 trailing Llama 3 70B slightly on various benchmarks such as MMLU (which tests both world knowledge and problem-solving ability), HellaSwag (which assesses natural language understanding and common-sense reasoning) and GSM8K (which tests mathematical problem solving).

By comparison, the 314-billion-parameter Grok-1 model performs worse despite its size. It scores only 73% on MMLU, compared with 75% for Gemma 2 and 79.2% for Llama 3. On GSM8K, Grok-1 scores 62.9%, against 75% for Gemma 2 and 76.9% for Llama 3. Of course, these results may still change before the model family is released, as Gemma 2 is still in the pre-training phase.

PaliGemma, a vision-oriented model

At the same time, the Gemma family is also growing with PaliGemma, a vision-language model inspired by PaLI-3. It is built on open components, including the SigLIP vision model and the Gemma language model, and can perform many vision tasks such as image captioning, short-video captioning, text reading, object detection and segmentation.


PaliGemma is available on GitHub, Hugging Face, Kaggle and Vertex AI Model Garden, with easy integration via JAX and Hugging Face Transformers.

