Meta aims to take on OpenAI with Llama 3 and Meta AI

Its release had been awaited for weeks. Meta has (finally) unveiled Llama 3, the latest iteration of its open-source Llama family of models. Two pretrained and instruction-fine-tuned models, at 8B and 70B parameters, have been released with the ability to support a wide range of use cases. “The text models we are releasing today are the first in the Llama 3 collection of models. Our goal in the near future is to make Llama 3 multilingual and multimodal, have longer context, and continue to improve overall performance across core LLM capabilities such as reasoning and coding,” Meta specifies.

Meta does not hesitate to boast about the performance of its two models, stating that they are “the best models in existence today at the 8B and 70B parameter scale. Improvements to our post-training procedures have significantly reduced false rejection rates, improved alignment, and increased the diversity of model responses. We also saw significantly improved capabilities such as reasoning, code generation and instruction following, making Llama 3 more steerable.”


Performance that surpasses that of Mistral AI and Google models

To judge its performance, the giant relied on an evaluation set containing 1,800 prompts covering 12 key use cases: asking for advice, brainstorming, classification, answering closed questions, coding, creative writing, extraction, inhabiting a character/persona, answering open questions, reasoning, rewriting and summarizing. Meta chose to compare the Instruct version of its Llama 3 8B model to Gemma 7B-It and Mistral 7B Instruct. Meta's model comes out far ahead, with scores well above its competitors in every test.

Regarding Llama 3 70B Instruct, Meta chose to compare it to Gemini Pro 1.5 and Claude 3 Sonnet. Google's model outperforms Meta's on two tests, namely GPQA and MATH. The pretrained version widens the gap with the Anthropic and Google models even further.

Meta attributes this to having pre-trained Llama 3 on over 15 trillion tokens collected from publicly available sources. “Our training dataset is seven times larger than the one used for Llama 2 and includes four times more code.”


Llama 3 soon to be multilingual and available on different platforms

Meta also plans to orient its LLM towards multilingual use cases. To that end, over 5% of Llama 3's pretraining dataset consists of high-quality non-English data covering more than 30 languages. The firm admits, however, that it does not expect “the same level of performance in these languages as in English”.

Its ambitions don't stop there: in the coming months, Meta plans to introduce other features, including longer context windows, additional model sizes and improved performance. At the same time, training of a 400B-parameter Llama 3 model has begun.

(Image: Llama 3 400B – Meta)

Note that the Llama 3 models will soon be available on AWS, Google Vertex AI, Hugging Face, Kaggle, IBM WatsonX, Microsoft Azure, NVIDIA and Snowflake, with support for hardware platforms offered by AMD, AWS, Dell, Intel, NVIDIA and Qualcomm.
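For readers who want to try the open weights directly, here is a minimal sketch of querying the 8B Instruct model through the Hugging Face transformers library. The model ID, the chat-template call and the generation settings are assumptions based on standard Hugging Face usage rather than details from Meta's announcement, and access to the weights requires accepting the Llama 3 license on the Hub.

    # Minimal sketch (not from the article): loading Llama 3 8B Instruct via Hugging Face.
    # Assumes the transformers and accelerate libraries are installed and that the
    # meta-llama/Meta-Llama-3-8B-Instruct repository name is correct and its license accepted.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed model ID on the Hub

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Build a chat prompt with the model's built-in chat template, then generate a reply.
    messages = [{"role": "user", "content": "Summarize the Llama 3 release in one sentence."}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(input_ids, max_new_tokens=128)
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))

The same checkpoints can also be served through the managed platforms listed above; the snippet simply illustrates the most direct, local route.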

With Llama 3, Meta pushes its AI assistant into all its social networks

Facebook, Instagram, Messenger, WhatsApp: no social network will escape the integration of the AI assistant, Meta AI, based on Llama 3. Completely free, Meta AI is already available in the United States, and the firm plans to roll it out through its applications to more countries: Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Uganda, Pakistan, Singapore, South Africa, Zambia and Zimbabwe. While Meta is still far from reaching all of its users, the giant is nevertheless getting closer to its goal of ultimately reaching its more than 3 billion daily users.

In detail, Meta AI is available in the feeds, conversations and even search bars of the different applications – identified by @MetaAI – to carry out tasks and access information in real time, without users having to leave the app. For example, the assistant can be used to plan a dinner with the ingredients available in your refrigerator, to study for an exam or to plan a weekend away.

At the same time, the firm also offers image generation with its AI assistant. It is now possible to create album covers, interior-decoration inspiration or even personalized animated GIFs.

Meta takes aim at OpenAI

If any doubt still persisted as to Meta's ability to reinvent itself and evolve all of its applications, it has now been dispelled. With its Meta AI assistant, the firm takes a major step in building out its technological empire and goes head to head with OpenAI and its flagship tool ChatGPT.

The assistant even integrates search results from Bing and Google, proof of the tool's ability to respond to all of its users' requests. Users therefore no longer need to switch from one application to another, let alone open a browser: all their answers can now be found within Meta's applications.


