How Meta’s Llama 3 will impact the future of AI
February 26, 2024
In January of 2024, Meta CEO Mark Zuckerberg announced in an Instagram video that Meta AI had recently begun training Llama 3. This latest generation of the Llama family of large language models (LLMs) follows the Llama 1 models (originally stylized as “LLaMA”) released in February 2023 and the Llama 2 models released in July of that year.
Though specific details (like model sizes or multimodal capabilities) have not yet been announced, Zuckerberg indicated Meta’s intent to continue to open source the Llama foundation models.
Read on to learn what we currently know about Llama 3 and how it might affect the next wave of advancements in generative AI models.
When will Llama 3 be released?
No release date has been announced, but it’s worth noting that Llama 1 took three months to train and Llama 2 took about six months. Should the next generation of models follow a similar timeline, they would be released sometime around July 2024.
Having said that, there’s always the possibility that Meta allots extra time for fine-tuning and ensuring proper model alignment. Increasing access to generative AI models empowers more entities than just enterprises, startups and hobbyists: as open source models grow more powerful, more care is needed to reduce the risk of models being used for malicious purposes by bad actors. In his announcement video, Zuckerberg reiterated Meta’s commitment to “training [models] responsibly and safely.”
Will Llama 3 be open source?
While Meta granted access to the Llama 1 models free of charge on a case-by-case basis to research institutions for exclusively noncommercial use cases, the Llama 2 code and model weights were released with an open license allowing commercial use for any organization with fewer than 700 million monthly active users. While there is debate regarding whether Llama 2’s license meets the strict technical definition of “open source,” it is generally referred to as such. No available evidence indicates that Llama 3 will be released any differently.
In his announcement and subsequent press, Zuckerberg reiterated Meta’s commitment to open licenses and democratizing access to artificial intelligence (AI). “I tend to think that one of the bigger challenges here will be that if you build something that’s really valuable, then it ends up getting very concentrated,” said Zuckerberg in an interview with The Verge. “Whereas, if you make it more open, then that addresses a large class of issues that might come about from unequal access to opportunity and value. So that’s a big part of the whole open-source vision.”
Will Llama 3 achieve artificial general intelligence (AGI)?
Zuckerberg’s announcement video emphasized Meta’s long-term goal of building artificial general intelligence (AGI), a theoretical development stage of AI at which models would demonstrate a holistic intelligence equal to (or superior to) that of human intelligence.
“It’s become clearer that the next generation of services requires building full general intelligence,” says Zuckerberg. “Building the best AI assistants, AIs for creators, AIs for businesses and more—that needs advances in every area of AI, from reasoning to planning to coding to memory and other cognitive abilities.”
This doesn’t necessarily mean that Llama 3 will achieve (or even attempt to achieve) AGI yet. But it does mean that Meta is deliberately approaching its LLM development and other AI research in a way that it believes may eventually yield AGI.
Will Llama 3 be multimodal?
An emerging trend in artificial intelligence is multimodal AI: models that can understand and operate across different data formats (or modalities). Rather than developing separate models to process text, code, audio, image or even video data, new state-of-the-art models—like Google’s Gemini or OpenAI’s GPT-4V, and open source entrants like LLaVa (Large Language and Vision Assistant), Adept or Qwen-VL—can move seamlessly between computer vision and natural language processing (NLP) tasks.
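To make the multimodal idea concrete, here is a minimal sketch of querying one of the open source vision-language models mentioned above (LLaVA) through the Hugging Face transformers library. The checkpoint ID, prompt template and image URL are illustrative assumptions rather than anything announced for Llama 3, and the exact API may vary with your transformers version.

```python
# Minimal sketch: asking an open source vision-language model (LLaVA) about an
# image via Hugging Face transformers. Checkpoint and prompt format follow the
# public llava-hf releases; both are assumptions, not details of Llama 3.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # illustrative checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Placeholder image URL; substitute any image you want the model to describe.
image = Image.open(requests.get("https://example.com/photo.jpg", stream=True).raw)
prompt = "USER: <image>\nDescribe what is happening in this picture. ASSISTANT:"

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```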
While Zuckerberg has confirmed that Llama 3, like Llama 2, will include code-generating capabilities, he did not explicitly address other multimodal capabilities. He did, however, discuss how he envisions AI intersecting with the Metaverse in his Llama 3 announcement video: “Glasses are the ideal form factor for letting an AI see what you see and hear what you hear,” said Zuckerberg, in reference to Meta’s Ray-Ban smart glasses. “So it’s always available to help out.”
This would seem to imply that Meta’s plans for the Llama models, whether in the upcoming Llama 3 release or in the following generations, include the integration of visual and audio data alongside the text and code data the LLMs already handle.
This would also seem to be a natural development in the pursuit of AGI. “You can quibble about if general intelligence is akin to human-level intelligence, or is it like human-plus, or is some far-future super intelligence,” he said in his interview with The Verge. “But to me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition.”
How will Llama 3 compare to Llama 2?
Zuckerberg also announced substantial investments in training infrastructure. By the end of 2024, Meta intends to have approximately 350,000 NVIDIA H100 GPUs, which would bring Meta’s total available compute resources to “600,000 H100 equivalents of compute” when including the GPUs they already have. Only Microsoft currently possesses a comparable stockpile of computing power.
It’s thus reasonable to expect that Llama 3 will offer substantial performance advances relative to Llama 2 models, even if the Llama 3 models are no larger than their predecessors. As hypothesized in a March 2022 paper from DeepMind and subsequently demonstrated by models from Meta (as well as other open source models, like those from France-based Mistral), training smaller models on more data yields greater performance than training larger models on less data.[iv] Llama 2 was offered in variants with 7 billion, 13 billion and 70 billion parameters—similar in scale to the Llama 1 models—but was pre-trained on 40% more data.
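As a back-of-the-envelope illustration of that scaling argument, the sketch below applies the commonly cited “Chinchilla” rule of thumb of roughly 20 training tokens per parameter, together with the standard approximation of about 6 FLOPs per parameter per token. The constants are rough heuristics from the research literature, not figures Meta has published.

```python
# Illustrative back-of-the-envelope arithmetic for compute-optimal training.
# Assumes the rough "Chinchilla" heuristic (~20 tokens per parameter) and the
# common approximation of ~6 training FLOPs per parameter per token.

def chinchilla_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Rough compute-optimal token budget for a model with n_params parameters."""
    return n_params * tokens_per_param

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

for n_params in (7e9, 13e9, 70e9):  # the Llama 2 parameter counts
    n_tokens = chinchilla_optimal_tokens(n_params)
    print(f"{n_params / 1e9:.0f}B params -> ~{n_tokens / 1e12:.2f}T tokens, "
          f"~{training_flops(n_params, n_tokens):.2e} training FLOPs")
```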
While Llama 3 model sizes have not yet been announced, it’s likely that they will continue the pattern of increasing performance within 7–70 billion parameter models that was established in prior generations. Meta’s recent infrastructure investments will certainly enable even more robust pre-training for models of any size.
Llama 2 also doubled Llama 1’s context length, from 2,048 to 4,096 tokens, meaning Llama 2 can “remember” twice as many tokens’ worth of context during inference—that is, during the generation of content or an ongoing exchange with a chatbot. It’s possible, albeit uncertain, that Llama 3 will offer further progress in this regard.
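For readers who want to check a model’s context window directly, the short sketch below reads it from the model configuration on Hugging Face, where Llama-family configs expose it as max_position_embeddings (2,048 for Llama 1, 4,096 for Llama 2). It assumes you have been granted access to the gated meta-llama repository.

```python
# Minimal sketch: reading a model's maximum context length from its Hugging Face
# config. Assumes access to the gated meta-llama repo (huggingface-cli login).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("meta-llama/Llama-2-7b-hf")
print("Llama 2 context length:", config.max_position_embeddings)  # expected: 4096
```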
How will Llama 3 compare to OpenAI’s GPT-4?
While the smaller LLaMA and Llama 2 models met or exceeded the performance of the larger, 175 billion parameter GPT-3 model across certain benchmarks, they did not match the full capabilities of the GPT-3.5 and GPT-4 models offered in ChatGPT.
With its incoming generations of models, Meta seems intent on bringing cutting-edge performance to the open source world. “Llama 2 wasn’t an industry-leading model, but it was the best open-source model,” Zuckerberg told The Verge. “With Llama 3 and beyond, our ambition is to build things that are at the state of the art and eventually the leading models in the industry.”
Preparing for Llama 3
With new foundation models come new opportunities for competitive advantage through improved apps, chatbots, workflows and automations. Staying ahead of emerging developments is the best way to avoid being left behind: embracing new tools empowers organizations to differentiate their offerings and provide the best experience for customers and employees alike.
Through its partnership with HuggingFace, IBM watsonx™ supports many industry-leading open source foundation models—including Meta’s Llama 2-chat. Our global team of over 20,000 AI experts can help your company identify which tools, technologies and techniques best fit your needs to ensure you’re scaling efficiently and responsibly.
Learn how IBM helps you prepare for accelerating AI progress