On Saturday, Meta Platforms (META.O) unveiled the latest iterations of its large language model (LLM) family, named Llama 4 Scout and Llama 4 Maverick.
According to Meta, Llama is a multimodal AI system, meaning it can process and combine data in various formats such as text, video, images, and audio, and can convert content between these different types.
In its announcement, Meta described Llama 4 Scout and Llama 4 Maverick as its “most advanced models to date,” claiming they set a new standard in the field of multimodality.
Meta also revealed that both Llama 4 Maverick and Llama 4 Scout will be available as open-source software. Additionally, the company provided a preview of Llama 4 Behemoth, branding it "one of the smartest LLMs globally and our most powerful yet," which will serve as a teacher model for the new releases.
Following the success of OpenAI’s ChatGPT, which dramatically transformed the tech industry and spurred significant investment in machine learning, major tech companies have been ramping up their investments in AI infrastructure.
Reports from The Information indicated on Friday that Meta had postponed the launch of its LLM’s latest version due to underwhelming performance in technical evaluations, particularly in areas such as reasoning and mathematics.
The report also noted concerns that Llama 4 was not as adept as OpenAI's models at conducting humanlike voice conversations.
This year, Meta intends to invest up to $65 billion to bolster its AI infrastructure, as investors press tech companies to demonstrate returns on their AI spending.