Meta has officially released the first models in its latest large language model family, Llama 4. The newly launched versions, called Llama 4 Scout and Llama 4 Maverick, are now available for download and integration. According to the company, these models are designed to handle a range of tasks across text and image processing, coding, and long-context understanding.
The release of Scout and Maverick is part of Meta’s broader expansion into multimodal AI. These models are already integrated into Meta AI systems, including platforms like WhatsApp, Messenger, and Instagram direct messages, allowing users to interact with them in everyday chat experiences.
Llama 4 Maverick, the more capable of the two models, is positioned for general-purpose assistant use cases. It is tailored for tasks that combine image and text understanding, supporting everything from conversational AI applications to customer-service interactions. Maverick uses a mixture-of-experts architecture with 17 billion active parameters and 128 experts, a configuration that Meta says balances performance and scalability. Internal benchmarks suggest it matches or surpasses other major AI systems on multilingual processing, code generation, reasoning, and long-context input.
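The "17 billion active parameters, 128 experts" figure describes a mixture-of-experts design: for each token, a router activates only a few expert networks out of the full set, so the parameters actually used per token are far fewer than the total. A minimal sketch of top-k expert routing, with toy dimensions and a gating scheme that are illustrative assumptions rather than Meta's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 8        # hidden size (toy value; real models are far larger)
N_EXPERTS = 128    # total experts, as reported for Maverick
TOP_K = 1          # experts activated per token (assumed for illustration)

# Each "expert" is a tiny linear layer; a router scores experts per token.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(N_EXPERTS)]
router = rng.standard_normal((D_MODEL, N_EXPERTS))

def moe_forward(token: np.ndarray) -> np.ndarray:
    """Route a token vector to its top-k experts and mix their outputs."""
    logits = token @ router                # one routing score per expert
    top = np.argsort(logits)[-TOP_K:]      # indices of the selected experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the chosen experts only
    # Only TOP_K of the 128 experts run for this token -- this is why
    # "active" parameters are far fewer than total parameters.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

out = moe_forward(rng.standard_normal(D_MODEL))
```

The design choice this illustrates is the performance/scalability balance the article mentions: total capacity grows with the number of experts, while per-token compute stays pinned to the few experts the router selects.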
Llama 4 Scout, by contrast, is a lighter model with significant capabilities of its own. Also built on 17 billion active parameters but with just 16 experts, Scout is designed for more targeted applications, including multi-document summarization, complex data parsing, and personalization tasks over large amounts of user data. Notably, it supports an extended context window of up to 10 million tokens, letting it process large bodies of text without truncation. It has also been engineered to run efficiently on a single GPU, making it accessible to developers and researchers who lack advanced infrastructure.
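To put the 10-million-token window in perspective, a rough back-of-the-envelope conversion helps; the characters-per-token and characters-per-page ratios below are common rules of thumb, not figures from Meta:

```python
# Rough scale of a 10-million-token context window.
# Assumptions (illustrative): ~4 characters per token for English text,
# ~3,000 characters per printed page.
CONTEXT_TOKENS = 10_000_000
CHARS_PER_TOKEN = 4
CHARS_PER_PAGE = 3_000

total_chars = CONTEXT_TOKENS * CHARS_PER_TOKEN
pages = total_chars // CHARS_PER_PAGE
print(f"~{pages:,} pages of text fit in one context window")
```

Under these assumptions the window holds on the order of ten thousand pages, which is why the article frames Scout around multi-document summarization and parsing large personal archives.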
While these two models are already in the wild, Meta has also teased the arrival of Llama 4 Behemoth, a large-scale model currently in training. With a planned 288 billion active parameters, it’s being positioned as a high-performing foundation model for complex reasoning and computation tasks. Another model, Llama 4 Reasoning, is expected to be revealed within the next month, possibly aligning with Meta’s upcoming developer-focused event, LlamaCon.
With Scout and Maverick now available for use on platforms such as Hugging Face and the official Llama site, Meta continues its push to provide developers and users with more advanced tools for AI integration. As the race to build more capable, multimodal models intensifies, Meta’s Llama 4 suite signals an effort to stay competitive in a crowded AI landscape.