Meta released Llama 4.
What is different compared to their previous models?
From Meta’s website: "Meta's Llama 4 models are cutting-edge multimodal AI systems designed to handle both text and image inputs seamlessly. They employ a mixture-of-experts (MoE) architecture, which activates only a subset of parameters for each input, enhancing efficiency and scalability."
In practice, this means only a small subset of the model’s parameters is activated for each request. The activated parameters correspond to one or more "experts," and an internal router decides which experts handle each input. MoE itself is not new, but it is really interesting to see it pushed to this extent. You can request access to the models at https://www.llama.com/llama-downloads/; I will also leave a link in the description. And, something we have not seen before: Llama 4 Behemoth pushes the parameter count to 2 trillion, an outstanding number for the industry. A toy sketch of how the routing works follows below.
Source: https://ai.meta.com/blog/llama-4-multimodal-intelligence/
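To make the routing idea concrete, here is a toy mixture-of-experts layer in PyTorch. This is only an illustrative sketch, not Meta’s implementation; the dimensions, expert count, and top-2 routing are assumptions chosen for readability.

```python
# Toy mixture-of-experts layer (illustrative only, not Meta's implementation).
# A learned router scores the experts for each token; only the top-k experts
# actually run, so most parameters stay idle on any given input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    def __init__(self, dim=64, num_experts=16, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # one score per expert
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, dim)
        probs = F.softmax(self.router(x), dim=-1)         # routing probabilities
        weights, chosen = probs.topk(self.top_k, dim=-1)  # top-k experts per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for k in range(self.top_k):
                mask = chosen[:, k] == e  # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

moe = ToyMoE()
print(moe(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```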
Let’s discuss the key differences between the models.
Key Features of Llama 4 Models:
* Models:
* Llama 4 Scout: A 17-billion active parameter model with 16 experts and a massive 10-million-token context window. It excels at tasks requiring long-context analysis, such as multi-document summarization and codebase reasoning (see the loading sketch after this list).
* Llama 4 Maverick: Also a 17-billion active parameter model but with 128 experts and a 1-million-token context window. This model balances multimodal capabilities (text/image) and creativity, making it ideal for chatbots and enterprise applications.
* Training Data: Both models were trained on a mix of publicly available data, licensed sources, and interactions from Meta platforms like Instagram and Facebook.
* Supported Languages: Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, Vietnamese.
* Release Date: April 5, 2025.
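Once your access is approved, loading Scout should look roughly like the snippet below. This is a minimal, untested sketch that assumes the gated Hugging Face repo linked in the references and the standard transformers text-generation pipeline; check the model card for the recommended usage and hardware requirements.

```python
# Minimal sketch: text generation with Llama 4 Scout via Hugging Face transformers.
# Assumes approved access to the gated repo and a recent transformers release.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    device_map="auto",  # spread the weights across the available GPUs
)

result = generator(
    "Summarize the mixture-of-experts idea in one sentence:",
    max_new_tokens=64,
)
print(result[0]["generated_text"])
```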
Differences Between Scout and Maverick:
Source: Screenshot of my comparison table, based on Meta’s blog post
I can’t wait for my access to these models to be approved so I can try them out. Let me know if you would like me to do a video or podcast episode testing the Llama 4 models.
Source: Created with Imagen 3 and Gemini 2.0 Flash using the prompt: "Can you generate a cute image with 4 llamas?"
Google News
This week Google surprised us by releasing Gemini 2.5 for free. It is their best model yet and supports a context window of up to 1 million tokens, with plans to expand to 2 million in the near future. It is an amazing model, and if you don’t want to spend money on OpenAI or Anthropic subscriptions, you can consider this option.
Google also added Imagen 3 to Google Slides, so now you can generate AI images for your slides without leaving the page. It’s a super cool and really useful feature: I used to leave Slides and go to other tools whenever I didn’t have a picture I could use. And please, if you do generate images, always cite the source.
OpenAI plans to go open source?
OpenAI surprised everyone this week with its announcement on April Fools’ Day: they are releasing image generation for free users. In their article, they mention that this might cause delays across users, since they are predicting an overload of requests to their servers. Also remarkable is their new open-model feedback form. OpenAI plans to release its first open-weight model since GPT-2, and they are requesting feedback from engineers and developers on it. The form, as well as all other information, is linked below. In addition, OpenAI released OpenAI Academy, an educational platform with resources on how to use OpenAI tools. I will leave a link below.
Apple Intelligence
This one is really exciting for me, since I am based in Europe: Apple Intelligence is officially rolling out in the EU and in multiple languages, such as French and German. On Vision Pro, Apple Intelligence can proofread and rewrite text, generate images, and create emojis with Genmoji. That could be cool to play with.
I quote from Apple’s blog post: "Apple Intelligence marks an extraordinary step forward for privacy in AI and is designed to protect users’ privacy at every step. It starts with on-device processing, and for requests that require access to larger models, Private Cloud Compute extends the privacy and security of iPhone into the cloud to unlock even more intelligence."
Privacy and security are the backbone of Apple’s products, and I am happy to see the company keep its promises, even if the features take longer to release.
Anthropic Rolls Out Claude for Education
Claude’s new education offering is designed to help students work through problems and develop critical thinking. Instead of giving direct answers to homework, it is meant to guide students while they study. In my opinion, this is a really good initiative to support the younger generation with AI and help them learn and, hopefully, retain knowledge. I am excited to see the results.
Disclaimer:
I don’t want to claim I found all of these myself: I also follow creators in the space, and I found most of these sources on the Future Tools page here: https://www.futuretools.io/news
This has been an extended episode. If you liked it, please like and share it with your friends on any platform you want. Thank you, and I hope you are having an incredible week.
If you found any of the things I shared helpful today, please subscribe.
References:
* Llama 4 Access: https://www.llama.com/llama-downloads/
* Llama 4 Blog: https://ai.meta.com/blog/llama-4-multimodal-intelligence/
* Hugging Face Llama 4: https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E-Instruct?inference_provider=novita
* Google Gemini 2.5: https://arstechnica.com/gadgets/2025/03/googles-new-experimental-gemini-2-5-model-rolls-out-to-free-users/
* Google Slides Update: https://blog.google/products/workspace/workspace-slides-visuals-ai-updates/
* OpenAI releases Image Generation GPT: https://techcrunch.com/2025/04/01/sam-altman-says-that-openais-capacity-issues-will-cause-product-delays/
* OpenAI Academy: https://academy.openai.com/home
* OpenAI Open Model Form: https://openai.com/open-model-feedback/
* Apple Intelligence: https://www.apple.com/newsroom/2025/03/apple-intelligence-features-expand-to-new-languages-and-regions-today/
* Apple visionOS Update: https://www.apple.com/newsroom/2025/03/apple-intelligence-comes-to-apple-vision-pro-today-with-visionos-2-4/
* Anthropic Claude Education: https://www.anthropic.com/news/introducing-claude-for-education
* Future Tools: https://www.futuretools.io/news