Welcome to Jumble, your go-to source for AI news updates. This week, a massive source code leak at Anthropic has spawned the fastest-growing GitHub project in history. Meanwhile, Google is raising the stakes for local AI with the launch of the powerful Gemma 4 model family. Let’s dive in ⬇️

In today’s newsletter:
🥷🏻 Massive source code exposure fuels developer frenzy
📱 Google brings flagship intelligence to your smartphone
🚀 SpaceX prepares for a record-breaking public debut
🎬 OpenAI sets a sunset date for Sora video
💰 Weekly Challenge: Give Gemma 4 a try

🔓 The Fastest Growing Project in GitHub History

A missing configuration file caused the accidental release of 500,000 lines of source code for Claude Code, Anthropic's premier AI agent. Inspired by the leak, developer Sigrid Jin quickly launched Claw-code, which claimed the platform's fastest-growth record with 100,000 stars in a single day.

Anthropic confirmed the leak stemmed from human error during the packaging process. Despite thousands of takedown notices, the genie is out of the bottle.

Which AI company do you trust the most with your data?


🕵️ Inside the Leaked Internal Architecture

The exposure offered an unprecedented look at the internals of the Claude Code tool, revealing 44 unreleased features. These include a "Kairos" daemon for 24/7 operations and an "undercover mode" hiding AI contributions from version history.

Sigrid Jin created a clean-room rewrite of the leaked code using AI assistance to recreate logic in Rust and Python. This allowed the community to begin analyzing the massive source code leak without hosting stolen TypeScript files.

💻 The Future of Open Source Agents

Early reviewers testing the capabilities of the new agent noted high complexity and missing tests. Still, the leak has shifted the conversation around proprietary AI harnesses and what it actually takes to handle complex coding tasks.

🧠 Google Delivers Flagship Power to the Edge

Google has officially launched Gemma 4, its most advanced family of open-weight AI models to date. This release marks a significant milestone in accessible AI, offering multimodal capabilities that handle text, image, video, and audio across four distinct model sizes. The family ranges from an effective 2B-parameter version up to a massive 31B dense variant.

The smallest models are specifically optimized for mobile use, allowing developers to build apps running locally on an Android smartphone without requiring a constant internet connection.

⚡ Unprecedented Intelligence Per Parameter

This release is being hailed as the most advanced open AI model family for reasoning and agentic workflows. By providing context windows of up to 256K tokens, Google is enabling developers everywhere to process vast amounts of information locally.

🌍 A New Standard For Open Weights

Early benchmarks show the 31B and 26B variants posting strong results on the AI Arena leaderboard, frequently outperforming much larger closed-source rivals. Industry analysts suggest Google is rethinking the global AI model race by prioritizing accessibility and edge-device efficiency over sheer parameter count.

Weekly Scoop 🍦

🎯 Weekly Challenge: Run Google's Gemma 4 Yourself

Challenge: Google just dropped Gemma 4, its most powerful open-weight AI model family. It's free, it's multimodal, and you can run it on your own computer. This week, take it for a spin.

Here's what to do:

📥 Step 1: Pick your path. If you want to run AI locally, download Ollama from ollama.com (Mac, Windows, or Linux). If you'd rather skip the install, head to aistudio.google.com and sign in with your Google account.

💻 Step 2: Fire it up. Ollama users: open your terminal and type ollama run gemma4. That's it. Google AI Studio users: start a new prompt and select Gemma 4 from the model dropdown.

🧪 Step 3: Give it a real task. Don't just say "hi." Try something like: "Explain how a neural network learns, step by step, like I'm 12." See how it handles reasoning compared to ChatGPT or Claude.

📸 Step 4: Test the multimodal features. Upload a photo, screenshot, or chart and ask Gemma 4 to analyze it. This is where open models are catching up fast.

🤔 Step 5: Decide if local AI is for you. Running a model on your own machine means no subscriptions, no data leaving your device, and no usage limits. The tradeoff? You need decent hardware and a little patience.
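If you'd rather script the challenge than type into a terminal, Ollama also exposes a local REST API (by default at localhost:11434) that you can call from Python. Here's a minimal sketch, assuming the model tag is "gemma4" as in Step 2 and that you've already pulled the model with Ollama:

```python
import json
import urllib.request

# Ollama's default local endpoint for single-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> dict:
    """Assemble the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON reply instead of a
    token-by-token stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def ask(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # "gemma4" is the tag from Step 2 of the challenge; requires
    # `ollama run gemma4` (or `ollama pull gemma4`) to have completed first.
    print(ask("gemma4", "Explain how a neural network learns, step by step, like I'm 12."))
```

For Step 4, the same endpoint accepts an "images" field containing base64-encoded image data alongside the prompt, so you can probe the multimodal side from the same script.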

From record-breaking code leaks to flagship models running on your phone, the line between "closed" and "open" AI is blurring faster than ever. What do you think about the Claude Code leak? See you next time! 🚀

Stay informed, stay curious, and stay ahead with Jumble!

Zoe from Jumble

Keep Reading