
DeepSeek-R1, China's AI Challenger Levels Up

Welcome to this week's edition of Jumble, your go-to source for the latest in AI. This week, we’re diving deep into DeepSeek’s impressive new R1 model that’s shaking up the reasoning AI landscape, and Google’s quiet release of Gemma 3n, bringing powerful multimodal AI directly to your mobile device. It’s a packed week in AI, and here’s what you need to know ⬇️

In today’s newsletter:
🧠 DeepSeek impresses yet again
📱 Gemma 3n brings multimodal AI to your pocket
📹 Adobe enters the generative AI video conversation
🪫 Tesla to roll out full self-driving (FSD) in Austin
🎬 Weekly Challenge: Master AI video generation

🧠 DeepSeek Impresses Yet Again

Chinese tech startup DeepSeek has once again made waves in the AI world with the release of its updated R1 reasoning model, DeepSeek-R1-0528. This latest iteration, built upon the DeepSeek V3 Base model, showcases a significant leap in reasoning and inference capabilities, positioning it as a formidable competitor to leading international models. The update has garnered global attention, intensifying the rivalry with established players like OpenAI and Google.

📈 Benchmarks & Breakthroughs

DeepSeek-R1-0528 has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic. According to the company, its overall performance is now approaching that of top-tier models such as OpenAI's o3 and Google's Gemini 2.5 Pro. This advancement is particularly notable given the ongoing global competition in AI development.

One of DeepSeek's most compelling advantages continues to be its cost-effectiveness: DeepSeek-R1 operates at roughly 5% of the cost of OpenAI's o1, making it a highly attractive option for large-scale deployments and cost-sensitive projects. While o1 holds a slight edge in programming benchmarks, DeepSeek-R1 has shown superior performance in mathematical reasoning, scoring 97.3% on MATH-500 versus o1's 96.4%, a sign of its strength in complex problem-solving.

The model's open-source nature further enhances its appeal, allowing for extensive customization and adaptation to specific requirements or regional regulations. Its release on platforms like Hugging Face makes it readily accessible to developers worldwide. 
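
For those who want to experiment, here is a minimal sketch of querying an R1-family checkpoint through the transformers library. The full 0528 model is far too large for consumer hardware, so this assumes one of the smaller distilled checkpoints; the exact model ID is an assumption worth verifying on DeepSeek's Hugging Face page.

```python
# Minimal sketch: querying an R1-family checkpoint with Hugging Face transformers.
# The full DeepSeek-R1-0528 model is far too large for consumer hardware, so this
# assumes a smaller distilled checkpoint; the model ID below is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# R1 is a reasoning model: it emits a chain of thought before the final answer.
messages = [{"role": "user", "content": "Prove that the sum of two odd integers is even."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```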

On the LiveCodeBench leaderboard, DeepSeek's updated R1 reasoning model ranks just slightly behind OpenAI's o4 mini and o3 reasoning models on code generation, but impressively, it's ahead of xAI's Grok 3 mini and Alibaba's Qwen 3. This consistent performance underscores DeepSeek's growing influence in the global AI landscape and its potential to reshape the industry.

🌐 Global Impact & Future Outlook

DeepSeek's continuous innovation challenges the notion that certain regions are lagging in AI advancements, especially in the face of export controls. The company's ability to deliver high-performance models at a fraction of the cost is a game-changer, potentially democratizing access to advanced AI capabilities. As DeepSeek continues to refine its models and expand its reach, it will undoubtedly play a crucial role in shaping the future of artificial intelligence, fostering greater competition and driving further innovation across the industry.

📱 Gemma 3n Brings Multimodal AI to Your Pocket

Google quietly shipped Gemma 3n in preview, touting it as a “mobile‑first” sibling to Gemini Nano—but open weight and multimodal. The 4B‑parameter variant runs in 2 GB RAM thanks to DeepMind’s Per‑Layer Embeddings trick, handling text, images, audio, and even low‑res video on‑device.
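
For the curious, here is a minimal sketch of loading the preview weights with Hugging Face transformers. Both the model ID and transformers support for the new architecture are assumptions to verify while the model is preview-only.

```python
# Minimal sketch: trying Gemma 3n's open weights with Hugging Face transformers.
# Assumptions: the preview checkpoint ships under an ID like the one below and
# your installed transformers version supports the architecture.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3n-E4B-it",  # assumed preview model ID
    device_map="auto",
)

# Text-only prompt here; image and audio inputs go through the model's
# processor rather than a plain string.
messages = [{"role": "user", "content": "In one sentence, why does on-device inference help privacy?"}]
result = generator(messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])
```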

🚀 Why Developers Are Excited

Latency drops to sub‑100 ms on Pixel‑class silicon, enabling offline transcription, private visual search, or live translation without pinging the cloud. Early benchmarks show Gemma 3n beating Llama 3 8B on MMLU while sipping half the power. Google says the same architecture will power the next Gemini Nano update across Android and Chrome.

⚠️ Caution Flags

Because it’s preview‑only, the current license bars commercial release. And unlike Gemini Nano, Gemma 3n doesn’t include a policy enforcement layer—developers must bolt on filters themselves. Still, its open weights mean OEMs can finally ship fully private assistants without paying a per‑token fee.

What’s Next: Google’s roadmap hints at “Gemma 3n-Audio,” a speech-first variant, and “SignGemma” for sign-language translation. If the rollout sticks, 2025 could be remembered as the moment smartphones started running capable multimodal models in their own right.

This Week’s Scoop 🍦

🎯 Weekly AI Challenge: Master AI Video Generation

Challenge: Create a short AI-generated clip (5–15 seconds) using one of the latest publicly available video models: Google’s Veo 3, OpenAI’s Sora, or Runway Gen-3 Alpha.

1. Pick your tool

  • Google Veo 3 (Gemini Pro & Ultra): Adds native audio, dialogue, and sound FX support with lip‑sync. Generates up to 10‑second, 1080p@30 fps clips and is rolling out through Google Labs & Vertex AI Flow invites. 

  • OpenAI Sora (ChatGPT Plus/Team/Pro): Available to U.S. subscribers; the Pro tier enables 1080p video up to 20 seconds with watermark-free downloads. Praised for realistic physics and scene coherence.

  • Runway Gen‑3 Alpha: Web/iOS access with 5‑ or 10‑second duration, 1280 × 768 output, Turbo mode for low‑credit jobs, and built‑in style presets.

2. Draft a prompt

Apply the Subject · Style · Camera · Motion · Lighting formula:

“A bustling cyberpunk street market, Blade‑Runner style, slow dolly‑in, neon reflections, nighttime drizzle.”
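
If you iterate a lot, it can help to treat those five ingredients as explicit slots. A purely illustrative Python helper follows; no tool requires this structure, and the motion cue below is an added example:

```python
# Tiny illustrative helper for the Subject · Style · Camera · Motion · Lighting
# formula. None of these tools require structured prompts; keeping the slots
# explicit just makes it easier to change one ingredient per take.

def build_video_prompt(subject: str, style: str, camera: str,
                       motion: str, lighting: str) -> str:
    """Join the five prompt ingredients into one comma-separated prompt."""
    return ", ".join([subject, style, camera, motion, lighting])

print(build_video_prompt(
    subject="A bustling cyberpunk street market",
    style="Blade-Runner style",
    camera="slow dolly-in",
    motion="drifting steam and crowd movement",  # example motion cue (added slot)
    lighting="neon reflections, nighttime drizzle",
))
```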

3. Generate & refine

Run at least two takes, tweaking camera movement, speed, or lighting. Use Runway’s Video Extension or Sora’s alternate seed to iterate. In Veo 3, experiment with the new Flow timeline to layer audio cues.
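
Conceptually, seed iteration is just a loop over takes. The sketch below uses a hypothetical generate_clip stub, since each tool handles re-runs through its own UI or API; it shows the shape of the workflow, not a real call:

```python
# Shape of the iterate-and-compare loop. `generate_clip` is a hypothetical
# stand-in, not a real SDK call: each tool exposes its own UI or API for
# re-running a prompt with a different seed.
import random

def generate_clip(prompt: str, seed: int) -> str:
    """Hypothetical stub standing in for a text-to-video call; returns a fake clip ID."""
    return f"clip-{seed:04d}"

prompt = "A bustling cyberpunk street market, Blade-Runner style, slow dolly-in"
for seed in random.sample(range(10_000), k=3):  # three takes, three seeds
    print(f"seed {seed}: {generate_clip(prompt, seed)}  (compare takes, keep the best seed)")
```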

4. Share & reflect

Post your favorite clip on X/Twitter, Instagram, or in a group chat!

Need inspo? Watch Veo 3’s “Surf’s Up” demo, Sora’s papercraft city reel, or Runway’s “Origami Forest” showcase on their official pages.

That’s it for this week! Thank you for being a valued reader of Jumble! From the latest AI models to weekly challenges that expand your skills, we’re here for you. Stay tuned for more updates, and as always:

Stay informed, stay curious, and stay ahead with Jumble!

Zoe from Jumble