Welcome to Jumble, your go-to source for AI news updates. This week, OpenAI shipped GPT-5.5 with Images 2.0 baked in, and hours later DeepSeek dropped its biggest model in over a year, open-source and running entirely on Chinese chips. Let’s dive in ⬇️
In today’s newsletter:
⚡ GPT-5.5 drops, already flagged as a cyber risk
🧬 DeepSeek V4 arrives open-source, 1M tokens
🍎 Gemini set to power Apple Intelligence and Siri
🗡️ Microsoft joins Meta in slashing thousands of jobs
🏆 Weekly Challenge: GPT-5.5 vs DeepSeek V4 on Arena
🛬 GPT-5.5 Lands as OpenAI's Biggest Model Yet
OpenAI released GPT-5.5 on April 23, just seven weeks after GPT-5.4, marking its fastest release cadence to date. It's live now for Plus, Pro, Business, and Enterprise users in ChatGPT and Codex, with API access rolling out soon.
Have you tried GPT-5.5 yet?
🧠 It Handles the Whole Task, Not Just the Next Step
GPT-5.5 can take a messy, multi-step task and plan, use tools, and check its own work without prompting at every step. Biggest gains are in agentic coding, computer use, and scientific research. Plus, the new ChatGPT Images 2.0 model is now integrated directly into GPT-5.5's reasoning pipeline.
🚨 Not Everyone's Impressed
The API price doubled to $5 per million input tokens and $30 per million output tokens, the biggest jump in the GPT-5.x series. Independent testing put GPT-5.5 at an 86% hallucination rate on AA-Omniscience vs 36% for Claude Opus 4.7, meaning it's far more willing to confidently make things up. Claude also outperforms it on real-world coding benchmarks like SWE-Bench.
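To see what that price jump means in practice, here's a back-of-envelope sketch using the newsletter's figures. It assumes the $5/$30 rates apply to input and output tokens respectively, and that "doubled" means GPT-5.4 charged half those rates; the session sizes are made up for illustration.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float, out_rate: float) -> float:
    """Dollar cost of one request at per-million-token rates."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# A hypothetical agentic coding session: 200k tokens in, 50k out.
gpt55 = request_cost(200_000, 50_000, in_rate=5.0, out_rate=30.0)
gpt54 = request_cost(200_000, 50_000, in_rate=2.5, out_rate=15.0)
print(f"GPT-5.5: ${gpt55:.2f}  GPT-5.4: ${gpt54:.2f}")
# GPT-5.5: $2.50  GPT-5.4: $1.25
```

At agentic-workload scale, that doubling compounds fast: the per-request delta looks small, but heavy tool-use loops burn millions of tokens a day.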
🐳 DeepSeek V4 Drops the Same Day, Open Source and Built on Huawei Chips
Hours after GPT-5.5, DeepSeek released a preview of V4 on Hugging Face. It comes in two flavors: V4-Pro (1.6 trillion parameters, 49B active) and V4-Flash (284B total, 13B active). Both are MIT-licensed and support a one million token context window.
🔧 No Nvidia Required
V4-Pro beats all rival open models on math and coding, trailing only Gemini 3.1 Pro on world knowledge. Huawei confirmed its Ascend 950 chips fully support V4 out of the box, making it deployable at scale on domestic Chinese hardware without touching Nvidia.
📉 The Gap Is Closing, Not Gone
V4 still trails GPT-5.4 and Gemini 3.1 Pro by about three to six months on benchmarks. But research out of Stanford this year concluded Chinese companies have "effectively closed" the performance gap with US rivals. Inference costs are expected to land well below GPT-5.5, which has always been DeepSeek's strongest argument.
Diskless, Kafka-Compatible Streaming That Runs in Your Cloud
WarpStream BYOC is a diskless, stateless Kafka-compatible streaming platform. No local disks, no inter-AZ fees, no broker rebalancing. Your data stays in your own cloud, and agents scale automatically.
Robinhood uses it for logging. Cursor runs AI telemetry on it. Grafana Labs streams at 7.5 GiB/s with zero cross-AZ fees. Change one URL, keep all your existing clients. Learn more, or sign up for free.
Get $400 in credits that never expire. No credit card required to start.
Weekly Scoop 🍦
🎯 Weekly Challenge: GPT-5.5 vs DeepSeek V4
Challenge: Two flagship models dropped the same day. This week we're using Arena.ai to find out which one actually wins in a blind fight.
Here’s what to do:
🥊 Step 1: Open Battle Mode Head to arena.ai and select Battle Mode. Two models appear side by side, anonymously, so you judge the output before either is revealed.
📋 Step 2: Run three prompts Use one from each category: coding ("write a Python script that summarizes news headlines"), reasoning ("a train leaves Chicago at 9am..."), and creative ("write the opening of a thriller set inside an AI lab"). Keep the prompts identical across rounds for a fair comparison.
🔍 Step 3: Vote blind, then reveal Pick the better response before clicking reveal. Note where each model pulled ahead: speed, depth, or writing quality.
📤 Step 4: Rinse and repeat Choose other models you want to test head-to-head before paying a monthly subscription.
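For the coding round, it helps to have a baseline in mind before you vote. Here's one minimal, standard-library-only take on the Step 2 prompt ("write a Python script that summarizes news headlines"), so you can judge both models' answers against something concrete. The headlines and stopword list are illustrative, not from any real feed.

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "in", "on", "of", "to", "and", "for", "with", "its"}

def keywords(text: str) -> list[str]:
    """Lowercase word tokens, minus common stopwords."""
    return [w for w in re.findall(r"[a-z0-9.\-]+", text.lower())
            if w not in STOPWORDS]

def summarize(headlines: list[str], top_n: int = 3) -> list[str]:
    """Return the top_n headlines that share the most frequent
    keywords with the rest of the corpus."""
    freq = Counter(w for h in headlines for w in keywords(h))
    return sorted(headlines,
                  key=lambda h: sum(freq[w] for w in keywords(h)),
                  reverse=True)[:top_n]

headlines = [
    "OpenAI ships GPT-5.5 with Images 2.0 built in",
    "DeepSeek releases V4 open source on Huawei chips",
    "Gemini set to power Apple Intelligence",
    "Microsoft joins Meta in slashing jobs",
]
for h in summarize(headlines, top_n=2):
    print("-", h)
```

If a model's answer handles edge cases this sketch ignores (deduplication, real tokenization, ranking ties), that's a point in its favor.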
Will DeepSeek soon overtake the likes of OpenAI? And are the AI model wars just a preview of the real-world conflicts these systems will be used in? See you next time! 🚀
Stay informed, stay curious, and stay ahead with Jumble!
Zoe from Jumble



