Welcome to Jumble, your go-to source for AI news updates. This week, OpenAI makes a massive move in the agent race by hiring the creator of OpenClaw to lead its personal AI efforts. Meanwhile, a high-stakes military operation in Venezuela reveals that Anthropic’s Claude is already on the front lines of classified missions. Let’s dive in ⬇️
In today’s newsletter:
👤 Startup founder joins Sam Altman's team
🚁 Special forces deploy intelligence models abroad
🕵🏻‍♂️ Foreign actors target high level systems
⚖️ Large studios fight back against unauthorized generation
🧭 Weekly Challenge: Catch a vibe with Google Mixboard
OpenAI just hired Peter Steinberger, the Austrian developer who vibe-coded OpenClaw into existence as a weekend project and accidentally created the fastest-growing GitHub repo of all time.
The project went through three names in a month: Clawdbot, then Moltbot, then OpenClaw. The renames came partly because Anthropic's legal team sent what Steinberger described as "love letters from legal" over the name sounding too much like Claude.
🏆 Everyone Wanted This Guy
Meta, Microsoft, and OpenAI all came calling. Satya Nadella reached out personally. Zuckerberg tested OpenClaw himself and messaged Steinberger on WhatsApp. OpenAI won because they agreed to keep the project open source, which was Steinberger's one non-negotiable.
🔥 The Internet Is Not Impressed
Hacker News is dragging this hire. Top comments point out that Steinberger, by his own admission, never read most of the code his AI wrote for him. Cisco's security team straight up called OpenClaw "an absolute nightmare," and researchers found a one-click remote code execution exploit that compromises an instance in milliseconds.
SecurityScorecard discovered over 135,000 exposed instances on the open internet, with 50,000+ vulnerable to known RCE attacks. Gartner told enterprises to block OpenClaw downloads immediately. One researcher called it "the largest security incident in sovereign AI history."
🧠 What OpenAI Actually Gets
It's not about the code. OpenAI's enterprise share dropped from 50% in 2023 to 27% by the end of 2025 while Anthropic climbed to 40%. Steinberger proved the market for always-on agents is real, not theoretical. The irony that OpenClaw was one of the biggest drivers of API traffic to Anthropic isn’t lost on anyone: Anthropic's legal threats may have pushed its most valuable ecosystem builder straight to its biggest competitor.
🛡️ The Security Mess Is Now OpenAI's Mess
OpenClaw's skill store is riddled with malware. Infostealers are already targeting its config files. Exposed instances have leaked plaintext API keys, credit card numbers, and full conversation histories.
One researcher compared giving OpenClaw system access to hiring someone with a history of identity theft who takes instructions from anyone. Steinberger says he wants to build an agent his mom can use. Security experts say the average user should stay away entirely.
🎖️ Claude AI Was Used in the Maduro Raid
The US military used Anthropic's Claude during the January operation that captured Venezuelan President Nicolás Maduro and his wife. The raid included overnight strikes across Caracas and explosions at the Fuerte Tiuna military complex, and it ended with Maduro being flown to New York to face narcoterrorism charges.
Cuba confirmed 32 of its soldiers were killed. Claude was deployed through Palantir and used during the active operation, not just preparation.
😬 Anthropic's Safety Brand Just Took a Direct Hit
This is the company that built its identity around responsible AI. Its usage policy explicitly bans facilitating violence, developing weapons, and surveillance. After the raid, an Anthropic employee reportedly asked Palantir whether Claude had been used; according to a Pentagon official, the question implied the company might disapprove. That single inquiry may have triggered the entire fallout.
⚔️ The Pentagon Wants Unconditional Access
Defense Secretary Hegseth said in January the Pentagon won't "employ AI models that won't allow you to fight wars." The Pentagon is pushing all four major labs to accept usage terms covering "all lawful purposes" with zero restrictions.
One of the three non-Anthropic companies has already agreed. Anthropic's two hard lines: no mass surveillance of Americans, no fully autonomous weapons. The Pentagon says those terms create unworkable gray areas.
🏛️ The $200 Million Contract Is on the Line
Hegseth is reportedly weighing designating Anthropic as a "supply chain risk," which would force every Pentagon contractor to certify they don't use Claude. Given that eight of the ten largest US companies use Claude, that designation would send shockwaves way beyond defense.
🤫 The Part the Pentagon Won't Say Out Loud
Claude is currently the only AI model on the Pentagon's classified networks. Officials privately concede competing models are "just behind" for specialized government work. The head of Anthropic's Safeguards Research Team also just resigned. The Pentagon needs Anthropic more than the tough talk suggests, at least for now.
Weekly Scoop 🍦
🎯 Weekly Challenge: Learn How to Use Google Mixboard
Challenge: This week, we’re mastering Google Mixboard, an AI mood board tool perfect for interior designers visualizing renovations, marketers mocking up branding in situ, or event planners pitching concepts without a photoshoot.
Here is your 4-step guide to mixing realities:
1 🧱 Generate the Venue: Type a prompt for a specific environment to serve as your base layer, such as "Minimalist concrete coffee shop with empty walls."
2 🛋️ Isolate the Asset: Generate a distinct object (e.g., "Neon pink logo sign"), click the image, and select Remove Background to turn it into a transparent asset.
3 🔀 The Mixboard Merge: Hold Shift to select both your Venue and your Asset, then prompt Mixboard to "Mount the neon sign on the back wall."
4 🎨 Curate and Export: Refine the result by clicking Regenerate if the perspective needs adjustment, then download your polished concept art.
Will personal AI agents finally replace our traditional apps, or are the security risks too high? And should safety-focused AI companies be forced to support military operations? See you next time! 🚀
Stay informed, stay curious, and stay ahead with Jumble!
Zoe from Jumble





