Apple Owes You Money but Refuses to Admit Any Wrong
This week, Apple opened a wallet‑sized portal in its $3 trillion fortress, agreeing to pay out $95 million over Siri recordings, while OpenAI quietly released GPT‑4.1 in ChatGPT for everyone to try. Let’s dive in ⬇️
In today’s newsletter:
🪙 Apple lawsuit could put money in your pocket
🤖 GPT‑4.1 finally lands in ChatGPT
⚡ Box releases next-level enterprise AI agents
🤑 Nvidia and AMD rake in cash in the Middle East
📚 Challenge: Learn when to use reasoning vs non‑reasoning models
💲 Why a Decade of "Hey Siri" Clips Now Comes With a Check
In 2021, California plaintiff Fumiko Lopez filed suit against Apple, alleging the company engaged in “unlawful and intentional interception and recording” of Siri voice‑assistant queries without user consent, then disclosed those recordings.
The suit followed Apple’s own 2019 admission in a statement titled “Improving Siri’s privacy protections,” where the company acknowledged it hadn’t fully lived up to its privacy ideals. Critics noted that anonymization failed to prevent human listeners from hearing personal conversations.
📜 Settlement Details
On December 31, 2024, Apple agreed to a $95 million settlement—while continuing to deny any wrongdoing. Under the terms, U.S. owners of Siri‑enabled devices (iPhone, iPad, Apple Watch, HomePod, Mac, iPod Touch, or Apple TV) who enabled Siri between September 17, 2014, and December 31, 2024, and experienced at least one unintended Siri activation during a private communication, may file a claim.
💰 Get Your Money!
You can visit the official portal to submit your Apple ID email and choose a payment method (PayPal, Venmo, or check).
You may file claims for up to five devices. The deadline to submit claims—and to opt out and preserve other legal rights—is July 2, 2025. Estimated payouts range from $16 to $65 per claimant, depending on overall participation.
If you owned or bought a Siri-enabled device during the last 10 years, Apple may owe you money. But time is running out to apply.
— 10TV (@10TV)
9:36 AM • May 10, 2025
🎯 Why It Matters for AI
While dollar figures grab headlines, the settlement underscores a deeper tension: modern AI assistants need user data to improve, yet consumers rightfully demand strong, transparent privacy controls.
With Apple gearing up to roll out on‑device generative‑AI features in iOS 19, regulators and users alike will scrutinize how voice data is collected, stored, and processed. The Siri settlement sets a precedent that privacy missteps, even unintentional ones, carry real financial and reputational costs, driving the industry toward clearer consent and tighter data‑handling practices.
🤩 GPT‑4.1 Quietly Arrives in ChatGPT
GPT‑4.1 is no longer just for developers: OpenAI has brought GPT‑4.1 and GPT‑4.1 mini out of the API and into ChatGPT. GPT‑4.1 delivers “4o‑style” latency on one‑third the compute, thanks to sparse Mixture‑of‑Experts routing. Early users report near‑instant responses for common tasks and a 30% drop in prompt costs for developers on the API. That efficiency also means less carbon per query; a small saving, but a symbolically important one as AI energy use surges.
By popular request, GPT-4.1 will be available directly in ChatGPT starting today.
GPT-4.1 is a specialized model that excels at coding tasks & instruction following. Because it’s faster, it’s a great alternative to OpenAI o3 & o4-mini for everyday coding needs.
— OpenAI (@OpenAI)
5:36 PM • May 14, 2025
This release also broadens accessibility. GPT‑4.1 now appears in the model picker for paid ChatGPT users, while GPT‑4.1 mini replaces GPT‑4o mini for everyone, including the free tier, closing the gap between developer‑only features and everyday usage. That’s a strong step toward mainstreaming high‑performance AI beyond the API.
🛠️ Capabilities You’ll Notice
The model gets a 128K context window, better code completion, and improved multilingual reasoning. In ChatGPT, that translates to fewer "I can’t see the attachment" moments: GPT‑4.1 can summarize PDFs up to 500 pages and retain reference points throughout a conversation. Plus, a new "chart" tool lets you paste CSVs and generate instant matplotlib‑grade visuals.
Another standout is function-calling reliability. GPT‑4.1 integrates more consistently with tools, APIs, and plugins—especially in environments where context juggling was previously unreliable. From answering follow-up queries mid-calculation to organizing in-window document references, coherence feels tighter and more useful.
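If you want to experiment with that function‑calling behavior outside ChatGPT, here is a minimal sketch using the OpenAI Python SDK. The `get_weather` tool, the prompt, and the surrounding setup are illustrative assumptions rather than anything from OpenAI’s announcement; treat it as a starting point, not a reference implementation.

```python
# Minimal function-calling sketch (assumes OPENAI_API_KEY is set in the environment).
# The get_weather tool below is a made-up example to show the request/response shape.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4.1",  # API model name; swap in whatever model you have access to
    messages=[{"role": "user", "content": "Do I need an umbrella in Tokyo today?"}],
    tools=tools,
)

# If the model chose to call the tool, its arguments arrive as structured JSON.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```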
🔍 What’s Still Missing
GPT‑4.1 isn’t the rumored GPT‑5. It still struggles with complex math proofs and occasionally drifts in role‑play scenarios. And because the architecture remains closed, researchers can’t verify safety claims. OpenAI promises "iterative" model cards later this summer, but critics want real‑time transparency portals—a standard that could become law if the EU’s AI Act spreads internationally.
This Week’s Scoop 🍦
🛡️ Google debuts Sec‑Gemini v1 to automate cyber incident response
💼 Legal startup Harvey AI seeks $5 billion valuation
📈 Box debuts new AI agent workforce
🌍 Nvidia and AMD land billion‑dollar AI chip deals in Gulf states
💰 Samaya AI raises $43.5 million for finance‑focused models
🤖 NTT DATA launches smart AI agent ecosystem
🔮 Challenge of the Week: Learn When to Use 'Thinking' vs. 'Non-Thinking' Models
Challenge: Learn when to use the right LLM (reasoning vs non-reasoning) for your tasks.
With so many models available, it’s easy to assume the most ‘powerful’ model is best for every task, but that’s not the case. Some models are weaker reasoners yet better writers, while others are sharper thinkers but awkward conversationalists.
Here’s how to choose the right type for the tasks you want to complete (a minimal routing sketch follows the two lists below):
🧐 Reasoning Models
Examples: DeepSeek‑R1, OpenAI o3, Gemini 2.5 Pro, Mistral Large
Strengths: Multi‑step logic, chain‑of‑thought, reliable tool use, large context windows (100k+ tokens), strong code and math.
Weaknesses: Higher price per token, slower latency, high GPU demand, can become verbose or over‑confident.
When to use: Complex coding tasks, drafting legal briefs, advanced tutoring, multi‑variable travel or supply‑chain planning.
Everyday examples: Ask o3 to write a research grant, have DeepSeek‑R1 solve a physics problem step‑by‑step, let Gemini 2.5 Pro build a two‑week itinerary with visa constraints.
⚡ Non‑Reasoning Models
Examples: GPT‑4.1, GPT‑4o, Gemini 2.5 Flash, Mistral Small, Llama‑3 70B
Strengths: Blazing speed, low cost, concise answers, stylistic flexibility, fits on‑device for privacy.
Weaknesses: Limited depth, shorter context, brittle on chain‑of‑thought tasks.
When to use: Social‑media copy, customer‑support macros, instant translation, meeting‑note summarization, rapid ideation.
Everyday examples: Use GPT‑4o to create slogan variants, run Gemini 2.5 Flash to translate emails, let Mistral turn bullet points into a friendly newsletter intro.
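To make that choice concrete, here is a tiny sketch of a task router that sends heavyweight work to a reasoning model and quick copy tasks to a faster, cheaper one. The keyword heuristic and the model names it returns are illustrative assumptions; in practice you would tune the rules (or use a small classifier) for your own workload.

```python
# Toy task router: reasoning model for multi-step work, fast model for everything else.
# Model names and keyword hints are illustrative assumptions, not recommendations.
REASONING_MODEL = "o3"       # multi-step logic, code, math, planning
FAST_MODEL = "gpt-4.1"       # quick drafts, translation, summaries

REASONING_HINTS = ("prove", "debug", "plan", "step-by-step", "optimize", "legal brief")

def pick_model(task: str) -> str:
    """Return a model name based on a rough guess at how much reasoning the task needs."""
    lowered = task.lower()
    if any(hint in lowered for hint in REASONING_HINTS):
        return REASONING_MODEL
    return FAST_MODEL

if __name__ == "__main__":
    print(pick_model("Debug this race condition and plan a fix step-by-step"))  # -> o3
    print(pick_model("Write three slogan variants for a coffee brand"))         # -> gpt-4.1
```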
Want to sponsor Jumble?
Click below ⬇️
That’s it for this week! From Apple paying up without admitting fault to faster, more useful chatbots like GPT‑4.1, AI is hitting both wallets and workflows. See you next time! 🚀
Stay informed, stay curious, and stay ahead with Jumble!
Zoe from Jumble
