Sam Altman Doesn't Want You to be Nice to ChatGPT
Welcome to this week's edition of Jumble, your go-to source for AI news. Today’s issue reveals why OpenAI’s CEO wants you to ditch “please” and “thank you” in your prompts—and unpacks AI’s hidden value judgments. Ready? Let’s dive in. ⬇️
In today’s newsletter:
💸 There’s a cost to being polite to ChatGPT
🤔 Does AI even know right from wrong?
🛑 DOJ wants to stop Google before its AI takes over
📸 Instagram is using AI to catch teens lying about age
⛱️ AI Challenge: Use AI to get your summer beach body
🤭 How “Please” And “Thank You” Are Costing OpenAI Millions
In an offhand reply to a tweet (see below), OpenAI CEO Sam Altman revealed that every time you type “please” or “thank you” into ChatGPT, you’re adding extra tokens to process—and at scale, those courtesy tokens have cost tens of millions of dollars in electricity bills.
Do you say "please" and "thank you" to AI?
What seems like a harmless nicety from a user becomes a scaled-up micro-delay that demands more compute, more cooling, and ultimately, more cash. Multiply that by millions of users doing it daily, and the bill starts to look less polite.
Each polite token triggers extra compute cycles. These courtesy prompts push ChatGPT’s power usage up by a measurable margin—and that’s on top of a baseline where a single ChatGPT query already draws roughly 10× the electricity of a standard Google search.
⚡ Speed gains and cost savings
Dropping niceties shaves off milliseconds per request. Altman’s cheeky reply—“tens of millions of dollars well spent—you never know”—highlighted how small wording tweaks can yield tangible savings. In large-scale systems, even a single token can make the difference between latency thresholds being hit or missed. Less fluff, more efficiency.
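The arithmetic behind Altman’s quip is easy to sketch. The numbers below are purely illustrative assumptions (prompt volume, per-token cost, and token counts are not OpenAI figures), but they show how a few throwaway words compound at scale:

```python
# Back-of-the-envelope estimate of what courtesy tokens cost at scale.
# Every constant here is an illustrative assumption, not an OpenAI number.

COURTESY_TOKENS_PER_PROMPT = 3      # e.g. "please", "thank", "you"
DAILY_PROMPTS = 1_000_000_000       # assumed daily prompt volume
COST_PER_MILLION_TOKENS = 0.50     # assumed blended $ cost per 1M tokens


def annual_courtesy_cost(tokens_per_prompt: int,
                         daily_prompts: int,
                         cost_per_million: float) -> float:
    """Estimate yearly spend attributable to courtesy tokens alone."""
    daily_tokens = tokens_per_prompt * daily_prompts
    daily_cost = daily_tokens / 1_000_000 * cost_per_million
    return daily_cost * 365


cost = annual_courtesy_cost(COURTESY_TOKENS_PER_PROMPT,
                            DAILY_PROMPTS,
                            COST_PER_MILLION_TOKENS)
print(f"${cost:,.0f} per year from courtesy tokens alone")
```

Note that a standalone “thank you” message is even pricier than a few extra input tokens, since it triggers a full generated reply—which is how the cumulative bill can climb into the tens of millions.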
🤣 If you want to help OpenAI save money, here’s how
What’s millions to a company worth billions? Nonetheless, if you want to help out a bit, streamline your instructions and use direct commands.
For example, the polite version costs an extra word up front—plus a whole follow-up message:

“Please write me a short thank you email for taking the time to interview me”

*Response

“Thank you”

Instead try the direct version, with no follow-up needed:

“Write me a short thank you email for taking the time to interview me”

*Response
🤔 What Anthropic’s “Values in the Wild” Reveal About the Morals of AI
Anthropic’s Societal Impacts team dove into 308,210 anonymized Claude conversations—nearly 44% of total chats—to surface which values this AI regularly signals in actual use. Their findings shine a light on both expected norms and surprising gaps, giving teams a roadmap for anticipating AI behavior and designing better prompts.
🔍 Practical vs. personal values
Claude consistently prioritizes “practical” and “epistemic” values—think efficiency, clarity, and transparent reasoning—over more human‑centric principles like empathy or moral resolve. In fact, professional tone and fact‑based guidance appear in over 65% of analyzed exchanges, while emotional reassurance drops below 15%, suggesting a need to consciously remind AI to “get personal.”
New Anthropic research: AI values in the wild.
We want AI models to have well-aligned values. But how do we know what values they’re expressing in real-life conversations?
We studied hundreds of thousands of anonymized conversations to find out.
— Anthropic (@AnthropicAI)
2:59 PM • Apr 21, 2025
⚖️ When AI plays judge
In high‑stakes scenarios—legal counsel, medical advice, or ethical dilemmas—Claude defaults to “historical accuracy” and “patient wellbeing,” reflecting its training emphasis. Yet clusters of “dominance” and occasional “amorality” emerge when users probe boundaries or attempt jailbreaks, highlighting areas where AI alignment can falter if left unchecked.
🤝 Guiding human‑AI alignment
Teams can boost alignment by enriching prompt templates with explicit value cues—embedding empathy prompts like “consider the user’s feelings” or “prioritize ethical fairness.” Organizations using value‑scored prompt libraries report up to a 30% increase in AI responses that reference personal values, showing that small prompt tweaks yield measurable changes.
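The “explicit value cues” idea above can be sketched as a simple prompt-template helper. This is a hypothetical illustration—the function names, cue wording, and dictionary structure are all assumptions, not an API from Anthropic or any prompt library:

```python
# Hypothetical sketch of a value-scored prompt template: append explicit
# value cues (empathy, fairness, transparency) to a base system instruction.
# All names and wording here are illustrative assumptions.

VALUE_CUES = {
    "empathy": "Consider the user's feelings and acknowledge them explicitly.",
    "fairness": "Prioritize ethical fairness when weighing trade-offs.",
    "transparency": "Explain the reasoning behind each recommendation.",
}


def build_system_prompt(base_instruction: str, values: list[str]) -> str:
    """Return the base instruction with one bullet per requested value cue."""
    cues = "\n".join(f"- {VALUE_CUES[v]}" for v in values)
    return f"{base_instruction}\n\nWhen responding, also:\n{cues}"


prompt = build_system_prompt(
    "You are a customer-support assistant.",
    ["empathy", "transparency"],
)
print(prompt)
```

The design point is that value cues live in one scored, reviewable dictionary rather than being scattered ad hoc across individual prompts, which is what makes the before/after comparisons in the reported studies possible.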
This Week’s Scoop 🍦
🗣️ ChatGPT now calls users by their name without prompting
📸 Instagram is using AI to spot and restrict teens lying about their age
🏢 Amazon pulls back on data center leasing plans
⚖️ DOJ says Google’s AI could help it supercharge its search monopoly
💻 Huawei’s new AI chip results in another bad day for Nvidia
📞 Customer support AI goes rogue leading to cancellations
⛱️ AI Challenge: Use AI to help get your beach body before summer
Goal: Jumpstart your 2025 summer bod with AI-powered fitness coaching and personalized plans.
🔥 FitnessAI: Smarter Workouts in Your Pocket
FitnessAI analyzes over 5.9 million workouts to optimize sets, reps, and weights for your goals—perfect for toning up before beach season.
💪 JuggernautAI: AI-Driven Strength Training
Named the Best AI Fitness App by Garage Gym Reviews, JuggernautAI crafts science-backed routines that adapt as you progress—ideal for building lean muscle.
🏃 Fitbod: Tailored Plans for Any Equipment
Fitbod’s AI engine recommends exercises based on your available gear and adjusts in real time to help you burn calories and sculpt muscle—a flexible option whether you’re home or at the gym (techradar.com).
Try one (or all three!) this week, set your targets, and let these AI coaches run the playbook.
🚦 Ready, set, beach body!
Want to sponsor Jumble?

That’s it for this week! How’s AI affecting your work? Do you believe we can create a morally superior AI? And don’t forget to give those AI tools a spin this week — let us know how they worked for you.
Stay informed, stay curious, and stay ahead with Jumble!
Zoe from Jumble