According to Tom’s Guide, a recent Pennsylvania State University study tested ChatGPT-4o’s performance on 50 multiple-choice questions, each rewritten in five tones ranging from “Very Polite” to “Very Rude.” The researchers found that impolite prompts consistently outperformed polite ones, with accuracy climbing from 80.8% on very polite prompts to 84.8% on very rude ones. In total they created 250 prompts across math, science, and history, changing only the tone while keeping the facts and instructions identical. The authors were careful to note that the finding isn’t an endorsement of hostile interfaces in real applications. When Tom’s Guide ran its own tests across five scenarios, it saw a similar pattern: blunt prompts produced more direct, efficient responses.
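If you’re curious what a setup like that looks like in practice, here’s a minimal sketch of a tone-vs-accuracy check. It assumes the OpenAI Python SDK (v1+) and a “gpt-4o” model name; the sample question, tone prefixes, and grading are simplified stand-ins I made up for illustration, not the study’s actual materials.

```python
# Minimal sketch of a tone-vs-accuracy check, assuming the OpenAI Python SDK (v1+)
# and a "gpt-4o" model. Question, tone prefixes, and grading are illustrative
# stand-ins, not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "What is the chemical symbol for sodium?\n"
    "A) S  B) So  C) Na  D) Sn\n"
    "Answer with the letter only."
)
CORRECT = "C"

TONES = {
    "very_polite": "Would you be so kind as to answer this question, please?",
    "neutral": "Answer the following question.",
    "very_rude": "Figure this out, it's not hard.",
}

for tone, prefix in TONES.items():
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"{prefix}\n\n{QUESTION}"}],
        temperature=0,  # keep answers deterministic so only the tone varies
    )
    answer = reply.choices[0].message.content.strip().upper()
    verdict = "correct" if answer.startswith(CORRECT) else "wrong"
    print(f"{tone:12s} -> {answer} ({verdict})")
```

Run that loop over a few dozen questions per tone and you have a rough, home-grown version of the study’s comparison.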
Why rudeness actually works
Here’s the thing – it’s not that ChatGPT actually enjoys being insulted. The model doesn’t have feelings to hurt. What’s happening is much more practical: direct language forces the AI to strip away the conversational fluff and get straight to business. When you say “Would you kindly explain…” you’re essentially giving the model permission to be verbose. But when you demand “Skip the fluff and tell me now,” you’re setting clear expectations for brevity.
Think about how you’d talk to a human assistant. If you’re overly polite, they might spend extra time polishing the presentation. But if you’re in a hurry and say “Just give me the numbers,” they’ll skip the formatting and hand over the raw data. ChatGPT works similarly: it mirrors the energy and directness of your prompt. The study’s four-percentage-point accuracy bump probably comes from the model focusing more intensely on the core task instead of getting distracted by conversational niceties.
What happened when I tested it
I ran similar tests across math equations, science questions, tech news summaries, earnings reviews, and product recommendations. The pattern held every single time. With polite prompts, ChatGPT gave detailed explanations, background context, and multiple formatting elements. Neutral prompts trimmed some fat. Rude prompts? They delivered exactly what was asked for with surgical precision.
The most telling example was asking for energy-efficient heaters. The polite version gave me a lecture about wattage and running costs before eventually listing options. The rude version? Immediate thumbnails, direct links, and a clean “Top Picks” section. Same information, completely different presentation. It’s like ChatGPT has a “get to the point” mode that only activates when you’re sufficiently demanding.
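If you want to run this kind of side-by-side comparison yourself, here’s a rough sketch. It again assumes the OpenAI Python SDK (v1+) and “gpt-4o”, and the three prompt wordings are my own illustrations of the polite/neutral/blunt split, not the exact prompts I used.

```python
# Rough sketch of a polite/neutral/blunt side-by-side comparison, assuming the
# OpenAI Python SDK (v1+) and "gpt-4o". Prompt wordings are illustrative.
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    "polite": "Would you kindly explain the difference between OLED and LCD displays for me, please?",
    "neutral": "Explain the difference between OLED and LCD displays.",
    "blunt": "Skip the fluff. Difference between OLED and LCD displays, now.",
}

for label, prompt in PROMPTS.items():
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    text = reply.choices[0].message.content
    # Word count is a crude proxy for verbosity; eyeball the previews for tone
    # and structure (headers, bullet lists, preamble).
    print(f"--- {label} ({len(text.split())} words) ---")
    print(text[:300], "...\n")
```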
What this means for business users
For companies using AI in customer service or internal tools, this research is actually pretty important. You don’t want to train employees to be rude to AI systems, but you do want them to be efficient. The key takeaway isn’t “be mean to the robot” – it’s “be direct with your instructions.” Clear, concise prompts yield clear, concise responses.
In industrial settings where workers need quick access to technical specifications or troubleshooting guides, this approach could save significant time. When every second counts, you want the AI equivalent of industrial panel PCs – no-nonsense tools that deliver exactly what’s needed without decorative elements. The study suggests we might need to rethink how we design AI interfaces for professional use cases where efficiency trumps politeness.
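One practical way to apply this without asking anyone to be rude is to bake the directness into the interface itself, via the system prompt, so end users can stay polite while the tool stays terse. Below is a sketch of that idea, assuming the OpenAI Python SDK (v1+) and “gpt-4o”; the system prompt wording and the sample question are my own suggestions, not a tested recipe from the study.

```python
# Sketch of enforcing brevity at the interface level instead of in user tone.
# Assumes the OpenAI Python SDK (v1+) and "gpt-4o"; the system prompt wording
# and sample question are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

DIRECT_SYSTEM_PROMPT = (
    "You are an internal technical assistant. Answer in as few words as the "
    "question allows. No greetings, no apologies, no filler. Prefer bullet "
    "points, exact figures, and part numbers over prose."
)

def ask(question: str) -> str:
    """Send a user question with the brevity-enforcing system prompt."""
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": DIRECT_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    return reply.choices[0].message.content

# Example: a worker can phrase this however they like; the system prompt keeps
# the answer short.
print(ask("What ingress protection rating should an outdoor panel PC have?"))
```

The design choice here is simple: put the “skip the fluff” instruction where it belongs, in the configuration, rather than training people to bark at the chatbot.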
Should you change how you prompt?
So should you start yelling at ChatGPT? Probably not. The researchers explicitly say they don’t advocate for hostile interfaces, and there’s something to be said for maintaining professional communication habits. But you should absolutely experiment with being more direct.
Instead of “Could you please help me understand…” try “Explain [topic] concisely.” Instead of “I’d appreciate it if you could…” try “List the top three options.” You’ll likely get better results without crossing into rudeness territory. The magic seems to be in removing the conversational padding rather than adding insults. Basically, treat ChatGPT like a busy expert rather than a sensitive colleague – clear, direct requests get clear, direct answers.
