AI is a total mess, and it’s making us all dumber

According to TechRadar, the term “AI” has become a meaningless catch-all, used to describe everything from advanced chatbots like ChatGPT to cancer detection tools and even smart toothbrushes. This rampant overuse has created widespread public confusion about what actually constitutes artificial intelligence. Experts like Professor Vasant Dhar from NYU Stern and Rupert Shute from Imperial College London note that the explosive popularity of generative AI has drowned out awareness of other, more established classes of AI that have been delivering value for decades. They argue that this terminological mess is actively preventing society from making informed, confident decisions about how to use or regulate these powerful technologies. The key issue is that most people are already using various forms of AI daily—like spam filters and fraud detection—without even realizing it, while the flashy generative tools dominate the narrative.

The hype is hiding the real workhorses

Here’s the thing: generative AI is the loud, attention-seeking cousin at the family reunion. It’s impressive, it creates unique stuff on command, and it’s incredibly visible. But as Thiago Ferreira from Elevate AI Consulting points out, it’s just one type of AI. The real workhorses—the systems doing prediction, recognition, and automation—have been running quietly in the background for years. They’re in your bank, your doctor’s office, and your phone’s photo album. The problem with letting generative AI define the entire field is that it skews our perception of risk, capability, and value. We get obsessed with whether a chatbot is creative or sentient, while ignoring the algorithmic bias in a loan approval system or the life-saving potential of a medical imaging tool. That’s a dangerous imbalance.

Even the experts can’t agree

Now, if you think the public is confused, get this: even the people building and studying this stuff don’t have a unified map. TechRadar lays out three completely different frameworks from the experts. Ferreira categorizes by *function* (recognition, prediction, autonomy). Dhar thinks in terms of the field’s *evolution* (expert systems to general intelligence). And Shute sees distinct *waves* of technological thinking (symbolic logic to neuro-symbolic AI). This isn’t just academic nitpicking. It means there’s no common language for regulators, businesses, or journalists to use when things go wrong or need oversight. When a company says its product “uses AI,” that statement is basically empty. Which type? Based on what principles? Is it a brittle statistical pattern-matcher or a rules-based system we can audit? Without clarity, we’re all just taking their word for it.

The power dynamic flips with knowledge

So where does this leave us? Buried in hype, mostly. And the fantasies of AGI (Artificial General Intelligence) and ASI (Artificial Superintelligence) make it worse. Shute’s quote is perfect: you can replace “AGI” with “space aliens” in most sensational headlines and the sentence still works. That’s the level of speculative nonsense we’re dealing with. But Ferreira hits on the most crucial point for everyday users: the power dynamic. “The ‘intelligence’ in artificial intelligence starts with us,” he says. These systems are tools that extend our thinking, not replace it. A vague prompt gets a vague answer. A thoughtful, critical prompt gets a better result. When you understand that AI is a collection of tools—some new, most old—you stop being passive and start being a director. You ask the hard questions: “Which type? Doing what? And why should I trust it?”

Time to demand clarity

Look, I get that terminology is boring. But this is a rare case where semantics actually matter. The fuzziness around “AI” benefits the companies selling it—it lets them slap a magical, futuristic label on anything—and it harms everyone trying to use it responsibly or understand its societal impact. The next time you see a product, a news article, or a policy paper touting AI, be skeptical. Demand specifics. Is it a generative model prone to “hallucinations”? Is it a predictive algorithm that might be reinforcing bias? Or is it a rules-based system that’s transparent but limited? That’s the conversation we need to have. We can’t make good decisions about the future if we’re all talking about different things using the same two letters. The first step to getting smart about AI is admitting that the word itself has become pretty dumb.
