Thirty years ago, while I was teaching a robotics lab course at the University of Illinois, my senior colleagues gave me a piece of advice that stayed with me: quit AI and robotics, because the breakthroughs were always “twenty years away.” Decades later, looking at the massive wave of hype surrounding programs like ChatGPT, I wondered if the future had finally arrived. It hadn’t. Instead, the tech industry simply got better at hiding the same old limitations behind clever marketing.
The cracks in the illusion appear the moment you look past the corporate press releases and examine how this technology works in daily life. Recently, my sister started a recipe company in the United States called Ladle, proudly branding it as an “AI-driven” business. But when I looked at the actual website, it felt completely hollow and poorly put together. This is the open secret of the modern tech boom: thousands of new startups don’t actually own any special technology. They are just renting a generic text generator from a giant company like Google or OpenAI, slapping their own logo on it, and pretending they invented something revolutionary. Because the software cannot actually taste food or understand nutrition, the final product feels lifeless.
When I decided to test the software myself, the illusion fell apart completely. I started using the free version of ChatGPT and was initially very impressed by how fluidly it could write. That amazement lasted until I asked it about current events. The system confidently lied to my face, telling me that Kash Patel was not the FBI Director and that Pete Hegseth was not the US Defense Secretary.
Switching to Google’s AI didn’t solve the problem. While it seemed slightly more up to date, it made constant mistakes. Worse, it was a “yes-man.” If I made an incorrect statement, the software would simply agree with me and mislead me further.
This happens because these tools do not actually “think” or know facts. They are simply highly advanced versions of the autocomplete feature on your smartphone. Your phone guesses the next word you want to type based on what you wrote before. These AI systems do the exact same thing on a massive scale—they guess what word should come next based on billions of pages of old internet text. If the internet data is outdated, or if you guide the conversation in a certain direction, the machine will happily generate a plausible-sounding lie just to keep the conversation going. It values a smooth-sounding sentence over the truth.
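The autocomplete comparison can be made concrete with a toy sketch. The following illustrative bigram model simply counts which word most often follows which in a made-up corpus and "predicts" accordingly; real chatbots use learned neural networks at an incomparably larger scale, but the underlying principle of guessing a statistically plausible next word is the same. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus,
# then predict the most frequent successor. The model has no concept
# of truth; it only reproduces what is statistically common.
corpus = (
    "the robot picked up the cup . "
    "the robot dropped the cup . "
    "the robot picked up the ball ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict("robot"))  # -> "picked" (seen twice, vs. "dropped" once)
print(predict("the"))    # -> "robot" (the most frequent successor)
```

Notice that the model answers fluently even though it knows nothing about robots or cups; it is rewarded only for plausibility, which is exactly why a scaled-up version will produce a smooth-sounding falsehood when its training data is outdated or the user leads it astray.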
We see this same failure in the physical world. The tech industry loves to share highly edited videos of sleek humanoid robots doing chores or working in factories. But if you see these robots in person, away from the Hollywood-style editing, they are clumsy, incredibly slow, and completely impractical. You cannot solve the hard laws of physics and mechanics just by feeding a computer more internet text.
The current AI buzz isn’t heading toward a science-fiction future where machines conquer humanity. Instead, it is heading somewhere much more disappointing. We are being pushed to use deeply flawed software that makes regular mistakes, simply because it is cheaper for big corporations than paying human workers. The real danger of the AI hoax isn’t that the machines are becoming smarter than us—it is that we are willing to lower our standards of truth and quality just to accommodate them.