Anybody who thinks the AI takeover of the world is imminent has never used autocorrect. From Tyler Durden at zerohedge.com:
“The intelligence of AI systems is being overhyped and, while we could get there eventually, we are currently nowhere near achieving artificial general intelligence (AGI).”
Those are the words of Gary Marcus, Professor Emeritus of Psychology and Neural Science at New York University, as he pours cold water on the ‘AI Boom’ that has almost single-handedly supported the entire stock market for the last month.
We have seen these hype-cycles before…

Source: Bloomberg
In a conversation with Goldman Sachs’ Jenny Grimberg, Marcus explains how generative artificial intelligence (AI) tools actually work today:
At the core of all current generative AI tools is basically an autocomplete function that has been trained on a substantial portion of the internet.
These tools possess no understanding of the world, so they’ve been known to hallucinate, or make up false statements.
The tools excel at largely predictable tasks like writing code, but not at, for example, providing accurate medical information or diagnoses, tasks that autocomplete isn’t sophisticated enough to handle.
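Marcus’s “autocomplete” framing can be illustrated with a toy sketch. The snippet below is not how production LLMs work (those use neural networks over tokens, not word counts); it is a minimal bigram predictor, with an invented miniature corpus standing in for the training data, that shows the core idea of predicting the most likely next word from observed frequencies:

```python
from collections import Counter, defaultdict

# A tiny invented corpus standing in for "a substantial portion of the internet".
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which word follows it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def autocomplete(word):
    """Return the most frequent next word -- pure statistics, no understanding."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(autocomplete("the"))  # "cat": the word that most often follows "the" here
```

The predictor always emits whatever is statistically likely, whether or not it is true, which is the mechanism behind Marcus’s point about fluent-sounding hallucinations.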
Contrary to what some may argue, the professor explains that these tools don’t reason anything like humans.