AI may end up doing wonderful things, but right now it’s at the center of a massive financial bubble. When it pops, the economic repercussions will be severe. The whole situation seems like a repeat of the Internet bubble in the early 2000s. From Charles Hugh Smith at oftwominds.com:
The lines of dominoes being toppled run through every nook and cranny of the economy.
As we all know, the problem with euphoria is the inevitable collision with reality and the resulting disillusionment. But wait–it gets worse.
The new love of your life, your savior who is going to make everything right again, is not just impossibly flawed–they’re a con artist. Now that really hurts. They not only stole your heart, they stole your money.
Which brings us to the AI Boom / Bubble. The euphoria is literally immeasurable, but the disconnect from reality is easily visible and can be broken down into measurable bits:
1. AI revenues are orders of magnitude lighter than the sums being invested (capex, i.e. capital investment). The euphoria is based on the idea that revenues will catch up, but the second date is raising doubts about Prince Charming’s non-flim-flammed revenues and prospects.
This report has raised eyebrows, and the real question is: OK, so let's say it underestimates current AI revenues by 50%. That only moves us from roughly 2% to 3% of the revenues needed to justify the capex. Maybe this is why Prince Charming invites his amour to poorly lit bistros–he's had, um, work done and he's wary of bright lighting.
$2 trillion in new revenue needed to fund AI’s scaling trend (Bain & Company)
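The 2%-to-3% arithmetic can be checked in a few lines. A minimal sketch, assuming the Bain & Company figure of $2 trillion in needed revenue; the ~$40 billion current-revenue figure is back-derived from the stated 2% ratio, not taken from the report itself:

```python
# Back-of-the-envelope check of the capex-vs-revenue gap.
# Assumption: current AI revenues ~= 2% of the $2T Bain says is needed.
needed = 2_000_000_000_000        # revenue Bain estimates AI must generate
reported = 0.02 * needed          # implied current revenues: ~$40B
adjusted = reported * 1.5         # suppose the report undercounts by 50%

print(f"reported share of needed revenue: {reported / needed:.0%}")
print(f"adjusted share of needed revenue: {adjusted / needed:.0%}")
```

Even granting a 50% undercount, the gap between revenue and capital investment barely narrows.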
2. AI tools are inherently untrustworthy and lend themselves to generating “going through the motions” slop that gives the superficial appearance of value but actually has negative value as it’s incomplete, misleading and/or incoherent. Sorting the wheat from the chaff actually takes more time because AI is so adept at generating a superficial gloss. In other words, AI generates time sinks rather than productivity.
AI-Generated ‘Workslop’ Is Destroying Productivity (Harvard Business Review)
People Overtrust AI-Generated Medical Advice despite Low Accuracy
Add in that AI slop looks similar to authentic research and that AI tools have a measurable preference for AI-generated content (i.e. AI slop), and we have a toxic cocktail of untrustworthy output.
A Potemkin village is not an operating system, and the facade can't last forever.