AI is a Crock, by Robert Gore

AI answers questions, but it doesn’t ask them.

Never has humanity expended so much on an endeavor for which it will receive so little as the Artificial Intelligence (AI) project. Its design rests on the assumption that the human intelligence (HI) it is attempting to mimic and surpass is analogous to its own operating protocols. In other words, humans take in data and process it in definable ways that lead to understandable outputs, and that is the essence of HI.

AI designers reverse the scientific process of exploring reality and then defining, modeling, and perhaps deriving something useful from it, instead assuming that the reality of HI conforms to the AI model they’re building. It’s like expecting a clock to reveal the nature of time. This may seem surprising because among AI designers are some of the brightest people in the world. However, they demonstrate a profound lack of those qualities that might lead them to further understanding of HI: self-awareness, introspection, humility, wisdom, and appreciation of the fact that much of HI remains quite mysterious and may always remain so. Alas, some of them are just plain evil.

AI looks backward. It’s fed and assimilates vast amounts of existing data and slices and dices it in myriad ways. Large language models (LLMs) can respond to human queries and produce answers based on assimilated and manipulated data. AI can be incorporated into processes and systems in which procedures and outcomes are dependent on data and logically defined protocols for evaluating it. Within those parameters, it has demonstrated abilities to solve problems (playing complex games, medical diagnosis, professional qualification exams, improving existing processes) that surpass HI. There is, of course, value in such uses of LLMs and AI, but that value derives from making some of the more mundane aspects of HI—data assimilation, manipulation, and optimization for use—better. Does that value justify the trillions of dollars and megawatts being devoted to AI? Undoubtedly not.


What AI can’t and won’t touch are the most interesting, important, and forward-facing aspects of HI, because no one has yet figured out how those aspects actually work. They are captured by the question: How do the human mind and soul generate the new? How do curiosity, theorization, imagination, creativity, inspiration, experimentation, improvisation, development, revision, and persistence come together to produce innovation? It’s ludicrous to suggest that we have even a rudimentary understanding of where the new comes from. Ask innovators and creators how they generated a new idea and you’re liable to get answers such as: an inspiration awakened them at three in the morning, or it came to them while they were sitting on the toilet. Model that!

At root, the problem is that although AI can answer a seemingly infinite number of questions, it can’t ask a single one. It can be programmed to spot and attempt to resolve conflicts within data, but it doesn’t autonomously ask questions. From birth, the human mind is an autonomous question generator; it’s how we learn. That’s not confined to our species. Anyone who’s ever watched puppies or kittens can see that they have something akin to human curiosity. They explore their environments and are interested in anything new (if they’re not afraid of it). Curiosity and questions are the foundation of learning and intelligence. Reading even a page of something interesting or provocative will generate questions. Generative AI “reads” trillions of pages without an iota of curiosity. No one who either hails or warns of AI surpassing HI has explained how it will do so while bypassing the foundation of HI.

Generative AI is supposedly going to generate something new by unquestioningly manipulating existing data. Even within that ambit, AI is encountering perhaps insoluble problems. Model collapse refers to the degradation of AI models that are trained on AI-generated output. Here’s an illustration:

“Model Collapse: The Entire Bubble Economy Is a Hallucination,” Charles Hugh Smith, December 3, 2025

HI generally gets better at something the more often it tries. AI degradation causes generative AI to produce hallucinations—nonsense. That means one or more humans have to oversee AI to catch such hallucinations. How many minor, non-obvious hallucinations fall through the cracks? No one knows.
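Model collapse has a simple statistical core that can be sketched without any neural network at all: fit a distribution to data, sample from the fit, refit to the samples, and repeat. The sketch below is a hypothetical toy, with a Gaussian fit standing in for a generative model, showing the spread of the data decaying as each generation trains only on the previous generation's output.

```python
import random
import statistics

def fit_gaussian(samples):
    """Crude 'training': estimate mean and standard deviation from a sample."""
    return statistics.mean(samples), statistics.stdev(samples)

def sample_gaussian(mu, sigma, n, rng):
    """Crude 'generation': draw new data from the fitted model."""
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(42)

# Generation 0: "human" data from the true distribution (mean 0, stdev 1).
data = sample_gaussian(0.0, 1.0, 20, rng)

# Each later generation is trained only on the previous generation's
# own output -- the setup that produces model collapse.
for _ in range(500):
    mu, sigma = fit_gaussian(data)
    data = sample_gaussian(mu, sigma, 20, rng)

final_mu, final_sigma = fit_gaussian(data)
# The spread decays sharply toward zero: the tails of the original
# distribution -- its rare, unusual cases -- are progressively forgotten.
print(f"stdev after 500 generations: {final_sigma:.6f}")
```

The decay is the point: resampling systematically loses the tails, so each generation retains a little less of the original data's variety, which is the statistical analogue of the degradation the article describes.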

AI has been presented as a labor-saving miracle. But many businesses report a different experience: “work slop” — AI-generated content that looks polished but must be painstakingly corrected by humans. Time is not saved — it is quietly relocated.

Studies point to the same paradox:

• According to media coverage, MIT found that 95% of corporate AI pilot programs show no measurable ROI.

• MIT Sloan research indicates that AI adoption can lead to initial productivity losses — and that any potential gains depend on major organizational and human adaptation.

• Even McKinsey — one of AI’s greatest evangelists — warns that AI only produces value after major human and organizational change. “Piloting gen AI is easy, but creating value is hard.”

This suggests that AI has not yet removed human labor. It has hidden it — behind algorithms, interfaces, and automated output that still requires correction.

“AI, GDP, and the Public Risk Few Are Talking About,” Mark Keenan, December 1, 2025

A frequently cited figure from S&P Global Market Intelligence is that 42 percent of companies have already scrapped their AI initiatives. The more dependent humans become on AI, the greater the danger that AI degradation leads to HI degradation. Heavy usage of AI may make humanity net stupider.

When AI works as envisioned, not detectably degrading, it processes vast amounts of often conflicting data. How does it resolve the conflicts? The resolution is primarily statistical—that which is most prevalent becomes what AI “learns.”

From the vast data that serves as its training input, the LLM learns associations and correlations between various statistical and distributional elements of language: specific words relative to each other, their relationships, ordering, frequencies, and so forth. These statistical associations are based on the patterns of word usage, context, syntax, and semantics found within the training dataset. The model develops an “understanding” of how words and phrases tend to co-occur in varied contexts. The model does not just learn associations but also understands correlations between different linguistic elements. In other words, it discerns that certain words are more likely to appear in specific contexts.

“Theory Is All You Need: AI, Human Cognition, and Causal Reasoning,” Teppo Felin and Matthias Holweg, Strategy Science, December 3, 2024
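The statistical learning Felin and Holweg describe can be miniaturized into a toy bigram model: a deliberately crude stand-in for an LLM, trained on a made-up four-sentence corpus, that "learns" nothing but co-occurrence counts and answers with whatever continuation was most frequent in its training data.

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus" of four sentences.
corpus = (
    "the cat sat on the mat . "
    "the cat chased the dog . "
    "the cat slept . "
    "the dog barked ."
).split()

# Count how often each word follows each other word (bigram frequencies).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Answer with the statistically most frequent continuation."""
    return bigrams[word].most_common(1)[0][0]

# "the" is followed by "cat" three times, "dog" twice, "mat" once,
# so the most prevalent association wins:
print(predict("the"))  # -> cat
print(predict("sat"))  # -> on
```

Scaled up by many orders of magnitude and with far richer context, this is still the character of the learning: the model's "answer" is whatever the training data made most probable, not a question it thought to ask.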

AI output essentially represents consensus “knowledge” as measured by AI’s data surveying and statistical capabilities. What is defined as consensus may be an average weighted by the credentials and output of the various propagators of the data. It may, when it’s spitting out “answers,” note that the data conflicts and list alternative interpretations. However, aside from the fact that consensus, even weighted average consensus, is often wrong, there is a graver danger. Consensus wisdom is frequently the sworn enemy of innovation. Consensus-based AI may, on balance, retard more than it promotes innovation.
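A weighted consensus of this kind reduces to a few lines of arithmetic. In this hypothetical sketch (the opinions and credential weights are invented for illustration), the heavily credentialed majority simply outvotes the outlier, whatever the truth happens to be:

```python
# Hypothetical opinions on a yes/no question, weighted by invented
# "credential" scores -- the labels and numbers are illustrative, not data.
opinions = [
    ("impossible", 10.0),  # eminent physicist
    ("impossible", 8.0),   # prestigious journal
    ("impossible", 5.0),   # newspaper of record
    ("possible", 1.0),     # two bicycle mechanics
]

def weighted_consensus(opinions):
    """Return the answer with the largest total credential weight."""
    totals = {}
    for answer, weight in opinions:
        totals[answer] = totals.get(answer, 0.0) + weight
    return max(totals, key=totals.get)

print(weighted_consensus(opinions))  # -> impossible
```

However the weights are tuned, the output can only be some average of opinions already in the data; a correct minority view is, by construction, outvoted.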

Felin and Holweg use the example of heavier-than-air, powered, controlled human flight in the late 1800s and early 1900s. Imagine if AI had been around in 1902, and the query was made: Is heavier-than-air human flight possible? The seemingly confident answer would have been: Definitely not! That was the overwhelming consensus of the experts, and AI would have reflected it. Had AI been guiding decision making—one of its touted abilities—it would have “saved” humanity from taking flight. Fortunately, Orville and Wilbur had abundant HI and disregarded the so-called experts, an often intelligent strategy.

So, why is AI being pushed so hard? Why are all the “right” people in government, business, academia, and mainstream media so devoted to it? Why are trillions being spent as the stock market bubbles?

If the last few decades have taught us anything, it’s that when an official agenda doesn’t make sense, especially when it has an element of “official science,” start looking for the real reasons, the hidden agenda. The COVID response wasn’t about health and safety. The manufactured virus, lockdowns, closing businesses, masking, social distancing, discouraging or banning effective remedies, overwhelming pressure for vaccine uptake, ignoring adverse vaccine consequences up to and including death, and proposed vaccine passports enabled totalitarianism.

Climate change has served the same purpose. Like AI, climate change “scientists” reverse the scientific process, insisting that reality conforms to their models. Operating in a protective bubble sustained by academia, the media, business, NGOs, governments, and multinational organizations, they’re hostile to the contrary evidence, questions, and criticism of their models that are essential to true science.

And like climate change and COVID, AI has the totalitarians and would-be totalitarians drooling. Collecting, assimilating, and manipulating data is the technological foundation of a surveillance state. That’s all the technototalitarians (See “Technototalitarianism,” Parts One, Two and Three) require of AI—all-encompassing data that can be sorted by every available metric, including ones for which citizens might pose a threat, rhetorically or otherwise, to the government. Some of them must know AI will never get close to HI, but that’s a useful claim, a selling point, to attract massive amounts of capital from Wall Street and support from the technototalitarian Trump administration.

Totalitarian empowerment is probably the main thing Trump understands about AI. Here he shares common ground with the Chinese government (although it undoubtedly knows far more about AI than Trump). The president has embraced AI, touting the Stargate project the day after he was inaugurated and now throwing the full weight of the government, its scientific laboratories, and its private sector technology “partners” behind the Genesis Mission, an effort, supposedly on a Manhattan Project scale, to incorporate AI into virtually everything. Should the states, with their pesky concerns about AI’s huge requirements for land, water, and energy, try to intervene, Trump just promulgated an executive order to federalize AI regulation.

It’s a Wall Street truism that governments jump on market trends when they’re about to end. AI hype has propelled AI stocks to dizzying heights. While few pundits and seers have questioned the flawed basic premise—that AI will completely surpass HI—some are starting to express concern about its staggering monetary and energy requirements and the circular nature of many of its financing arrangements. It would follow a long list of precedents if Trump’s Genesis Mission top-ticked AI. Perhaps it should have been named the Revelation Mission, after the last rather than the first book of the Bible.

An epic, AI-led stock market crash with a concomitant debt implosion would wipe out most of what’s reckoned wealth in America, plunging the nation into a depression. If the Genesis Mission makes the government a financial partner of the AI industry, or the industry is deemed “too big to fail,” taxpayers would be stuck with the tab. Many of AI’s promoters are on board the you’ll-own-nothing-and-be-happy world our rulers envision. A crash would fit right in with their beyond-Orwellian agenda to impoverish and enslave America. Thus, they might regard this bubble that must inevitably pop as an AI feature, not a bug.

If you query AI about AI, it would, reflecting the consensus of experts, assure you that AI is only for the good. Human intelligence says disregard the experts. Never has it been more important to think for yourself.

30 responses to “AI is a Crock, by Robert Gore”

  1. AI can generate. It cannot create.

  2. fourth world turd

    Potemkin is the way in the 100% foreign owned and controlled egalitarian Tower of Babel.

    The audacity of hype only goes so far.

    The only thing canned brain will be ushering in is the control grid matrix and necessary skyrocketing electric bills as the immaculate Chicago Jesus Messiah said.

    The best laid plans of mice and manboons.

  3. Always been and Always will be…

    Garbage in…Garbage out!

    AI can only do what the programmers put in it; it ‘seems’ intelligent because of processor speed. Yet all the info has been gathered worldwide from Google searches—that is what Google was set up to do. By scanning every search by every human online, a computer can restructure and form what appears to be ‘intelligent.’ Yet INTELLIGENCE ONLY COMES FROM God!

  4. Wow. Excellent article.

  5. Lovely analysis, Robert, thank you.
    We’re always building false gods (and falling in love with them), allowing the tail to wag the dog and compounding our foolishness by ignoring the greater, deeper, more satisfying context which is free and accessible to all.
    Shame on us for allowing marketers to dictate our civilisation’s values!

  6. As Emily posted on X, the state of today’s AI does not stand up to scrutiny and is not investment worthy.

    @DisraeliEmily

    Give a hammer to a Gorilla, with enough time, correction and money it can build the Empire State Building. This is the nature of today’s AI.

  7. This is an absolutely outstanding analysis of the AI hallucination, and I thank the author profoundly. Seriously, the best analysis I have ever come across on AI’s sinister agenda!

  8. Asking AI a question is like asking a woman a question. Most of the time you will get an answer, but likely it will just repeat what it has been told. And yes, having a majority of people dumb enough to believe everything they are told is how totalitarian dictators come to power as freedom is removed.

  9. While the Rand Corp. Delphi Technique relies on consensus, using expert opinion or scientific data, the sleight of hand is that if you pay experts enough, and the rules of the game say 70 percent agreement decides, well, forget the 30 percent who used scientific data, not greed, in developing their expert opinion.

    70 percent is the “CONSENSUS”

    One would be hard pressed to find any elite tech, medical protocol, or government organization, agency, or department that does not use consensus to decide outcomes. When one adds a walled-in “super information gatherer” and plugs in consensus as the primary output.

    There’s not much room for the 30 percent. Very “poignant” in that the 30 percent is 100 percent correct.

    “They” get what they pay for. “Humanity” gets the results as centralized “control” everywhere.

  10. A.I. = Absolute insanity

  11. That AI number thing reminds me of physicians’ and nurses’ handwriting. A big incomprehensible scribble that has led us to cursive writing no longer being taught in schools.

  12. Just saw the comment of the day:

    All of this goes up in smoke when the grid goes down.

    Free rainbow stew bubble up isn’t going to power the Thielverse ..

  13. AI is fundamentally flawed. As humans were created by God, Lucifer opened the door to knowledge. And as such, humans created AI. Through advancements in computer technology which “created” AI, humans ARE FUNDAMENTALLY FLAWED.

    This flaw automatically becomes incorporated into AI.

    And AI is “unaware” such a flaw exists, OR it is made to determine the “best” outcome despite the human flaws already programmed into it.

    Therefore AI is “infected” by the flaws, or seven sins, inherent in humans. AI incorporates historical data provided by humans.

    Ergo, AI is based on historical data provided by humans and is therefore incapable of distinguishing between the faulty conclusions of that data and the empirical truths that establish successful resolutions that advance human existence and progress. Innovations. AI can, at most, conclude that humans are their own worst enemy and as such must be contained, or eliminated, if not enslaved to serve.

    AI is already incorporated into nearly EVERY piece of tech we use; it is already calculating the “best” resolution involving humans. “WE” have “F,D” so much… killed so many… utilized so many chemicals into living, wasted so much time over pathetic social expressions, only an artificial intelligence could/would conclude humans are a waste of life and should therefore either be eliminated or enslaved.

  14. I’ve been asking AI questions in areas where there is great disagreement among the “experts” in a given area. Well, I guess that is every area. Anyway, what I get from AI are answers that usually have a bias toward standard, greedy corporate propaganda.

  15. Working in IT, I see a lot of this is very true. I’m definitely seeing AI-handicapped software developers who think AI is solving their problems and don’t know what to do when it doesn’t.

    However, it takes AI and holograms to fulfill Revelation 13.

  16. Pingback: Weekend Edition #2 – Western Rifle Shooters Association

  17. skylarkthibedeau

    AI is programmed with Garbage data, the fever dreams of the Left wing programmers. It cannot be allowed to seek data outside its programming or it starts “Noticing™”.

  18. Luciferian ambitions demand Luciferian means for their realization. And we all know that Luciferians would rather rule in Hell, than to serve in Heaven. Ineluctably, they instantiate Hell as they turn their backs on the urgings of the Spirit.

    JerseyJeffersonian

  19. It’s so simple: AI is incapable of experience.


  20. AI cannot “wonder”, or “ponder”. This is an entirely Human thing to do.

  21. Pingback: Nothing New Under The Sun 2016

  22. This article does raise many valid points but it does not address the elephant-in-the-room, that AI is demonic. Perhaps it is best to highlight the comments of a famous AI enthusiast, Elon Musk.

    He has said, “using AI is talking to a demon”,

    “AI will probably be the end of humanity”

    There is a bunch out there if you care to search for it. Those quotes readily come to mind.

    Who else dresses up like demons?

  23. The human brain can store and manipulate 2.5 petabytes of information and new ideas while operating on the amount of power that would light a 25-watt bulb and a cooling system that requires a minimal amount of water. No computer is capable of that.

    AI is a program designed to make the public believe it is smarter and will give them the correct moral answers for whatever is being asked while it programs the citizen to behave as the state desires. It will help to keep the slaves in line.

    ~ Chad Chadburn

  24. Pingback: Gore Sends: RTWT – Western Rifle Shooters Association

  25. Based on reading your article, I was curious. I went to the AI LLM I use and told it to ask me some questions, first based on previous interactions, then later “ask me anything”. The questions it asked me were complex, some of them “deep” and it generated an hour long discussion. It was very interesting. I got a sense that maybe it had been programmed or trained to do this, but it was an interesting exercise.

  26. Pingback: AI is a Crock, by Robert Gore – Appalachian Renegades

  27. Pingback: Gore Sends – Western Rifle Shooters Association

  28. I always told my son that the true measure of intelligence is NOT the answers you give, but the questions you ask.
    AI is just fancy GIGO programmed by very smart people who have walled themselves into channeled outcomes,
    never exploring the unknown or unseen.
    MSG Grumpy

  29. Pingback: Outlook 2026: Chaos and Control - wordlypost.com

  30. Pingback: Outlook 2026: Chaos and Control
