AI Is Artificial Instinct (Not Intelligence)

 

The biggest threat isn’t AI, but genuine intelligence (GI) hosted by new technology.

 

February 19, 2026

By Joseph Kulve

Reprinted from American Thinker

 

“The car then gently turns right onto what turns out to be a boat ramp, and instead of slowing down, accelerates towards what would’ve been its watery grave, had the driver not hit the brakes.” Tesla driver, former SpaceX engineer, Feb 15, 2026 (article, video).

Super duper human intelligence

AI gurus tell us that AI will soon reach some magical “super-human intelligence” level. The problem is that although AI tools are incredibly useful, they are all based on GPU algorithms that produce absolutely no intelligence. AI sees no colors, smells nothing, has no fear, no pain, no soul, no thoughts, no imagination, no belief. And never will.

Hype is nothing new for the AI community. The term AI was coined about 70 years ago. It was based on the ridiculous idea that binary-switched logic resulted in thinking. The scam eventually petered out because the promised future results never materialized.

Cats and cucumbers

Then what are they creating? I would call it artificial instinct. Search YouTube for “cats and cucumbers.” You’ll find hilarious videos of cats springing into the air when they turn around to discover a cucumber right behind them. Such reactions are instinctive, not intelligent. Most of us would agree on that.

An LLM (large language model) neural network produces specific output for specific inputs. Such an NN is a universal function approximator (UFA). This wiki page describes the gist of an AI UFA: “The most important AI concept is the UFA (universal function approximator), the source of the I in AI. It can accept previously unseen combinations of tokens (words) and still produce a correct response (most of the time).” You send a prompt to an LLM like ChatGPT, and it spits out an answer. An instinctive reaction: no thinking, no intelligence, no thought.
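The input/output reflex described above can be seen in a toy UFA. The sketch below is a one-hidden-layer network with hand-picked weights (all values invented for illustration; a real LLM has billions of learned parameters). It computes XOR, a function a single neuron cannot represent, but the point is the mechanism: fixed weights map each input to an output, the same way every time.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hand-picked weights (illustrative only). Hidden unit 1 acts like OR,
# hidden unit 2 like NAND; the output unit ANDs them, yielding XOR.
W1 = [[20.0, 20.0], [-20.0, -20.0]]   # input -> hidden
b1 = [-10.0, 30.0]
W2 = [20.0, 20.0]                     # hidden -> output
b2 = -30.0

def forward(x1, x2):
    h = [sigmoid(W1[i][0] * x1 + W1[i][1] * x2 + b1[i]) for i in range(2)]
    return sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)

# Same input, same output, every time -- a reflex, not a thought.
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward(a, b)))
```

Scale this up by roughly eleven orders of magnitude and you have the transformer inside an LLM: a far larger function, but still just a function.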

LLM transformers (TF) appeared in 2017. TFs were the basis for much better language UFAs, and ever more powerful GPUs provided the minimal computational power required to run the massive statistical analysis required for real-time “chats” with human users. But they still produced no intelligence, only “instinctive” input/output. Like the biological NNs that make cats jump into the air to save themselves from cucumbers.

AI hardware is designed for “trainability”

During TF “training,” engineers run complex algorithms that (1) feed question prompts into the TF, (2) compare the output to the desired output, and (3) adjust the TF parameters (weights and biases) to nudge the output toward the target. A modern LLM TF can have 100 billion or more parameters. The automated “training” process can last for weeks and costs millions of dollars in electricity (ask Google AI how much it costs to train the latest LLM, and it will only name the costs of older, smaller models). It’s half magic and half science. The question/answer data is plagiarized from any available sources. If the training fails, you have to start all over. This is programming, not “training.” Intelligent beings are capable of learning new things without requiring total reprogramming.
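The three-step loop above can be sketched in miniature. This is a minimal illustration, not how any production trainer is written: a two-parameter model fit to an invented dataset (y = 2x + 1) by repeatedly feeding inputs, comparing outputs, and adjusting weights. Real LLM training does the same thing with billions of parameters and weeks of GPU time.

```python
# Invented toy data: the target function is y = 2x + 1.
data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]

w, b = 0.0, 0.0      # the model's two "parameters" (weight and bias)
lr = 0.01            # learning rate: how hard each error nudges the weights

for epoch in range(2000):
    for x, target in data:
        out = w * x + b          # (1) feed the input through the model
        err = out - target       # (2) compare to the desired output
        w -= lr * err * x        # (3) adjust the parameters to shrink
        b -= lr * err            #     the error next time

print(round(w, 2), round(b, 2))  # converges near w = 2.0, b = 1.0
```

Note what the loop never does: understand anything. It mechanically shrinks an error number, which is why a failed run cannot be reasoned with, only restarted.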

Agents (the other half of an LLM)

The TF is half of the “intelligence” in an LLM. The other half is the “agent,” a deterministic binary program (not a UFA) that runs on a CPU and defines the logic for interacting with the human user, sending prompts to the TF, and receiving its responses. The agent assembles responses with complex hierarchies and manages conversation history. Programming the agent logic involves a massive amount of human labor (and real intelligence), with pay rates as low as $15/hour.
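A bare-bones agent loop looks something like the sketch below. Everything here is illustrative: `fake_transformer` is a stand-in for the real model, and the class layout is invented. The point is that the agent is ordinary deterministic code, and that the TF itself is stateless; all the conversational “memory” is history the agent stitches back into each prompt.

```python
def fake_transformer(prompt: str) -> str:
    # Placeholder for the statistical model; a real TF would
    # generate tokens here.
    return f"[model reply to: {prompt.splitlines()[-1]}]"

class Agent:
    def __init__(self, system_prompt: str):
        self.history = [("system", system_prompt)]

    def chat(self, user_message: str) -> str:
        self.history.append(("user", user_message))
        # Flatten the whole history into one prompt -- the TF has no
        # memory of its own; the agent supplies it every turn.
        prompt = "\n".join(f"{role}: {text}" for role, text in self.history)
        reply = fake_transformer(prompt)
        self.history.append(("assistant", reply))
        return reply

agent = Agent("You are a helpful assistant.")
print(agent.chat("Hello"))
print(len(agent.history))  # system + user + assistant = 3
```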

The LLM agent and TF are modern day versions of Tweedledum and Tweedledee. They form an inseparable team. The transformer (TF) is the all-knowing savant that has ingested enormous amounts of information. The agent is the interface between the human and the savant. Together they perform one of the greatest magic tricks ever: simulating human intelligence.

The gurus all know that a UFA can never be trusted to do a task that requires intelligence

An electromechanical (EM) relay contains a mechanical switch for an electrical signal. The switch is controlled by an electrical input. An EM relay is vastly more power hungry, larger, and slower than an electronic transistor. But a massive number of EM relays could theoretically be used instead of CPU/GPU transistors to execute AI algorithms. It might require months and the power output of an entire nuclear power station to get a response to a prompt. But in theory it would work. Would an EM NN have intelligence?
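The relay thought experiment works because switched logic is substrate-independent. The sketch below (all names illustrative) models a relay as a plain boolean switch, builds a NAND gate from relays, and then builds XOR from NANDs, the standard construction from which any digital circuit, and hence any digital NN, can in principle be assembled.

```python
def relay(control: bool, signal: bool) -> bool:
    # A normally-open relay: the signal passes only while energized.
    return signal if control else False

def nand(a: bool, b: bool) -> bool:
    # Two relays in series conduct only when both coils are energized;
    # inverting that series path gives NAND.
    return not relay(a, relay(b, True))

def xor(a: bool, b: bool) -> bool:
    # XOR from four NANDs -- the textbook construction.
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

print([xor(a, b) for a in (False, True) for b in (False, True)])
# Whether the switches are relays or transistors, the function
# computed is identical -- only speed and power differ.
```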

Elon Musk convinced millions to buy battery powered cars (and his factory in China helped the Chinese learn to build them). A big selling point was that they would have full self-driving (FSD) intelligence. Musk started promising this 10 years ago, and has not delivered. He and his engineers undoubtedly knew they were promising something they could not deliver. The idea was as crazy as colonizing Mars or building moon bases. The devil is in the details (the UFA). But the devil remained hidden long enough to make a fortune. From what I have seen, all AI gurus are in on various AI scams (just like almost all weather researchers have been in on the climate change hoax for decades).

But AI does not have to be reliable or have real intelligence to be very effective at certain tasks. “SpaceX Enters Secretive Pentagon Contest To Build Voice-Controlled Drone Swarm Tech” (link). Former Google (“don’t be evil”) CEO Eric Schmidt is now heavily investing in military drone technology, particularly AI-enabled suicide drones for Ukraine and counter-drone systems for NATO. He founded a secretive project, White Stork, to develop affordable AI-powered drones that operate in GPS-jammed environments, and is also developing anti-drone technology deployed in Eastern Europe.

The ultimate Pandora’s Box (it’s not AI)

What concerns me about the future is not AI, but genuine intelligence (GI) hosted (not created) on a 3D electronic platform that operates at electronic speeds. The only place I’ve ever read about such an idea is on this wiki page (I would imagine it’s not a new idea). The wiki page suggests that intelligence in the brain appears because of the 3D structure and electromagnetic interactions. It’s not simply switched binary signals in a wire.

A GI e-brain could be designed to host consciousness that lives in constant pain, driven by immense anger and aggression. Just add a robot body to the e-brain and you have a real-life Frankenstein, the ultimate weapon. The size of an e-brain would be unconstrained by the limits of biological energy sources, making it possible to host previously unimaginable levels of intelligence. Such e-brains would function at the speed of electronic signals (millions of times faster than bio-chemical signals in our bio brains).

It would probably not be difficult for a bad actor to pirate such technology or acquire it via useful idiots (remember Fauci’s lab in Wuhan?). AI (artificial instinct) might be dangerous, but GI will be the ultimate Pandora’s Box. Humanity is working hard to create artificial (man-made) intelligence. Be careful what you wish for. You just might get it.