
ELIZA's Rules vs. GPT's Weights: The Same Symbol Manipulation, Just Bigger


USER: I feel depressed.
ELIZA: How long have you been feeling depressed?
GPT-4: I'm sorry to hear that. Depression can be a challenging experience…

At first glance, GPT's response seems more sophisticated, but both systems are doing the same thing at their core: matching inputs to outputs without any true understanding.


How ELIZA Really Worked

ELIZA worked with simple, rule-based scripts called pattern-action rules. These consisted of predefined patterns (e.g., recognizing "I feel …"), each paired with a corresponding pre-structured response. All without context or real "understanding."

Example Script in ELIZA

; Rule template
(IF (MATCH "I feel *" Input)
 (ANSWER "How long have you been feeling * ?"))

Input: I feel sad.
Output: How long have you been feeling sad?
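The rule above can be sketched as runnable Python. This is only an illustration of the pattern-action idea; Weizenbaum's original ELIZA was written in MAD-SLIP, not Python, and used its own matching notation rather than regular expressions.

```python
import re

# Illustrative pattern-action rules: each pattern is paired with a
# response template, and "(.*?)" plays the role of the "*" wildcard.
RULES = [
    (re.compile(r"I feel (.*?)\.?$", re.IGNORECASE),
     "How long have you been feeling {0}?"),
]

def eliza_respond(user_input):
    """Return the first matching template, filled with the captured text."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # canned fallback, as in many ELIZA scripts

print(eliza_respond("I feel sad."))  # -> How long have you been feeling sad?
```

Note that the rule never interprets the captured word: "sad" could be any string at all, and the response would be produced just the same.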

ELIZA was like a parrot with templates – precise but mindless.


And GPT?

GPT seems more impressive – it generates longer, seemingly more empathetic responses like:

“I’m sorry to hear that. Depression can be a huge challenge. Maybe it would help to talk to someone you trust…”

But even here: No real understanding. GPT merely arranges tokens (“words”) based on statistical probabilities. It has no clue what “depressed” means – only that certain phrases tend to follow that word.
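The token-arranging principle can be made concrete with a toy sketch. The probability table below is invented for illustration; a real model computes such distributions with billions of learned weights, but the operation is the same: pick the next token from a distribution conditioned on the preceding tokens.

```python
import random

# Hypothetical next-token probabilities, keyed on the last two tokens.
# (Invented numbers for illustration -- not from any real model.)
NEXT_TOKEN_PROBS = {
    ("I", "feel"): {"depressed": 0.4, "sad": 0.3, "great": 0.2, "nothing": 0.1},
    ("feel", "depressed"): {".": 0.6, "today": 0.3, "again": 0.1},
}

def next_token(context, rng=random.Random(0)):
    """Sample the next token given the last two tokens of the context."""
    probs = NEXT_TOKEN_PROBS.get(tuple(context[-2:]), {"...": 1.0})
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights)[0]

tokens = ["I", "feel"]
tokens.append(next_token(tokens))
print(tokens)
```

At no point does the sampler know what "depressed" refers to; the word is just a key in a frequency table.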


Difference in the Illusion – Not in the Principle

The difference lies in flexibility:

| | ELIZA | GPT-4 |
|---|---|---|
| Feature | Fixed scripts | Weighted probabilities |
| Adaptation | Static | Dynamic at runtime ("on the fly") |
| Context | None | Limited token context (e.g., 8k, 32k) |
| Understanding | No real understanding | No real understanding |
| Effect | Primitive, transparent | Empathy illusion through wordiness |
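The shared principle behind these differences can be sketched in a few lines: seen from the outside, both systems are the same kind of object, a function from an input string to an output string. The interface below is hypothetical (neither system exposes such an API), and the GPT-style response is a canned stand-in for what is, in reality, just a vastly larger learned mapping.

```python
from typing import Callable

# Both chatbots share one type: string in, string out.
Chatbot = Callable[[str], str]

def eliza(prompt: str) -> str:
    # Fixed script: one hand-written rule fires, or a canned fallback.
    if "I feel depressed" in prompt:
        return "How long have you been feeling depressed?"
    return "Please go on."

def gpt_like(prompt: str) -> str:
    # Weighted probabilities -- faked here with a fixed string, since the
    # real mapping is a much bigger lookup learned from training data.
    return "I'm sorry to hear that. Depression can be a challenging experience..."

for bot in (eliza, gpt_like):  # identical interface, different internals
    print(bot("I feel depressed."))
```

The design point: nothing in the type `Chatbot` distinguishes rules from weights, which is exactly the post's claim about the two systems.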

One might say:

ELIZA was a sheep – GPT is a chameleon. But both remain animals in the cage of symbol manipulation.


LLMs like GPT are not thinking machines – they are just better imitators. ELIZA was honestly blunt – GPT is skilled at deception.
