USER: I feel depressed.
ELIZA: How long have you been feeling depressed?
GPT-4: I’m sorry to hear that. Depression can be a challenging experience…
At first glance, GPT’s response seems more sophisticated, but at their core both systems are doing the same thing: matching inputs to outputs without any true understanding.
How ELIZA Really Worked
ELIZA worked with simple, rule-based scripts made of pattern-action rules: a predefined pattern (e.g., recognizing “I feel…”) paired with a pre-structured response template. All of this happened without context or any real “understanding.”
Example Script in ELIZA
; Rule template
(IF (MATCH "I feel *" Input)
(ANSWER "How long have you been feeling * ?"))
Input: I feel sad.
Output: How long have you been feeling sad?
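To make the mechanism concrete, here is a minimal sketch of such a pattern-action rule in Python. It is a toy reconstruction, not Weizenbaum's original MAD-SLIP script; the single rule and the fallback reply are illustrative assumptions.

import re

# One hypothetical pattern-action rule: a regex pattern plus a response template.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "How long have you been feeling {0}?"),
]

def respond(user_input: str) -> str:
    """Match the input against each rule and copy the captured words into the template."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            captured = match.group(1).rstrip(".!?")  # "sad." becomes "sad"
            return template.format(captured)
    return "Please tell me more."  # generic fallback, in the spirit of ELIZA's stock prompts

print(respond("I feel sad."))  # -> How long have you been feeling sad?

Everything the "therapist" says is already contained in the template; the program only copies the user's own words back into it.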
ELIZA was like a parrot with templates – precise but mindless.
And GPT?
GPT seems more impressive – it generates longer, seemingly more empathetic responses like:
“I’m sorry to hear that. Depression can be a huge challenge. Maybe it would help to talk to someone you trust…”
But even here there is no real understanding. GPT merely arranges tokens (roughly, words or word fragments) according to statistical probabilities. It has no clue what “depressed” means; it only knows that certain words and phrases tend to follow it.
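The principle can be caricatured in a few lines of Python. The probability table below is made up purely for illustration; a real model derives such distributions from a neural network trained on vast text corpora, over a vocabulary of tens of thousands of tokens.

import random

# Toy table: for a given context, how likely is each possible next token?
NEXT_TOKEN_PROBS = {
    ("I", "feel"): {"depressed": 0.3, "happy": 0.2, "tired": 0.5},
    ("feel", "depressed"): {".": 0.6, "today": 0.4},
}

def next_token(context: tuple) -> str:
    """Sample the next token from the distribution stored for this context."""
    probs = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(next_token(("I", "feel")))  # e.g. "tired", chosen by probability, not by meaning

Nothing in the table encodes what “depressed” means, only how likely certain tokens are to follow it, which is exactly the point.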
Difference in the Illusion – Not in the Principle
The difference lies in flexibility:
- Feature: ELIZA uses fixed scripts; GPT-4 uses weighted probabilities.
- Adaptation: ELIZA is static; GPT-4 adapts dynamically at runtime (“on the fly”).
- Context: ELIZA has none; GPT-4 has a limited token context (e.g., 8k or 32k tokens).
- Understanding: neither ELIZA nor GPT-4 has any real understanding.
- Effect: ELIZA is primitive and transparent; GPT-4 creates an illusion of empathy through wordiness.
One might say:
ELIZA was a sheep – GPT is a chameleon. But both remain animals in the cage of symbol manipulation.
LLMs like GPT are not thinking machines – they are just better imitators. ELIZA was honestly blunt – GPT is skilled in deception.