🧠 What Is “AI” Really?
The term “Artificial Intelligence” suggests thinking, awareness, and understanding.
But models like GPT are merely statistical pattern completers – they understand nothing.
Statistics ≠ Thinking
GPT doesn’t choose the next word because it makes sense, but because it is likely.
What it produces is linguistic surface without depth – impressive, but hollow.
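To make “likely, not meaningful” concrete, here is a deliberately tiny toy sketch in Python – not how GPT is actually built, just the principle. The hand-written probability table stands in for what a real model learns from data; the completion is drawn by weight, with no check for sense.

import random

# Toy "language model": a hand-written table of next-word probabilities.
# A real model learns such distributions over tens of thousands of tokens from data.
next_word_probs = {
    "I feel": {"sad": 0.45, "great": 0.30, "nothing": 0.15, "purple": 0.10},
}

def complete(context: str) -> str:
    """Pick the next word because it is likely – not because it makes sense."""
    words, weights = zip(*next_word_probs[context].items())
    return random.choices(words, weights=weights, k=1)[0]

print("I feel", complete("I feel"))   # e.g. "I feel sad" – probability, not intent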
🧩 ELIZA vs. GPT – Large-Scale Symbol Manipulation
Both ELIZA (1966) and GPT-4 (2023) are based on symbol processing without meaning.
The illusion comes from plausible language – not from comprehension.
ELIZA – Regex with impact
(IF (MATCH "I feel *" input)
    (OUTPUT "How long have you felt * ?"))
ELIZA used simple text patterns to create the illusion of conversation.
Even back then, people attributed emotional depth to it – entirely unfounded.
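The whole trick fits in a few lines of modern Python. Below is my own minimal reconstruction of one ELIZA-style rule (not Weizenbaum’s original code, which used keyword decomposition rules rather than regular expressions): a regex captures what follows “I feel” and reflects it back as a question.

import re

# One ELIZA-style rule: capture what follows "I feel" and mirror it back.
PATTERN = re.compile(r"\bI feel (.+)", re.IGNORECASE)

def eliza_reply(user_input: str) -> str:
    match = PATTERN.search(user_input)
    if match:
        feeling = match.group(1).rstrip(".!?")
        return f"How long have you felt {feeling}?"
    return "Please tell me more."   # generic fallback, much like the original

print(eliza_reply("I feel sad today."))
# -> How long have you felt sad today?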
GPT – An XXL Probability Machine
{"input": "I feel sad today", "output": "I'm sorry to hear that. Do you want to talk about it?"}
GPT seems empathetic because it sounds like it is. But it has no concept of sadness –
only knowledge of what might statistically follow next.
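That “statistical next” is directly inspectable with a small open model. The sketch below assumes the Hugging Face transformers and torch packages and the freely downloadable gpt2 model (GPT-4 itself cannot be inspected this way); it prints the model’s top candidates for the next token after a prompt – a ranking of likelihoods, and nothing else.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes: pip install torch transformers (downloads the small open "gpt2" model)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("I feel sad today.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, sequence_length, vocab_size)

# Probability distribution over the *next* token – the model's entire "opinion".
probs = torch.softmax(logits[0, -1], dim=-1)
for prob, token_id in zip(*torch.topk(probs, k=5)):
    print(f"{tokenizer.decode([token_id.item()])!r:>10}  p={prob.item():.3f}")

Whatever lands on top of that ranking is what gets said. There is no second channel where meaning is checked.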
🎠 Examples of Deception Through Language
Job Applications
GPT generates the perfect cover letter – with motivation, strengths, soft skills.
But the motivation isn’t real, and the system doesn’t know the person it is writing for.
Academic Citations
GPT creates sources that do not exist – but sound like real papers.
This is called a “hallucination,” but in reality, it’s invented content.
Emotional Chatting
Replika & similar tools simulate affection, understanding, love – on demand.
People build relationships – with a model that cannot have one.
🧠 Critical Voices in the AI Debate
⚖️ Timnit Gebru: Structural Change for Ethical AI
Timnit Gebru calls for structural change and ethical accountability.
She demands inclusion of marginalized voices and warns against unchecked AI hype.
🛑 Gary Marcus: Regulation Against AI Hype
Gary Marcus calls for strict public oversight and criticizes the lack of world knowledge in today’s models.
Without regulation, he argues, misinformation becomes the default.
🔍 Meredith Whittaker: AI as a Product of Surveillance Capitalism
Meredith Whittaker sees AI as the result of exploitative data capitalism.
She calls for structural change and resistance against tech monopolies.
⚖️ Sandra Wachter: The Right to Explanation and Transparency
Sandra Wachter criticizes the lack of a legal “right to explanation” for algorithmic decisions in the EU.
She proposes “counterfactual explanations” and calls for fairness beyond transparency.
⚠️ What Does This Mean for Us?
- We mistake stylistic coherence for truth.
- We project understanding where there is only statistics.
- We delegate responsibility to systems without consciousness.
If a machine sounds convincing, we assume it’s intelligent.
But persuasiveness is not intelligence – it’s just style.
📚 Further Reading
- Joseph Weizenbaum – Computer Power and Human Reason (1976)
- Emily Bender et al. – On the Dangers of Stochastic Parrots (2021)
- 99% Invisible – The ELIZA Effect
- Gary Marcus – Rebooting AI (2019)
- Wired – IBM Watson recommended unsafe cancer treatments
- The Guardian – Microsoft deletes Tay after Twitter bot goes rogue
🧾 Conclusion
GPT is not intelligent.
It’s just very good at pretending.
We are not facing real intelligence – but a rhetorical mirror.
And if we begin to trust that mirror, we lose the distinction between substance and illusion.
Understanding needs more than syntax. It needs awareness.