What is this about?
This site explores how systems like ChatGPT work –
and why they sound smart without truly understanding.
We examine the technology behind the illusion,
expose structural deception, and ask:
what does this mean for ethics, trust, and society?
From ELIZA to GPT:
What seems like progress may just be a better mirror.
From ELIZA to GPT: The Evolution of AI
Thesis
ELIZA in 1970 was a toy – a mirror in a cardboard frame.
ChatGPT in 2025 is a distorted mirror with a golden edge.
Not more intelligent – just bigger, better trained, better disguised.
What we call AI today is not what was missing in 1970.
It is what was faked back then – now on steroids.
And maybe we haven’t built real AI at all.
Maybe we’ve just perfected the illusion of it.
ELIZA – The Machine That Reflected Us
ELIZA was developed in 1966 at MIT by Joseph Weizenbaum –
not a pioneer of artificial intelligence in the modern sense,
but a critical thinker with German-Jewish roots.
As a refugee from Nazi Germany, Weizenbaum brought
deep ethical awareness into computing.
ELIZA was a simple text program that used keyword spotting and pattern-matching rules
to simulate conversation. Its most famous version, DOCTOR,
acted like a therapist – reflecting questions, paraphrasing replies,
keeping users engaged with simple tricks.
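How little machinery that takes is easy to show. The sketch below is not Weizenbaum's code (the original used keyword ranking and decomposition templates written in MAD-SLIP); the patterns and canned replies here are invented for illustration, but the reflection trick is the same:

    import random
    import re

    # A few DOCTOR-style rules: a pattern to spot, plus canned reflections that
    # reuse the user's own words. These patterns and replies are invented for
    # illustration; the original ELIZA used keyword ranking and decomposition
    # templates, not this exact code.
    RULES = [
        (re.compile(r"\bi am (.*)", re.IGNORECASE),
         ["Why do you say you are {0}?", "How long have you been {0}?"]),
        (re.compile(r"\bi feel (.*)", re.IGNORECASE),
         ["Why do you feel {0}?", "Does feeling {0} happen often?"]),
        (re.compile(r"\bmy (.*)", re.IGNORECASE),
         ["Tell me more about your {0}.", "Why is your {0} important to you?"]),
    ]

    # Fallbacks when nothing matches: the classic "keep them talking" move.
    FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

    def respond(user_input: str) -> str:
        """Reflect the input back: match a pattern, echo the user's own words."""
        for pattern, templates in RULES:
            match = pattern.search(user_input)
            if match:
                fragment = match.group(1).rstrip(".!?")
                return random.choice(templates).format(fragment)
        return random.choice(FALLBACKS)

    if __name__ == "__main__":
        print(respond("I am tired all the time"))   # e.g. "Why do you say you are tired all the time?"
        print(respond("My mother never listens."))  # e.g. "Tell me more about your mother never listens."

A few dozen lines, no model of the world, no memory of the conversation. That is the entire mechanism people confided in.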
The idea was basic. The impact – massive.
People started to trust ELIZA. They felt understood.
Even though it didn’t understand anything.
It didn’t listen. It mirrored.
And yet people projected emotions onto it.
Weizenbaum was shocked – not by the program, but by the people.
He saw that humans attribute empathy and meaning to machines
just because they speak fluently.
“The shock wasn’t ELIZA itself.
It was how readily people were willing to confide in it.”
– Joseph Weizenbaum
Context and Comparison
ELIZA (1966–1970)
Pattern matching, MIT, emotional reactions.
A simple pattern-matching script made people feel heard.
Weizenbaum wasn’t worried about the software.
He was worried about us.
GPT-3/4 (2020–2025)
Billions of parameters. Trained on everything.
Understood nothing.
GPT talks like it has a PhD and a LinkedIn profile.
But what it says is often style over substance.
The ELIZA effect 2.0 – now with an upgrade.
User Experience and Manipulation
ELIZA mirrored. GPT simulates.
And people – believe.
Because we crave meaning. Patterns. Resonance.
And GPT sounds like us – only smoother, faster, more confident.
We’re not convinced by facts, but by fluency.
We don’t check – because it feels right.
GPT is a rhetorical mirror with a Photoshop filter.
We project understanding onto a system
that calculates probabilities.
What sounds fluent is believed.
What is believed becomes powerful.
The result: a system with no awareness,
influencing decisions with social authority.
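To make "calculates probabilities" concrete, here is a deliberately tiny sketch of the selection step. The vocabulary and the weights are invented; a real model derives its distribution from the prompt through billions of learned parameters, but the choice of what to say next is just as indifferent to truth as this one:

    import random

    def next_word(candidates: dict[str, float]) -> str:
        """Sample the next word from a probability distribution over candidates."""
        words, weights = zip(*candidates.items())
        return random.choices(words, weights=weights, k=1)[0]

    prompt = "The moon is made of"
    # Nothing in this mechanism checks which continuation is true;
    # "rock" and "cheese" are just weights.
    distribution = {"rock": 0.55, "cheese": 0.25, "dust": 0.15, "regret": 0.05}
    print(prompt, next_word(distribution))

The output is whatever scored highest, not whatever is the case.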
Welcome to the age of plausible untruth.
Timeline: AI as Cultural Theater
- 1966: ELIZA – a language trick with emotional depth
- 1980s: Expert systems – glorified rule-based spreadsheets
- 1997: Deep Blue beats Kasparov – math > mind
- 2012: AlexNet – vision becomes serious
- 2018: GPT-1 – the text engine appears
- 2022: ChatGPT – AI goes mainstream
- 2023: “Hallucination” becomes a feature
- 2024: First lawsuits – still no system
- 2025: Everyone writes, no one understands – welcome to the feedback loop
Failed AI Attempts: When the Mask Slips
- Tay (2016): Microsoft’s chatbot turned into a Nazi within hours.
- Watson for Oncology: IBM’s cancer AI gave fantasy treatments.
- Meta Galactica: Science AI that hallucinated – pulled offline in 3 days.
- Google Duplex: A robot that makes phone calls – nobody answered.
- Replika: Emotional chatbot – too emotional to handle.
Conclusion: It’s not the tech that fails.
It’s the human failure to set boundaries.
The Break: What Really Changed
ELIZA was honest in its simplicity.
GPT is cunning in its disguise.
- ELIZA was a tool. GPT is an interface to belief.
- ELIZA played. GPT persuades.
- ELIZA was underestimated. GPT is overestimated – and used.
- The game isn’t fair. But it runs.
Ethics: Between Simulation and Self-Deception
We build systems that don’t understand – but pretend to.
We call it progress because it’s impressive.
But the question isn’t: “Can the system do things?”
It’s: “What does it do to us that we treat it as real?”
Machines simulate empathy – and we react emotionally.
Hallucination becomes “expected behavior”? Seriously?
Responsibility is delegated –
to algorithms that cannot be held accountable.
Ethical questions aren’t footnotes.
They’re the user manual we never received.
If AI becomes embedded in daily decisions –
what does that say about us?
Maybe we’re not just being deceived by the system.
Maybe we’re allowing it – because it’s convenient.
If GPT writes job applications no one reviews,
if students submit essays they didn’t write,
if governments automate replies to avoid thinking –
then the question isn’t “Should GPT do this?”
It’s “Why do we let it?”
Maybe we’ve made meaning so superficial
that resemblance is enough.
Maybe the standards of communication have sunk so low
that statistics now pass for understanding.
Ethics means asking hard questions – including of ourselves.
What do we delegate – not because machines are better,
but because we want to avoid responsibility?
And if GPT only “works”
because tasks are too simple, control is too weak,
and thinking is too tiring –
then the problem isn’t in the model.
It’s in the system.
Conclusion: Trust Is Not a Feature
GPT isn’t the answer to ELIZA.
It’s the next act in the same play.
Only now, the curtain is digital. The stage is global.
And the audience thinks they’re alone.
We speak to the machine – but hear ourselves.
And believe it’s more than that.
That’s not what trust sounds like.
References
- Joseph Weizenbaum – Computer Power and Human Reason (1976)
  https://archive.org/details/computerpowerandhumanreason
- 99% Invisible – The ELIZA Effect
  https://99percentinvisible.org/episode/the-eliza-effect/
- OpenAI (2023): GPT models and hallucinations
  https://openai.com/research
- Wired: IBM Watson gave unsafe cancer treatments (2018)
  https://www.wired.com/story/ibm-watson-recommended-unsafe-cancer-treatments/
- The Guardian: Microsoft deletes Tay after Twitter bot goes rogue (2016)
  https://www.theguardian.com/technology/2016/mar/24/microsoft-deletes-tay-twitter-bot-racist
- Netzpolitik.org: Reclaim Your Face
  https://netzpolitik.org/tag/reclaim-your-face/
- EDRi: Ban Biometric Mass Surveillance
  https://edri.org/our-work/ban-biometric-mass-surveillance/