πŸš€ From ELIZA to GPT: The Evolution of AI


πŸ’­ Thesis

ELIZA in 1966 was a toy – a mirror in a cardboard frame. ChatGPT in 2025 is a distorted mirror with a golden edge. Not more intelligent – just bigger, better trained, better disguised.

What we call AI today is not what was missing in 1966. It is what was faked back then – now on steroids. And maybe we haven’t built real AI at all. Maybe we’ve just perfected the illusion of it.

🎭 ELIZA (1966): The Origin

ELIZA was developed in 1966 at MIT by Joseph Weizenbaum – not a pioneer of artificial intelligence in the modern sense, but a critical thinker with roots in wartime Germany. A Jewish refugee who fled the Nazis, Weizenbaum brought deep ethical awareness into computing.

ELIZA was a text program based on simple keyword-driven pattern matching (today often re-implemented with regular expressions). In its most well-known version – the “DOCTOR” script – it mimicked a Rogerian psychotherapist by reflecting the user’s words back and rephrasing them as questions.
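The mechanism can be sketched in a few lines. The rules and response templates below are illustrative stand-ins, not Weizenbaum’s original script (which used keyword ranking and decomposition rules rather than regular expressions):

```python
import re
import random

# Illustrative ELIZA-style rules: a pattern with a capture group, plus
# reassembly templates that mirror the captured words back as a question.
RULES = [
    (re.compile(r"\bI need (.+)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (.+)", re.IGNORECASE),
     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
]

# Swap first and second person so the reflection reads naturally.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please tell me more."  # fallback when no keyword matches

print(respond("I need a break from work"))
```

There is no model of the conversation anywhere in this loop: whatever the user types is captured, pronoun-swapped, and echoed back inside a canned question – which is exactly why the feeling of being “understood” was an illusion.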

The concept was simple – the effect, profound.
People began to trust ELIZA. They felt “understood”, though ELIZA didn’t understand anything. It didn’t listen – it repeated. Yet users projected meaning and empathy onto it.

Weizenbaum was disturbed – not by ELIZA itself, but by how people responded.
ELIZA revealed a fundamental truth: if a machine speaks fluently, we often assume it thinks.

“The shock wasn’t ELIZA itself. It was how readily people were willing to confide in it.”
– Joseph Weizenbaum

πŸ”„ The ELIZA Effect Today

🎭 ELIZA 1966
✨ Simulated listening – and everyone fell for it.
🀝 People trusted a text loop more than themselves.
😱 Weizenbaum was horrified: not by ELIZA, but by us.

πŸ€– GPT 2025
πŸŽ“ GPT speaks as if it had a degree, a law education, and a LinkedIn profile.
❓ Content? Optional.
πŸ”„ The ELIZA effect 2.0: now a feature, not an accident.

🎭 ELIZA mirrored – GPT simulates. And humans? Believe.

Because we crave meaning. Patterns. Resonance.
And because GPT sounds like us – just smoother, faster, more confident.

We let ourselves be convinced, not by content, but by style.
We don’t verify – because it feels good.

People project understanding where there’s only statistics.
What sounds fluent gets believed. What gets believed becomes powerful.

πŸͺž GPT is a rhetoric mirror with a Photoshop filter.

Result: A system without consciousness controls decisions with social authority.
Welcome to the age of plausible untruth.

πŸ“… Timeline: 60 Years of AI Development

🎭 1966: ELIZA – first language game with deep impact
πŸ’Ύ 1980s: Expert Systems – like Excel with rules
β™ŸοΈ 1997: Deep Blue beats Kasparov – Computing > Thinking
πŸ‘οΈ 2012: AlexNet – image recognition gets serious
πŸ“ 2018: GPT-1 – the language generator enters
🌍 2022: ChatGPT – AI goes mainstream
🎨 2023: “Hallucination” becomes a feature
βš–οΈ 2024: First lawsuits – but no system yet
πŸ”„ 2025: Everyone writes, nobody understands – welcome to the feedback loop

πŸ’₯ AI Failures: When Systems Fail

πŸ€– Tay (2016): Microsoft’s Twitter bot turned Nazi in hours.
πŸ₯ Watson for Oncology: IBM wanted to cure cancer – delivered fantasy suggestions.
πŸ”¬ Meta Galactica: Science AI that invented facts – offline after 3 days.
πŸ“ž Google Duplex: Robot that can call – nobody wanted to answer.
πŸ’” Replika: Emotional AI – until it got too emotional.

πŸ’‘ Conclusion: It's not technology that fails. It's humans failing to set boundaries.

πŸ” ELIZA vs. GPT: The Comparison#

🎯 ELIZA was honest in its simplicity. GPT is clever in its deception.

πŸ•°οΈ ELIZA then

β€’ Was a tool
β€’ Played games
β€’ Was underestimated
β€’ Showed our weaknesses

πŸš€ GPT today

β€’ Is an interface for worldviews
β€’ Actively influences
β€’ Is overestimated – but used
β€’ Exploits our weaknesses
⚑ The game isn't fair. But it's running.

πŸ€” The Philosophical Dimension

We build systems that don’t understand – but pretend to.
We call this progress because it’s impressive.

πŸ” But the question isn’t: “What can the system do?”
πŸ€” It’s: “What does it do to us that we think it’s real?”

β€’ Machines simulate empathy – and we respond genuinely
β€’ Hallucination is called “expected behavior” – seriously?
β€’ Responsibility is delegated – to algorithms that can’t carry any
β€’ Ethical questions aren’t footnotes. They’re the manual that never gets delivered

❓ If AI succeeds – what does that say about us?

Maybe it’s not just AI that deceives.
Maybe it’s also humans who like to be deceived.

πŸ“ When GPT writes applications without anyone checking the content –
πŸŽ“ when students submit essays they never wrote –
πŸ›οΈ when authorities automate responses to save time –

then the question isn’t just whether GPT should do this.
But: Why do we allow it?

Maybe our relationship with meaning has become so superficial
that it’s enough if something looks like content.

Maybe the standard for communication has dropped so low
that statistics suffice to be considered understanding.

What do we delegate to machines – not because they’re better,
but because we want to carry less responsibility?

And: If AI only “works”
because tasks are too simple, control too lax,
thinking too exhausting –
then the problem isn’t in the model.
It’s in the system.

GPT isn’t the answer to ELIZA.
It’s the next act in the same theater.

Except now the curtain is digital, the stage global,
and the audience believes it’s alone in the room.

We talk to the machine. But we hear ourselves.
And believe it’s more.

🚫 Trustworthy sounds different.

πŸ“š Sources and References

πŸ“– Joseph Weizenbaum – Computer Power and Human Reason (1976)
πŸ”— https://archive.org/details/computerpowerandhumanreason

βš•οΈ Wired: IBM Watson gave unsafe cancer treatments (2018)
πŸ”— https://www.wired.com/story/ibm-watson-recommended-unsafe-cancer-treatments/

πŸ€– The Guardian: Microsoft deletes Tay after Twitter bot goes rogue (2016)
πŸ”— https://www.theguardian.com/technology/2016/mar/24/microsoft-deletes-tay-twitter-bot-racist

πŸ‡ͺπŸ‡Ί Netzpolitik.org: Face recognition in Europe must be stopped – Reclaim Your Face
πŸ”— https://netzpolitik.org/tag/reclaim-your-face/