Thesis #
ELIZA in 1970 was a toy – a mirror in a cardboard frame. ChatGPT in 2025 is a distorted mirror with a golden edge. Not more intelligent – just bigger, better trained, better disguised.
What we call AI today is not what was missing in 1970. It is what was faked back then – now on steroids. And maybe we haven’t built real AI at all. Maybe we’ve just perfected the illusion of it.
ELIZA (1966): The Origin #
ELIZA was developed in 1966 at MIT by Joseph Weizenbaum – not a pioneer of artificial intelligence in the modern sense, but a critical thinker with roots in wartime Germany. A Jewish refugee who fled the Nazis, Weizenbaum brought deep ethical awareness into computing.
ELIZA was a text program based on simple pattern matching – keyword spotting and scripted decomposition rules, a precursor of what we would now do with regular expressions. In its best-known version – the “DOCTOR” script – it mimicked a Rogerian psychotherapist by reflecting the user’s words and rephrasing them as questions.
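That mechanism fits in a few lines. This is an illustrative reconstruction, not Weizenbaum’s original MAD-SLIP script: the rules, the pronoun table, and the fallback line are invented for the example.

```python
import re

# Minimal ELIZA-style reflection (illustrative, not the original script):
# match a keyword pattern, swap first-person words for second-person ones,
# and bounce the statement back as a question.
PRONOUN_SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the echo sounds like a reply."""
    return " ".join(PRONOUN_SWAPS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default: simulate listening

print(respond("I feel nobody understands me"))
# → "Why do you feel nobody understands you?"
```

No memory, no model of the user, no meaning – just string surgery. That is the entire trick the rest of this essay is about.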
The concept was simple – the effect, profound. People began to trust ELIZA. They felt “understood”, though ELIZA didn’t understand anything. It didn’t listen – it repeated. Yet users projected meaning and empathy onto it.
Weizenbaum was disturbed – not by ELIZA itself, but by how people responded. ELIZA revealed a fundamental truth: if a machine speaks fluently, we often assume it thinks.
“The shock wasn’t ELIZA itself. It was how readily people were willing to confide in it.” – Joseph Weizenbaum
The ELIZA Effect Today #
ELIZA 1966 – Simulated listening, and everyone fell for it. People trusted a text loop more than themselves. Weizenbaum was horrified: not by ELIZA, but by us.
GPT 2025 – GPT speaks as if it had a degree, a law license, and a LinkedIn profile. Content? Optional. The ELIZA Effect 2.0: Now a feature, not an accident.
ELIZA mirrored – GPT simulates. And humans? They believe. Because we crave meaning. Patterns. Resonance. And because GPT sounds like us – only smoother, faster, more confident.
We let ourselves be convinced, not by content, but by style. We don’t verify – because it feels right. People project understanding where there is only statistics. What sounds fluent is believed. What is believed becomes powerful.
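The “only statistics” point can be made concrete with a toy model far below GPT’s scale: a bigram chain that has no representation of meaning at all, yet emits word sequences with the surface fluency of its training text. The corpus and the seed are made up for the illustration.

```python
import random
from collections import defaultdict

# A toy illustration of "only statistics": a bigram model knows nothing
# about meaning, it only records which word tends to follow which.
corpus = ("the system speaks fluently so the user trusts the system "
          "and the user believes the system understands the user").split()

chain = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    chain[a].append(b)  # record observed successors of each word

random.seed(42)
word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(chain[word])  # pick a statistically plausible next word
    out.append(word)

print(" ".join(out))  # fluent-looking, meaning-free
```

GPT’s mechanism is vastly more sophisticated, but the category is the same: next-word plausibility, not understanding.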
GPT is a rhetoric mirror with a Photoshop filter.
The result: A system without consciousness drives decisions with social authority. Welcome to the age of plausible untruth.
Timeline: 60 Years of AI Development #
- 1966: ELIZA – the first language game with depth
- 1980s: Expert Systems – like Excel with rules
- 1997: Deep Blue beats Kasparov – computing over thinking
- 2012: AlexNet – image recognition gets serious
- 2018: GPT-1 – the language generator arrives
- 2022: ChatGPT – AI goes mainstream
- 2023: “Hallucination” becomes a feature
- 2024: First lawsuits – but no systemic response yet
- 2025: Everyone writes, nobody understands – welcome to the feedback loop
- 2025: AI Act takes effect – the EU regulates. Corporations lobby. The result: paper
- 2025: Deepfake elections – fake voices, fake videos, real election results
- 2026: AI trains on AI output – the snake eats its own tail
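The last entry describes what researchers call model collapse. A toy sketch of the feedback loop, with made-up parameters (mean 0, spread 1, 50 samples, 20 generations): fit a distribution, sample from the fit, refit on the samples, repeat – each generation trains only on the previous generation’s output.

```python
import random
import statistics

# Toy model-collapse sketch: a distribution repeatedly refitted on its own
# samples. With small samples the estimated parameters drift away from the
# original, generation by generation. All numbers here are arbitrary.
random.seed(0)
mu, sigma = 0.0, 1.0
spreads = [sigma]
for generation in range(20):
    samples = [random.gauss(mu, sigma) for _ in range(50)]
    mu, sigma = statistics.mean(samples), statistics.stdev(samples)
    spreads.append(sigma)

print(f"stdev after 20 generations: {sigma:.3f}")
```

Whether real language models degrade this way, and how fast, is an active research question – but the mechanism of the snake eating its own tail is exactly this loop.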
AI Failures: When Systems Break #
- Tay (2016): Microsoft’s Twitter bot turned Nazi within hours.
- Watson for Oncology: IBM wanted to cure cancer – delivered fantasy treatments.
- Meta Galactica: A science AI that invented facts – taken offline after 3 days.
- Google Duplex: A robot that makes phone calls – nobody wanted to pick up.
- Replika: Emotional AI – until it got too emotional.
- Air Canada Chatbot (2024): Bot invents refund policy. Airline loses in court – because the bot promised it.
- DPD Chatbot (2024): Parcel service bot insults customers and writes poems about its own uselessness. Went viral. Deservedly.
- ChatGPT for Lawyers (2023): Chatbot invents court rulings, complete with case numbers. Lawyers file them. Judges are not amused.
- Deepfake Biden (2024): Robocalls with a faked Biden voice. Voters told to stay home. Cost: under 1,000 dollars.
Technology doesn’t fail. Humans fail to draw boundaries.
ELIZA vs. GPT: The Comparison #
ELIZA was honest in its simplicity. GPT is clever in its deception.
ELIZA then – Was a tool. Played a game. Was underestimated. Revealed our weaknesses.
GPT now – Is an interface for worldviews. Actively influences. Is overestimated – but used. Exploits our weaknesses.
The game isn’t fair. But it’s running.
The Business of Illusion #
AI is no longer a research project. It’s an industry. And like every industry, it follows capital, not truth.
OpenAI started as a non-profit. Today it’s a corporation worth billions. The mission hasn’t changed – it was simply reinterpreted. “Artificial general intelligence for the benefit of humanity” sounds different when investors with return expectations stand behind it.
The pattern is familiar:
- Phase 1: Disruption. Everything is new, everything is possible, nobody regulates
- Phase 2: Concentration. Three corporations control the infrastructure
- Phase 3: Lock-in. Those who don’t join get left behind
- Phase 4: Regulation. Too late, too tame, too complicated
We’re somewhere between Phase 2 and 3. The hardware for AI training costs billions. If you can’t stack Nvidia GPUs in a data centre, you don’t play. This isn’t competition. It’s an oligopoly with API access.
When three companies decide what AI can and may do – that’s not democratisation. That’s privatisation of knowledge.
The lobby works with precision. Every regulation is framed as an “innovation brake,” every critic as a “Luddite.” The message: whoever criticises AI is afraid of the future. That’s not an argument. That’s marketing.
AI and War: The Silent Escalation #
In April 2024, Israel confirmed the use of AI systems for target selection in Gaza. The system “Lavender” flagged tens of thousands of Palestinians as potential targets – based on statistical patterns. An officer reviewed the list. On average: 20 seconds per decision.
This is the endpoint of a development that started long ago:
- Autonomous drones make decisions faster than a human can object
- Predictive targeting confuses correlation with guilt
- Facial recognition in the field has an error rate – but no appeals mechanism
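The error-rate bullet can be put in numbers. A back-of-envelope base-rate calculation with illustrative figures, not data from any deployed system: even a matcher that is right 99% of the time, scanning a large population for a rare target group, flags mostly innocents.

```python
# Base-rate arithmetic (illustrative numbers, no real system implied):
# a 99%-accurate matcher over a large population, hunting a rare group.
population = 1_000_000
targets = 100                  # actual members of the watchlist
sensitivity = 0.99             # true-positive rate
false_positive_rate = 0.01     # 1% of innocents flagged anyway

true_hits = targets * sensitivity
false_hits = (population - targets) * false_positive_rate

precision = true_hits / (true_hits + false_hits)
print(f"flagged innocents: {false_hits:.0f}")
print(f"chance a flagged person is a real target: {precision:.1%}")
# ~9,999 innocents flagged; a flag is right about 1% of the time
```

An appeals mechanism would have to absorb those thousands of false positives. In the field, there is none.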
The Geneva Conventions were written for humans. For decisions a human makes and takes responsibility for. What happens when an algorithm makes the decision? Who bears responsibility – the programmer? The general? The investor?
When a machine kills, accountability dies too. What remains is a spreadsheet.
Deepfakes: The Democratisation of Lies #
It used to take a film studio to fake reality. Today, a laptop will do. Deepfakes are no longer the future – they are everyday life.
- Fabricated pornographic videos with real faces – 96% of deepfakes target women
- Fake CEO calls – one company transferred 25 million dollars after a deepfake video call
- Fake politician statements – spread via social media, corrected three days later, believed forever
The problem isn’t the fake. The problem is that we no longer trust the real. When anything can be faked, even the genuine becomes suspect. Every real video, every authentic recording can be dismissed with: “That’s just a deepfake.”
The perfect weapon is not the lie. It is doubt about the truth.
AI and Education: Copy-Paste with Style #
Students use GPT for essays. Undergraduates for dissertations. PhD candidates for literature reviews. Professors for assessments. The circle is complete.
The problem isn’t the tool. The problem is what happens when a generation learns that thinking is delegable. When the path to the result no longer matters. When “I wrote it” means: “I wrote the prompt.”
Schools ban AI – or embrace it. Both miss the point. The question isn’t: Should you use AI? The question is: What is lost when you no longer have to think for yourself to reach a conclusion?
Thinking is not the bug. It’s the feature. And we’re outsourcing it.
The Philosophical Dimension #
We build systems that don’t understand – but pretend they do. We call it progress because it impresses.
But the question is not: “Can the system do something?” It is: “What does it do to us that we believe it’s real?”
- Machines simulate empathy – and we react genuinely
- Hallucination is called “expected behavior” – seriously?
- Responsibility is delegated – to algorithms that can’t bear any
- Ethical questions aren’t footnotes. They’re the user manual that was never shipped
If AI prevails – what does that say about us?
Maybe it’s not just AI that deceives. Maybe it’s also humans who enjoy being deceived.
When GPT writes job applications that nobody reviews – when students submit essays they never wrote – when governments automate responses to save time – the question isn’t just whether GPT should be allowed to do this. It’s: Why do we allow it?
Maybe our relationship with meaning has become so superficial that it suffices for something to look like content. Maybe the bar for communication has dropped so low that statistics pass for understanding.
GPT is not the answer to ELIZA. It is the next act in the same theater.
Only now the curtain is digital, the stage is global, and the audience believes it’s alone in the room. We talk to the machine. But we hear ourselves. And believe it’s more.
Sources and References #
- Joseph Weizenbaum – Computer Power and Human Reason (1976) – archive.org
- Wired: IBM Watson gave unsafe cancer treatments (2018) – wired.com
- The Guardian: Microsoft deletes Tay after Twitter bot goes rogue (2016) – theguardian.com
- Netzpolitik.org: Facial recognition in Europe – Reclaim Your Face – netzpolitik.org
- +972 Magazine: ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza (2024) – 972mag.com
- The Guardian: Air Canada chatbot promised a discount. Now the airline has to pay. (2024) – theguardian.com
- Reuters: Deepfake CFO tricks company into paying $25 million (2024) – reuters.com
- Home Security Heroes: State of Deepfakes (2023) – 96% of deepfakes are non-consensual pornography – homesecurityheroes.com