ELIZA on steroids: Why GPT is not intelligence
GPT and similar models simulate comprehension. They imitate conversations, emotions, reasoning. But in reality, they are statistical probability models, trained on massive text corpora – without awareness, world knowledge, or intent.
What Does GPT Actually Do?#
GPT (Generative Pretrained Transformer) is not a thinking system, but a language prediction model. It calculates which token (word fragment) is most likely to come next – based on the context of previous tokens.
“GPT doesn’t know. It just continues.” – Emily Bender, linguist and AI critic
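To make “it just continues” concrete, here is a deliberately tiny sketch – a toy bigram counter over an invented corpus, nothing like GPT’s actual architecture – that does the same kind of thing GPT does at scale: look at the context, pick a likely next token, append it, repeat.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which token follows which in a tiny
# invented corpus, then always continue with the most frequent successor.
# This is not GPT's architecture -- only an illustration of the principle
# "predict a plausible next token, append it, repeat".
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1                        # how often does nxt follow prev?

def continue_text(start, length=6):
    tokens = [start]
    for _ in range(length):
        options = successors.get(tokens[-1])
        if not options:                               # no observed successor: stop
            break
        tokens.append(max(options, key=options.get))  # most probable next token
    return " ".join(tokens)

print(continue_text("the"))
# -> "the cat sat on the cat sat" – locally plausible, but nothing is "known"
```

The output looks like language because the counts come from language; at no point in the loop does meaning enter.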
No Understanding. No Thinking. No Intent.#
GPT is trained on vast amounts of text – scraped from the internet, books, forums, Wikipedia. From this, it learns statistical patterns. But GPT has no mental model of the world, no goals, no experience, no self.
It doesn’t distinguish between truth and fiction, between quote and hallucination. What matters to the model is not whether a statement is true, but whether it is probable – whether it sounds coherent in context.
The “ELIZA Effect” 2.0#
Back in 1966, people projected deep understanding onto ELIZA – even though it merely mirrored user input through simple keyword and substitution rules. Today, we project consciousness onto GPT – even though it is just calculating.
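For comparison, a minimal ELIZA-style sketch – hypothetical rules written for this post, not Weizenbaum’s original script – shows how little machinery is needed to trigger that projection: match a keyword, reflect the user’s own words back as a question.

```python
import re

# Minimal ELIZA-style mirroring: a few hand-written keyword rules that
# rephrase the user's input as a question. No state, no meaning, no model
# of the conversation -- just pattern matching and template filling.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {}?"),
    (re.compile(r"\bi am (.+)", re.I),   "How long have you been {}?"),
    (re.compile(r"\bmy (.+)", re.I),     "Tell me more about your {}."),
]

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."                            # fallback when nothing matches

print(respond("I feel nobody listens to me"))  # -> "Why do you feel nobody listens to me?"
print(respond("My project is failing."))       # -> "Tell me more about your project is failing."
```

The second reply is grammatically clumsy precisely because the program only echoes surface form – and yet replies like these were enough, in 1966, for users to confide in the machine.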
“People react to GPT as if it thinks – because it speaks like us. Not because it thinks like us.” – Sherry Turkle, MIT
Illusion Instead of Intelligence#
GPT impresses – but it doesn’t think. It can generate text – but cannot form concepts. It can simulate an argument – but has no position. It can mimic emotion – but feels nothing.
This is called syntactic fluency without semantic understanding.
References and Sources#
- Emily Bender et al.: On the Dangers of Stochastic Parrots (2021), https://dl.acm.org/doi/10.1145/3442188.3445922
- Sherry Turkle: The Second Self (1984), Alone Together (2011)
- Joseph Weizenbaum: Computer Power and Human Reason (1976)
- Gary Marcus: Rebooting AI (2019)
- 99% Invisible Podcast: The ELIZA Effect
Conclusion#
GPT is not intelligence. It is the illusion of intelligence, perfected through linguistic patterning and massive data. It’s not the machine that deceives – we let ourselves be deceived.
GPT “works” – not because it understands, but because we’ve made understanding imitable.
Extended: The Dangers of Stochastic Parrots#
The apt term “stochastic parrots,” coined by Emily Bender and her co-authors in their groundbreaking paper, encapsulates the dilemma: GPT and similar models produce language without understanding it – they merely repeat statistical patterns from their training data, like a parrot that mimics words without grasping their meaning. But this imitation is no harmless trick: it lacks any mental representation, any world knowledge, and any ability to distinguish between truth and fiction. The model generates text that appears linguistically coherent but is semantically empty.
The Hidden Costs of Scale#
The development of ever-larger models has not only technical consequences but also profound ethical and ecological ones. As Bender et al. demonstrate, training processes cause massive environmental burdens and financial costs that are often unreflectively externalized. The fascination with ever more parameters obscures the fact that size does not correlate with understanding. A 175-billion-parameter model remains a statistical process – just a less efficient and more resource-intensive one.
Data as a Mirror of Societal Biases#
The training data – mostly unfiltered from the internet, books, and forums – is not a neutral representation of the world but full of historical and systemic biases. Studies show that models don’t just replicate social prejudices (regarding gender, race, ethnicity) – they amplify them. They learn from texts containing discrimination, hate speech, and structural inequality, without any ability to critically reflect on these. The model has no mental model of justice or injustice – it only has probabilities.
The ELIZA Effect 2.0 and Its Consequences#
Sherry Turkle’s observation that people project consciousness onto GPT because it speaks like us has practical consequences: users trust false information, adopt biased positions, or develop emotional attachments to a system that cannot feel. This anthropomorphizing tendency is not a bug but a fundamental human inclination – yet it is deliberately exploited by companies for engagement and profit. The risk: we delegate decisions to systems that bear no responsibility and possess no concepts of truth or ethics.
Syntactic Competence Without Semantic Foundation#
GPT can discuss morality without being moral; it can describe emotions without feeling them; it can generate scientific texts without understanding the science. This discrepancy between form and content leads to “hallucinations” – plausible-sounding but false claims presented with the same conviction as correct facts. The model has no representation of the world against which to check its statements, no way to verify whether they correspond to reality.
Necessary Perspective Shifts#
Bender et al. call for a radical course correction: away from the race for ever-larger models, toward careful data curation and documentation. Instead of swallowing everything the web offers, we need conscious selection, transparency about the origin and quality of data, and Value Sensitive Design that considers societal impacts from the outset. Research should focus on models that explicitly integrate world knowledge, causality, and ethical reflection – not just on optimizing language patterns.
Final Thought: The Illusion as Danger#
GPT works not because it understands, but because we have encoded understanding so efficiently in language that statistics can serve as a substitute. The danger lies not in the model itself, but in our tendency to attribute properties to it that it doesn’t have. As long as we fail to clearly communicate this difference, we risk delegating decisions of consequence – in education, medicine, law – to stochastic parrots. The task is not to build bigger models, but wiser humans who recognize the limits of this technology and shape it responsibly.
Related Posts
- ELIZA's Rules vs. GPT's Weights: The Same Symbol Manipulation, Just Bigger
- AI Soberly Considered: Why Large Language Models Are Brilliant Tools – But Not Magic
- The Illusion of Free Input: Controlled User Steering in Transformer Models
- iPhone 16 AI: Apple's Surveillance Revolution in Privacy Clothing
- Analysis of Meta, OpenAI, Microsoft, the WEF, and Decentralized AI Alternatives