May 4, 2025 – Alexander Renz
GPT and similar models simulate comprehension. They imitate conversations, emotions, reasoning. But in reality, they are statistical probability models, trained on massive text corpora – without awareness, world knowledge, or intent.
What Does GPT Actually Do?
GPT (Generative Pre-trained Transformer) is not a thinking system, but a language prediction model. It calculates which token (word fragment) is most likely to come next, based on the context of the preceding tokens.
“GPT doesn’t know. It just continues.” – Emily Bender, linguist and AI critic
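To make that loop concrete, here is a deliberately tiny sketch in Python. It is not how GPT is implemented (GPT uses a transformer network conditioned on thousands of tokens of context, not a word-pair count over a toy corpus), but the basic operation is the same: estimate which token most likely follows the current context, append it, and repeat.

```python
from collections import Counter, defaultdict

# A toy stand-in for the training corpus (assumption: any text would do here).
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count which word follows which: a bigram model, i.e. a context of one token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(prompt: str, n_tokens: int = 6) -> str:
    """Repeatedly append the statistically most likely next word."""
    tokens = prompt.split()
    for _ in range(n_tokens):
        candidates = following.get(tokens[-1])
        if not candidates:
            break  # unseen context: the model simply has nothing to continue with
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(continue_text("the dog"))
# Prints a fluent-looking continuation ("the dog sat on the cat ..."), produced by
# pure word statistics. Nothing in the program knows what a dog or a mat is.
```

GPT does the same thing at vastly greater scale and with a far richer notion of context, which is exactly why its output sounds so much more convincing.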
No Understanding. No Thinking. No Intent.
GPT is trained on vast amounts of text scraped from the web, from books, forums, and Wikipedia. From this, it learns statistical patterns. But it has no mental model of the world, no goals, no experience, no self.
It doesn’t distinguish between truth and fiction, between quotation and hallucination. What counts for the model is not whether a statement is true, but whether it sounds coherent.
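This can be made visible. The sketch below is an example under assumptions: it requires the Hugging Face transformers package, PyTorch, and the public gpt2 checkpoint. It scores two sentences by the average log-probability GPT-2 assigns to their tokens, which is a measure of how typical the wording is, not of whether the statement is correct.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def plausibility(text: str) -> float:
    """Average log-probability per token: a fluency score, not a truth score."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # labels=ids yields the mean cross-entropy loss
    return -out.loss.item()  # higher means "more plausible" to the model

print(plausibility("The capital of Australia is Canberra."))
print(plausibility("The capital of Australia is Sydney."))
# Both sentences are grammatical and statistically plausible; the scores rank them
# by how typical the phrasing is in the training data, not by which one is true.
```

Whether the true sentence happens to score higher depends on what was frequent in the training data, not on any fact-checking inside the model.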
The “ELIZA Effect” 2.0
Back in 1966, people projected deep understanding onto ELIZA, even though it merely mirrored user input through simple pattern-matching rules. Today, we project consciousness onto GPT, even though it is just calculating.
“People react to GPT as if it thinks – because it speaks like us. Not because it thinks like us.” – Sherry Turkle, MIT
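For comparison, here is a minimal ELIZA-style responder in Python. The two rules are invented for illustration (Weizenbaum’s original DOCTOR script used a much larger set of decomposition and reassembly rules), but the principle is the same: reflect the user’s own words back and let the human supply the meaning.

```python
import re

# Two illustrative reflection rules; the original ELIZA had many per keyword.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {}?"),
]

def eliza_reply(utterance: str) -> str:
    """Mirror the user's input via pattern matching; no model of the world involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please tell me more."  # generic fallback when no rule matches

print(eliza_reply("I feel ignored by everyone."))
# "Why do you feel ignored by everyone?"
```

The leap from this to GPT is enormous in scale, but the projection it invites is the same.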
Illusion Instead of Intelligence
GPT impresses – but it doesn’t think. It can generate text – but cannot form concepts. It can simulate an argument – but has no position. It can mimic emotion – but feels nothing.
This is known as syntactic fluency without semantic understanding.
References and Sources
- Emily Bender et al.: On the Dangers of Stochastic Parrots (2021) https://dl.acm.org/doi/10.1145/3442188.3445922
- Sherry Turkle: The Second Self (1984), Alone Together (2011)
- Joseph Weizenbaum: Computer Power and Human Reason (1976)
- Gary Marcus: Rebooting AI (2019)
- 99% Invisible Podcast: The ELIZA Effect
Conclusion
GPT is not intelligence. It is the illusion of intelligence, perfected through linguistic patterning and massive data. It’s not the machine that deceives – we let ourselves be deceived.
GPT “works” – not because it understands, but because we’ve made understanding imitable.