Apples, Pears, and AI – When GPT Doesn’t Know the Difference

A satirical but serious reminder that Large Language Models like GPT don’t truly understand semantics — they just simulate it.
May 4, 2025 – Alexander Renz – Translations: DE
GPT and similar models simulate comprehension. They imitate conversation, emotion, and reasoning. But in reality, they are statistical probability models trained on massive text corpora – without awareness, world knowledge, or intent.
GPT (Generative Pretrained Transformer) is not a thinking system, but a language prediction model. It calculates which token (word fragment) is most likely to come next – based on the context of previous tokens.
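To make that concrete, here is a minimal sketch using the Hugging Face transformers library and the small public gpt2 checkpoint (the prompt and checkpoint are illustrative choices, not anything specific to this article). It asks the model for the most probable next tokens after a prompt – which is all a "completion" ever is.

```python
# Minimal sketch: next-token prediction with a small public GPT-2 checkpoint.
# Requires `pip install transformers torch`; the prompt is purely illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "An apple is a kind of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits   # shape: (1, sequence_length, vocab_size)

# Turn the scores at the *last* position into a probability distribution and
# list the most likely continuations -- no meaning involved, just statistics.
probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = probs.topk(5)

for p, token_id in zip(top_probs.tolist(), top_ids.tolist()):
    print(f"{tokenizer.decode([token_id])!r}: {p:.3f}")
```

Whatever prints at the top of that list is the model's "answer" about apples: the statistically most convenient continuation, not a verified fact.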
Transformer models don’t “think” – they optimize probability. Their output is impressive, but it’s entirely non-conceptual.
Despite the hype, Transformer-based models (like GPT) lack the fundamental characteristics of thinking systems: they are statistical mirrors, not cognitive agents.
A Transformer is not a mind. It’s a sophisticated parrot with vast echo chambers.