GPT
AI Soberly Considered: Why Large Language Models Are Brilliant Tools – But Not Magic
2324 words · 11 mins
There are two dominant narratives about Large Language Models:
Narrative 1: “AI is magic and will replace us all!” → Exaggerated; it fuels hype and fear.
Narrative 2: “AI is dumb and useless!” → Ignorant; it misses the real value.
Apples, Pears, and AI – When GPT Doesn't Know the Difference
“It’s like comparing apples and pears — but what if you don’t know what either is? Welcome to GPT.”
The debate around artificial intelligence often ignores a critical fact: Large Language Models like GPT do not understand semantic concepts. They simulate understanding — but they don’t “know” what an apple or a pear is. This isn’t just academic; it has real-world implications, especially as we increasingly rely on such systems in decision-making.
The Illusion of Free Input: Controlled User Steering in Transformer Models
329 words · 2 mins
What actually happens to your prompt before an AI system responds? The answer: a lot. And much of it remains intentionally opaque.
This post presents scientifically documented control mechanisms by which transformer-based models like GPT are steered – layer by layer, from input to output. All techniques are documented, reproducible, and actively used in production systems.
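Two such documented mechanisms can be sketched in a few lines: system-prompt injection on the input side, and logit biasing on the output side. Everything below is illustrative (the message format, the bias values, the tiny two-token vocabulary), not a specific vendor's API:

```python
import math

# Input-side steering: the user's text is wrapped in injected instructions
# before the model ever sees it (format is illustrative, not a real API).
def build_context(user_prompt: str) -> list[dict]:
    return [
        {"role": "system", "content": "Answer cautiously."},
        {"role": "user", "content": user_prompt.strip()},
    ]

# Output-side steering: a logit bias shifts next-token scores before
# sampling, suppressing or boosting specific tokens regardless of input.
def apply_logit_bias(logits: dict[str, float],
                     bias: dict[str, float]) -> dict[str, float]:
    shifted = {tok: val + bias.get(tok, 0.0) for tok, val in logits.items()}
    total = sum(math.exp(v) for v in shifted.values())  # softmax normalizer
    return {tok: math.exp(v) / total for tok, v in shifted.items()}

# A strong negative bias makes "no" effectively unreachable,
# whatever the user actually asked.
probs = apply_logit_bias({"yes": 2.0, "no": 2.0}, {"no": -100.0})
```

The point of the sketch: both interventions happen outside the user's view, yet fully determine what the system can and cannot say.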
ELIZA's Rules vs. GPT's Weights: The Same Symbol Manipulation, Just Bigger
ELIZA was a parrot with rules – GPT is a chameleon with probabilities. Yet both are symbolic manipulators without true understanding.
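The contrast fits in a few lines: a hand-written ELIZA-style rewrite rule next to a toy probability table standing in for learned weights. Both the rule and the numbers are invented for illustration (this is not Weizenbaum's original script, and real models have billions of weights, not three):

```python
import re

# ELIZA: explicit rewrite rules, authored by a human (toy rule).
RULES = [
    (re.compile(r"I am (.*)", re.I), "Why do you say you are {0}?"),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(m.group(1))
    return "Please go on."

# GPT: no rules, only a learned distribution over next tokens
# (hand-set numbers here stand in for trained weights).
NEXT_TOKEN = {("I", "am"): {"tired": 0.5, "happy": 0.3, "a": 0.2}}

def most_likely_next(context: tuple[str, str]) -> str:
    return max(NEXT_TOKEN[context], key=NEXT_TOKEN[context].get)

print(eliza_reply("I am tired"))      # a rule fires: symbol substitution
print(most_likely_next(("I", "am")))  # a weight wins: probability lookup
```

Neither path involves a representation of what "tired" means; one substitutes symbols by rule, the other by statistics.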
ELIZA on steroids: Why GPT is not intelligence
926 words · 5 mins
Translations: DE
GPT and similar models simulate comprehension. They imitate conversations, emotions, reasoning. But in reality, they are statistical probability models, trained on massive text corpora – without awareness, world knowledge, or intent.
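The claim is easy to demonstrate with a toy version of the same principle: a bigram model that "writes" plausible-looking text purely from co-occurrence counts, with no concept behind any word. The corpus and the code are illustrative only:

```python
import random
from collections import defaultdict

corpus = ("the apple is a fruit . the pear is a fruit . "
          "the apple is red").split()

# Count bigram transitions: pure co-occurrence statistics, no concepts.
counts: dict = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def generate(start: str, n: int, rng: random.Random) -> list[str]:
    """Sample n tokens by following transition frequencies."""
    out = [start]
    for _ in range(n):
        nxt = counts[out[-1]]
        if not nxt:
            break
        words, weights = zip(*nxt.items())
        out.append(rng.choices(words, weights=weights)[0])
    return out

print(" ".join(generate("the", 5, random.Random(0))))
```

Scale the table up by a few billion parameters and the output becomes fluent, but the mechanism is unchanged: frequencies in, frequencies out.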