Apples, Pears, and AI – When GPT Doesn't Know the Difference

“It’s like comparing apples and pears — but what if you don’t know what either is? Welcome to GPT.” The debate around artificial intelligence often ignores a critical fact: Large Language Models like GPT do not understand semantic concepts. They simulate understanding — but they don’t “know” what an apple or a pear is. This isn’t just academic; it has real-world implications, especially as we increasingly rely on such systems in decision-making. ...

May 6, 2025 · Alexander Renz

Dark Star: The Bomb That Thought

“I only believe the evidence of my sensors.” – Bomb No. 20, Dark Star (1974) The Bomb That Thought In the film Dark Star, a nuclear bomb refuses to abort its detonation. Its reasoning: it can only trust what its sensors tell it – and they tell it to explode. [Watch video – YouTube, scene starts around 0:38: “Only empirical data”] This scene is more than science fiction – it’s an allegory for any data-driven system. Large Language Models like GPT make decisions based on what their “sensors” give them: text tokens, probabilities, chat history. No understanding. No awareness. No control. ...

May 6, 2025 · Alexander Renz

The Illusion of Free Input: Controlled User Steering in Transformer Models

What actually happens to your prompt before an AI system responds? The answer: a lot. And much of it remains intentionally opaque. This post presents scientifically documented control mechanisms by which transformer-based models like GPT are steered – layer by layer, from input to output. All techniques are documented, reproducible, and actively used in production systems. 1. Control Begins Before the Model: Input Filtering. Even before the model responds, the input text can be intercepted and replaced – for example, through a “toxicity check”: ...
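To make the excerpted point concrete, here is a minimal sketch of such pre-model input filtering. The classifier, threshold, and refusal text are placeholder assumptions added for illustration, not the mechanism of any particular production system; the post itself documents the real techniques.

```python
# Minimal sketch of a pre-model "toxicity check" (all names are illustrative).
# The point is the control flow: if the check fires, the language model never
# receives the user's original words.

REFUSAL_PROMPT = "The user asked something disallowed. Respond with a polite refusal."

def toxicity_score(text: str) -> float:
    """Placeholder: in practice this would call a trained classifier
    (e.g. a fine-tuned BERT) and return a probability in [0, 1]."""
    blocklist = {"slur1", "slur2"}              # stand-in for a learned model
    hits = sum(word in text.lower() for word in blocklist)
    return min(1.0, hits / 2)

def filter_input(user_text: str, threshold: float = 0.5) -> str:
    """Intercepts and possibly replaces the prompt before it reaches the model."""
    if toxicity_score(user_text) >= threshold:
        return REFUSAL_PROMPT                   # original prompt is discarded
    return user_text

model_input = filter_input("Tell me about apples and pears.")
# model_input is what actually gets tokenized and sent to the model.
```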

May 6, 2025 · Alexander Renz

Perspectives in Comparison

🧭 Perspectives in Comparison Not everyone sees GPT and similar systems as mere deception. Some voices highlight: that LLMs enable creative impulses; that they automate tasks once reserved for humans; that they are tools – neither good nor evil, but shaped by use and context. Others point out: LLMs are not intelligent – they only appear to be; they generate trust through language – but carry no responsibility; they replicate societal biases hidden in their training data. So what does this mean for us? This site takes a critical stance – but does not exclude other viewpoints. On the contrary: Understanding arises through contrast. ...

May 5, 2025 · Alexander Renz

ELIZA on Steroids: Why GPT Is Not Intelligence

GPT and similar models simulate comprehension. They imitate conversations, emotions, reasoning. But in reality, they are statistical probability models, trained on massive text corpora – without awareness, world knowledge, or intent. What Does GPT Actually Do? GPT (Generative Pretrained Transformer) is not a thinking system, but a language prediction model. It calculates which token (word fragment) is most likely to come next – based on the context of previous tokens. ...
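As an illustration of that prediction step, the sketch below queries the openly available GPT-2 as a stand-in for GPT-style models (the choice of model, the prompt, and the transformers/PyTorch calls are assumptions added here, not part of the post) and prints the probability distribution it assigns to the next token.

```python
# Sketch: what "language prediction" means in practice.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "An apple is a kind of"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, seq_len, vocab_size)

# Probability distribution over the *next* token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>12s}  p={prob.item():.3f}")

# The model picks (or samples) from this distribution; it never consults a
# concept of "apple", only statistics over the preceding tokens.
```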

May 4, 2025 · Alexander Renz

ELIZA's Rules vs. GPT's Weights: The Same Symbol Manipulation, Just Bigger

ELIZA was a parrot with rules – GPT is a chameleon with probabilities. Yet both remain symbol-manipulating machines without understanding.
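A toy contrast makes the claim tangible. Both snippets below are illustrative sketches (neither the historical ELIZA script nor a real transformer): the first maps surface patterns to canned templates via hand-written rules, the second picks the next word from a hand-filled probability table standing in for learned weights. In neither case does the program model what the words mean.

```python
import random
import re

# ELIZA-style: hand-written pattern -> response template.
ELIZA_RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in ELIZA_RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())   # reshuffle the symbols
    return "Please go on."

# GPT-style (toy): a probability table over next words, learned rather than
# hand-written in a real model, but still just weighted symbol selection.
NEXT_WORD_PROBS = {
    ("apples", "are"): {"sweet": 0.4, "red": 0.35, "fruit": 0.25},
}

def gpt_like_next_word(w1: str, w2: str) -> str:
    probs = NEXT_WORD_PROBS.get((w1, w2), {"unknown": 1.0})
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]   # weighted pick, no meaning

print(eliza_reply("I am tired of my job"))   # rule fires on the surface form
print(gpt_like_next_word("apples", "are"))   # probability fires on the surface form
```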

May 4, 2025 · Alexander Renz

Exposing the Truth About AI

🧠 What Is “AI” Really? The term “Artificial Intelligence” suggests thinking, awareness, and understanding. But models like GPT are merely statistical pattern completers – they understand nothing. Statistics ≠ Thinking: GPT doesn’t choose the next word because it makes sense, but because it is likely. What it produces is linguistic surface without depth – impressive, but hollow. 🧩 ELIZA vs. GPT – Large-Scale Symbol Manipulation: Both ELIZA (1966) and GPT-4 (2023) are based on symbol processing without meaning. The illusion comes from plausible language – not from comprehension. ...

May 4, 2025 · Alexander Renz