More Illusion Than Intelligence: Why 90% of AI Systems Deliver No Real Understanding

Introduction: The Great Misunderstanding. Since the hype around ChatGPT, Claude, Gemini, and others, artificial intelligence has become a buzzword. Marketing materials promise assistants that understand, learn, reason, write, and analyze. Startups put “AI-powered” on every second website. Billions change hands. Entire industries are built on the illusion. And yet, in the overwhelming majority of cases, these are not intelligent systems. They are statistically trained text generators optimized for plausibility – not truth, not understanding, not meaning. ...
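To make “optimized for plausibility, not truth” concrete, here is a minimal sketch with an invented toy distribution – the vocabulary and probabilities are hypothetical stand-ins, not any real model’s weights. The generator simply samples whatever continuation is statistically common, with no check against facts:

```python
import random

# Toy next-token distribution after the prompt "The capital of Australia is".
# Probabilities are invented for illustration; a real LLM derives them from
# training-corpus statistics, not from a fact database.
next_token_probs = {
    "Sydney": 0.55,    # frequent in text, therefore "plausible" - but wrong
    "Canberra": 0.35,  # the true answer, yet less common in casual writing
    "Melbourne": 0.10,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample a token in proportion to its probability - plausibility wins."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))  # often "Sydney": plausible, not true
```

Nothing in this loop models truth; the most frequent pattern in the training data is the most likely output.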

July 26, 2025 · Alexander Renz

AI Is the Matrix – And We Are All Part of It

Introduction: The Matrix Is Here – It Just Looks Different. AI is not the Matrix from the movies. It is more dangerous, because it is not perceived as deception. It works through suggestions, text, and tools – not through virtuality, but through normalization. AI does not simulate a world – it structures ours. And no one notices, because everyone thinks it’s useful. 1. Invisible but Everywhere – The New Ubiquity. The integration of AI into daily life is total, but silent: ...

May 8, 2025 · Alexander Renz

The Illusion of Free Input: Controlled User Steering in Transformer Models

What actually happens to your prompt before an AI system responds? The answer: a lot. And much of it remains intentionally opaque. This post presents scientifically documented control mechanisms by which transformer-based models like GPT are steered – layer by layer, from input to output. All techniques are documented, reproducible, and actively used in production systems. 1. Control Begins Before the Model: Input Filtering. Even before the model responds, the input text can be intercepted and replaced – for example, through a “toxicity check”, sketched below: ...
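A minimal sketch of such a pre-model gate follows. The `filter_input` helper, the word list, and the threshold are hypothetical stand-ins; production systems typically score the prompt with a trained moderation classifier rather than a blocklist, but the control flow is the same:

```python
# Minimal sketch of a pre-model input filter ("toxicity check").
# BLOCKLIST and the replacement message are hypothetical; real deployments
# use a trained moderation classifier instead of a word list.
BLOCKLIST = {"slur1", "slur2"}  # placeholder terms

def toxicity_score(prompt: str) -> float:
    """Crude stand-in for a classifier: fraction of blocklisted words."""
    words = prompt.lower().split()
    if not words:
        return 0.0
    return sum(w in BLOCKLIST for w in words) / len(words)

def filter_input(prompt: str, threshold: float = 0.1) -> str:
    """Intercept the prompt before the model ever sees it."""
    if toxicity_score(prompt) > threshold:
        # The user's text is silently replaced - the model answers
        # a different input than the one that was typed.
        return "The user asked something inappropriate. Decline politely."
    return prompt

model_input = filter_input("Tell me about transformer models.")
```

The point of the sketch is the silent substitution: the model never sees the rejected prompt, and the user never sees the rewrite.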

May 6, 2025 · Alexander Renz

Perspectives in Comparison

Not everyone sees GPT and similar systems as mere deception. Some voices highlight that LLMs enable creative impulses, that they automate tasks once reserved for humans, and that they are tools – neither good nor evil, but shaped by use and context. Others point out that LLMs are not intelligent – they only appear to be; that they generate trust through language but carry no responsibility; and that they replicate societal biases hidden in their training data. So what does this mean for us? This site takes a critical stance – but does not exclude other viewpoints. On the contrary: understanding arises through contrast. ...

May 5, 2025 · Alexander Renz

ELIZA's Rules vs. GPT's Weights: The Same Symbol Manipulation, Just Bigger

ELIZA was a parrot with rules – GPT is a chameleon with probabilities. Yet both are symbolic manipulators without true understanding.
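The contrast fits in a few lines. The ELIZA rule below is modeled on the spirit of Weizenbaum’s 1966 DOCTOR script, while the “GPT” side is collapsed into an invented weighted choice over canned continuations – the toy weights are hypothetical, not learned. Both sides just map input symbols to output symbols:

```python
import random
import re

# ELIZA: an explicit, hand-written rule (pattern -> response template).
def eliza_reply(text: str) -> str:
    match = re.search(r"I am (.+)", text, re.IGNORECASE)
    if match:
        return f"Why do you say you are {match.group(1)}?"
    return "Please go on."

# "GPT": the same symbol shuffling, but the rules are billions of learned
# weights; here reduced to an invented toy distribution for illustration.
def gpt_like_reply(text: str) -> str:
    continuations = ["That sounds difficult.", "Tell me more.",
                     "Why do you feel that way?"]
    weights = [0.5, 0.3, 0.2]  # hypothetical stand-in for learned probabilities
    return random.choices(continuations, weights=weights, k=1)[0]

print(eliza_reply("I am sad"))     # rule fires: "Why do you say you are sad?"
print(gpt_like_reply("I am sad"))  # probability fires: plausible, not understood
```

Neither function contains a model of sadness; one follows a rule an engineer wrote, the other a distribution the training data wrote.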

May 4, 2025 · Alexander Renz