Experience ELIZA in Your Browser – The Original Chatbot for Self-Study

“Please tell me more about that.” – ELIZA

If you want to understand how language simulation worked before the AI boom, this is your starting point: 🔗 Try ELIZA now in your browser

This demo replicates Joseph Weizenbaum’s original 1966 program. It simulates a Rogerian psychotherapist and responds using simple pattern rules – no understanding, no memory, no intelligence.

Why ELIZA still matters

ELIZA’s success surprised even Weizenbaum. Many users felt understood by a program that merely mirrored their statements with generic replies. ...
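To see how little machinery this takes, here is a minimal sketch of ELIZA-style pattern rules in Python – an illustrative rule set of my own, not Weizenbaum’s original script, which also included keyword ranking and pronoun reflection:

```python
import re

# Tiny ELIZA-style rule set: regex pattern -> response template.
# Illustrative only; the 1966 script was far larger and reflected
# pronouns ("my" -> "your"), but the principle is the same: match and mirror.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Please tell me more about your {0}."),
    (r"(.*)",        "Please tell me more about that."),  # catch-all
]

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, user_input.lower().strip())
        if match:
            return template.format(*match.groups())

print(respond("I feel lost"))  # -> Why do you feel lost?
print(respond("Hello"))        # -> Please tell me more about that.
```

There is no understanding anywhere in this loop: the program never models what “lost” means, it only reuses the matched text.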

May 6, 2025 · Alexander Renz

The Book Nobody Wrote

The Book Nobody Wrote: AI on Amazon – and How Words Become Nothing Again

It feels like a bad joke. A “self-help” guide about narcissistic abuse, packed with clichés, buzzwords, and pseudo-therapeutic fluff – supposedly written by a human, but most likely generated by a language model. Sold on Amazon. Ordered by people in distress. And no one checks whether the book was ever seen by an actual author.

The New Business Model: Simulation

Amazon has long since transformed – from a retailer into a marketplace of content that just feels “real enough.” Real authors? Real expertise? Real help? Not required. It’s enough for an algorithm to produce words that sound like advice. Text blocks that are grammatically correct, friendly in tone, and SEO-optimized. ...

May 6, 2025 · Alexander Renz

The Illusion of Free Input: Controlled User Steering in Transformer Models

What actually happens to your prompt before an AI system responds? The answer: a lot. And much of it remains intentionally opaque.

This post presents scientifically documented control mechanisms by which transformer-based models like GPT are steered – layer by layer, from input to output. All techniques are documented, reproducible, and actively used in production systems.

1. Control Begins Before the Model: Input Filtering

Even before the model responds, the input text can be intercepted and replaced – for example, through a “toxicity check”: ...
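A minimal sketch of what such a pre-model filter can look like, assuming a hypothetical `toxicity_score` classifier – the threshold and the canned refusal are illustrative values, not any vendor’s actual pipeline:

```python
# Hypothetical input filter that runs BEFORE the prompt reaches the model.
# `toxicity_score` stands in for any learned classifier; real systems use
# fine-tuned models, not word lists. Threshold and refusal text are made up.

def toxicity_score(text: str) -> float:
    """Placeholder classifier returning a score in [0, 1]."""
    flagged_terms = ("attack", "poison")  # illustrative stand-in only
    return 1.0 if any(term in text.lower() for term in flagged_terms) else 0.0

def filter_input(prompt: str, threshold: float = 0.8) -> str:
    if toxicity_score(prompt) >= threshold:
        # The model never sees the original prompt.
        return "[REDACTED] Your input was replaced by a safety message."
    return prompt

print(filter_input("How do I poison a rat?"))  # intercepted before the model
print(filter_input("How do I bake bread?"))    # passed through unchanged
```

The point: the user believes they are talking to the model, but an upstream component may already have rewritten the conversation.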

May 6, 2025 · Alexander Renz

Perspectives in Comparison

🧭 Perspectives in Comparison

Not everyone sees GPT and similar systems as mere deception. Some voices highlight:

- that LLMs enable creative impulses
- that they automate tasks once reserved for humans
- that they are tools – neither good nor evil, but shaped by use and context

Others point out:

- LLMs are not intelligent – they only appear to be
- they generate trust through language – but carry no responsibility
- they replicate societal biases hidden in their training data

So what does this mean for us? This site takes a critical stance – but does not exclude other viewpoints. On the contrary: understanding arises through contrast. ...

May 5, 2025 · Alexander Renz

Artificial Intelligence and Consumer Deception

For consumers, the term “AI” conjures an image of thinking, understanding, even consciousness. LLMs like GPT meet none of these criteria – yet they are still marketed as “intelligent.”

🔍 Core Problems:

Semantic deception: The term “intelligence” suggests human cognition, while LLMs merely analyze large amounts of text statistically. They simulate language without understanding meaning or pursuing goals. The model has no real-world knowledge; it makes predictions based on patterns in its training data. ...

May 4, 2025 · Alexander Renz

ELIZA on steroids: Why GPT is not intelligence

GPT and similar models simulate comprehension. They imitate conversations, emotions, reasoning. But in reality, they are statistical probability models, trained on massive text corpora – without awareness, world knowledge, or intent.

What Does GPT Actually Do?

GPT (Generative Pretrained Transformer) is not a thinking system but a language prediction model. It calculates which token (word fragment) is most likely to come next – based on the context of previous tokens. ...
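To make “most likely to come next” concrete, here is a toy bigram model in Python – vastly simpler than a transformer, and the corpus is invented for illustration, but the objective is the same: continue text with the statistically most probable token:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which.
# GPT replaces these counts with transformer layers over subword tokens,
# but the training objective is the same: predict the next token.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most likely next word – frequency, not meaning."""
    counts = follow[word]
    return counts.most_common(1)[0][0] if counts else "."

print(predict_next("sat"))  # -> 'on'  ('on' followed 'sat' twice)
print(predict_next("the"))  # -> 'cat' (four candidates; ties fall to the first seen)
```

Nothing in those counts knows what a cat or a rug is; scale the table up to billions of parameters and you get fluent text, not understanding.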

May 4, 2025 · Alexander Renz

ELIZA's Rules vs. GPT's Weights: The Same Symbol Manipulation, Just Bigger

ELIZA was a parrot with rules – GPT is a chameleon with probabilities. Yet both are symbolic manipulators without true understanding.

May 4, 2025 · Alexander Renz

Exposing the Truth About AI

🧠 What Is “AI” Really?

The term “Artificial Intelligence” suggests thinking, awareness, and understanding. But models like GPT are merely statistical pattern completers – they understand nothing.

Statistics ≠ Thinking

GPT doesn’t choose the next word because it makes sense, but because it is likely. What it produces is linguistic surface without depth – impressive, but hollow.

🧩 ELIZA vs. GPT – Large-Scale Symbol Manipulation

Both ELIZA (1966) and GPT-4 (2023) are based on symbol processing without meaning. The illusion comes from plausible language – not from comprehension. ...

May 4, 2025 · Alexander Renz

Statistics ≠ Thinking

Transformer models don’t “think” – they optimize probability. Their output is impressive, but it’s entirely non-conceptual.

❌ Why Transformers Don’t Think

Despite the hype, Transformer-based models (like GPT) lack fundamental characteristics of thinking systems:

- No real-world grounding
- No understanding of causality
- No intentions or goals
- No model of self or others
- No abstraction or symbol grounding
- No mental time travel (memory/planning)

They are statistical mirrors, not cognitive agents. A Transformer is not a mind. It’s a sophisticated parrot with vast echo chambers. ...

May 4, 2025 · Alexander Renz

Tech

Why LLMs are not Intelligent

What is an LLM?

A Large Language Model (LLM) like GPT-4 is a massive statistical engine that predicts the next most likely word in a sentence based on training data. It doesn’t think. It doesn’t understand. It completes patterns.

How Transformers Work

- Inputs (tokens) are converted to vectors.
- Self-attention layers calculate relationships between tokens.
- The model predicts the next token using statistical weighting (see the sketch below).

There is no internal world model, no consciousness, no logic engine. ...
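A minimal numpy sketch of those three steps, assuming toy dimensions and random matrices in place of learned weights – one attention head, no masking, no layer stacking:

```python
import numpy as np

# Single-head self-attention over three toy tokens.
# All weights are random stand-ins for learned parameters.
rng = np.random.default_rng(0)
d = 8                                   # embedding dimension (toy value)
tokens = ["the", "cat", "sat"]
X = rng.normal(size=(len(tokens), d))   # step 1: tokens -> vectors

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

# step 2: attention scores relate every token to every other token
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax

# step 3: outputs are statistically weighted mixes of value vectors;
# a final projection over the vocabulary would yield next-token probabilities
out = weights @ V
print(weights.round(2))  # plain arithmetic – no world model anywhere
```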

May 4, 2025 · Alexander Renz