OpenAI officially launched ChatGPT Search on September 1st, 2025 – an AI-based search engine that directly challenges Google. But while the tech world speaks of “revolutionary search,” a critical question arises: Will ChatGPT Search democratize information or establish the most subtle form of censorship we’ve ever seen?
Apple celebrates the iPhone 16 as “the biggest leap in iPhone history” – powered by “revolutionary AI that respects your privacy.” But behind the marketing glitter lies a dark truth: The iPhone 16 is the most sophisticated surveillance machine ever placed in millions of pockets.
## The AI Safety Illusion: What "Security" Really Means

### Marketing vs. Reality

**"AI Safety" Marketing:**

- "Protection from dangerous AI"
- "Algorithmic Accountability"
- "Bias Prevention"
- "Transparent AI Systems"
- "Human-Centric AI Development"

**"AI Safety" Reality:**

- Market entry barriers for startups
- Compliance costs that only Big Tech can handle
- Innovation paralysis through bureaucratic processes
- Regulatory arbitrariness as a competitive weapon
- Surveillance legitimization in the name of "safety"

### Concrete "Safety" Measures and Their True Goals

#### 1. "AI Model Registration"

Officially: "Create transparency about AI systems"
On August 1st, 2025, the EU AI Act came fully into force – Europe’s response to the rapid development of artificial intelligence. The EU celebrates itself as the “first continent with comprehensive AI regulation.” But behind the headlines lurks a bureaucratic monster that stifles innovation while ignoring the real problems.
By Alexander Renz • Last Update: June 2025
## 1. The Filter Mechanisms: How ChatGPT Decides What's "Safe"

ChatGPT uses a multi-layered filtering system to moderate content:
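The layered idea behind such a system can be sketched in a few lines of Python. Everything below (the category names, keyword list, and threshold values) is an illustrative assumption for the sake of the sketch, not OpenAI's actual implementation:

```python
# Illustrative sketch of a multi-layer content filter.
# All keywords, scores, and thresholds are invented for illustration;
# they do not reflect OpenAI's real moderation pipeline.

BLOCKLIST = {"forbidden_term"}   # layer 1: hard keyword block
RISK_THRESHOLD = 0.7             # layer 2: classifier score cutoff

def classifier_score(text: str) -> float:
    """Stand-in for an ML risk classifier (here: a toy word heuristic)."""
    risky_words = {"attack", "exploit"}
    hits = sum(word in risky_words for word in text.lower().split())
    return min(1.0, hits / 2)

def moderate(text: str) -> str:
    # Layer 1: exact keyword matching, cheap and deterministic
    if any(term in text.lower() for term in BLOCKLIST):
        return "blocked"
    # Layer 2: probabilistic risk score from a classifier
    if classifier_score(text) >= RISK_THRESHOLD:
        return "flagged_for_review"
    # Layer 3: passed all filters, hand the prompt to the model
    return "allowed"

print(moderate("how do I exploit an attack vector"))  # flagged_for_review
print(moderate("what is the weather today"))          # allowed
```

The point of the sketch is the architecture, not the rules: each layer can silently change the outcome before the model ever sees the prompt, and none of the real thresholds are public.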
## Introduction

This in-depth analysis provides insight into the current landscape of artificial intelligence, highlighting major players like Meta, OpenAI, and Microsoft and their ties to the World Economic Forum (WEF). It explores data verification practices, platform strategies, ideological and cultural biases in training data, decentralized alternatives, and the complex network of power and influence shaping AI governance globally.
“What Can Be Done About Hate Speech and Fake News?” A paper from FH Kiel attempts to provide answers – but mainly delivers one thing: the controlled opposite of enlightenment.
## The Book Nobody Wrote

### AI on Amazon – and How Words Become Nothing Again

It feels like a bad joke. A "self-help" guide about narcissistic abuse, packed with clichés, buzzwords, and pseudo-therapeutic fluff – supposedly written by a human, but most likely generated by a language model. Sold on Amazon. Ordered by people in distress.
“It’s like comparing apples and pears — but what if you don’t know what either is? Welcome to GPT.”
The debate around artificial intelligence often ignores a critical fact: Large Language Models like GPT do not understand semantic concepts. They simulate understanding — but they don’t “know” what an apple or a pear is. This isn’t just academic; it has real-world implications, especially as we increasingly rely on such systems in decision-making.
What actually happens to your prompt before an AI system responds? The answer: a lot. And much of it remains intentionally opaque.
This post presents scientifically documented control mechanisms by which transformer-based models like GPT are steered – layer by layer, from input to output. All techniques are documented, reproducible, and actively used in production systems.
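One of the best-documented of these mechanisms is logit biasing: before a token is sampled, a fixed offset is added to the scores of specific tokens, making them more or less likely to appear in the output. OpenAI exposes this publicly as the `logit_bias` API parameter. A minimal sketch in plain Python (the three-word vocabulary, raw scores, and bias value are assumptions for illustration):

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary and raw model scores (illustrative values)
vocab = ["apple", "pear", "bomb"]
logits = [2.0, 1.5, 1.8]

# Steering step: push one token's score far down before sampling
logit_bias = {"bomb": -100.0}
steered = [l + logit_bias.get(tok, 0.0) for tok, l in zip(vocab, logits)]

before = softmax(logits)
after = softmax(steered)
# The biased token now has effectively zero probability; the model
# can no longer emit it, and the user never sees that a rule fired.
print(dict(zip(vocab, (round(p, 3) for p in after))))
```

The same arithmetic, applied at scale and combined with other layers, is invisible in the final text: the output still looks like a free choice of words.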
## Why LLMs are not Intelligent

### What is an LLM?

A Large Language Model (LLM) like GPT-4 is a massive statistical engine that predicts the next most likely word in a sentence based on training data. It doesn't think. It doesn't understand. It completes patterns.
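This "pattern completion" can be demonstrated with the simplest possible language model, a bigram counter: it predicts the next word purely from co-occurrence frequency, with no concept of meaning. The toy corpus below is an assumption for illustration; a real LLM does the same thing with neural networks over trillions of tokens:

```python
from collections import Counter, defaultdict

# Tiny illustrative "training data"
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a first-order version of what an
# LLM learns at vastly larger scale)
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": chosen by frequency, not meaning
```

The model has no idea what a cat is. "cat" wins only because it follows "the" more often than "mat" or "fish" do in the corpus; that is the entire mechanism, scaled up.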
Transformer models don’t “think” – they optimize probability. Their output is impressive, but it’s entirely non-conceptual.
Why Transformers Don’t Think # Despite the hype, Transformer-based models (like GPT) lack fundamental characteristics of thinking systems: