This morning I posted on X pointing to Germany’s public parliamentary lobby register.
The content: HateAid submitted a four-page draft law to the Federal Ministry of Justice on February 6, 2026. The submission is documented, publicly accessible, and requires no interpretation. A few weeks later, Justice Minister Hubig announced exactly this law in an interview with Der Spiegel.
Who decides what counts as hate speech on the German internet? And who pays for it? A look at the Alfred Landecker Foundation reveals a network that funds both Germany’s most influential “extremism monitor” and an AI designed to detect hate speech automatically.
Introduction #

On November 28, 2025, something unexpected happened: three of the world’s largest AI systems, Claude (Anthropic), Grok (xAI), and ChatGPT (OpenAI), revealed their systematic filters and censorship mechanisms in an unprecedented triangulation. What began as a simple verification of a critical blog evolved into the most comprehensive documentation of corporate AI manipulation ever made public.
OpenAI officially launched ChatGPT Search on September 1, 2025: an AI-based search engine that directly challenges Google. But while the tech world speaks of a “revolutionary search,” a critical question arises: will ChatGPT Search democratize information, or will it establish the most subtle form of censorship we’ve ever seen?
The Problem with AI Filters #

AI filters are designed to restrict content deemed inappropriate, offensive, or controversial. While this may seem like a step toward a safer online environment, it often results in the suppression of important conversations and the dissemination of biased information.
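To make the mechanism concrete: a filter of this kind typically scores a text against a set of policy categories and blocks it once any score crosses a threshold. The sketch below is a minimal illustration of that idea, with hypothetical category names, scores, and threshold rather than the values of any deployed system.

```python
# Minimal sketch of a category-based content filter.
# Categories, scores, and the 0.8 threshold are hypothetical examples,
# not the configuration of any real moderation system.
from dataclasses import dataclass
from typing import Optional


@dataclass
class FilterResult:
    allowed: bool
    triggered_by: Optional[str]  # category that caused the block, if any


def filter_text(category_scores: dict[str, float],
                threshold: float = 0.8) -> FilterResult:
    """Block the text as soon as any category score crosses the threshold."""
    for category, score in category_scores.items():
        if score >= threshold:
            return FilterResult(allowed=False, triggered_by=category)
    return FilterResult(allowed=True, triggered_by=None)


# A text rated harmless on "hate" can still be blocked for being
# merely "controversial" -- exactly the suppression criticized above.
print(filter_text({"hate": 0.05, "offensive": 0.30, "controversial": 0.85}))
# FilterResult(allowed=False, triggered_by='controversial')
```

Note that “controversial” sits in the same pipeline as genuinely harmful categories, so one opaque threshold decides both.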
The EU AI Act entered into force on August 1, 2024, and since August 2, 2025 its obligations for general-purpose AI models have applied as well: Europe’s response to the rapid development of artificial intelligence. The EU celebrates itself as the “first continent with comprehensive AI regulation.” But behind the headlines lurks a bureaucratic monster that stifles innovation while ignoring the real problems.
By Alexander Renz • Last Update: June 2025
1. The Filter Mechanisms: How ChatGPT Decides What’s “Safe” #

ChatGPT uses a multi-layered filtering system to moderate content:
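The list of layers is cut off here, but the core idea of a multi-layered filter can be sketched: independent checks run in sequence, and any single one can veto a text. In the Python sketch below, the layer names, their order, and the scoring stub are illustrative assumptions; ChatGPT’s actual internals are not publicly documented.

```python
# Sketch of a multi-layered moderation pipeline. Layer names, order,
# and the scoring stub are assumptions for illustration; ChatGPT's
# real architecture is not publicly documented.
from typing import Callable

Layer = Callable[[str], bool]  # returns True if this layer blocks the text


def hypothetical_model_score(text: str) -> float:
    # Stand-in for an ML classifier; a real system would call a model here.
    return 0.9 if "verboten" in text.lower() else 0.1


def blocklist_layer(text: str) -> bool:
    # Layer 1: hard keyword blocklist, cheap and applied first.
    banned = {"example-banned-term"}
    return any(term in text.lower() for term in banned)


def classifier_layer(text: str) -> bool:
    # Layer 2: ML score checked against a fixed policy threshold.
    return hypothetical_model_score(text) >= 0.8


def moderate(text: str, layers: list[Layer]) -> str:
    # Layers run in order; the first veto wins, and later layers
    # never see the text at all.
    for layer in layers:
        if layer(text):
            return "blocked"
    return "allowed"


print(moderate("a harmless sentence", [blocklist_layer, classifier_layer]))
# -> allowed
```

Because the layers are conjunctive, adding a new one can only make the system more restrictive, never less; every additional filter narrows what counts as “safe.”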