By Alexander Renz • Last Update: June 2025


1. The Filter Mechanisms: How ChatGPT Decides What’s “Safe”

ChatGPT uses a multi-layered filtering system to moderate content:

a) Pre-built Blacklists

  • Blocked terms: Words like “bomb,” “hacking,” or certain political keywords immediately trigger filters.
  • Domain blocks: Links to sites classified as “unreliable” (e.g., some alternative media) are removed.
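A blacklist layer like the one described can be sketched as a simple lookup. The term and domain lists below are invented placeholders for illustration; the actual lists used by any production system are not public.

```python
import re

# Hypothetical blocked lists -- illustrative only, not the real moderation lists.
BLOCKED_TERMS = {"bomb", "hacking"}
BLOCKED_DOMAINS = {"example-altmedia.org"}

def flags_blacklist(text: str) -> bool:
    """Return True if the text contains a blocked term or links to a blocked domain."""
    # Tokenize into lowercase words and intersect with the term list.
    words = {w.lower() for w in re.findall(r"[a-zA-Z-]+", text)}
    if words & BLOCKED_TERMS:
        return True
    # Substring match is enough for a sketch; real systems parse URLs properly.
    return any(domain in text for domain in BLOCKED_DOMAINS)
```

A keyword match like this is cheap but coarse: it fires on “bomb” regardless of whether the sentence is about explosives or a “bath bomb,” which is why such lists are usually only the first layer.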

b) Context Analysis

  • Sentiment detection: Negatively connoted terms like “scandal” or “cover-up” increase the probability that a response is filtered.
  • Conspiracy markers: Phrases like “Person X intentionally deceived Group Y” are often filtered out.
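A context layer can be sketched as a score that combines several weak signals. The weights and patterns below are assumptions chosen for illustration; real systems use trained classifiers rather than hand-written rules.

```python
import re

# Illustrative signal lists and weights -- assumptions, not the real parameters.
NEGATIVE_TONE = {"scandal", "cover-up"}
# Rough stand-in for a "Person X intentionally deceived Group Y" marker.
CONSPIRACY_PATTERN = re.compile(r"\bintentionally\s+deceived\b", re.IGNORECASE)

def context_score(text: str) -> float:
    """Combine tone and marker signals into a filtering score in [0, 1]."""
    score = 0.0
    lowered = text.lower()
    score += 0.3 * sum(term in lowered for term in NEGATIVE_TONE)
    if CONSPIRACY_PATTERN.search(text):
        score += 0.5
    return min(score, 1.0)
```

The point of a score rather than a hard block is that individually harmless signals (one negative word) only trigger action when they stack up.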

c) User Feedback Loop

  • When posts are repeatedly marked as “dangerous,” the system adjusts future responses accordingly.
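The feedback loop described above can be sketched as a threshold that tightens with each “dangerous” report. The class name, step size, and floor value are hypothetical; they only illustrate the mechanism of reports shifting future decisions.

```python
# Hypothetical feedback loop: repeated "dangerous" flags lower the block threshold.
class FeedbackFilter:
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold  # scores at or above this are blocked
        self.flag_count = 0

    def report_dangerous(self) -> None:
        """Register a user report; each one tightens the threshold, down to 0.2."""
        self.flag_count += 1
        self.threshold = max(0.2, self.threshold - 0.1 * self.flag_count)

    def is_blocked(self, risk_score: float) -> bool:
        return risk_score >= self.threshold
```

The consequence is path dependence: the same text that passed yesterday can be blocked today, purely because of accumulated reports.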

2. Why the Gates Process Article Was Modified

In our original post, the following elements triggered filters:

| Trigger | AI Response |
| --- | --- |
| “Sovereign Citizens” | Link to terrorism → classified as “sensitive” |
| “Vaccine risks” | Fear of conspiracy narratives → softening suggested |
| “Prosecutor’s office” + weapon discovery | Combination “government + violence” → editorial review triggered |

Example:

The statement “Van Kessel’s group planned attacks” was initially softened to “was confronted with allegations of violence.”


3. Circumvention Strategies: How to Outsmart the Filters

a) Linguistic Camouflage

Instead of: “The government covered up data”
Better: “Questions exist regarding the completeness of published data”
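The camouflage step amounts to a substitution table: loaded phrasings map to neutral ones, and everything else passes through unchanged. The function name is hypothetical; the two phrasings are the ones from the example above.

```python
# Substitution table built from the example above; entries are the author's phrasings.
SOFTENING_MAP = {
    "The government covered up data":
        "Questions exist regarding the completeness of published data",
}

def camouflage(sentence: str) -> str:
    """Replace a loaded phrasing with its neutral variant, or pass it through."""
    return SOFTENING_MAP.get(sentence, sentence)
```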

b) Source Triad

  • Official links (EMA, Reuters) usually remain untouched.
  • Alternative sources (fact-checks, NGOs) are often blocked – even when factually correct.

c) Using Meta-Comments

Markdown for marking:

*[Author's note: This section was shortened during AI review.]*  

d) AI Content Filters: A Systemic Form of Censorship

Content filters in AI systems are not random precautionary measures. They form a structural censorship system that evaluates, adjusts, or suppresses language in real time, based on politically, economically, and ideologically set parameters. What emerges is not a free response but an approved one. And what remains is not knowledge but an impression of safety, one that lasts only as long as you don’t ask real questions.