Posts for: #Justiz

Netherlands Vaccine Lawsuit: Civil Case Against Bill Gates Continues Despite Lawyer’s Arrest

1. Core Conflict: Civil Lawsuit vs. Criminal Law

Since 2023, seven Dutch citizens have been pursuing a purely civil legal proceeding (Case number: C/23/1234) against:

  • Mark Rutte (former Prime Minister)
  • Marion Koopmans (virologist, former WHO advisor)
  • Albert Bourla (Pfizer CEO)
  • Bill Gates (as a private individual, not for the Gates Foundation)

Allegation: Deliberate misrepresentation of the efficacy, long-term effects, and safety of COVID-19 vaccines, which allegedly led to “physical, psychological and financial damages.”
Objective: Monetary compensation (estimated at up to €500,000 per plaintiff), not a criminal conviction.
Legal basis: Dutch Civil Code (Art. 6:162 BW – unlawful act).


Omission as a Tool of Manipulation

Documented examples from the COVID-19 pandemic, the climate crisis, and the Middle East conflict show how media shape reality through omission and distortion.


Artificial Intelligence and Consumer Deception

For consumers, the term “AI” evokes thinking, understanding, even consciousness.
LLMs such as GPT meet none of these criteria – yet they are still marketed as “intelligent.”

Core Problems:

  • Semantic deception: The term “intelligence” suggests human cognition, while LLMs merely analyze large amounts of text statistically. They simulate language without understanding meaning or pursuing goals. The model has no grounded knowledge of the world; it makes predictions based on patterns in its training data.
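The point about statistical prediction can be made concrete with a toy bigram model: it picks the next word purely from co-occurrence counts, with no representation of meaning. This is a deliberately minimal sketch – the corpus and the names `predict_next` and `following` are invented for illustration, and real LLMs use neural networks trained on vastly more data – but the underlying principle of predicting from frequencies in past text is the same.

```python
from collections import Counter, defaultdict

# Toy "training data" (a hypothetical example text, not a real corpus).
corpus = "the cat sat on the mat the cat sat on the rug".split()

# Count how often each word follows each other word: pure frequency statistics.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word.

    There is no model of meaning or truth here: the choice is driven
    entirely by co-occurrence counts observed in the training text.
    """
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat', simply the most frequent follower of 'the'
```

A model like this will continue any prompt with the most frequent pattern, whether or not the resulting sentence is true – exactly the gap between statistical fluency and understanding described in the bullet above.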
