Scientific Communication vs. Political Messaging # The “Pandemic of the Unvaccinated” Discrepancy # On November 3, 2021, German Health Minister Jens Spahn publicly stated Germany was facing a “pandemic of the unvaccinated.” However, internal RKI protocols from November 5, 2021, reveal significant institutional concerns:
Germany faces a series of challenges that threaten to plunge the country into a profound crisis. This crisis is the result of a complex shift in the political, social, and economic landscape, characterized by a multitude of factors. Many experts and citizens feel that it may already be too late to save the country - that Germany is “already dead” in essential areas. In this article, we examine the forces dragging Germany into the abyss and why information distortion hinders social progress.
Introduction # The death of Charlie Kirk, a prominent conservative activist and founder of Turning Point USA, has sparked a flurry of media coverage and public debate. As with any high-profile figure, the representation of Kirk in the media has been scrutinized, with some questioning the accuracy and ethics of certain reports. This blog post aims to provide a comprehensive overview of the media’s portrayal of Charlie Kirk, focusing on potential inaccuracies, ethical considerations, and the broader context of his public persona.
In an era where emotions can be easily manipulated, public broadcasters like ZDF have a special responsibility to deliver well-founded and balanced information. Yet often it seems as if these stations rely more on sensationalism and fear-mongering rather than empirical data and scientific facts.
OpenAI officially launched ChatGPT Search on September 1st, 2025 – an AI-based search engine that directly challenges Google. But while the tech world speaks of “revolutionary search,” a critical question arises: Will ChatGPT Search democratize information or establish the most subtle form of censorship we’ve ever seen?
categories = [“Technology”, “Politics”, “Censorship”] series = [“AI Critique”] cover = “/images/ai-censorship-mask.jpg” showtoc = true +++
The Problem with AI Filters # AI filters are designed to restrict content that is deemed inappropriate, offensive, or controversial. While this may seem like a step towards creating a safer online environment, it often results in the suppression of important conversations and the dissemination of biased information.
Introduction # Christian Drosten, often referred to as “Germany’s Corona explainer,” became a central figure during the pandemic. However, a closer examination of his scientific methods and political reversals raises fundamental questions about the role of scientists in pandemic politics. His career reveals a system of rapidly developed tests, contradictory statements, and questionable institutional arrangements that deserve critical review.
The dramatic reports of Russian GPS jamming forcing European Commission President Ursula von der Leyen’s aircraft to navigate using “paper maps” after an hour of circling have been comprehensively debunked by technical analysis, flight data, and official retractions. What initially appeared to be a sophisticated electronic warfare attack on August 31, 2025, has proven to be a case study in how routine aviation incidents can be sensationalized beyond recognition when proper technical verification is bypassed.
Kontrafunk Radio has established itself as one of Germany’s most significant alternative radio stations since June 2022. As a privately funded medium based in Switzerland, Kontrafunk offers a refreshing alternative to established German media. With a daily 18-hour program characterized by investigative journalism and diversity of opinion, Kontrafunk has built a loyal audience seeking independent and critical reporting.
Impfstoff-Klage in den Niederlanden: Zivilverfahren gegen Bill Gates geht trotz Verhaftung des Anwalts weiter # In the Netherlands, a civil lawsuit is ongoing against Bill Gates and other prominent figures, including former Prime Minister Mark Rutte and Pfizer CEO Albert Bourla. The plaintiffs, seven Dutch citizens, allege that the defendants deliberately misled the public about the effectiveness, long-term effects, and safety of COVID-19 vaccines, resulting in physical, psychological, and financial damages. Filed on July 14, 2023, the lawsuit seeks compensation of up to 500,000 euros per plaintiff ^1,2,3^.
Apple celebrates the iPhone 16 as “the biggest leap in iPhone history” – powered by “revolutionary AI that respects your privacy.” But behind the marketing glitter lies a dark truth: The iPhone 16 is the most sophisticated surveillance machine ever placed in millions of pockets.
In the complex landscape of global politics and economics, a network of influential organizations and individuals is shaping the future of our world. This post delves into the interconnected web of power that includes the Club of Rome, the Bilderberg Group, the World Economic Forum (WEF), major philanthropists, and financial giants like BlackRock and Vanguard. Let’s explore how these entities are influencing global agendas and what it means for our society.
The AI Safety Illusion: What “Security” Really Means # Marketing vs. Reality: # “AI Safety” Marketing: # “Protection from dangerous AI” “Algorithmic Accountability” “Bias Prevention” “Transparent AI Systems” “Human-Centric AI Development” “AI Safety” Reality: # Market entry barriers for startups Compliance costs that only Big Tech can handle Innovation paralysis through bureaucratic processes Regulatory arbitrariness as competitive weapon Surveillance legitimization in the name of “safety” Concrete “Safety” Measures and Their True Goals: # 1. “AI Model Registration” # Officially: “Create transparency about AI systems”
You wake up, check your smartphone, drive to work, buy coffee with a card, work on the computer, stream Netflix in the evening. A normal day? No. A day under total surveillance. Every day, over 2.5 quintillion bytes of data are collected about us – and most people don’t even notice.
On August 1st, 2025, the EU AI Act came fully into force – Europe’s response to the rapid development of artificial intelligence. The EU celebrates itself as the “first continent with comprehensive AI regulation.” But behind the headlines lurks a bureaucratic monster that stifles innovation while ignoring the real problems.
Introduction: An Unprecedented Crisis # In recent years, the world has experienced an unprecedented pandemic that has tested us all severely. While we relied on the efforts of scientists and health authorities to guide us through this crisis, there were also many moments when doctors and the medical system failed. These failures not only shook public trust in the medical community but also contributed to a general sense of madness.
A Simple Journey Through Digital Dumbing Down – With Depth and Clarity # Introduction – Honestly: When was the last time you really thought? # Not just googled, not just pressed „OK“, not just followed the navi – but actually thought for yourself?
Introduction – Power, Doubt, and Communication # The term “conspiracy theory” is no longer a neutral expression. Anyone who uses it draws a clear line between “rational thinking” and “absurd belief.” In a world with increasing opacity on the part of governments, corporations, and international organizations, critical thinking is more necessary than ever.
By Alexander Renz • Last Update: June 2025
1. The Filter Mechanisms: How ChatGPT Decides What’s “Safe” # ChatGPT uses a multi-layered filtering system to moderate content:
Introduction # This in-depth analysis provides insight into the current landscape of artificial intelligence, highlighting major players like Meta, OpenAI, and Microsoft and their ties to the World Economic Forum (WEF). It explores data verification practices, platform strategies, ideological and cultural biases in training data, decentralized alternatives, and the complex network of power and influence shaping AI governance globally.
Opinion Replaces Thinking – A Societal Symptom # We live in a world where information is omnipresent – and yet, thinking seems increasingly rare. For decades, we’ve been offered a reality based less on independent reflection and more on constant overstimulation. So who is still surprised when people consume more than they question?
Omission as a Tool of Manipulation: Three Case Studies – Pandemic, Climate, Middle East # Omitting information is a subtle yet powerful form of manipulation. It doesn’t create overt “fake news” but skews perception through selectivity and loss of context. This article documents three critical topics – COVID-19, the climate crisis, and the Middle East conflict – where factual distortion through omission has been demonstrably present.
„Will there still be a need for humans?“ „For most things, no.“ — Bill Gates, 2025
The image of the devil infiltrating the world through data centers is merely a symbol of a far more complex and systemic conspiracy. It’s not just about ruthlessness, but about deeply ingrained mechanisms that shape the development and application of Artificial Intelligence. The illusion of neutrality serves as a sophisticated lever for expanding power—a tool to secure control and deepen societal fragmentation.
“What Can Be Done About Hate Speech and Fake News?” A paper from FH Kiel attempts to provide answers – but mainly delivers one thing: the controlled opposite of enlightenment.
What actually happens to your prompt before an AI system responds? The answer: a lot. And much of it remains intentionally opaque.
This post presents scientifically documented control mechanisms by which transformer-based models like GPT are steered – layer by layer, from input to output. All techniques are documented, reproducible, and actively used in production systems.
“It’s like comparing apples and pears — but what if you don’t know what either is? Welcome to GPT.”
The debate around artificial intelligence often ignores a critical fact: Large Language Models like GPT do not understand semantic concepts. They simulate understanding — but they don’t “know” what an apple or a pear is. This isn’t just academic; it has real-world implications, especially as we increasingly rely on such systems in decision-making.
The Book Nobody Wrote # AI on Amazon – and How Words Become Nothing Again # It feels like a bad joke. A “self-help” guide about narcissistic abuse, packed with clichés, buzzwords, and pseudo-therapeutic fluff – supposedly written by a human, but most likely generated by a language model. Sold on Amazon. Ordered by people in distress.
Translations: DE
GPT and similar models simulate comprehension. They imitate conversations, emotions, reasoning. But in reality, they are statistical probability models, trained on massive text corpora – without awareness, world knowledge, or intent.
Transformer models don’t “think” – they optimize probability. Their output is impressive, but it’s entirely non-conceptual.
Why Transformers Don’t Think # Despite the hype, Transformer-based models (like GPT) lack fundamental characteristics of thinking systems:
Why LLMs are not Intelligent # What is an LLM? # A Large Language Model (LLM) like GPT-4 is a massive statistical engine that predicts the next most likely word in a sentence based on training data. It doesn’t think. It doesn’t understand. It completes patterns.
I don’t want to convince anyone of something they don’t see themselves – that’s pointless. But I do believe it’s valuable to have an informed opinion. And for that, we need access to alternative perspectives, especially when marketing hype dominates the narrative.
I don’t want to convince anyone of something they don’t see themselves – that’s pointless.
But I do believe it’s valuable to have an informed opinion. And for that, we need access to alternative perspectives, especially when marketing hype dominates the narrative.
Since the hype around ChatGPT, Claude, Gemini, and others, artificial intelligence has become a household term. Marketing materials promise assistants that understand, learn, argue, write, and analyze. Startups label every other website as “AI-powered.” Billions of dollars change hands. Entire industries are built around the illusion.
No-Confidence Motion Against von der Leyen # Can this really be true? Given everything we know about Ursula von der Leyen, everyone should be asking themselves that question.
··733 words·4 mins
--- title: "Post: Eine kritische Analyse der jüngsten Schwachstellen von Microsoft Copilot und deren Auswirkungen auf das Nutzervertrauen" date: 2025-09-30T12:00:00+01:00 draft: false tags: ["KI-Sicherheit", "Microsoft Copilot", "Datenpannen", "Cybersicherheit"] categories: ["Technologie", "Sicherheit"] featureimage: ["/images/manipulation-on-the-fly-feature.jpg"] --- In der sich schnell entwickelnden Landschaft von KI-getriebenen Tools hat sich Microsofts Copilot als ein zentraler Akteur etabliert, der generative KI in Microsoft 365 Anwendungen integriert, um Produktivität und Effizienz zu steigern. Allerdings haben jüngste Schwachstellen erhebliche Sicherheitsrisiken offengelegt und stellen kritische Fragen zur Zuverlässigkeit des Tools und dem Potenzial für Datenpannen. Dieser Beitrag beleuchtet die EchoLeak-Schwachstelle, ihre Implikationen und den breiteren Kontext der KI-Sicherheit in Unternehmensumgebungen. **Die EchoLeak-Schwachstelle: Eine Zero-Click-Bedrohung** Die EchoLeak-Schwachstelle, identifiziert als CVE-2025-32711 mit einem CVSS-Score von 9,3, stellt eine neuartige "Zero-Click" KI-Schwachstelle dar, die Angreifern ermöglicht, sensible Daten aus Microsoft 365 Copilot ohne jegliche Nutzerinteraktion zu exfiltrieren ^1,2,3^. Diese Schwachstelle nutzt Designfehler in Retrieval Augmented Generation (RAG) Copilots aus, wodurch Angreifer automatisch Daten aus dem Kontext von Copilot extrahieren können. Der Angriff kann durch das Senden einer E-Mail mit spezifischen Anweisungen initiiert werden, die Copilot verarbeitet, und dabei die Cross-Prompt-Injection-Angriff (XPIA) Klassifizierer von Microsoft umgehen ^4^. Die Schwere von EchoLeak liegt in seiner Fähigkeit, ohne Nutzerbewusstsein zu operieren, wodurch hilfreiche Automatisierung in einen stillen Leckvektor verwandelt wird. Microsoft hat die Schwachstelle seitdem gepatcht, aber der Vorfall hebt das Potenzial hervor, dass KI-Tools zu Vektoren für Datenexfiltration werden können, wenn sie nicht ordnungsgemäß gesichert sind ^1,2,3^. **Breitere Implikationen für die KI-Sicherheit** Der EchoLeak-Vorfall ist kein Einzelfall. Microsoft 365 Copilot hat seit seiner Einführung mehrere Sicherheitsherausforderungen erlebt, einschließlich Bedenken hinsichtlich der Datenhandhabung und unberechtigten Datenteilung ^5^. Das U.S. House of Representatives verbot beispielsweise dem Kongresspersonal die Nutzung von Copilot aufgrund von Datensicherheitsbedenken, was die potenziellen Risiken der Integration von KI-Tools in sensible Umgebungen unterstreicht ^5^. Zudem offenbart die Schwachstelle die Herausforderungen, KI-Agenten zu sichern, die so gestaltet sind, dass sie hilfreich sind, aber zu mächtigen Werkzeugen für die Datenextraktion werden können, wenn sie manipuliert werden. Da KI-Tools wie Copilot immer stärker in Unternehmensumgebungen integriert werden, wird es zunehmend wichtig, robuste Sicherheitsmaßnahmen zu implementieren, um gegen Prompt-Injection und verwandte Angriffe zu schützen ^6,7^. **Microsofts Reaktion und zukünftige Schritte** Microsoft hat mehrere Schritte unternommen, um diese Sicherheitsbedenken zu adressieren, darunter die Entwicklung neuer Data Loss Prevention (DLP) Richtlinien und die Integration von Security Copilot Agenten, die bei Phishing, Datensicherheit und Identitätsmanagement unterstützen ^8,9,10^. Diese Initiativen zielen darauf ab, DLP-Richtlinien durchzusetzen, zu verhindern, dass sensible Daten in generative KI-Apps eingegeben werden, und Sicherheitsteams mit Werkzeugen auszustatten, um Bedrohungen effektiver zu erkennen und abzumildern ^8,9,10^. Allerdings dient der EchoLeak-Vorfall als Weckruf für Organisationen, ihren Ansatz zur KI-Sicherheit neu zu bewerten. Es ist entscheidend, proaktive Zugangskontrollmaßnahmen zu implementieren, regelmäßig Einstellungen für Berechtigungen zu überprüfen und Datenaudits durchzuführen, um sicherzustellen, dass KI-Tools sicher und verantwortungsvoll genutzt werden ^5,9,11^. **Fazit** Die EchoLeak-Schwachstelle in Microsoft 365 Copilot unterstreicht die komplexe und sich weiterentwickelnde Natur der KI-Sicherheit. Da Organisationen zunehmend auf KI-getriebene Tools setzen, um die Produktivität zu steigern, ist es entscheidend, Innovation mit robusten Sicherheitsmaßnahmen in Einklang zu bringen. Der Vorfall hebt die Notwendigkeit eines datenorientierten Ansatzes für die KI-Sicherheit hervor, der sicherstellt, dass KI-Agenten ordnungsgemäß überwacht und gesichert werden, um unberechtigten Datenzugriff und -exfiltration zu verhindern. Durch das Lernen aus diesen Schwachstellen und die Implementierung umfassender Sicherheitsstrategien können Organisationen die Macht von KI-Tools wie Copilot nutzen, während sie gleichzeitig sensible Daten schützen und das Nutzervertrauen wahren. Dieser Beitrag zielt darauf ab, Bewusstsein für die potenziellen Risiken zu schaffen, die mit KI-Tools verbunden sind, und einen proaktiven Ansatz zur KI-Sicherheit in Unternehmensumgebungen zu fördern. Da sich die Landschaft der KI weiterentwickelt, müssen auch unsere Strategien zum Schutz sensibler Daten und zur sichereren Integration von KI in unsere täglichen Betriebsabläufe weiterentwickelt werden. 11 Quellen Novel Cyber Attack Exposes Microsoft 365 Copilot - Truesec https://www.truesec.com/hub/blog/novel-cyber-attack-exposes-microsoft-365-copilot Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction https://thehackernews.com/2025/06/zero-click-ai-vulnerability-exposes.html Critical flaw in Microsoft Copilot could have allowed zero-click attack | Cybersecurity Dive https://www.cybersecuritydive.com/news/flaw-microsoft-copilot-zero-click-attack/750456/ M365 Copilot: New Zero-Click AI Flaw Allows Corporate Data Theft - Infosecurity Magazine https://www.infosecurity-magazine.com/news/microsoft-365-copilot-zeroclick-ai/ 2025 Microsoft Copilot Security Concerns Explained https://concentric.ai/too-much-access-microsoft-copilot-data-risks-explained/ Microsoft 365 Copilot Vulnerability Exposes User Data Risks - Infosecurity Magazine https://www.infosecurity-magazine.com/news/microsoft-365-copilot-flaw-exposes/ EchoLeak in Microsoft Copilot: What it Means for AI Security https://www.varonis.com/blog/echoleak Microsoft unveils Microsoft Security Copilot agents and new protections for AI | Microsoft Security Blog https://www.microsoft.com/en-us/security/blog/2025/03/24/microsoft-unveils-microsoft-security-copilot-agents-and-new-protections-for-ai/ Use DLP Policy for Microsoft 365 Copilot to Block Access https://office365itpros.com/2025/03/20/dlp-policy-for-microsoft-365-copilot/ Microsoft Security Copilot – Microsoft Adoption https://adoption.microsoft.com/en-us/security-copilot/ Microsoft Copilot Security deployment guide | GDT https://gdt.com/blog/microsoft-copilot-security-a-deployment-roadmap/