
Voices of Critical AI Research

··573 words·3 mins

I don’t want to convince anyone of something they can’t see for themselves – that’s pointless.
But I do believe it’s valuable to hold an informed opinion. And for that, we need access to alternative perspectives, especially when marketing hype dominates the narrative.

Here are key voices from leading AI researchers who critically examine the label “Artificial Intelligence” and the risks it implies:


Emily M. Bender: “Stochastic Parrots” – Language Models Without Understanding

Emily Bender coined the term “Stochastic Parrots” to describe how models like ChatGPT generate statistically plausible text without any real understanding.
👉 ai.northeastern.edu
👉 The Student Life


Timnit Gebru: Structural Change for Ethical AI

Timnit Gebru emphasizes the need for systemic reform to enable ethical AI development.
👉 WIRED


Gary Marcus: Regulation Against AI Hype

Gary Marcus calls for strong governmental oversight to prevent harm from unregulated AI systems.
👉 Time


Meredith Whittaker: AI as a Product of Surveillance Capitalism

Meredith Whittaker sees AI as rooted in systemic data exploitation and power concentration.
👉 Financial Times


Sandra Wachter: Right to Explainability and Transparency

Sandra Wachter calls for legal frameworks to ensure algorithmic accountability and fairness.
👉 Oxford Internet Institute


Extending and Verifying Sandra Wachter’s Contributions

Sandra Wachter is a prominent figure in AI ethics and data protection. Her work focuses on the legal and ethical implications of big data, artificial intelligence, and algorithms. Wachter has highlighted numerous cases in which opaque algorithms led to discriminatory outcomes, such as the discriminatory screening of applicants to St George’s Hospital Medical School in the 1970s and the COMPAS program’s overestimation of reoffending risk for Black defendants ^1^.

Wachter’s research covers a broad spectrum of issues, including the right to reasonable inferences, which she argues is crucial for individuals to understand and contest algorithmic decisions. She has developed tools such as counterfactual explanations, which allow algorithms to be interrogated without revealing trade secrets. This approach has been adopted by Google in TensorBoard, its machine-learning visualization tool ^1^.

Furthermore, Wachter has been involved in developing standards to open the AI “black box” and increase accountability, transparency, and explainability in AI systems. Her “Theory of Artificial Immutability” explores how algorithmic groups can be protected under anti-discrimination law, ensuring that AI systems are fair and non-discriminatory ^1,2,3^.

Wachter’s contributions are not limited to theoretical work: she has also developed a bias test, “Conditional Demographic Disparity” (CDD), that meets EU and UK standards. Amazon implemented this test in its cloud services, demonstrating the real-world impact of her research ^3^.

At the Oxford Internet Institute, her research covers profiling, inferential analytics, explainable AI, algorithmic bias, diversity, and fairness, as well as governmental surveillance, predictive policing, human rights online, and health tech and medical law. Wachter leads the Governance of Emerging Technologies (GET) Research Programme, which investigates the legal, ethical, and technical aspects of AI, machine learning, and other emerging technologies ^3^.

Wachter’s work is crucial in the ongoing debate about algorithmic accountability and the need for legal frameworks to ensure that AI systems are fair, transparent, and accountable. Her research provides a comprehensive approach to addressing the challenges posed by AI, from theoretical frameworks to practical applications, making her a key voice in the critical examination of AI.

Citations

1. Sandra Wachter – Wikipedia: https://en.wikipedia.org/wiki/Sandra_Wachter
2. dblp: Sandra Wachter: https://dblp.org/pid/209/9828.html
3. Sandra Wachter – Professor and Senior Researcher, Speakers Associates: https://www.speakersassociates.com/speaker/sandra-wachter/
