I don’t want to convince anyone of something they don’t see for themselves – that’s pointless.
But I do believe it’s valuable to have an informed opinion. And for that, we need access to alternative perspectives, especially when marketing hype dominates the narrative.

Here are key voices from leading AI researchers who critically examine the label “Artificial Intelligence” and the risks it implies:


Emily M. Bender: “Stochastic Parrots” – Language Models Without Understanding#

Emily Bender coined the term “Stochastic Parrots” to describe how models like ChatGPT generate statistically plausible text without any real understanding.
👉 ai.northeastern.edu
👉 The Student Life


Timnit Gebru: Structural Change for Ethical AI#

Timnit Gebru emphasizes the need for systemic reform to enable ethical AI development.
👉 WIRED


Gary Marcus: Regulation Against AI Hype#

Gary Marcus calls for strong governmental oversight to prevent harm from unregulated AI systems.
👉 Time


Meredith Whittaker: AI as a Product of Surveillance Capitalism#

Meredith Whittaker sees AI as rooted in systemic data exploitation and power concentration.
👉 Financial Times


Sandra Wachter: Right to Explainability and Transparency#

Sandra Wachter calls for legal frameworks to ensure algorithmic accountability and fairness.
👉 Oxford Internet Institute


Extending and Verifying Sandra Wachter’s Contributions#

Sandra Wachter is a prominent figure in AI ethics and data protection. Her work focuses on the legal and ethical implications of big data, artificial intelligence, and algorithms. Wachter has highlighted numerous cases in which opaque algorithms produced discriminatory outcomes, such as the computerized screening of applicants to St George’s Hospital Medical School in the 1970s and the COMPAS program’s overestimation of reoffending risk among Black defendants ^1^.

Wachter’s research covers a broad spectrum of issues, including the right to reasonable inferences, which she argues is crucial for individuals to understand and contest algorithmic decisions. She co-developed counterfactual explanations, a technique that allows algorithmic decisions to be interrogated without revealing trade secrets; the approach has since been adopted by Google in TensorBoard, its machine-learning visualization tool ^1^.
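The core idea of a counterfactual explanation fits in a few lines: find the smallest change to an input that flips the model’s decision (“had your debt been lower by X, the loan would have been approved”). The toy credit model, its weights, and the greedy search below are invented for illustration – they are not taken from Wachter’s papers or from Google’s implementation, which casts the search as a proper optimization problem.

```python
# Minimal sketch of a counterfactual explanation in the spirit of
# Wachter, Mittelstadt & Russell: find the smallest change to an input
# that flips a model's decision. Model, weights, and feature names are
# hypothetical.
import math

def model(income, debt):
    # Toy logistic "credit score" model (invented weights).
    z = 0.08 * income - 0.12 * debt - 2.0
    return 1 / (1 + math.exp(-z))

def counterfactual(income, debt, target=0.5, step=0.5, max_iter=10_000):
    """Greedy coordinate search: nudge one feature at a time toward the
    decision boundary, always taking the nudge that raises the score most."""
    x = [income, debt]
    for _ in range(max_iter):
        if model(*x) >= target:
            return x  # decision flipped: this is the counterfactual input
        candidates = [
            [x[0] + step, x[1]],   # raise income
            [x[0], x[1] - step],   # lower debt
        ]
        x = max(candidates, key=lambda c: model(*c))
    return None  # no counterfactual found within the search budget

print(counterfactual(income=30.0, debt=20.0))  # → [30.0, 3.0]
```

In the published formulation this search is a loss minimization that trades off distance from the original input against closeness to the desired outcome; the greedy loop above is only a minimal stand-in for that optimization.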

Furthermore, Wachter has been involved in developing standards to open the AI ‘black box’ and increase accountability, transparency, and explainability in AI systems. Her ‘Theory of Artificial Immutability’ explores how algorithmic groups can be protected under anti-discrimination law, ensuring that AI systems are fair and non-discriminatory ^1,2,3^.

Wachter’s contributions are not limited to theoretical work. She also helped develop a bias test, ‘Conditional Demographic Disparity’ (CDD), that meets EU and UK non-discrimination standards; Amazon implemented the test in its cloud services, demonstrating the real-world impact of her research ^3^.
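To make the idea behind CDD concrete: demographic disparity asks whether a group is over-represented among rejections relative to acceptances, and the conditional version recomputes that gap within strata (such as departments) so that disparities explained by the stratum itself are controlled for. The sketch below follows the common formulation used in fairness tooling; the record format and the applicant data are invented for illustration.

```python
# Rough sketch of Demographic Disparity (DD) and Conditional Demographic
# Disparity (CDD). DD = a group's share of rejections minus its share of
# acceptances; CDD is the size-weighted average of DD computed per stratum.
# Records are (group, accepted, stratum); all data is invented.

def demographic_disparity(records, group):
    """Positive DD means the group is over-represented among rejections."""
    rejected = [r for r in records if not r[1]]
    accepted = [r for r in records if r[1]]
    if not rejected or not accepted:
        return 0.0  # disparity is undefined without both outcomes
    frac_rej = sum(r[0] == group for r in rejected) / len(rejected)
    frac_acc = sum(r[0] == group for r in accepted) / len(accepted)
    return frac_rej - frac_acc

def conditional_demographic_disparity(records, group):
    """CDD: size-weighted average of DD over strata (r[2])."""
    total = len(records)
    cdd = 0.0
    for stratum in {r[2] for r in records}:
        subset = [r for r in records if r[2] == stratum]
        cdd += len(subset) / total * demographic_disparity(subset, group)
    return cdd

applications = [
    ("women", False, "medicine"), ("women", True, "medicine"),
    ("men", True, "medicine"),    ("men", True, "medicine"),
    ("women", False, "law"),      ("men", False, "law"),
    ("women", True, "law"),       ("men", True, "law"),
]
print(demographic_disparity(applications, "women"))              # ~0.267
print(conditional_demographic_disparity(applications, "women"))  # ~0.333
```

Note how conditioning changes the picture: here the disparity concentrates entirely in one stratum (medicine), which the aggregate DD number alone would not reveal. The exact conditioning variables and thresholds in the deployed test are defined in Wachter’s work and in Amazon’s documentation.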

Her research at the Oxford Internet Institute focuses on profiling, inferential analytics, explainable AI, algorithmic bias, diversity, and fairness, as well as governmental surveillance, predictive policing, human rights online, and health tech and medical law. Wachter leads the Governance of Emerging Technologies (GET) Research Programme, which investigates the legal, ethical, and technical aspects of AI, machine learning, and other emerging technologies ^3^.

Wachter’s work is crucial in the ongoing debate about algorithmic accountability and the need for legal frameworks to ensure that AI systems are fair, transparent, and accountable. Her research provides a comprehensive approach to addressing the challenges posed by AI, from theoretical frameworks to practical applications, making her a key voice in the critical examination of AI.

Citations#

1. Sandra Wachter – Wikipedia: https://en.wikipedia.org/wiki/Sandra_Wachter
2. dblp: Sandra Wachter: https://dblp.org/pid/209/9828.html
3. Sandra Wachter – Professor and Senior Researcher: https://www.speakersassociates.com/speaker/sandra-wachter/