Posts for: #Consumer-Protection

Artificial Intelligence and Consumer Deception

The term “AI” conjures up an image for consumers of thinking, understanding, even consciousness.
LLMs like GPT meet none of these criteria – yet they are marketed as “intelligent” all the same.

Core Problems:

  • Semantic deception: The term “intelligence” suggests human cognition, while LLMs merely analyze large amounts of text statistically. They simulate language without understanding meanings or pursuing goals. The model has no knowledge of the real world; it makes predictions based on patterns in its training data.


Exposing the Truth About AI

Timnit Gebru – Google Case


What Is “AI” Really?

The term “Artificial Intelligence” suggests thinking, awareness, and understanding.
But models like GPT are merely statistical pattern completers – they understand nothing.

Statistics ≠ Thinking

GPT doesn’t choose the next word because it makes sense, but because it is statistically likely.
What it produces is linguistic surface without depth – impressive, but hollow.
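The principle of “likely, not meaningful” can be made concrete with a toy next-word predictor – a deliberately crude sketch, not how GPT actually works (real models use neural networks over far larger corpora; the training text and function names here are invented for illustration):

```python
from collections import Counter, defaultdict

# Toy illustration: the "model" only counts which word follows which
# in its training text, then predicts by raw frequency.
# No meaning, no goals -- just statistics over past data.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Build a bigram table: word -> Counter of observed successors.
bigrams = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    bigrams[current][following] += 1

def predict_next(word):
    """Return the most frequent successor seen in training, or None."""
    successors = bigrams[word]
    return successors.most_common(1)[0][0] if successors else None

print(predict_next("the"))  # -> "cat" (most frequent after "the")
print(predict_next("sat"))  # -> "on"
```

The predictor will happily continue any prompt with whatever was most frequent – whether or not the result is true or sensible. Scaling this idea up changes the quality of the surface, not the nature of the mechanism.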


ELIZA vs. GPT – Large-Scale Symbol Manipulation

Both ELIZA (1966) and GPT-4 (2023) rest on symbol processing without meaning.
The illusion of understanding comes from plausible language – not from comprehension.
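ELIZA’s mechanism can be sketched in a few lines: pattern rules that rearrange the user’s own words. This is a simplified illustration in the spirit of Weizenbaum’s 1966 program, not its original DOCTOR script – the rules below are invented for the example:

```python
import re

# ELIZA-style rules: regex pattern -> response template.
# The program merely reflects the user's words back;
# it understands nothing about their content.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i need (.*)", re.I), "What would it mean to you to get {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]

def respond(user_input):
    """Match the input against the rules and fill in the template."""
    for pattern, template in RULES:
        match = pattern.match(user_input)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

print(respond("I am worried about AI marketing"))
# -> "Why do you say you are worried about AI marketing?"
```

That such a trivial program was taken for an understanding conversation partner is exactly the illusion at issue: plausible language suffices to suggest a mind where there is none.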
