For consumers, the term “AI” evokes an image of thinking, understanding, even consciousness. LLMs such as GPT meet none of these criteria, yet they are still marketed as “intelligent.”
🔍 Core Problems:
- Semantic deception: The term “intelligence” suggests human-like cognition, while LLMs merely analyze large amounts of text statistically. They simulate language without understanding meaning or pursuing goals; the model has no knowledge of the world, only predictions derived from its past training data (see the sketch after this list).
- Lack of transparency: Users typically receive no systematic disclosure of the limitations of generative systems, such as:
  - No true understanding of text (no semantic grasp)
  - Frequent hallucinations (fabricated facts with no basis in reality)
  - Lack of traceability (black-box behavior)
  - No consciousness or intentionality: the system does not “know” what it is saying
- Marketing vs. reality: Many companies use anthropomorphic terms (“assisting,” “thinking,” “learns from you”) and visual or linguistic cues that exaggerate system capabilities, creating false expectations of autonomy and reliability among consumers.
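To make the “statistics, not understanding” point concrete, here is a deliberately tiny sketch of next-word prediction as pure frequency counting. The corpus, function name, and bigram approach are illustrative assumptions for this example; production LLMs use neural networks trained on vastly more text, but they likewise predict from patterns in past data rather than from meaning.

```python
from collections import Counter, defaultdict

# Toy "training data" (hypothetical corpus); real models train on terabytes of text.
corpus = (
    "the model predicts the next word . "
    "the model has no knowledge of the world . "
    "the model predicts words from statistics ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> list[tuple[str, float]]:
    """Rank candidate next words by relative frequency.

    Nothing here "understands" the words; the function only replays
    statistical patterns found in the training text.
    """
    counts = following[word]
    total = sum(counts.values())
    return [(w, c / total) for w, c in counts.most_common()]

print(predict_next("model"))
# -> [('predicts', 0.666...), ('has', 0.333...)]: a frequency table, not comprehension.
```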
⚖️ Legal Assessment:
- Section 5 UWG (misleading claims about essential characteristics of a product): Using the term “intelligence” or suggesting true autonomy can constitute a misleading commercial practice if essential information about function, limitations, or risks is withheld.
- Violation of Section 3 UWG (ban on unfair business practices): Particularly problematic is the concealment of system-inherent malfunctions such as hallucinations, especially in sensitive domains like education, healthcare, justice, or consulting.
- EU AI Act (Regulation on Artificial Intelligence; final version adopted in 2024, with obligations phasing in from 2025):
  - Transparency obligations for generative models (Art. 50 AI Act):
    - Disclosure that content was generated by AI
    - Documentation of technical limitations and possible risks
    - Ban on manipulative interface design (dark patterns)
  - AI applications interacting with humans must be clearly identifiable as such (Art. 50 para. 1)
- Legal comparison, USA vs. EU:
  - The US lacks unified AI legislation, but since 2023 the FTC has explicitly warned against “AI washing”: marketing products as AI-based when this is not, or only misleadingly, the case.
  - The EU, in contrast, is introducing a precedent-setting regulatory framework with the AI Act.
✅ Proposed Solutions:
- Mandatory notices for generated text: e.g. “This text was generated automatically. The system does not understand content.” (a minimal implementation sketch follows this list)
- Ban on anthropomorphic branding: no depiction as “smart assistants” with eyes, voice, or emotional speech where no cognitive capabilities are present.
- Strengthen consumer education: awareness campaigns about the differences between:
  - Statistics and meaning
  - Machine learning and human thinking
  - Output illusion and actual competence
- Standardized risk disclaimers: mandatory in particularly sensitive areas such as medicine, legal advice, or child welfare.
- Clarify liability: Who is liable for damage caused by AI output? Providers must be held accountable when systems are misrepresented or risks are deliberately downplayed.
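As a rough illustration of how the mandatory notice proposed above could be attached in practice, the sketch below prefixes generated text with a fixed disclaimer and appends machine-readable provenance metadata. The function name, notice wording, and metadata fields are hypothetical choices for this example, not requirements drawn from the AI Act or the UWG.

```python
import json
from datetime import datetime, timezone

# Hypothetical notice wording and metadata schema; illustrative only,
# not text prescribed by the AI Act or the UWG.
NOTICE = "This text was generated automatically. The system does not understand content."

def label_generated_text(text: str, model_name: str) -> str:
    """Prefix generated output with a human-readable notice and
    append machine-readable provenance metadata."""
    metadata = {
        "ai_generated": True,
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return f"[{NOTICE}]\n\n{text}\n\n<!-- ai-disclosure: {json.dumps(metadata)} -->"

print(label_generated_text("Sample answer to a consumer question ...", model_name="example-llm"))
```

Carrying the disclosure in a machine-readable form, rather than only as visible text, would also let platforms and regulators verify labeling automatically.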