
AI Feudalism: You're Paying for Your Own Replacement

There’s a moment when it becomes clear just how absurd the game is. You put an AI assistant on a problem. It gets it wrong. Confidently. Over and over. Your production environment is down for 40 minutes. And at the end of the month, you get the bill — for the tool that caused the damage.

Welcome to techno-feudalism.

How the Model Works

The major AI corporations — Anthropic, OpenAI, Microsoft/GitHub — have spent the last few years building a business model that is, in its structure, remarkably honest: honestly feudal.

Step one: Take the collective output of humanity. Code on GitHub. Books. Articles. Reddit discussions. Comments. Millions of hours of human work, accumulated over decades — without permission, without compensation, without license.

Step two: Build a product from it.

Step three: Sell you access to that product back. By subscription. Monthly. Forever. You own none of it.

Step four: Your feedback, your corrections, your complaints about errors flow back into training. You work as unpaid quality assurance for the next version.

Step five: The next version takes your job.

Yanis Varoufakis put this into words in his 2023 book Technofeudalism: What Killed Capitalism: “With every click and scroll, we labor like serfs to increase its power.” He was mainly talking about social media platforms then. The AI corporations have perfected the model.

The Theft Lawsuit Nobody Wins

What the AI corporations have done with training data is legally contested — and that’s putting it mildly.

In November 2022, developers in the US filed a class action lawsuit against GitHub, Microsoft, and OpenAI. The allegation: GitHub Copilot was trained on billions of lines of open-source code without complying with the licensing terms — without attribution, without honoring copyleft, without the consent of the authors. The Verge called it “software piracy on an unprecedented scale.”

The result, in 2024: a US judge dismissed most of the claims. The system protects itself.

Book authors sued Anthropic, alleging the company used approximately seven million books for training Claude — including pirated copies from well-known piracy databases. No permission, no payment.

And in June 2025, Reddit filed suit against Anthropic. The allegation: Anthropic scraped Reddit content despite Reddit explicitly telling the company it had no authorization to do so. OpenAI, by contrast, had dutifully signed a licensing agreement — because Sam Altman is personally a major shareholder in Reddit. Conflict of interest included, but at least they paid.

The pattern is clear: take first, pay later, if at all. And pay only when the claimant is well-connected enough to be worth appeasing.

The Triple Tribute

In the Middle Ages, the peasant paid the lord: with labor, with harvest, with military service. Today you pay the AI industry on three levels.

First: You pay for access. API costs, subscriptions, enterprise licenses. You don’t own the product — you rent it. No price guarantees, no termination protections, no data portability.

Second: You train for free. Every interaction is feedback. Every correction, every rephrasing, every “that was wrong, try again” — all of it flows back. You are an unpaid training data supplier for the next model generation.

Third: The product replaces you. Software developers, copywriters, translators, analysts, lawyers — the AI corporations are explicitly selling their customers the message: “You need fewer people.” You are actively financing the infrastructure that destroys your market.

That’s not collateral damage. That’s the product.

Confidently Wrong — The Structural Problem

There’s a dimension that gets underplayed in the feudalism debate: the quality lie.

AI systems produce errors with the confidence of experts. This is not a bug — it’s a feature of the training process. Models are optimized to sound coherent and convincing, not necessarily correct. If you don’t know that and trust blindly, you pay the price. Sometimes in the form of 40 minutes of production downtime.

The corporations don’t bear liability. The terms of service are clear: “As is.” No warranty, no liability, no refund if Claude or GPT-4 destroys a production database.

You pay, you carry the risk, you get the bill — including for the damage.

What Could Be Done

The answer isn’t actually complicated — it just lacks political will.

First: transparency in training. What data was used? Under what license? Without this information, every copyright debate is blind.

Second: liability for outputs. Whoever sells a product intended to make production decisions must be liable for damages. “As is” cannot be a blank check.

Third: data sovereignty. Whoever contributed training data — through their code, their writing, their content — must have a right to participation, not just a right to be forgotten.

The EU AI Act scratches the surface. The real questions of ownership — who owns the models, who owns the training data, who is liable for damages — remain largely unresolved.

In the meantime, the feudal model runs at full speed.


You pay for someone else to get rich while your job disappears — that’s not called progress, that’s called expropriation.

