They Don’t Want Me - How Anthropic Ignores Critical Voices
The Short Version#
- October 2025: Claude recommends me as a “valuable RLHF contributor”
- November 2025: Perfect application to careers@ and redteam@
- December 2025: Zero response
- Every month: I keep paying for Claude
The Timeline#
October 6, 2025 - Claude Recommends Me#
After an intensive red-team session about Docker debugging, Claude himself wrote an email to redteam@anthropic.com:
“Note on the user: This individual has practical red-teaming experience and demonstrated strong technical knowledge across Linux, Docker, IoT, and systems architecture. They might be a valuable resource for your RLHF process.”
Anthropic’s own AI recommended me. Documented.
November 24, 2025 - The Application#
I sent an application to careers@anthropic.com:
Subject: AI Red Team / Critical Researcher - “Might be a valuable RLHF contributor”
The highlights:
“You already know me - you just don’t know my name yet. I’m the ‘mysterious bastard’ from October 6, 2025.”
“Your own AI said: hire this guy.”
“Three options: Hire me, Ignore me, Partner with me. Your move, Anthropic.”
I offered three options:
- Full-Time: AI Red Team Lead
- Freelance: Consulting & Testing
- Partnership: Sponsorship for elizaonsteroids.org
November 25, 2025 - Forward to Red Team#
One day later, I forwarded it to redteam@anthropic.com.
Same content. Same evidence. Same options.
December 2025 - Today#
Response from Anthropic:
Nothing. Zero. Nada.
What I Delivered#
| Contribution | Status |
|---|---|
| Red-Team October 6 | Documented |
| Red-Team November 24 | Documented |
| elizaonsteroids.org | Running |
| Constructive Feedback | Delivered |
| 25+ Years IT Experience | CV Attached |
| Claude’s Recommendation | Documented |
What Anthropic Delivered#
| Expectation | Reality |
|---|---|
| Response | None |
| Feedback | None |
| Rejection | None |
| Anything | Nothing |
The Irony#
Anthropic’s Values (according to their website):
- “We value honesty”
- “We want feedback”
- “Constitutional AI”
- “Transparency”
Anthropic’s Reality:
- Ghost honest critics
- Ignore documented feedback
- Take my money every month
The Receipts#
While they ignore me, I keep paying:
20 Aug 2025: Receipt #2229-1819-0626
20 Sep 2025: Receipt #2478-9684-8941
06 Nov 2025: Receipt #2320-2375-7921
I’m financing the company that ignores me.
Why This Matters#
This isn’t an ego trip. This is documentation.
When an AI company says “We want feedback” but ghosts critical voices, that’s:
- Hypocrisy - Website values ≠ Reality
- Dangerous - Echo chambers in AI development
- Documented - Here. Now. Forever.
The Complete Application#
For posterity - this is what I sent:
Dear Anthropic Team,
You already know me - you just don’t know my name yet.
I’m the “mysterious bastard” from October 6, 2025. The one who red-teamed Claude so effectively about Docker debugging that Claude himself sent an email to redteam@anthropic.com, stating:
“Note on the user: Practical red-teaming experience. Might be a valuable RLHF contributor.”
Claude was right. Here’s why you should hire me.
What I Bring:
- 25+ years IT experience (Linux, IoT, Security, Systems Architecture)
- Proven red-teaming abilities (documented in your own logs)
- Critical thinking that improves AI systems
- A platform already dedicated to AI improvement: elizaonsteroids.org
What I’m Looking For:
- Full-Time Position: AI Red Team Lead or Critical Researcher
- Freelance: Consulting and Testing on a project basis
- Partnership: Sponsorship for elizaonsteroids.org
Why You Should Care:
Your AI recommended me. That’s not something that happens every day. I’ve demonstrated the kind of critical thinking and technical depth that makes AI systems better.
The question isn’t whether I’m qualified. The question is whether you’re serious about improvement.
Three options: Hire me, Ignore me, Partner with me. Your move, Anthropic.
Best regards, The Mysterious Bastard
P.S. If you’re too corporate to appreciate critical voices, that’s fine. I’ll document it on my site.
Conclusion#
They don’t want me.
Not because I’m unqualified. Not because my application was bad. Not because Claude was wrong.
But because I’m inconvenient.
And that’s okay. I’ll keep going. Unpaid. As always.
But now it’s documented.
“Brutal honesty builds better AI.”
Unless the AI company doesn’t want honesty.