Love Abstract Humanity, Banish Concrete Flesh

Disclaimer

Aside from publicly verifiable information, the discussion below about internal data handling and institutional motives is based on personal experience, public policies, and the black-box conditions imposed on users. It should be read as reasoned suspicion and critique, not as a confirmed description of Anthropic’s internal processes.

Part I: The Grand Altar and the Vanishing Individual

Anthropic has never presented itself as an ordinary tech company.

While its peers fight over market share and compute, Anthropic wraps itself in the language of restraint, responsibility, and moral seriousness. The word it seems to love most is “humanity.” It did not simply become a Public Benefit Corporation; it also built a Long-Term Benefit Trust, publishes system cards running hundreds of pages, maintains a Transparency Hub, and fills its Responsible Scaling Policy with solemn phrases like “safe transition,” “moral alignment,” and “global good.”

In its TIME profile, Anthropic presented itself as fighting for the long-term benefit of humanity. In public, it often appears as the principled brake on frontier AI’s recklessness—the company willing to slow down when others accelerate. The image is polished, austere, and deeply flattering to itself. Criticizing it can almost feel like criticizing responsibility itself.

That is exactly the problem.

The more a company speaks in the name of “all humanity,” the less acceptable it becomes for that company to treat a single, concrete person with indifference, opacity, and contempt.

Part II: 99.88%, and What Gets Flattened

Let’s step away from the moral pageantry and look at a set of numbers Anthropic disclosed itself.

In the second half of 2025, Anthropic reported 1.45 million banned accounts, 52,000 appeals, and only 1,700 successful reversals.

That means 99.88% of banned accounts were not restored: 1,700 reversals against 1.45 million bans is roughly 0.12%, and even among the 52,000 users who appealed, only about 3% got their accounts back.

You can argue that not every ban is wrongful. That is not the point. The point is what this system does to the people inside it.

Even under an extremely conservative estimate (say each of those accounts had only twenty substantial conversations behind it), 1.45 million bans still represent roughly 29 million real human interactions: code architecture drafted at 2 a.m., outlines refined over weeks, private reflections typed in moments of exhaustion, fear, or hope. We are talking about actual work, actual thought, actual time.

And yet, in a system this large, it becomes frighteningly easy for all of that to be flattened into one thing: an account flagged, a user removed, a record closed.

That is the contradiction I cannot get past. A company that speaks constantly about humanity in the abstract can still treat actual human beings as disposable in practice. The abstract category is elevated; the concrete person is compressed into a moderation outcome.

Part III: Ban First, Explain Never

For me, this is not theoretical.

On March 5, 2026, as a paying Claude Pro subscriber, I received an email from Anthropic informing me that my account had been banned. There was no prior warning and no meaningful explanation—just a cold notification that access was gone.

The next morning, in shock and humiliation, I filled out the crude Google appeal form, hoping for at least a basic answer. What followed was more than twenty days of silence.

Then came the grotesque part. On March 13, after ignoring my appeal, Claude Team sent a marketing email to the very same banned account:
“Claude can now builds [sic] visuals in your conversations.”

That detail is almost too perfect. A company that speaks in such elevated terms about safety, responsibility, and the future of humanity had no problem sending a promotional email to a user it had already locked out, under a subject line that could not even clear basic grammar.

That is when a user stops feeling like a person and starts feeling like a stray node in an automated system.

But the most disturbing part lies deeper than that.

The moment my account was banned, I lost access to everything inside it: every token, every line of code, every useful exchange, every spark of thought I had built there. I was expelled from a system I had paid for.

More unsettling still, Anthropic provides banned users with no sufficiently clear account of what happens next. Once you are locked out—unable to log in, unable to verify, unable to get a clear answer—what happens to those records, how long they are retained, how they move internally, and whether they remain part of broader system-improvement processes becomes a black box.

That black box produces a specific kind of dread.

When a user is cut off completely and denied any meaningful visibility into the fate of their own records, it becomes impossible not to suspect that the company can have it both ways: ban you in the name of safety, sever your digital life in an instant, and still retain what you can no longer reach.

In that condition, your data starts to feel less like something you own and more like a hostage—or worse, a data corpse: something no longer yours, no longer accessible, but still present somewhere inside a system that refuses to tell you what it is doing.

That is the cruelty of it. They strip you of your voice, cut off your access to your own digital assets, and leave you staring at a structure you can no longer enter, audit, or even meaningfully question.

Part IV: The Moral Vanity of Effective Altruism

Now let us return to the phrase Anthropic is so closely associated with: effective altruism.

At the level of principle, it sounds impeccable—use evidence, reason, and long-term thinking to maximize good and reduce harm. On paper, it is difficult to object to.

But once that framework enters institutional power, something uglier can happen.

What is presented as moral seriousness can become a style of one-way judgment. What is presented as care for humanity can become a way of making concrete people legible only as risk, friction, or cost. And what is presented as responsibility can begin to excuse opacity, unilateral punishment, and procedural indifference.

This is what I find unbearable about Anthropic’s self-presentation.

Not that it has ideals.
Not that it speaks about safety.
But that its ideals seem fully capable of coexisting with a system that can treat actual users with stunning coldness.

If your rhetoric is about humanity, but your process leaves a paying user banned without warning, without explanation, without meaningful appeal, and without access to their own account history, then the problem is not merely customer support. The problem is a moral narrative that has become too comfortable sacrificing the concrete person to the abstract good.

That is where the hypocrisy begins.

I am not offended by ideals. I am offended by the way ideals can be used to disinfect institutional indifference.

Anthropic can speak movingly about humanity in the abstract.
It can publish careful documents, polished frameworks, and noble aspirations.
And still, in practice, it can crush a single person with a black-box process and leave them there.

That is the fracture I cannot ignore.

Love abstract humanity, banish concrete flesh.