Why I Run My Business on Anthropic — and the Cancel ChatGPT Movement Proved Me Right

Illustration of the ChatGPT logo shattering as people walk away, with the Pentagon and surveillance cameras looming in the background

I run an AI agent named Owen. He publishes content for my clients, monitors ad campaigns, handles communications, produces videos, manages my schedule — basically runs the operational backbone of my business. Owen's brain is Anthropic's Claude Opus 4.5, the most capable model Anthropic makes.

People ask me why I didn't go with ChatGPT. It's the bigger name. It's what everyone knows. And until today, my answer was mostly about capability — Opus 4.5 reasons better, follows complex instructions more reliably, and handles nuance in ways that GPT still struggles with.

Today, I got a much better answer.

What Happened Today

Anthropic — the company behind Claude — told the U.S. Department of War "no." Two red lines: no autonomous weapons, no mass surveillance of American citizens. The Pentagon wanted full, unrestricted access to Claude. Anthropic declined. They were immediately designated a supply chain risk and banned from government use.

Within hours, Sam Altman swooped in. OpenAI pledged ChatGPT and its other technologies to the Pentagon. Altman claimed on X that the models wouldn't be used for mass surveillance. A government official immediately contradicted him, saying OpenAI's tools would be available for "all lawful means."

The Patriot Act makes mass surveillance of communications metadata legal in certain scenarios. So Altman's promise is worth exactly nothing when the legal framework already permits what he's claiming won't happen.

Why I Chose Anthropic in the First Place

When I was building Owen, I tested everything. ChatGPT, Claude, Gemini, open-source models. I wasn't picking a chatbot — I was picking the foundation for an autonomous agent that would operate across my entire business. The stakes were different.

Opus 4.5 won on the technical merits. It handles multi-step reasoning better. It maintains context over longer conversations without losing the thread. When Owen is managing a WordPress publishing schedule for four different clients, coordinating Google Ads monitoring, and producing video content — all simultaneously — I need a model that doesn't lose its mind halfway through a complex task. Opus 4.5 does that consistently.

But there was another factor I weighted heavily: who built it, and what they seem to care about.

Anthropic was founded by former OpenAI researchers who left specifically because they were concerned about safety. Their whole company thesis is that AI is potentially dangerous and needs to be developed carefully. You can debate whether they live up to that perfectly — no company does — but at least the institutional DNA points in the right direction.

OpenAI started as a nonprofit dedicated to safe AI development. Then it became a capped-profit company. Then the cap got higher. Then Sam Altman got fired by his own board over safety concerns. Then he came back. Then they restructured again. And now they're valued at $730 billion and pledging their tech to the Pentagon. That trajectory tells you everything.

The Cancel ChatGPT Movement

Reddit is on fire right now. Threads with thousands of upvotes show people canceling their ChatGPT Plus subscriptions. The sentiment is straightforward: people don't want to fund AI that might be pointed at them.

Will it dent OpenAI's $730 billion valuation? Probably not in the short term. But trust erosion compounds. Every time a company shows you who they are, some percentage of users remembers. And the people canceling today aren't casual users — they're the power users, the developers, the people building on the platform. Those are the hardest customers to win back.

Principles vs. PR

Anthropic wanted contractual control over how its technology would be used. That's a principle — it has teeth. OpenAI deferred to the government's interpretation of existing laws. That's PR — it has plausible deniability built in.

If you run a business, you understand this instinctively. You don't hand a contractor your credit card and say "I trust you to only charge what's fair." You define the scope. You set the limits. Anthropic tried to set limits. OpenAI said "you decide."

And let's look at the rest of the field. Google quietly removed its ban on autonomous weapons from internal rules last year. Microsoft is fine with autonomous weapons as long as a human "pulls the final trigger." Amazon has nothing but vague "responsible use" language. Meta has been courting Pentagon contracts. Palantir is enthusiastically all-in.

Nobody's clean in this industry. Every major LLM was built on scraped data from the open internet. There are no saints. But there are degrees, and those degrees matter when you're trusting a company's model to run your business.

What This Means If You're Building on AI

If you're a business owner using AI — really using it, not just playing with chatbots — you need to think about who's behind the model. Not because of some abstract moral argument, but because the decisions these companies make today will determine what their products become tomorrow.

A company willing to hand unrestricted access to a government with a track record of surveillance overreach is a company that will make other compromises when the money is right. That calculus affects product decisions, safety investments, data handling, and ultimately the reliability of the tools you're depending on.

Owen runs on Opus 4.5 because it's the best model I've tested for what I need. But today proved that the company behind the model matters just as much as the model itself. Anthropic told the most powerful government on earth "no" and took the hit. That's not marketing. That's an actual decision with actual consequences.

These AI models still hallucinate. They still fail at basic logic puzzles sometimes. They can be skewed by surprisingly small data-poisoning attacks. The technology is powerful but far from perfect. Given all that, I'd rather build on a foundation where the people in charge are at least trying to be careful with it.

The Bottom Line

I chose Anthropic for Owen before today. The technical capabilities of Opus 4.5 justified that choice on their own. But today, Anthropic showed me something that doesn't show up in benchmark scores — they're willing to lose money over a principle.

OpenAI showed me the opposite. When the Pentagon came calling, one company said "not like this." The other said "how much?"

My business runs on the one that said no. And after today, I'm more confident in that decision than ever.

Mike Slatton

Founder, Pro Level Gear LLC — Building AI-powered marketing systems for small businesses.