AI Is Douchebag Capitalism's Final Form


A couple days ago I wrote about douchebag capitalism — the practice of making greedy decisions you know are harmful because "if I don't do it, someone else will." I touched on AI in that piece, but it deserves its own deep dive. Because AI isn't just another example of douchebag capitalism. It's the final form.

Every other instance of douchebag capitalism — housing, pharma, tech monopolies — at least has natural limits. There are only so many houses to buy, so many drugs to price-gouge, so many markets to corner. AI doesn't have those limits. AI is a technology that could reshape every industry, every job, every power structure on Earth. And the people building it are running the exact same playbook: move fast, capture the market, worry about the damage later.

Except this time, "later" might not come with a redo button.

The Nonprofit That Wasn't

Let's start with OpenAI, because their story is douchebag capitalism condensed into a case study.

OpenAI was founded in 2015 as a nonprofit. The whole pitch was: AI is too important and too dangerous to be controlled by profit-driven companies. So we'll build it responsibly, in the open, for the benefit of humanity. Elon Musk, Sam Altman, and others pledged a billion dollars to get it started. OpenAI Blog

Then came the pivot. In 2019, OpenAI created a "capped-profit" subsidiary. The cap was set at 100x the original investment — so investors could make a hundred times their money, but no more. They said the cap was necessary to attract the capital needed to compete. OpenAI LP

Then in 2024, they started working to remove the cap entirely. Financial Times By 2025, OpenAI had restructured its operating arm into a for-profit public benefit corporation, with the nonprofit left holding a minority stake. OpenAI The board that was supposed to keep things safe? Sam Altman got it restructured after it briefly fired him for — and this part is remarkable — moving too fast on commercialization at the expense of safety. The board did exactly what it was designed to do, and the money won anyway. NYT

Nonprofit → capped profit → uncapped profit → full for-profit corporation. In under a decade. The mission didn't change because they solved the safety problem. The mission changed because there was too much money on the table.

If that's not douchebag capitalism, I don't know what is.

Fire the Ethics Team, Ship the Product

Google's story is just as instructive. In 2020, Google fired Timnit Gebru, the co-lead of its Ethical AI team, after she co-authored a paper highlighting the risks of large language models — the exact technology Google was racing to commercialize. MIT Tech Review A few months later, they pushed out Margaret Mitchell, the other co-lead. Washington Post

Think about what happened there. Google had people whose literal job was to say "hey, this might be dangerous" — and when those people did their job, Google got rid of them. Not because they were wrong. Because they were inconvenient.

This is the pattern across the industry. Safety teams exist for PR. When safety conflicts with shipping, shipping wins. Every time. Because the competitive pressure makes everything else feel optional.

The Arms Race Nobody Can Win

The AI race has a geopolitical dimension that makes it even more dangerous. The U.S. is racing against China. China is racing against the U.S. And both sides use the other as justification for cutting corners.

"We can't slow down because China won't." I've heard this from AI executives, from politicians, from investors. It's the douchebag capitalism logic scaled to the level of nation-states. And it's exactly the same moral surrender: I know this is risky, but someone else will do it if I don't.

The U.S. government has poured billions into AI through the CHIPS Act and defense spending. White House China's government has made AI dominance a national priority, spending an estimated $15 billion annually on AI research. Georgetown CSET Neither side is primarily motivated by "let's make this safe." Both sides are motivated by "let's get there first."

This is what an arms race looks like. And arms races have a well-documented tendency to produce exactly the outcomes everyone was afraid of.
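
To see why the "if we don't, they will" logic is so hard to escape, it helps to write it down as the textbook coordination failure it resembles. Here is a minimal sketch in Python; the payoff numbers are invented purely for illustration and come from no real analysis. The point is the structure: racing is each side's best move no matter what the other side does, and both sides racing is the worst shared outcome.

```python
# Toy model of the AI arms race as a prisoner's dilemma.
# Payoff numbers are invented for illustration only.

PAYOFFS = {
    # (my_choice, rival_choice): (my_payoff, rival_payoff)
    ("restrain", "restrain"): (3, 3),  # both slow down: safest shared outcome
    ("restrain", "race"):     (0, 5),  # I slow down, rival captures the market
    ("race",     "restrain"): (5, 0),  # I capture the market, rival loses
    ("race",     "race"):     (1, 1),  # both cut corners: worst shared outcome
}

def best_response(rival_choice: str) -> str:
    """Pick my move assuming the rival's move is fixed."""
    return max(["restrain", "race"],
               key=lambda mine: PAYOFFS[(mine, rival_choice)][0])

for rival in ("restrain", "race"):
    print(f"If the rival chooses {rival!r}, my best response is {best_response(rival)!r}")

# Output:
#   If the rival chooses 'restrain', my best response is 'race'
#   If the rival chooses 'race', my best response is 'race'
# Racing dominates for each player individually, even though (race, race)
# leaves both worse off than (restrain, restrain).
```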

The Labor Apocalypse They're Not Talking About

Here's where AI douchebag capitalism gets personal for most people.

In 2023 and 2024, tech companies laid off over 400,000 workers while posting record profits. Layoffs.fyi The stated reason was almost always "efficiency" or "restructuring" — corporate euphemisms that increasingly mean "we replaced you with AI, or we're about to."

IBM's CEO told Bloomberg they expected to replace 7,800 back-office jobs with AI. Bloomberg BT Group announced plans to cut 55,000 jobs by 2030, with about 10,000 replaced by AI. BBC Klarna's CEO bragged that their AI chatbot was doing the work of 700 customer service agents. Klarna

And these are the companies being honest about it. Most just quietly let people go and don't mention AI at all.

McKinsey estimated that generative AI and other technologies could automate work activities that absorb 60-70% of employees' time today. McKinsey Goldman Sachs projected that 300 million full-time jobs globally could be exposed to automation. Goldman Sachs These numbers are staggering, and they're being treated as investment opportunities rather than humanitarian crises.
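
For a rough sense of scale, here's a quick back-of-the-envelope check. The global workforce figure is my own approximation of ILO estimates, not a number from either report.

```python
# Back-of-the-envelope scale check on the Goldman Sachs projection.
# The global workforce figure below is a rough approximation (the ILO puts the
# global labor force in the mid-3-billions); it is not taken from the report.

jobs_exposed = 300_000_000        # Goldman Sachs: full-time jobs exposed to automation
global_workforce = 3_400_000_000  # rough approximation of the global labor force

share = jobs_exposed / global_workforce
print(f"Roughly {share:.0%} of everyone working on Earth")  # ~9%
```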

Here's the douchebag capitalism part: every CEO knows that if they don't automate, their competitors will. So they automate. And the savings go to shareholders, not to the workers who just lost their livelihoods, and not to retraining programs, and definitely not to the social safety net that's about to get crushed under the weight of mass unemployment.

The productivity gains from AI are real. The question is who captures them. And right now, the answer is: the same people who always capture them.

The Compute Oligopoly

Want to build a competitive AI model? You need compute. Massive amounts of it. And there are essentially three companies that can sell it to you at scale: Amazon (AWS), Microsoft (Azure), and Google (GCP). Together they control roughly 67% of the global cloud infrastructure market. Statista

The hardware underneath? That's NVIDIA, which controls over 80% of the AI chip market. CNBC Jensen Huang became one of the richest people on Earth selling shovels in an AI gold rush.

This creates a concentration of power that makes the old tech monopolies look quaint. You can't compete in AI without access to compute. The companies that control compute get to decide who competes, on what terms, and at what price. They're simultaneously the infrastructure providers, the AI developers, and the platform owners. They are the railroad barons of the 21st century.

And the barriers to entry keep getting higher. Training a frontier AI model now costs hundreds of millions of dollars. SemiAnalysis That's not a market — it's a club. And the membership fee goes up every year.

Your Data Was Never Yours

Every major AI model was trained on data scraped from the internet — your blog posts, your photos, your code, your art, your conversations. All of it hoovered up without consent, without compensation, and without any real option to opt out.

The New York Times sued OpenAI for training on their articles. NYT Getty Images sued Stability AI for using their photos. The Verge Authors, musicians, and visual artists have filed class-action lawsuits. Reuters But the models are already built. The horse left the barn, and the barn was torn down to make room for a data center.

The legal and ethical argument is straightforward: companies took other people's work, used it to build products worth billions, and shared none of the value with the people who created the training data. That's not innovation. That's extraction.

And the justification? "If we didn't train on it, someone else would have." There it is again.

The Content Apocalypse

AI-generated content is flooding the internet. By some estimates, AI-generated text already accounts for a significant and growing percentage of new content online. NewsGuard found over 1,000 AI-generated news sites operating with little to no human oversight as of mid-2024. NewsGuard

Amazon's Kindle store was overwhelmed with AI-generated books. Reuters Social media is drowning in AI-generated images and videos. Academic journals are finding AI-generated papers slipping through peer review. Nature

This isn't a content revolution — it's a content pollution crisis. The economics are obvious: why pay a writer when ChatGPT is basically free? Why hire a designer when Midjourney does it in seconds? The quality is often mediocre, but mediocre at scale beats excellent at a trickle when your only metric is volume and ad impressions.

The result is an internet that's increasingly filled with machine-generated slop — content that exists not to inform or entertain but to capture clicks and ad revenue. The people who used to make a living creating real content are being priced out by machines trained on their own work.

Deepfakes and the Death of Trust

And then there's the disinformation angle. AI can now generate photorealistic images, convincing audio clones, and video deepfakes that are nearly impossible for normal people to detect.

During the 2024 election cycle, AI-generated robocalls impersonated President Biden telling voters to stay home. NBC News Deepfake videos of political figures are proliferating on social media. Scammers are using voice cloning to impersonate family members in distress. FTC

The technology to create these things is freely available. The technology to reliably detect them is not. And the companies releasing these tools are doing almost nothing to prevent misuse because — again — if they add friction, users go to a competitor that doesn't.

We're building a world where you can't trust what you see, hear, or read. And we're doing it because it's profitable.

The "Move Fast and Break Things" Mentality — Applied to Everything

Silicon Valley's favorite motto was always reckless when applied to social media. Applied to AI, it's genuinely terrifying.

"Move fast and break things" was tolerable when the "things" being broken were taxi monopolies and hotel booking systems. It's a different equation when the things being broken are labor markets, information ecosystems, democratic processes, and potentially — if you believe the people building it — the safety of the entire species.

The AI safety researchers who've been sounding alarms aren't fringe cranks. They include Geoffrey Hinton, who quit Google in 2023 specifically so he could warn about the dangers, and who later won the Nobel Prize for his foundational work on neural networks. NYT Yoshua Bengio, another deep learning pioneer, has called for international regulation. Bengio Hundreds of AI researchers signed a statement saying that mitigating AI extinction risk should be a global priority. CAIS

These are the people who built the technology. And the industry's response has been to acknowledge the risks in interviews and then get back to shipping.

I Use AI Every Day — That's the Point

Here's the part where I need to be clear about something: I'm not anti-AI. I run my businesses on AI. I have an AI agent named Owen that handles operations for me — content creation, customer service, scheduling, analysis. It's genuinely transformative technology. I wrote about it in detail on this blog.

I use AI because it works. It makes my small business competitive in ways that would have been impossible five years ago. The technology itself is extraordinary.

And that's exactly why the douchebag capitalism surrounding it makes me so angry.

Because the problem has never been the technology. The problem is the incentive structure. When the only question is "how do we capture the most value the fastest," you get a technology rollout optimized for extraction instead of benefit. You get AI that replaces workers without any plan for what those workers do next. You get AI that's trained on stolen data and deployed without safeguards. You get an arms race where safety is a PR talking point, not an engineering priority.

The technology could be the most important tool humanity has ever created. It could solve problems we've been stuck on for decades — in medicine, energy, climate, education. But that requires choosing to deploy it that way, and right now, nobody with the power to make that choice has the incentive to make it.

What Would Non-Douchebag AI Look Like?

It's worth asking what the alternative is, because "slow down" isn't a strategy. The genie is out of the bottle. The question is how we handle it from here.

Non-douchebag AI would mean companies sharing the productivity gains with workers, not just shareholders. It would mean real investment in retraining and transition programs — not a paragraph in a press release, but actual money. It would mean the companies making billions from AI paying into a social safety net that can absorb the displacement.

It would mean treating data creators fairly. If your model was trained on someone's work, that person should see some of the value. Not as a favor — as a fundamental principle.

It would mean safety research that's funded at the same level as capability research, not as an afterthought. Right now, for every dollar spent on making AI more powerful, maybe a penny goes to making it safe. GovAI

It would mean international cooperation on AI governance — actual treaties with teeth, not voluntary commitments that evaporate the moment they become inconvenient. The AI Safety Summit at Bletchley Park was a start, but the follow-through has been minimal. UK Gov

None of this is utopian. It's just what responsible deployment looks like. The reason it's not happening is because responsible deployment is slower and less profitable than irresponsible deployment, and the competitive dynamics punish restraint.

The Endgame

In my original article, I wrote about how douchebag capitalism always eats itself. The concentration of wealth, the erosion of public goods, the capture of institutions — it's a pattern that repeats across civilizations.

AI accelerates that pattern. It concentrates wealth faster. It displaces workers faster. It erodes trust faster. It gives the people at the top more leverage and the people at the bottom fewer options. Everything that douchebag capitalism does to a society, AI does at 10x speed.

The people building AI know this. They've read the research. They've seen the projections. Many of them are privately terrified. And they're building it anyway, because the logic of "if I don't, someone else will" has them trapped in a race they can't afford to lose and can't afford to win.

I don't have a neat conclusion here. The technology is too powerful to stop and too important to screw up. What I do know is that the current trajectory — where AI development is driven primarily by competitive pressure and profit maximization — is not going to produce the outcome anyone actually wants. Not even the people making the money.

The question isn't whether AI will transform the world. It will. The question is whether that transformation is shaped by the logic of douchebag capitalism — "capture value first, deal with consequences never" — or by something smarter.

Right now, douchebag capitalism is winning. And unlike every other time it's won, this time the technology it's riding has no natural ceiling, no geographic limit, and no undo button.

That should scare you. It scares me. And I'm the guy who uses it every day.