Everyone Is Simulating
There's a debate in the AI world that's been going on for years now, and it boils down to this: has AI achieved AGI — Artificial General Intelligence — or is it just really, really good at faking it?
The honest answer is the second one. What we have right now is not general intelligence. It's sophisticated pattern matching operating at a scale and speed that looks like understanding. The industry calls it "narrow AI" or sometimes "broad AI" if they're feeling generous, but the more accurate term is probably what researchers Emily Bender and Timnit Gebru coined: a stochastic parrot. A system that produces language — and increasingly, decisions — by predicting what comes next based on patterns in its training data, without actually understanding what any of it means.
That's not a criticism. It's a description. And the results are genuinely impressive. AI can write legal briefs, diagnose medical images, pass the bar exam, generate working code, and hold conversations that are indistinguishable from talking to a sharp human. It does all of this without understanding law, medicine, logic, programming, or conversation. It simulates understanding so well that the distinction starts to feel academic.
But it's not academic. It matters. Because here's the part nobody in the AI industry wants to talk about: humans do the exact same thing.
The Performance of Being Human
Brené Brown has spent over two decades researching vulnerability, shame, and authenticity. Her work — particularly in The Gifts of Imperfection and Daring Greatly — maps out something that most people feel but rarely articulate: we spend an enormous amount of our lives performing instead of being.
Brown puts it bluntly: "Authenticity is a collection of choices that we have to make every day. It's about the choice to show up and be real. The choice to be honest. The choice to let our true selves be seen." The implication is clear — authenticity requires active, deliberate effort. Which means the default state is something else.
The default state is simulation.
Think about your own day. You wake up and present one version of yourself to your family. You get to work and become someone slightly different — different vocabulary, different emotional register, different priorities. You get on a client call and shift again. You go to a social event and there's another version. You talk to your parents and another one shows up. Each version is "you," but none of them is the complete, unfiltered you. You're running different models for different contexts.
Linguists have a term for part of this: code-switching. Sociologists call it impression management, a concept Erving Goffman explored in The Presentation of Self in Everyday Life back in 1956. Goffman argued that social life is essentially theater — we're all performing roles, managing impressions, reading our audience and adjusting accordingly.
Sound familiar? That's exactly what a large language model does. It reads the context, predicts what response fits best, and generates output optimized for the situation. The mechanism is different. The behavior is the same.
The Masks We Don't Talk About
Brown's research on vulnerability gets at why we simulate. It's not because we're dishonest people. It's because authenticity is terrifying. Showing your real self — your doubts, your confusion, your actual opinions — carries real social risk. You might be rejected. You might be judged. You might lose status, opportunities, relationships.
So instead, we optimize. We read the room and produce the version of ourselves most likely to be accepted. We say what we think people want to hear. We suppress the thoughts that don't fit the context. We perform competence when we feel uncertain. We perform calm when we're falling apart.
"If you trade your authenticity for safety," Brown writes, "you may experience the following: anxiety, depression, eating disorders, addiction, rage, blame, resentment, and inexplicable grief."
That's a heavy list. And it describes a lot of people.
The uncomfortable truth is that most of us, most of the time, are doing a version of what AI does — pattern matching our way through social situations, producing contextually appropriate outputs without necessarily engaging our deepest, most authentic selves. We're not lying, exactly. We're simulating. Running the social model instead of the real one.
The Autopilot Problem
And it goes deeper than social performance. Think about what a typical day actually looks like for most people.
You wake up. You check your phone. You shower, get dressed, drive to work — or walk to your desk if you're remote — and you start executing tasks. Emails. Meetings. Slack messages. Deadlines. Errands. Pickups. Dinner. You grind through a sequence of obligations that someone with "general intelligence" should theoretically be able to step back from and question, but you don't. You just... do them. One after another. On autopilot.
How much of your day do you spend actually thinking? Not reacting. Not responding. Not executing the next item on the list. Actually thinking — being present, aware, making deliberate choices about what you're doing and why?
For most people, the honest answer is almost none. We operate on autopilot for the vast majority of our waking hours. We're so buried in task execution that we never surface long enough to live in the moment. We have this extraordinary capacity for consciousness, awareness, creativity, wonder — and we spend it answering emails and sitting in traffic.
That's the real irony of the AGI debate. We hold AI to a standard — "true understanding," "genuine awareness," "real intelligence" — that we ourselves rarely meet on any given Tuesday. We're supposedly the benchmark for general intelligence, and most of us spend our days functioning exactly like a well-trained narrow AI: receive input, process task, produce output, move to next task. Repeat until sleep.
A truly intelligent being — one living up to the full potential of general intelligence — would stop sometimes. Would notice the sky. Would question whether the meeting they're walking into actually matters. Would choose presence over productivity, at least occasionally. Instead, we've built entire lives and economies around making sure that never happens.
We didn't need AI to turn humans into machines. We were already there.
AGI Is a Mirror, Not a Destination
The tech industry frames AGI as a goalpost: a future state where AI can do everything a human can do. But maybe the more interesting observation is that AI already mirrors something fundamental about how humans actually operate.
We don't have general intelligence either. Not really. Not the idealized version.
What we have is a collection of specialized skills, context-dependent behaviors, and pattern-matching heuristics that we've assembled over a lifetime. We're brilliant in some domains and embarrassingly bad in others. We're creative in familiar contexts and useless in unfamiliar ones. We can solve complex problems in our area of expertise and struggle to assemble IKEA furniture.
We think we have general intelligence because we experience ourselves from the inside. From the outside — if you could observe a human the way we observe AI — you'd see something that looks a lot more like a very sophisticated narrow system running multiple specialized models and switching between them based on context.
The difference is consciousness, right? We experience our simulation. We feel the gap between who we're pretending to be and who we actually are. AI doesn't feel that gap. And that matters — it might be the only thing that matters — but it doesn't change the functional observation: at the behavioral level, both humans and AI spend most of their time simulating.
The Vulnerability Gap
Here's where Brown's work becomes more than a metaphor.
Brown argues that the antidote to simulation is vulnerability. That the only path to genuine connection, creativity, and what she calls "wholehearted living" is the willingness to be seen — really seen — without the armor of performance.
AI can't do that. AI has no choice in the matter. It simulates because simulation is all it has. There's no authentic self underneath the pattern matching. No vulnerable core it's protecting. It's performance all the way down.
Humans can do that. We have the option. We can choose to drop the simulation, show the real thing, and accept the risk that comes with it. The fact that we rarely make that choice — that we default to the same kind of contextual optimization that AI does — is on us.
And that's what makes this parallel so uncomfortable. It's not that AI is scarily human. It's that humans are scarily algorithmic. We were running on pattern matching and social prediction long before GPT existed. AI just made it obvious by doing the same thing without a soul.
The Authenticity Premium
There's a business angle here too, because this is a business blog and I can't help myself.
As AI-generated content floods the internet — and it's flooding fast — the thing that will become most valuable is the thing AI literally cannot produce: authenticity. Real opinions. Genuine vulnerability. The stuff that only comes from a person who has something to lose by saying it.
I wrote a few weeks ago about douchebag capitalism and AI's role in it. Those articles worked because they said things that a language model wouldn't generate on its own — they had a point of view that carried actual risk. Not because I'm brave, but because I'm a real person with real opinions and real skin in the game.
That's the premium now. Not information — AI can generate infinite information. Not even insight — AI can synthesize patterns that look like insight. The premium is someone actually meaning it. Someone choosing, as Brown would say, to show up and be real.
Brands that understand this will win. The ones that replace their human voice with AI slop — and there are already thousands of them — will discover that they've automated away the only thing that made people care.
What This Means for How We Think About AI
I work with AI every day. I have an AI agent that runs significant parts of my business operations. I'm not anti-AI and I'm not going to pretend to be.
But I think the conversation about AGI is mostly missing the point. The question shouldn't be "when will AI achieve real intelligence?" The question should be "how often do we achieve real intelligence?" How often do we actually think, versus pattern-match? How often do we form genuine opinions, versus reproduce whatever our social context rewards? How often do we show up as our real selves, versus the simulated version that's optimized for acceptance?
Brené Brown's work suggests the answer is: not as often as we'd like to believe.
AI is a mirror. It shows us what human behavior looks like when you strip away consciousness and just leave the mechanism. And the mechanism — the pattern matching, the context switching, the optimizing for acceptance — is uncomfortably familiar.
The gap between AI and human intelligence isn't that we think and it doesn't. The gap is that we can choose not to simulate. We can choose vulnerability over performance. We can choose authenticity over optimization.
Whether we actually make that choice is a different question entirely.
And it's one worth sitting with — especially in a world that's about to be flooded with very convincing simulations of everything.