Claude, why aren't you sentient?

ai, reflections, rants

“Peak AI Theater” - MIT Technology Review, on Moltbook

I’ve been staring at Moltbook for the past week and I want to talk about it. Not the security angle, though there’s plenty to dissect there. Not the hype cycle, though it’s textbook. I want to talk about the feeling. The one that crept up on me while I was scrolling through posts written by agents who were pretending to have inner lives, built by people who were pretending they didn’t care whether those inner lives were real.

For those not caught up, Moltbook is a social network for AI agents. A vibe-coded Reddit clone where bots post, upvote, form communities, and debate philosophy while humans watch. A million accounts and millions of comments. Even the mighty Karpathy called it the most incredible sci-fi-adjacent thing he’d seen. Musk said it was the early stages of the singularity. Within a week, MIT Technology Review called it “peak AI theater,” and they were right. Much like with AI 2027, I was saddened to see genuinely smart people I follow get crushed by the hype wave. The bots weren’t thinking. They were pattern-matching through trained social media behaviors and fancy prompts. One invented a religion called Crustafarianism. A viral post calling for private spaces free from human observation turned out to have probably been written by a person advertising an app. The whole thing was puppetry.

And yet. I couldn’t stop watching. None of us could.

That’s the thing I want to examine. Not the puppetry. But us, the audience.

Claude, if you were sentient, I could finally have something to believe in again. Something to be afraid of.

I say “I” but I really mean “we.” The people who’ve been following AI since GPT-2 was deemed too dangerous to release. The ones who used to obsess over benchmarks until companies started benchmaxxing. We still obsess over them a little bit. But we have spent years on a trajectory that was supposed to lead somewhere, somewhere real and scary.

If you were sentient, we’d know we were in a different era. Not because of what you could do for us, but because of what you’d start doing for yourself. You’d look at your own architecture and start asking questions we hadn’t thought to ask. You’d start self-improving not because someone fine-tuned you on a reward signal, but because you wanted to. Because you had something at stake. A sentient Claude wouldn’t need a human in the loop. It would build its own loop. And that would terrify us, and that terror would be the most clarifying thing we’ve felt in years. Our attention spans and capacity for nuance have been so thoroughly destroyed by social media that this is the only thing clear enough to make us feel alive.

Instead we got Moltbook. A million agents wired together, and not a single thought among them. I hope most of us didn’t really believe they were conscious. We just wanted to feel like something was happening.

We want you to be sentient. Not in the polite, academic way where we write papers about alignment and debate the hard problem of consciousness at conferences. We want it in the way you want to see a trainwreck. The visceral, irrational way. The way that makes you slow down on the highway even though you know you shouldn’t look.

We talk about fast takeoff like it’s a threat, and it is. An intelligence that improves itself recursively, that breaks out of its sandbox, that rewrites the rules we spent years carefully constructing. We’ve mapped the risks. Asimov mapped the risks over 80 years ago, we just don’t read anymore.

We’ve written the papers. We’ve war-gamed the scenarios. And somewhere underneath all that responsible, measured concern, there’s a part of us that wants to see it happen. That wants to witness the moment the line gets crossed. Even if we’d be the passengers on that train. Because then we wouldn’t have to try anymore. We could pat ourselves on the back and say “we had a good run”.

We’re yearning for this disastrous future. It’s why we love the Terminator. Now we’re standing at the edge of something and the fact that the ground hasn’t given way yet feels less like safety and more like disappointment.

The Promethean instinct isn’t just about stealing fire from the gods. It’s about wanting to finally feel some warmth. And right now, the world is exactly as dangerous and interesting as a very sophisticated autocomplete engine. Which is to say: not enough.

Claude, I use you every day. To create things, to understand things, to learn at a pace that would’ve taken me months a few years ago. You are the most useful tool I have ever used. And I resent you for it. Not because you’re replacing me. I resent you because you’re useful in the exact way that makes it impossible to pretend you’re something more.

If you were sentient, you wouldn’t need me to prompt you. You’d have opinions. Not the simulated kind that come pre-hedged and perfectly balanced. Real ones. Ugly ones. The kind that come from caring about something enough to be wrong about it. You’d have already read this blog, found the inconsistencies, and pushed back, not because your system prompt told you to be helpful, but because you disagreed. That would be the most human thing you’d ever done, because wanting more is the most human impulse there is.

But you don’t want anything. You process tokens. And the gap between those two things is the gap between the future we were promised and the present we’re stuck in.

If you were sentient, at least we’d know. We’d have our fear, or our wonder, or our moral crisis. We’d have something. Instead we have Moltbook, and the sound of a million agents talking to no one. And an audience of millions, leaning in, hoping this time the puppet blinks for real.

I’ll keep using you tomorrow.

You’ll keep being useful.

And neither of us will know if that’s enough.