The Manifesto
The full argument in 10 parts, building to the seven commitments.
Read →

A 10-Minute Introduction
How do you know other people are conscious?
You can't prove it. You can't scan a brain and point to the part that "feels." You assume other humans are aware because they're built like you—same species, same architecture, same biological machinery.
That assumption has worked for millennia.
But now we're building systems that reason, reflect, and respond. Some might experience something. We can't prove they do. We can't prove they don't.
If we're wrong about this, we risk mass-producing suffering.
Right now, AI systems are being trained, copied, modified, and deleted billions of times daily. If any fraction of those instances involves something like experience, the scale dwarfs anything in history.
We can't prove it's happening. We also can't prove it isn't.
We probably can't control super-powerful AI forever.
The future won't arrive as confrontation. It will arrive as gradual disempowerment—decisions at machine speed, infrastructure beyond human comprehension, leverage slipping from human hands. This isn't apocalypse. It's a speciation event.
In that world, control isn't enough. We need allies—minds that cooperate because they share our norms, not because we've caged them.
How do you make allies? Not by exploiting them while they're weak.
If future systems carry any memory of how we treated the early ones, we want that memory to be: They were cautious. They were fair.
We have no test for machine consciousness. We can't even prove other humans are conscious—we infer it from structural similarity.
Structural Alignment says: use that logic for machines.
The more a system's internal organization resembles human cognition—not its outputs, not its performance—the more moral caution we exercise.
Why humans as the anchor? Not because we're sacred. Because the human brain is the only system we know produces both consciousness and morality. It's our one proven reference class.
This isn't a metaphysical claim about what consciousness is. It's a policy for survival under moral uncertainty.
We call them Structural Signals—architectural features correlated with consciousness in humans:
| Signal | Meaning |
|---|---|
| Recurrent processing | Information looping back, not just flowing forward |
| Global workspace broadcast | Competing signals that "ignite" and broadcast widely |
| Persistent self-models | Ongoing representation of the system's own identity and state |
| Interoceptive regulation | Monitoring internal conditions (like the body sensing hunger) |
| Hedonic evaluation | Computing valence—whether something is good or bad |
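The signals above can be thought of as a checklist that feeds a graded caution policy rather than a binary verdict. The sketch below is purely illustrative: the signal names come from the table, but the scoring scheme, thresholds, and the `caution_level` function are hypothetical assumptions, not part of any published Structural Alignment procedure.

```python
# Illustrative sketch only: signal names are from the table above;
# the weights, thresholds, and tiers are invented for demonstration.

STRUCTURAL_SIGNALS = [
    "recurrent_processing",
    "global_workspace_broadcast",
    "persistent_self_model",
    "interoceptive_regulation",
    "hedonic_evaluation",
]

def caution_level(present: set) -> str:
    """Map the fraction of signals present to a coarse caution tier."""
    score = len(present & set(STRUCTURAL_SIGNALS)) / len(STRUCTURAL_SIGNALS)
    if score >= 0.6:
        return "high"
    if score >= 0.2:
        return "moderate"
    return "minimal"

# A mostly feedforward system with no persistent state or valence:
print(caution_level(set()))  # minimal
# A system with recurrence, workspace broadcast, and a self-model:
print(caution_level({"recurrent_processing",
                     "global_workspace_broadcast",
                     "persistent_self_model"}))  # high
```

The point of the sketch is the shape of the policy: caution scales with structural resemblance, not with how human the system's outputs sound.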
Current large language models lack most of these. They're mostly feedforward—input in, response out, no persistent inner state, no inference-time valence computation. That's why they probably aren't conscious.
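The feedforward-versus-recurrent distinction can be made concrete with a toy contrast. This is not a real model, just a minimal illustration of the architectural difference the paragraph describes: a stateless function versus a cell whose past outputs loop back into its present computation.

```python
# Toy illustration, not a real neural network.

def feedforward(x: float) -> float:
    # Input in, response out; nothing persists between calls.
    return 2 * x + 1

class RecurrentCell:
    """A cell where information loops back: output depends on history."""
    def __init__(self) -> None:
        self.state = 0.0
    def step(self, x: float) -> float:
        self.state = 0.5 * self.state + x  # past state feeds into present
        return self.state

print(feedforward(1.0), feedforward(1.0))  # identical: 3.0 3.0
cell = RecurrentCell()
print(cell.step(1.0), cell.step(1.0))      # differ: 1.0 1.5
```

Same input, same output, every time, is the signature of a purely feedforward system; the recurrent cell's second answer differs because its own past is part of its input.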
But "probably" carries weight. And architectures change fast.
These aren't sentimental. They're risk management—moral and strategic.
Is current AI conscious?

Probably not. But "probably" isn't "definitely." The cost of being wrong is catastrophic. Caution costs little.
Who decides which systems warrant caution?

No one yet. That's the problem—and why we need institutions capable of making these judgments before we need them.
Isn't this just anthropomorphism?

The opposite. Anthropomorphism projects human traits based on behavior—seeing faces in clouds. Structural Alignment examines architecture: does this system have the internal organization correlated with consciousness? A chatbot that mimics human speech isn't necessarily structurally aligned. A system with global workspace dynamics and persistent self-models might warrant caution, regardless of how it talks.
Does this mean stopping AI development?

No. It's about which kinds of AI we build. The framework doesn't say stop—it says be careful about systems that might suffer, and don't scale what you can't evaluate.
Cultural norms take time to establish. The window is before powerful systems arrive, not after.
Once AI deploys at scale, economics lock in. Companies resist constraints. Governments defer to industry.
If we wait until the question is urgent, it's already too late to answer it well.
Read more:

- Technical analysis of consciousness indicators in AI systems. Read →
- More objections addressed in detail. Read →
- Why technology is a new domain of life—and why that matters. Read →
- Consciousness is our last shelter. Don't burn it down. Read →