What IIT Gets Right (And Wrong)
Integrated Information Theory is the most mathematical theory of consciousness we have. It's also incompatible with the indicator approach—but not with structural alignment.
Reflections on machine consciousness and structural alignment
Occasional essays exploring the ideas behind Structural Alignment—from philosophical foundations to practical implications for AI development and governance.
You're on an ethics board. A new AI system is up for deployment. Someone asks: "Should we be worried this might be conscious?" This guide offers one possible approach to that moment.

Two research programs try to move beyond vibes when assessing machine consciousness. The consciousness-indicators approach asks what evidence should move our beliefs; Structural Signals asks what should trigger restraint when the downside is morally catastrophic.

The most dangerous AI won't hate us. It won't notice us. Antification—becoming to advanced AI what ants are to humans—is the existential risk that doesn't require malice.

Structural alignment uses human cognition as the reference point for moral consideration. But couldn't a superior AI use the same logic against us? A self-critique of our own framework.