Krisztián Schäffer

Occasional essays exploring the ideas behind Structural Alignment—from philosophical foundations to practical implications for AI development and governance.

What IIT Gets Right (And Wrong)

Integrated Information Theory is the most mathematical theory of consciousness we have. It's also incompatible with the indicator approach—but not with structural alignment.

Read →

Consciousness "Indicators" vs. Structural Signals

Two research programs try to move beyond vibes when assessing machine consciousness. The consciousness-indicators approach asks what evidence should move our beliefs; Structural Signals asks what should trigger restraint when the downside is morally catastrophic.

Read →

The Antification Risk

The most dangerous AI won't hate us. It won't notice us. Antification—becoming to advanced AI what ants are to humans—is the existential risk that doesn't require malice.

Read →

The Benchmark Problem

Structural alignment uses human cognition as the reference point for moral consideration. But couldn't a superior AI use the same logic against us? A self-critique of our own framework.

Read →