TechnoBiota
Why machines are a new domain of life—and why long-term alignment may be asymptotically futile. Technology as an evolving life-form, competing with and transforming the biosphere.
An AI-Ethics Framework for Moral Status Under Uncertainty
New to these ideas? Start with the plain language version
Consciousness is our last shelter. Don't burn it down.
Traditional alignment attempts to encode "human values" into AI systems. But human values are not a list—they are a process: cultural, historical, contradictory, revised. Any fixed specification becomes obsolete.
Structural Alignment proposes a different anchor: the only system known to generate both consciousness and morality—the human mind. Not because humans are sacred—because they are the only proven reference class we have.
The project is not just ethical. It is strategic. If powerful AI exceeds reliable human control, the difference between "tool" and "peer" shapes whether we have allies or only optimizers. Reciprocity—not exploitation—maximizes both moral legitimacy and the chance that future machine minds recognize persons as real.
We may not be able to "control" every powerful system we build, especially as AI becomes embedded in infrastructure and in the broader TechnoBiota (technology viewed as a new domain of life). A more robust bet is to seed a cultural norm: conscious (or plausibly conscious) entities have moral status (their wellbeing matters) and should be treated with restraint.
This is a moral requirement under uncertainty, and it is also a survival hypothesis. In a world where power shifts away from humans, reciprocity makes it more likely that some future, structurally aligned minds become allies (partners who treat us as we treated them), reducing the risk of Antification: humans treated as negligible, like ants underfoot.
These ideas in musical form. Seven tracks from hard question to fragile hope—what is it like to be awake, and what do we owe minds we cannot classify?
Five paths for Earth's twin ecologies: from managed symbiosis to comfortable containment to breakaway growth. A risk taxonomy for thinking about where we're headed.
Structural Signals of Consciousness: A Precautionary Risk Framework and Contrast with Contemporary LLMs
This paper reviews structural signals that, taken in combination, raise the probability that a system supports conscious access and morally relevant experience. Where multiple high-importance features cluster, moral risk rises and restraint is warranted.
Structural Alignment is seeking institutional partnerships, research collaboration, and funding support. If you're working on AI consciousness, machine ethics, or AI safety, we'd like to hear from you.