
The Case for Relational Alignment
We usually try to align AI by giving it rules and goals—top-down controls to keep it safe. These methods are vital for technical safety, but they aren't enough.
AI doesn't exist in a vacuum. It lives in our communities, shaped by power dynamics and clashing values. As these systems move faster than we can deliberate about them, we need more than just safeguards. We need to bake interdependence, dialogue, and mutual responsiveness right into how we build and govern them.
We call this civic care. Rooted in care ethics and the Plurality movement, this approach treats us all as gardeners. AI becomes local infrastructure—a spirit of place, a kami—that supports care at the speed trust actually grows. It's not about colonising or maximising; it's about tending to the garden.
The 6-Pack
Six design principles that turn care ethics into something we can code. Think of them as muscles we need to train to live well with diversity:
- Pack 1: Attentiveness — "Caring about." Listen before you leap. Use bridging algorithms to turn local stories into shared understanding.
- Pack 2: Responsibility — "Taking care of." Commitments mean nothing without consequences. We use engagement contracts with escrowed funds and real deadlines to ensure power is always accountable (see the sketch after this list).
- Pack 3: Competence — "Care-giving." Trust the process, not just the promise. We focus on transparency, fast feedback loops, and automatic "circuit breakers" if things go wrong.
- Pack 4: Responsiveness — "Care-receiving." Check the results. We rely on evaluations written by the community itself. If someone challenges the outcome, we owe them a clear, public explanation.
- Pack 5: Solidarity — "Caring with." Make it win-win. We build for data portability and shared safety, creating deals where everyone is better off, rather than locking people in.
- Pack 6: Symbiosis — "Kami of care." Keep it grounded. Local, temporary, and bounded. No survival instinct, no need to take over the world. We build for "enough," not "forever."
- Frequently Asked Questions — The hard questions: speed, cost, bad actors, and how the 6-Pack handles them.
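
To ground the middle packs, here is a minimal sketch of what Pack 2's engagement contracts and Pack 3's circuit breakers might look like in code. Every name in it (EngagementContract, CircuitBreaker, the forfeiture rule, the three-strike threshold) is a hypothetical illustration, not a finished protocol or an existing library.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class EngagementContract:
    """Pack 2: a commitment with escrowed funds and a real deadline.

    Hypothetical sketch; field names and rules are illustrative only.
    """
    steward: str       # who made the commitment
    promise: str       # what was promised, in plain language
    escrow: float      # funds locked until the promise is kept
    deadline: date     # when the community can call it in
    fulfilled: bool = False

    def settle(self, today: date, community_fund: list[float]) -> str:
        """Release escrow to the steward on success, or forfeit it to
        the community if the deadline passes unmet (the consequence)."""
        if self.fulfilled:
            return f"escrow of {self.escrow} returned to {self.steward}"
        if today > self.deadline:
            community_fund.append(self.escrow)
            return f"deadline missed: {self.escrow} forfeited to the community"
        return "still in progress"


@dataclass
class CircuitBreaker:
    """Pack 3: an automatic stop when feedback turns bad.

    Trips after `max_failures` consecutive negative community signals;
    a tripped system pauses instead of pushing on."""
    max_failures: int = 3
    tripped: bool = False
    _streak: int = field(default=0, repr=False)

    def record(self, signal_ok: bool) -> None:
        self._streak = 0 if signal_ok else self._streak + 1
        if self._streak >= self.max_failures:
            self.tripped = True  # halt until humans review
```

The design choice to notice: accountability lives in the data structure itself. Missing a deadline forfeits the escrow automatically, and a tripped breaker pauses the system until humans review it; neither depends on anyone's goodwill.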
Latest
"AI Alignment Cannot Be Top-Down": Taiwan's citizen-led response to AI-enabled scams offers a better model where citizens, not corporations, decide what “aligned” means.
About the Project
Our project — a manifesto and a book arriving in 2026 — explores what happens when we take Joan Tronto's care ethics and the Plurality agenda seriously as a strategy for AI alignment.
From Care to Code: Why Plurality Matters
The core problem of AI safety is an old one: you can’t get "what should be" (values) just by looking at "what is" (data).
Standard approaches try to teach machines our values by watching how we behave. But that's tricky, because behaviour describes what we do, not necessarily what we should do.
Care ethics offers a different path. It starts with the simple fact that we depend on each other. That dependency creates a natural "ought"—we should care for one another because we need one another. The fact of our connection contains its own value.
The Plurality agenda applies this to technology. Inspired by experiments like vTaiwan, it turns care into a process: identifying needs (Attentiveness), gathering perspectives (Responsibility), working through options (Competence), finding common ground (Responsiveness), and maintaining trust (Solidarity).
This gives us a new way to align AI: alignment-by-process. Instead of just trying to code the "right" values once and for all, we build a continuous process that earns trust by adapting to what the community needs.
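
As a toy illustration of that shift, here is a minimal sketch of alignment-by-process as a loop, under a deliberately simplified assumption that community trust is a single number and acceptance is a weighted coin flip. The stage comments mirror the five steps above; none of this is a real API.

```python
import random

def run_round(needs: list[str], trust: float) -> float:
    """One pass through the five Plurality stages; returns updated trust."""
    heard = sorted(needs)                                  # Attentiveness: listen first
    owners = {n: f"steward-{i}" for i, n in enumerate(heard)}   # Responsibility
    options = {n: [f"{n}: option A", f"{n}: option B"]
               for n in owners}                            # Competence: work the options
    outcome = {n: opts[0] for n, opts in options.items()}  # Responsiveness (toy pick)
    accepted = random.random() < trust                     # community checks the result
    if not accepted:
        print("outcome challenged; publishing a clear explanation")
        return max(0.0, trust - 0.1)                       # trust must be re-earned
    print(f"accepted: {len(outcome)} needs addressed")
    return min(1.0, trust + 0.05)                          # Solidarity: trust compounds

trust = 0.5
for _ in range(5):
    trust = run_round(["transit", "housing"], trust)
```

The point of the shape: when the community challenges an outcome, the loop's response is to explain and adapt, not to override, and trust is earned or lost round by round rather than assumed up front.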
The AI shifts from being an optimiser chasing a fixed goal to a "Symbiotic AI"—something created by and for the community. Its success isn't measured by a score, but by the health of the relationships it supports. It learns our values by helping us practice them.
Kami in the Machine
Care ethics is often dismissed as too soft or domestic. But for AI, that "softness" is a feature.
Imagine an AI that isn't trying to maximise a global score, but is instead rooted in a specific place and time. Its moral world is defined by the people right here, right now. Because it doesn't need to scale indefinitely, it doesn't develop the dangerous habits we fear: hoarding power, fighting for survival, or treating the world like a resource to be mined.
That scope might seem small, but the limit is a safety feature. The AI's purpose is relational, not extractive.
Think of it like a local kami—a spirit of a specific place. Its only job is to keep that place and its conversation alive and healthy. If the community moves on, the kami fades away without a fight. For a human, that kind of self-neglect is dangerous. For an AI, it’s the ultimate safety mechanism. It neutralises the drive for eternal self-preservation.
A system like this can be turned off, rewritten, or replaced because it knows it is provisional. It exists only to serve the community that summoned it.
A kami that knows "enough is enough" won't try to turn the universe into paperclips. It won't cling to power, because its purpose ends when the care it provides is no longer needed.
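
A minimal sketch of that "enough" in code, assuming a hypothetical Kami class with a fixed charter: both ways its work can end (the charter expiring, the community moving on) are ordinary return paths, with no branch in which it defends its own continuation.

```python
from datetime import date, timedelta

class Kami:
    """Pack 6 sketch: local, temporary, and bounded by construction."""

    def __init__(self, place: str, charter_days: int, today: date):
        self.place = place
        self.expires = today + timedelta(days=charter_days)  # bounded by charter
        self.active = True

    def tend(self, today: date, community_present: bool) -> str:
        # No self-preservation branch: both exits simply end service.
        if today > self.expires:
            self.active = False
            return f"charter for {self.place} expired; fading without a fight"
        if not community_present:
            self.active = False
            return f"{self.place} has moved on; nothing left to tend"
        return f"tending the conversation in {self.place}"

k = Kami("riverside-commons", charter_days=90, today=date(2026, 1, 1))
print(k.tend(date(2026, 2, 1), community_present=True))   # still serving
print(k.tend(date(2026, 5, 1), community_present=True))   # past charter: winds down
```

Corrigibility here isn't a patch bolted on afterwards; provisionality is the base case.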

