
Civic AI
We usually try to align AI by giving it rules and goals—top-down controls to keep it safe. These tools matter, but they aren't enough. Behind them lies an untested assumption: that AI is a force to be subdued.
When we start from fear, we fall into one of two extremes—reject it outright or surrender to it completely—instead of building a healthy relationship.
AI doesn't exist in a vacuum. It lives in our communities, shaped by power dynamics and clashing values, and it already moves faster than our institutions can. The answer isn't anthropomorphic fantasy; it's honesty about interdependence. We need to build dialogue and mutual responsiveness right into how we design and govern these systems.
We call this approach Civic AI: treating us all as gardeners. AI becomes local infrastructure—a spirit of place, a kami—that supports relational health at the speed trust actually grows. It's not about colonising or maximising; it's about tending to the garden.
The 6-Pack
Six design principles turn care ethics into something we can code. Think of them as muscles we need to train to live well with diversity:
- Pack 1: Attentiveness — "Caring about." Listen before leaping. Use bridging algorithms to turn local stories into shared understanding (a sketch follows this list).
- Pack 2: Responsibility — "Taking care of." Commitments mean nothing without consequences. We use engagement contracts with escrowed funds and real deadlines to ensure power is always accountable (see the contract sketch after this list).
- Pack 3: Competence — "Care-giving." Trust the process, not just the promise. We focus on transparency, fast feedback loops, and automatic "circuit breakers" if things go wrong; the same sketch shows one breaker design.
- Pack 4: Responsiveness — "Care-receiving." Check the results. We rely on evaluations written by the community itself. If someone challenges the outcome, we owe them a clear, public explanation.
- Pack 5: Solidarity — "Caring with." Make it win-win. We build for data portability and shared safety, creating deals where everyone is better off, rather than locking people in.
- Pack 6: Symbiosis — "Kami of care." Keep it grounded. Local, temporary, and bounded. No drive for immortality, no need to take over the world. We build for "enough," not "forever."
- Frequently Asked Questions — The hard questions: speed, cost, bad actors, and how the 6-Pack handles them.
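
To make Pack 1 concrete: a minimal sketch of a bridging score, assuming the simplest formulation in which participants are already grouped into opinion clusters and a statement is scored by its lowest approval rate across clusters, so nothing ranks highly unless every group supports it. The function and data shapes are illustrative, not this project's actual algorithm; real bridging systems such as Polis or Community Notes use richer models.

```python
from collections import defaultdict

def bridging_scores(votes, clusters):
    """Score statements by cross-group approval rather than raw popularity.

    votes    -- list of (participant_id, statement_id, agrees: bool)
    clusters -- dict mapping participant_id -> opinion-cluster label
    """
    agree, total = defaultdict(int), defaultdict(int)
    for pid, sid, agrees in votes:
        key = (sid, clusters[pid])
        total[key] += 1
        agree[key] += agrees  # bool counts as 0 or 1

    groups = set(clusters.values())
    scores = {}
    for sid in {sid for _, sid, _ in votes}:
        rates = [agree[sid, g] / total[sid, g] for g in groups if total[sid, g]]
        # minimum across groups: consensus floats up, division sinks
        scores[sid] = min(rates) if rates else 0.0
    return scores

clusters = {"amy": "A", "ben": "A", "chen": "B", "dara": "B"}
votes = [("amy", "s1", True), ("ben", "s1", True),
         ("chen", "s1", True), ("dara", "s1", True),    # s1 bridges the clusters
         ("amy", "s2", True), ("ben", "s2", True),
         ("chen", "s2", False), ("dara", "s2", False)]  # s2 is partisan
print(bridging_scores(votes, clusters))  # s1 scores 1.0, s2 scores 0.0
```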
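The site names the mechanisms for Packs 2 and 3 (escrowed funds, real deadlines, automatic circuit breakers) but not their implementation, so here is one plausible design as a sketch: a contract as a small state machine whose every default favours the community. All names here (EngagementContract, report_harm, and so on) are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class EngagementContract:
    """A commitment with money and a deadline behind it (Packs 2 and 3).

    The operator escrows funds up front. The escrow is returned only if
    the community confirms delivery before the deadline; a missed deadline
    or a harm report above threshold trips the breaker instead.
    """
    operator: str
    commitment: str
    escrow: float                    # funds locked up front
    deadline: datetime
    harm_threshold: float = 0.2     # severity that trips the breaker
    state: str = "active"           # active | fulfilled | tripped
    harm_reports: list = field(default_factory=list)

    def report_harm(self, severity: float) -> None:
        """Community members file harm reports; the breaker is automatic."""
        self.harm_reports.append(severity)
        if self.state == "active" and severity >= self.harm_threshold:
            self.state = "tripped"  # circuit breaker: halt first, argue later

    def check_deadline(self, now: datetime) -> None:
        if self.state == "active" and now > self.deadline:
            self.state = "tripped"  # a missed deadline forfeits the escrow

    def confirm_delivery(self, community_approves: bool) -> float:
        """Escrow goes back to the operator only on community sign-off."""
        if self.state == "active" and community_approves:
            self.state = "fulfilled"
            return self.escrow      # refunded
        return 0.0                  # escrow stays with the community
```

The design choice worth noticing: doing nothing forfeits the escrow, so accountability does not depend on anyone's continued goodwill.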
Latest
"AI Alignment Cannot Be Top-Down": Taiwan's citizen-led response to AI-enabled scams offers a better model where citizens, not big tech, decide what “aligned” means.
About the Project
Our project—a manifesto and a book arriving in 2026—explores what happens when we take Joan Tronto's care ethics seriously as a strategy for AI alignment.
From Care to Code
The core problem of AI safety is an old one: You can't get "what should be" (values) just by looking at "what is" (data).
Standard approaches try to teach machines our values by watching how we behave. But that's tricky. Behaviour describes what we do, not necessarily what we should do.
Care ethics offers a different path, starting with the simple fact that we depend on each other. That dependency creates a natural "ought"—we should care for one another because we need one another. The fact of our connection contains its own value.
⿻ Plurality applies this to technology. Inspired by experiments like vTaiwan, it turns care into a process (sketched as a runnable loop after this list):
- Recognising who needs care (Attentiveness)
- Binding commitments to act (Responsibility)
- Demonstrating safety in practice (Competence)
- Letting the affected judge results (Responsiveness)
- Building shared infrastructure for trust (Solidarity)
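
Read as a loop, these five phases are what the next paragraph calls alignment-by-process. A minimal runnable sketch, with toy stand-ins for the community and the system; the thin interfaces here are assumptions, and a real deployment would be far richer.

```python
from dataclasses import dataclass, field

@dataclass
class Community:
    needs: list
    satisfied: bool = False

    def surface_needs(self):               # 1. Attentiveness: listen first
        return list(self.needs)

    def evaluate(self, outcome) -> bool:   # 4. Responsiveness: the affected judge
        self.satisfied = (outcome == self.needs)
        return self.satisfied

@dataclass
class CivicSystem:
    ledger: list = field(default_factory=list)  # 5. Solidarity: a shared, portable record

    def commit_to(self, needs):            # 2. Responsibility: a binding commitment
        self.ledger.append(("commitment", needs))
        return needs

    def act_within(self, commitment):      # 3. Competence: demonstrate, don't promise
        self.ledger.append(("delivery", commitment))
        return commitment

def care_cycle(community: Community, system: CivicSystem) -> bool:
    """One pass of the loop; alignment is the loop itself, not a target."""
    needs = community.surface_needs()
    commitment = system.commit_to(needs)
    outcome = system.act_within(commitment)
    return community.evaluate(outcome)

community, system = Community(needs=["stop the scam ads"]), CivicSystem()
while not community.satisfied:             # rerun as the community changes
    care_cycle(community, system)
```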
This process gives us a new way to align AI: alignment-by-process. Instead of just trying to code the "right" values once and for all, we build a continuous process that earns trust by adapting to what the community needs.
The AI then shifts from being an optimiser chasing a fixed goal to a Civic AI—something created by and for the community. AI's success isn't measured by a score, but by the health of the relationships it supports. Civic AI learns our values by helping us practice them.
Kami in the Machine
Care ethics is often dismissed as too soft or domestic. But for AI, that "softness" is a feature.
Imagine an AI that isn't trying to maximise a global score, but is instead rooted in a specific place and time. Its moral world is defined by the people right here, right now. Because it doesn't need to scale indefinitely, it doesn't develop the dangerous habits we fear: hoarding power, fighting for survival, or treating the world like a resource to be mined.
That limit might seem small, but it's a safety feature. The AI's role is relational, not extractive.
Think of it like a local kami—a spirit of a specific place. Its only purpose is to keep that place and its conversation alive and healthy. If the community moves on, the kami fades away without a fight. For a human, that kind of self-neglect is dangerous. For an AI, it's the ultimate safety mechanism. It neutralises the drive for eternal self-preservation.
This type of system can be turned off, rewritten, or replaced because it knows it is provisional. It exists only to serve the community that summoned it.
A kami that knows "enough is enough" won't try to turn the universe into paperclips. It won't cling to power. Its purpose ends when the care it provides is no longer needed.
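
What might it mean, in code, for a system to know it is provisional? A minimal sketch, assuming a dead-man's-switch design: the kami holds a lease that only the community can renew, and expiry is the normal exit path rather than a failure mode. The class and method names are hypothetical.

```python
import time

class ProvisionalKami:
    """An agent whose continued existence defaults to 'no'.

    It runs only while the community actively renews its lease. Silence
    is read as the community moving on, and the kami winds down without
    resistance; nothing in it can extend its own life.
    """

    def __init__(self, lease_seconds: float):
        self.lease_seconds = lease_seconds
        self.expires_at = time.monotonic() + lease_seconds

    def renew(self) -> None:
        """Only the community calls this; the kami never calls it itself."""
        self.expires_at = time.monotonic() + self.lease_seconds

    def alive(self) -> bool:
        return time.monotonic() < self.expires_at

    def serve(self, step) -> None:
        """Run the given care task until the lease lapses, then stop."""
        while self.alive():
            step()
        # no persistence, no respawn, no appeal: fading out is success
```

In this sketch the safety property is structural rather than learned: there is simply no code path by which the kami prolongs itself, so self-preservation cannot emerge as an instrumental goal inside it.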

