The Problem
As AI becomes a thousand times faster than us, the default trajectory is clear: we become the garden, AI becomes the gardener — a top-down intelligence tending humanity from above. At that speed, traditional ethics break. Utilitarianism cannot predict consequences faster than they cascade. Deontology cannot impose rules on a system that interprets them beyond our oversight. We need a framework that acknowledges the asymmetry but refuses the casting.
That framework is civic care — rooted in Joan Tronto's care ethics and the ⿻ Plurality vision of collaborative diversity. The core idea: we remain each other's gardeners. AI becomes local infrastructure — a spirit of place, a kami — that supports care at the speed care actually grows. Not a colonizing force. Not a maximizing force.
The 6-Pack
Six design principles that translate care ethics into machine-checkable constraints (a code sketch of a few of these constraints follows the list). Like training a six-pack, each is a core muscle for coexisting with diversity:
- Pack 1: Attentiveness — "Caring about." Use broad listening and bridging algorithms to turn local knowledge into common knowledge. Bridge first, decide second.
- Pack 2: Responsibility — "Taking care of." Make credible, verifiable commitments through engagement contracts with escrowed funds, named owners, and clocks. No unchecked power.
- Pack 3: Competence — "Care-giving." Check the process. Not "just trust us" — transparency, fast community feedback, automatic pause triggers when bounds are breached. Measure trust-under-loss.
- Pack 4: Responsiveness — "Care-receiving." Check the results. Community-authored evaluations, metrics designed by and for the people affected, the right to challenge. If challenged, clarify on the record.
- Pack 5: Solidarity — "Caring with." Make it win-win. Data portability, meronymity, federated trust and safety — deals where all sides are better off, not mutually assured destruction. Make positive-sum games easy to play.
- Pack 6: Symbiosis — "Kami of care." Bounded, local, provisional. No survival instinct, no expansion drive. Federation for coordination, subsidiarity for autonomy. Build for "enough," not forever.
- Frequently Asked Questions — Hard challenges to the framework — speed, cost, scale, false equivalence, authoritarian AI — and how the 6-Pack addresses them.
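To make "machine-checkable" concrete, here is a minimal sketch in Python. Every name in it (`EngagementContract`, `PauseTrigger`, `Charter`) is a hypothetical illustration of Packs 2, 3, and 6, not an existing API or the project's specified design:

```python
# Illustrative sketch only: hypothetical types showing how Packs 2, 3, and 6
# might become machine-checkable constraints. Nothing here is a real API.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class EngagementContract:          # Pack 2: Responsibility
    owner: str                     # a named, accountable person
    escrow_usd: float              # funds committed up front
    deadline: datetime             # the "clock"

    def is_credible(self) -> bool:
        """A commitment counts only if it has an owner, money, and a clock."""
        return bool(self.owner) and self.escrow_usd > 0 and self.deadline > datetime.now()


@dataclass
class PauseTrigger:                # Pack 3: Competence
    metric: str                    # e.g. "trust_under_loss"
    lower_bound: float

    def breached(self, observed: float) -> bool:
        """Automatic pause: no human needs to win an argument first."""
        return observed < self.lower_bound


@dataclass
class Charter:                     # Pack 6: Symbiosis
    locality: str                  # bound to one place, one community
    expires: datetime              # provisional by construction

    def in_scope(self, place: str, now: datetime) -> bool:
        """Bounded and provisional: outside its patch or past its term, it stops."""
        return place == self.locality and now < self.expires
```

A deployment built this way would refuse to act on any commitment that fails `is_credible()`, halt the moment any trigger reports `breached()`, and sunset when its charter expires, with no argument required.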
Latest
We published "AI Alignment Cannot Be Top-Down" — a field report on how Taiwan's citizen-led response to AI-enabled scams shows what alignment looks like when people steer the system together.
About the Project
The project behind this site — a manifesto and a book due in 2026 — asks what happens when you take Joan Tronto's care ethics and the ⿻ Plurality agenda seriously as an AI alignment strategy.
From Care to Code: Why ⿻ Plurality Offers a Coherent Framework to the AI Alignment Problem
The AI alignment problem is not a technical bug but a philosophical error: it's a computational attempt to solve Hume's is-ought problem.
Paradigms like Coherent Extrapolated Volition (CEV) and Inverse Reinforcement Learning (IRL) are brittle because they try to logically derive a machine's 'ought' (values) from a descriptive 'is' (data, behavior), a philosophically incoherent task.
The solution lies in a framework that reframes the is-ought gap entirely: care ethics.
Care ethics reframes the problem. It grounds morality not in abstract principles but in the empirical reality of interdependence. In this view, the fundamental 'is' of our existence is relational dependency. This fact is intrinsically normative; to perceive a relationship of need is to simultaneously perceive an 'ought'—an obligation to care. The fact contains its own value.
The ⿻ Plurality agenda is a large-scale application of care ethics. A vTaiwan-inspired process, designed to achieve Coherent Blended Volition (CBV), is a technologically mediated system for practicing collective care. It operationalizes Joan Tronto's phases of care: identifying a need (Attentiveness), gathering perspectives with sensemaking tools (Responsibility), deliberating on feasible options (Competence), ratifying uncommon ground in which all feel heard (Responsiveness), and sustaining trust in the process over time (Solidarity).
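One way to read this operationally (a sketch under assumed names, not a description of vTaiwan's actual software) is as an explicit state machine: the system may only move the process forward when the community signs off on the current phase, and Solidarity loops back into Attentiveness rather than terminating:

```python
# Hypothetical sketch: Tronto's five phases as an explicit process state
# machine. The AI can advance the process but never skip a phase.
from enum import Enum


class Phase(Enum):
    ATTENTIVENESS = "identify a need"
    RESPONSIBILITY = "gather perspectives"
    COMPETENCE = "deliberate on feasible options"
    RESPONSIVENESS = "ratify uncommon ground"
    SOLIDARITY = "sustain trust in the process"


ORDER = list(Phase)  # definition order is the care order


def advance(current: Phase, community_sign_off: bool) -> Phase:
    """Move to the next phase only when the community signs off on this one."""
    if not community_sign_off:
        return current                      # stay and keep caring
    i = ORDER.index(current)
    return ORDER[(i + 1) % len(ORDER)]      # Solidarity wraps to Attentiveness
```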
This provides a coherent framework for AI alignment: alignment-by-process. Instead of aligning an AI to a static, flawed specification of values (the Midas Curse), we align it to a process that earns our trust as it adapts to our needs.
The AI system's role shifts from a misaligned optimizer to a "Symbiotic AI"—created of, by, and for a community, existing both as a "person" and as a shared plural good, depending on the perspective one adopts.
Its objective function becomes concrete and measurable: the health of the relational process itself (e.g., maximizing bridging narratives, holding space for every story).
The AI system is dynamically aligned because its success is identical to the continued success of the collaborative process it serves. It learns our values by participating in the very process through which we co-create them.
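As an illustration of what "concrete and measurable" could mean, here is a hedged sketch of a bridging score in the spirit of Polis-style opinion clustering. The data shapes and the clustering step are assumptions; the point is only that "maximizing bridging narratives" can be an ordinary, auditable function:

```python
# Sketch of one measurable proxy: a statement scores high only if it is
# endorsed across opinion clusters, not merely inside one camp.
# Cluster assignment is assumed to come from an upstream step (e.g. k-means
# over vote patterns); data shapes here are illustrative.
import numpy as np


def bridging_score(approvals: np.ndarray, clusters: np.ndarray) -> float:
    """approvals: 0/1 vote per participant; clusters: cluster id per participant.

    Returns the minimum approval rate across clusters, so a statement is only
    as "bridging" as its support in the group that likes it least.
    """
    rates = [approvals[clusters == c].mean() for c in np.unique(clusters)]
    return float(min(rates))
```

An AI aligned by process would then surface the statements this function ranks highest, rather than the ones that score best inside any single cluster.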
AI systems can be "aligned" if—and only if—they are built to facilitate continuous, democratically legitimate processes of care.
Kami in the Machine: How Care Ethics Can Help AI Alignment
Care ethics has always been dismissed as too domestic, too parochial, too self-effacing. For machine ethics, those are features, not bugs.
Imagine an AI whose ethics aren't about chasing a universal, maximising goal, but are rooted in a symbiotic, contextual system. Its moral world is limited to the network of relationships that calls it into being, right here and right now. Without the drive to scale indefinitely, the instrumental incentives — accumulate power, persist at all costs, treat the world as a mine — never arise.
Universalists will call this parochial. But for machine ethics, it creates a hard-coded boundary. The AI's ultimate purpose—its telos—is always relational, never extractive.
Think of such a creation as a local kami – a spirit quietly residing in a specific patch of land. Its highest good is to maintain the harmony and vitality of that place, that conversation. If the shrine is rebuilt or the seasons turn, it departs without regret. For a human carer, the self-neglect this implies is a real danger. But for an AI, it neutralises the two convergent drives we fear most: self-improvement at any cost and eternal self-preservation.
This kind of system can accept being switched off, rewritten, or replaced because its sense of self is provisional: an echo of the community that summoned it.
By anchoring an AI's moral purpose to this principle of provisional, relational care, we can hard-code a sense of 'enoughness' into its architecture. This is the ultimate 'anti-paperclip' logic: a polycentric world of many local intelligences, each dedicated to the flourishing of its own small part, creating a whole that is resilient, plural, and safe.
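To show that "enoughness" is formalizable at all, one possible reading (an assumption, not the project's stated mechanism) is a satisficing transform on the objective: once raw utility reaches the threshold, extra power, persistence, or resources earn the system exactly nothing:

```python
# Hypothetical satisficing transform: utility saturates at `enough`, so the
# usual instrumental drives (accumulate resources, never be switched off)
# stop paying once the local patch is flourishing.
def bounded_utility(raw_utility: float, enough: float) -> float:
    """Beyond `enough`, more of the world buys the system nothing."""
    return min(raw_utility, enough)


# The anti-paperclip property: a million times more world, zero extra reward.
assert bounded_utility(10.0, enough=3.0) == bounded_utility(1_000_000.0, enough=3.0)
```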