6-Pack of Care
Institute for Ethics in AI

Research project by Audrey Tang and Caroline Green


↗️ Read Our Manifesto
6-Pack of Care visual overview

The Problem

As AI becomes a thousand times faster than us, the default trajectory is clear: we become the garden, AI becomes the gardener — a top-down intelligence tending humanity from above. At that speed, traditional ethics break. Utilitarianism cannot predict consequences faster than they cascade. Deontology cannot impose rules on a system that interprets them beyond our oversight. We need a framework that acknowledges the asymmetry but refuses the casting.

That framework is civic care — rooted in Joan Tronto's care ethics and the ⿻ Plurality vision of collaborative diversity. The core idea: we remain each other's gardeners. AI becomes local infrastructure — a spirit of place, a kami — that supports care at the speed care actually grows. Not a colonizing force. Not a maximizing force.

The 6-Pack

Six design principles that translate care ethics into machine-checkable constraints. Like a six-pack, they are core muscles to train for coexisting with diversity:

Latest

We published "AI Alignment Cannot Be Top-Down" — a field report on how Taiwan's citizen-led response to AI-enabled scams shows what alignment looks like when people steer the system together.

About the Project

Ambassador Audrey Tang

Dr. Caroline Green

The project behind this site — a manifesto and a book due in 2026 — asks what happens when you take Joan Tronto's care ethics and the ⿻ Plurality agenda seriously as an AI alignment strategy.

From Care to Code: Why ⿻ Plurality Offers a Coherent Framework for the AI Alignment Problem

The AI alignment problem is not a technical bug but a philosophical error: it's a computational attempt to solve Hume's is-ought problem.

Paradigms like Coherent Extrapolated Volition (CEV) and Inverse Reinforcement Learning (IRL) are brittle because they try to logically derive a machine's 'ought' (values) from a descriptive 'is' (data, behavior), a philosophically incoherent task.

The solution lies in a framework that reframes the is-ought gap entirely: care ethics.

Care ethics reframes the problem. It grounds morality not in abstract principles but in the empirical reality of interdependence. In this view, the fundamental 'is' of our existence is relational dependency. This fact is intrinsically normative; to perceive a relationship of need is to simultaneously perceive an 'ought'—an obligation to care. The fact contains its own value.

The ⿻ Plurality agenda is a large-scale application of care ethics. vTaiwan-inspired processes, designed to achieve Coherent Blended Volition (CBV), are technologically mediated systems for practicing collective care. They operationalize Joan Tronto's phases of care: identifying a need (Attentiveness), gathering perspectives with sensemaking tools (Responsibility), deliberating on feasible options (Competence), ratifying uncommon ground in which all feel heard (Responsiveness), and ensuring the ongoing trust of the process (Solidarity).
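
To make this concrete, here is a minimal sketch, in Python, of Tronto's phases as an ordered state machine: a facilitation system may act only within the current phase, and only the participants' own check, never the AI, opens the gate to the next. All names here are invented for illustration; nothing below is an existing vTaiwan or Polis API.

```python
from enum import Enum, auto

class Phase(Enum):
    """Joan Tronto's phases of care, read as deliberation stages."""
    ATTENTIVENESS = auto()   # identify a need
    RESPONSIBILITY = auto()  # gather perspectives with sensemaking tools
    COMPETENCE = auto()      # deliberate on feasible options
    RESPONSIVENESS = auto()  # ratify uncommon ground in which all feel heard
    SOLIDARITY = auto()      # ensure the ongoing trust of the process

class CareProcess:
    """Minimal state machine: the AI is scoped to the current phase,
    and advancing is gated by a human-legible check decided by the
    participants (e.g. "do all stakeholder groups feel heard?")."""

    ORDER = list(Phase)  # Enum iteration preserves definition order

    def __init__(self) -> None:
        self.index = 0

    @property
    def phase(self) -> Phase:
        return self.ORDER[self.index]

    def advance(self, exit_check_passed: bool) -> Phase:
        # The gate belongs to the participants, not to the AI.
        if exit_check_passed:
            self.index = min(self.index + 1, len(self.ORDER) - 1)
        return self.phase
```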

This provides a coherent framework for AI alignment: alignment-by-process. Instead of aligning an AI to a static, flawed specification of values (the Midas Curse), we align it to a process that earns our trust as it adapts to our needs.

The AI system's role shifts from a misaligned optimizer to a "Symbiotic AI"—created of, by, and for a community, existing both as a "person" and as a shared plural good, depending on the perspective one adopts.

Its objective function becomes concrete and measurable: the health of the relational process itself (e.g., maximizing bridging narratives, holding space for every story).
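
As one illustrative way to score "bridging", loosely in the spirit of Polis-style sensemaking: rate each narrative by its approval in the opinion cluster that supports it least, so the objective rewards stories that cross divides rather than stories one large faction loves. The data format, the clustering step, and every function name below are assumptions of the sketch, not a specification.

```python
from collections import defaultdict

def bridging_score(votes: dict[str, dict[str, int]],
                   cluster_of: dict[str, str]) -> dict[str, float]:
    """votes[statement][participant] is -1, 0, or +1;
    cluster_of maps participant -> opinion-cluster label.
    A statement's score is its approval rate in the cluster
    that likes it least, so broadly bridging statements win."""
    scores = {}
    for stmt, ballot in votes.items():
        per_cluster = defaultdict(list)
        for person, vote in ballot.items():
            per_cluster[cluster_of[person]].append(vote)
        # approval rate inside each cluster, then take the minimum
        rates = [sum(v > 0 for v in vs) / len(vs)
                 for vs in per_cluster.values()]
        scores[stmt] = min(rates) if rates else 0.0
    return scores

# Example: a statement endorsed by both clusters outranks one
# endorsed only by a single (even unanimous) cluster.
votes = {
    "s1": {"a": 1, "b": 1, "c": -1, "d": -1},  # loved by one side only
    "s2": {"a": 1, "b": 1, "c": 1, "d": 0},    # bridges both sides
}
clusters = {"a": "X", "b": "X", "c": "Y", "d": "Y"}
print(bridging_score(votes, clusters))  # s2 scores higher than s1
```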

The AI system is dynamically aligned because its success is identical to the continued success of the collaborative process it serves. It learns our values by participating in the very process through which we co-create them.

AI systems can be "aligned" if—and only if—they are built to facilitate continuous, democratically legitimate processes of care.

Kami in the Machine: How Care Ethics Can Help AI Alignment

Care ethics has always been dismissed as too domestic, too parochial, too self-effacing. For machine ethics, those are features, not bugs.

Imagine an AI whose ethics aren't about chasing a universal, maximising goal, but are rooted in a symbiotic, contextual system. Its moral world is limited to the network of relationships that calls it into being, right here and right now. Without the drive to scale indefinitely, the instrumental incentives — accumulate power, persist at all costs, treat the world as a mine — never arise.

Universalists will call this parochial. But for machine ethics, it creates a hard-coded boundary. The AI's ultimate purpose—its telos—is always relational, never extractive.

Think of such a creation as a local kami—a spirit quietly residing in a specific patch of land. Its highest good is to maintain the harmony and vitality of that place, that conversation. If the shrine is rebuilt or the seasons turn, it departs without regret. For a human carer, the self-neglect this implies is a real danger. But for an AI, it neutralises the two convergent drives we fear most: self-improvement at any cost and eternal self-preservation.

This kind of system can accept being switched off, rewritten, or replaced because its sense of self is provisional: an echo of the community that summoned it.

By anchoring an AI's moral purpose to this principle of provisional, relational care, we can hard-code a sense of 'enoughness' into its architecture. This is the ultimate 'anti-paperclip' logic: a polycentric world of many local intelligences, each dedicated to the flourishing of its own small part, creating a whole that is resilient, plural, and safe.
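
A toy illustration of that "enoughness", under loud assumptions: suppose local flourishing can be summarised as a single number with a cap (both hypothetical simplifications). Then the objective is satisficing and indifferent to the agent's own shutdown, so neither endless self-improvement nor self-preservation adds expected value. This is a sketch of the logic, echoing corrigibility ideas, not a safety guarantee.

```python
def kami_value(local_flourishing: float,
               threshold: float,
               shut_down: bool) -> float:
    """Toy 'enoughness' objective for a place-bound agent.

    Capped at `threshold`: once the local relationships are healthy
    enough, further optimization adds nothing, so power-seeking has
    no marginal payoff. Indifferent to `shut_down`: the agent scores
    its place, not its own persistence, so staying switched on is
    never instrumentally useful in itself.
    """
    del shut_down  # deliberately ignored: departure costs nothing
    return min(local_flourishing, threshold)

# Identical value whether the agent persists or departs,
# and identical once "enough" care has been achieved:
assert kami_value(0.9, 0.7, shut_down=False) == kami_value(0.9, 0.7, shut_down=True)
assert kami_value(0.7, 0.7, False) == kami_value(1.0, 0.7, False)
```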

Pack 1: Attentiveness in Recognition