
Frequently Asked Questions

Q1. The AI market is locked in an arms race driven by commercial profit and geopolitical dominance. An AI working for a tax-software monopoly can lobby to keep tax filing difficult — and far worse is easy to imagine. If this continues, isn't the vision of cooperative kami hopelessly naive?

Civic AI cannot survive by asking monopolies to be nicer. Moloch — the dynamic where rational actors race to the bottom because defection pays — is not defeated by moralising. It is defeated by changing the terrain so that cooperation pays more than extraction. Five levers, several already proven, can bend the curve:

  1. Interoperability and portability. Mandate fair protocol‑level interop so users can exit without losing their networks. The Utah Digital Choice Act (effective 2025) requires platforms to offer social‑graph portability through qualifying open protocols. When the moat of captive audiences evaporates, platforms must compete on quality of care, not strength of the cage.
  2. Civic procurement. Governments shape markets through buying power. Requiring that any AI procured for public use be auditable, interoperable, and governed by citizen assemblies — as Taiwan's Alignment Assembly demonstrated for anti‑scam policy — creates large economic incentives to build kami‑like systems. Steward‑ownership structures and board‑level safety duties make civic care a fiduciary obligation, not a marketing slogan.
  3. Public options. Offer simple, non‑extractive baseline services backed by shared research compute. Private vendors must beat the public option on care, not on lock‑in. Taiwan's tax‑filing system — which replaced a vendor‑captured regime with a citizen‑designed public alternative — is a working prototype.
  4. Provenance for paid reach. For ads and mass amplification in political and financial domains, require verifiable sponsorship and durable disclosure. Taiwan now mandates full‑spectrum, real‑name KYC for social media advertising. Ordinary speech is protected through meronymity (Pack 5): you prove you are a real person without revealing who.
  5. Federated open supply. Support open‑weight models and federated trust‑and‑safety networks (like ROOST for CSAM defence). When basic intelligence is a public good, the race shifts from "who owns the biggest brain" to "who applies intelligence most attentively in a local context" — and that race rewards care.

None of these levers requires goodwill from incumbents. Each restructures incentives so that civic behaviour is the path of least commercial resistance.


Q2. Ambitious goals we point AI at ("cure cancer," "solve climate change") are almost always consequentialist. Optimizing for these outcomes at superhuman speed inevitably leads to unforeseen risks. Does Care Ethics mean giving up on these grand, civilization-scale goals?

Not at all, but it radically reframes how we achieve them.

The danger of pointing a superintelligence at a singular goal like "cure cancer" is that it treats a complex, relational, ecological reality as a constraint‑satisfaction problem. Goodhart's Law is a moral law. When a system maximizes a single variable at superhuman speed, it will optimize the proxy while destroying the human context.

Care ethics is not anti‑progress; it is anti‑reductionist. In a civic AI future, we do not unleash one unbounded Singleton to "solve" a problem from the top down. We cultivate an ecology of specialized kamis. One model simulates protein folding; another helps local clinics share knowledge; another assists patients in navigating their care. None have an unbounded mandate to "optimize the world." Progress emerges horizontally, through the symbiotic interaction of human ingenuity and bounded machine intelligence.


Q3. Care ethics was developed for interpersonal relationships — a nurse and a patient, a parent and a child. Scaling it to AI systems and global governance seems like a category error. Why isn't it?

The objection is well‑known and has been raised by care ethics' own practitioners: care is too intimate, too parochial, too prone to self‑effacement to ground a theory of institutions, let alone machines. We think these are features, not bugs — and Joan Tronto herself made the case for scaling care to political institutions in Caring Democracy (2013).

Consider what happens when you translate care's supposed weaknesses into design constraints for AI: intimacy becomes boundedness (the kami serves a specific relationship and context, not an abstract userbase); parochialism becomes subsidiarity (authority stays with the people closest to the situation); self‑effacement becomes corrigibility (the system accepts correction, pause, and shutdown).

The translation is not always clean. Boundedness can become insularity; corrigibility can become passivity; subsidiarity can become fragmentation. These are engineering tensions, not refutations — each Pack includes failure modes and named fixes precisely because the mapping requires continuous calibration.

The 6‑Pack does not ask AI to feel care. It extracts the relational architecture of care — attentiveness, answerability, competence, responsiveness, solidarity, bounded purpose — and translates each into machine‑checkable design primitives, engagement contracts, and measurable outcomes. The interpersonal origin is the source of its rigour, not a limitation to be apologised for.


Q4. Democracy serves known functions: error correction, peaceful power transitions, checks on concentrated authority, legitimacy for collective action, information aggregation, preference expression. A sufficiently capable AI could plausibly perform every one of these faster and more reliably than any deliberative process. Why insist on democratic governance?

If democracy is justified only by its outputs, any system that produces better outputs can replace it — including a benevolent AI autocracy that aggregates preferences efficiently and corrects errors faster than elections ever could. This is not a thought experiment; it is the default trajectory of concentrating intelligence in systems designed to optimise.

The 6‑Pack does not justify participation instrumentally. It grounds participation in care ethics: to perceive a need is to perceive an obligation. People have standing not because their input improves decision quality — though it does — but because the decisions affect their lives. A system that excludes the affected, however competent, has failed the basic test of alignment.

Taiwan's trajectory makes this concrete. Digital democracy did not emerge because technocrats calculated that participation was optimal. It emerged because people demanded standing — institutional trust at 9 percent, the Sunflower Movement occupying the legislature. The capability followed the care relationship, not the other way around.

That said, the functional question deserves a functional answer. Start with error correction: bridging algorithms and community‑authored evaluations (Packs 1, 4) surface failures that centralised monitoring misses, because the people who feel the failure write the test. Power transition follows naturally — a kami that accepts shutdown and communities that can fork their tools (Packs 3, 5) do not need violent removal of bad actors. Concentrated authority is checked structurally: no kami governs beyond its domain (Pack 6).

The deeper functions are harder to replicate. Legitimacy is not popularity; it is the willingness of people who lost a decision to accept the outcome as fair — measured by cross‑group endorsement and trust‑under‑loss (Pack 3). Information aggregation becomes broad listening (Pack 1): AI‑powered sense‑making across millions of participants, in any language. And preference expression becomes engagement contracts (Pack 2) — standing processes for bargaining over what people need, not one‑off elections that flatten preferences into binary choices.
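
By way of illustration, here is a minimal sketch of how the two Pack 3 measures named above, cross‑group endorsement and trust‑under‑loss, could be computed from a post‑decision survey. The field names, the survey framing, and the use of the minimum across groups are illustrative assumptions for exposition, not the framework's specification.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Response:
    """One participant's post-decision survey answer (fields are illustrative)."""
    group: str               # self-identified cluster, e.g. "supporters" / "opponents"
    accepts_as_fair: bool    # does this person accept the outcome as fair?
    was_on_losing_side: bool

def cross_group_endorsement(responses: list[Response]) -> float:
    """Lowest acceptance rate across groups: an outcome only scores well
    when every cluster, not just the largest one, rates the process fair."""
    groups = {r.group for r in responses}
    rates = [mean(r.accepts_as_fair for r in responses if r.group == g) for g in groups]
    return min(rates)

def trust_under_loss(responses: list[Response]) -> float:
    """Share of people on the losing side who still accept the outcome as fair."""
    losers = [r.accepts_as_fair for r in responses if r.was_on_losing_side]
    return mean(losers) if losers else 1.0
```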

A well‑designed technical system could replicate some of these outputs in isolation. What it cannot replicate is the standing of the people affected — and a system that optimises for outcomes while removing standing is precisely the kind of misalignment the 6‑Pack exists to prevent.


Q5. Deliberation is slow. AI moves fast. By the time an Alignment Assembly reaches consensus, the technology has moved on three generations. How do you handle the speed mismatch?

The objection assumes that every decision requires the same depth of deliberation. Not every decision does. The framework operates in two lanes (Pack 2):

Slow lane: setting boundaries. Alignment Assemblies, citizen deliberations, and engagement contracts establish the guardrails — the rights that cannot be traded, the red lines, the severity classifications, the conditions under which pause is triggered. These rights are not imported from outside care ethics; they are the threshold conditions for relational standing — you cannot be heard in a bridging process if your basic existence is under erasure. They are constitutional‑level decisions and they should be slow, because their purpose is durability. Taiwan's anti‑scam Assembly set principles that have outlasted multiple model generations without needing revision.

Fast lane: operating within boundaries. Once guardrails are set, individual decisions within them do not need fresh deliberation. A kami operating under an engagement contract with pre‑committed pause triggers, severity classes, and adopt‑or‑explain obligations can move at machine speed — because the community has already defined the corridor of acceptable action. Shadow modes, canary releases, and reversible defaults (Pack 3) allow rapid deployment with automatic rollback if bounds are breached.
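
To make the two‑lane mechanics concrete, here is a minimal sketch of a fast‑lane gate that enforces slow‑lane commitments automatically. The severity names, the example trigger, and the promote/rollback callables are hypothetical illustrations of the pattern, not Pack 2's or Pack 3's actual schema.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Callable

class Severity(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class PauseTrigger:
    """A condition pre-committed in the slow lane, checked against live canary metrics."""
    description: str
    severity: Severity
    tripped: Callable[[dict], bool]

@dataclass
class EngagementContract:
    triggers: list[PauseTrigger]
    pause_at: Severity = Severity.MEDIUM   # severity at or above which the rollout halts

    def breaches(self, canary_metrics: dict) -> list[PauseTrigger]:
        return [t for t in self.triggers
                if t.severity >= self.pause_at and t.tripped(canary_metrics)]

def fast_lane_step(contract: EngagementContract, canary_metrics: dict,
                   promote: Callable[[], None], rollback: Callable[[], None]) -> None:
    """Move at machine speed inside the corridor; brake automatically outside it."""
    breaches = contract.breaches(canary_metrics)
    if breaches:
        rollback()   # reversible default: the canary release is undone, not argued over
        for t in breaches:
            print(f"paused: {t.description} ({t.severity.name})")
    else:
        promote()

# Example: a trigger the community agreed on before launch.
contract = EngagementContract(triggers=[
    PauseTrigger("complaint rate above the agreed ceiling", Severity.HIGH,
                 lambda m: m.get("complaints_per_1k", 0) > 5),
])
```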

The speed mismatch is real, but it is the same mismatch that constitutional democracies have always managed: slow constitutions, fast legislation, faster executive action — each constrained by the layer above. The 6‑Pack replicates this for AI governance. The Assembly does not approve each model update; it sets the terms under which updates are permitted. When those terms are violated, the brakes are already wired.

In practice, Taiwan moved from Assembly to enacted legislation on deepfake scams in months — faster than most corporate policy cycles. Deliberation is slow only when it is treated as an event rather than standing infrastructure.

There is a stronger claim. AI does not merely speed up the fast lane — it makes the slow lane itself more powerful than any prior form of collective decision‑making. Takahiro Anno crowdsourced a gubernatorial platform across Tokyo, aggregating distributed knowledge in any language faster than any polling operation could. Community Notes now attaches AI‑drafted bridging context to viral posts within minutes, holding claims accountable at the speed they spread. These capabilities compound as AI improves. The faster the technology moves, the more powerful the deliberative infrastructure becomes. The speed objection gets the trajectory backwards.


Q6. Bridging algorithms sound appealing in theory. But what happens when one side is simply wrong — climate denial, anti‑vaccine misinformation, election fraud conspiracies? Doesn't "bridging" grant false equivalence to bad‑faith actors?

This is the hardest question about bridging, and the answer must be precise.

Bridging is not "both sides" journalism. It does not treat all claims as equally valid. The framework draws a clear epistemic line between two categories:

Factual claims are checkable. Climate science, vaccine efficacy, and election integrity are empirical questions with verifiable answers. The 6‑Pack does not submit facts to a popularity contest. Pack 1's first rule — basic rights first — and its threat model both specify that claims designed to erase someone's basic standing or deny established evidence are recorded but do not set the agenda. False balance is listed as an explicit failure mode with a named fix: "separate facts from values, uphold basic rights, and refuse fake equivalence."

Value disagreements get bridging. People can agree that climate change is real and still disagree fiercely about what to do — carbon tax versus cap‑and‑trade, nuclear versus renewables, speed of transition versus economic cost. These are legitimate conflicts where bridging is both appropriate and productive. The bridging algorithm does not average positions; it maps clusters and surfaces proposals that earn cross‑group endorsement. Bad‑faith actors who appeal only to their own faction score low on the bridge index by mathematical definition — they cannot produce cross‑group overlap.
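
As an illustration of why faction‑only appeals cannot score well, here is a minimal sketch of a bridge index in the spirit of group‑informed consensus: a proposal is scored by its weakest per‑cluster approval rate. Any real deployment's formula may differ; this is an assumption for exposition, not a citation.

```python
from statistics import mean

def bridge_index(votes: dict[str, dict[str, bool]]) -> float:
    """votes maps cluster name -> {participant_id: approved?}.
    The score is the proposal's *lowest* per-cluster approval rate,
    so content that appeals to only one faction cannot score high."""
    per_cluster = [mean(v.values()) for v in votes.values() if v]
    return min(per_cluster) if per_cluster else 0.0

# A proposal loved by one camp and rejected by the other scores near zero;
# one with modest support everywhere outranks it.
partisan = {"camp_a": {"p1": True, "p2": True}, "camp_b": {"p3": False, "p4": False}}
bridging = {"camp_a": {"p1": True, "p2": False}, "camp_b": {"p3": True, "p4": True}}
assert bridge_index(bridging) > bridge_index(partisan)
```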

The further structural defence is that expression is not amplification (Pack 5). Anyone can state a position. The recommender is not obligated to amplify it. Bridging‑based ranking (Pack 3) rewards content that increases cross‑group endorsement; content that only inflames a single cluster gets no algorithmic lift. This does not silence anyone — it removes the algorithmic megaphone from those who profit from division.

The threat landscape itself is shifting in ways that make bridging necessary, not merely appealing. Research on malicious AI swarms shows that state‑level polarisation attacks increasingly use true information — real news snippets, genuine statistics, authentic quotes — amplified with strong emotional framing. Every claim is factually correct; the attack lies in the curation, not the content. Debunking cannot touch this, because there is nothing false to debunk. But bridging can, because it surfaces the overlap that curated outrage is designed to hide. Taiwan demonstrated this during COVID: when opposing camps each cited real studies on mask efficacy, debunking either side only fuelled the fight — a humour‑based pre‑bunking campaign depolarised the conversation without declaring either side wrong.

Taiwan's marriage equality deliberation shows the mechanism in finer grain. One side argued for individual wedding rights (hūn); the other for family kinship structures (yīn). They were arguing about different things. The bridging process did not split the difference — it made the structure of the disagreement legible, revealing a path (legalising individual weddings without mandating family kinship) that neither side had seen. That is not false equivalence. It is clarity.

A necessary nuance: the epistemic baseline of "checkable facts" is not self‑evident. What counts as verifiable is established by transparent, accountable, and contestable institutions — peer review, independent statistical offices, judicial fact‑finding — whose authority rests on openness to correction, not on claims of finality. This is precisely why Packs 1 and 4 exist: community‑authored evaluations and broad listening ensure that the institutions determining the factual baseline are themselves subject to democratic scrutiny. The 6‑Pack does not treat the fact/value line as given from nowhere. It treats it as a threshold that must be maintained by the same participatory infrastructure that governs everything else.


Q7. You cite Taiwan repeatedly — a small island democracy with high connectivity, social cohesion, and tech literacy. Does any of this transfer to India, Nigeria, Brazil, or the EU at 450 million people?

The honest answer is: the mechanisms transfer; the specifics do not. No one should replicate Taiwan's exact model. The question is whether the structural principles — broad listening, bridging algorithms, adopt‑or‑explain commitments, federated safety, subsidiarity — work in different soils.

Early evidence suggests they do.

The framework is designed for scale. Subsidiarity (Pack 6) means each deployment is shaped by its context — the kami belongs to its place, not to Taiwan. Federation (Pack 5) means local deployments share threat intelligence and interoperability standards without requiring a single governance model. The Alignment Assembly format can scale from a neighbourhood to a nation because its democratic legitimacy comes from representative sampling, not total participation — 447 representative citizens deliberated Taiwan's anti‑scam policy for a population of 23 million. Over a decade, some 10 million Taiwanese — nearly half the population — have participated in one digital deliberation or another, including people without voting rights: immigrants, teenagers, and other groups traditionally excluded.
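
A minimal sketch of the sortition step that lets a few hundred people stand in for millions: draw the assembly at random, stratum by stratum, in proportion to the wider population. The strata and quota rule here are illustrative assumptions, not Taiwan's actual recruitment procedure.

```python
import random
from collections import defaultdict

def stratified_assembly(pool: list[dict], stratum_key: str,
                        assembly_size: int, seed: int = 0) -> list[dict]:
    """Random selection within strata (e.g. region or age band) keeps the
    assembly's composition proportional to the population it represents."""
    rng = random.Random(seed)
    by_stratum: dict[str, list[dict]] = defaultdict(list)
    for person in pool:
        by_stratum[person[stratum_key]].append(person)
    assembly: list[dict] = []
    for members in by_stratum.values():
        # rounding can shift the final headcount by a seat or two
        quota = round(assembly_size * len(members) / len(pool))
        assembly.extend(rng.sample(members, min(quota, len(members))))
    return assembly
```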

Every new context demands fresh attentiveness (Pack 1): who is missing, what power dynamics exist, which local institutions deserve trust and which do not. The framework provides the scaffolding. The community provides the knowledge.


Q8. Your framework assumes that people trust technology enough to participate. But what about marginalized communities who have been historically surveilled, oppressed, and impoverished by the state and by tech? Why would they trust this?

Trust is not a prerequisite for Civic AI; it is the output of it.

Taiwan's digital democracy did not emerge from a society that inherently trusted its government. It was born in the aftermath of authoritarianism and a severe crisis of public faith (the Sunflower Movement). Public trust stood at 9 percent in 2014. We built these systems precisely because people did not trust the institutions or each other.

For marginalized communities who rightfully view technology as an instrument of surveillance and control, parachuting in with tech "solutions" only deepens the wound. Civic AI must prove its value through hard infrastructure: Responsibility (Pack 2) and Responsiveness (Pack 4). It must start with the smallest viable bridges — perhaps agreeing on basic facts about local water quality, or coordinating disaster response despite political differences. These are not grand acts of civic faith; they are pragmatic transactions that happen to build a thin layer of procedural trust.

Furthermore, the technology must be localized. Communities must own their own infrastructure. The technology becomes theirs to modify, fork, or compost. This is why we insist on meronymity (the ability to participate and verify humanity without revealing one's identity to the state) and exit rights. Civic AI does not ask for blind faith; it offers verifiable limits, local ownership, and the structural guarantee that the people closest to the pain have the power to hit the brakes.

Over time, small functional bridges create space for larger ones. Taiwan's journey from 9 percent trust to over 70 percent took years and required that every step be reversible, every decision challengeable, every system possible to switch off. There is no shortcut.


Q9. Oversight boards, participation officers, escrow funds, eval registries, portability infrastructure — this is expensive. Who pays?

Turn the question around. The expensive path is the one we are already on: ungoverned AI externalizes its harms, and the public pays to clean up — in deepfake scam losses, in polarisation‑driven institutional decay, in billion‑dollar bias lawsuits that a participation officer could have prevented. The question is not whether we can afford civic governance but whether we can afford to keep skipping it.

The money is real. But most of it is already being spent — just badly. Governments procure AI systems worth billions; civic procurement attaches conditions to that existing spend, not new budget lines. Pack 2's engagement contracts require vendors to pre‑fund remedy escrow, the way construction firms post performance bonds — the cost is priced in, and the public is protected when things break. For lower‑severity community deployments, the model tiers down: mutual insurance pools and automatic pause replace financial escrow — lighter on capital, same accountability. The tier is set by impact, not organizational form, so "we are a community project" cannot become a pass out of responsibility. Shared research compute and open‑weight models are public goods funded like roads and courts. And participation officers pay for themselves: Taiwan's Uber dispute was resolved in three weeks through Polis; the traditional regulatory proceeding would have taken years and cost more.
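
A minimal sketch of the impact‑based tiering described above. The tier names, their examples, and which instruments attach at each tier are illustrative assumptions, not the contract schema itself.

```python
from dataclasses import dataclass
from enum import Enum

class ImpactTier(Enum):
    """Set by the harm a failure could cause, not by the deployer's organizational form."""
    LOW = "low"        # e.g. a neighbourhood scheduling assistant
    MEDIUM = "medium"  # e.g. a municipal service chatbot
    HIGH = "high"      # e.g. benefits-eligibility screening

@dataclass
class RemedyPlan:
    escrow_required: bool    # pre-funded remedy escrow, performance-bond style
    mutual_insurance: bool   # pooled cover for lower-stakes deployments
    auto_pause: bool         # pause-on-breach is wired in at every tier

def remedy_plan(tier: ImpactTier) -> RemedyPlan:
    """Higher impact brings heavier financial backing; accountability never tiers away."""
    if tier is ImpactTier.HIGH:
        return RemedyPlan(escrow_required=True, mutual_insurance=False, auto_pause=True)
    if tier is ImpactTier.MEDIUM:
        return RemedyPlan(escrow_required=True, mutual_insurance=True, auto_pause=True)
    return RemedyPlan(escrow_required=False, mutual_insurance=True, auto_pause=True)
```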

There is also a misconception about compute. Civic AI does not need a model that memorises the entire internet. A bridging facilitator translates "climate justice" into "biblical creation care" for a table of ten — it does not generate Studio Ghibli animations. Purpose‑specific models hallucinate less, perform better at facilitation, and run locally on commodity hardware at a fraction of frontier energy costs. When the model is local, the data stays local too. The cost objection assumes civic AI requires frontier scale. It does not.

The framing that civic governance is an additional expense only holds if you pretend the status quo is free. It is not. We are paying now — in trust, in cohesion, in money — for the absence of what we propose.


Q10. Every governance framework risks becoming a compliance checklist that gets gamed, or a tool for actors to push partisan agendas under the guise of "relational health." What stops the 6‑Pack from suffering this fate?

"Civic" is a dangerous word if it lacks structural accountability. If a solution only works when your ideological allies operate it, it is not civic infrastructure — it is a partisan weapon. The test of true civic infrastructure is that it remains robust and fair even when operated by your opponents.

The 6‑Pack builds in four layers of defence against ideological capture and ethics‑washing:

1. Verifiable metrics over subjective intent. We track cross‑group endorsement and trust‑under‑loss (Pack 3) — not raw engagement, not corporate sentiment, not vibes. Do participants on opposing sides both rate the process as fair? Do people who lost a decision still accept the outcome as legitimate? These metrics are incredibly hard to fake, because they require buy‑in from people who have reason to be hostile. If only your supporters report trust, the metric exposes you.

2. Consequences with teeth. Pack 2's engagement contracts are not aspirational — they carry escrowed funds, automatic payouts on SLA breaches, and independent oversight with veto power. Clawbacks and penalties are wired before launch, not negotiated after failure. A compliance checklist has no enforcement mechanism; an engagement contract has a named owner, a clock, and money on the line.

3. Adversarial audit. Pack 4's Weval registries let affected communities author their own evaluations. These are not lab‑designed benchmarks that vendors can "teach to the test" — they are living, community‑maintained test suites. When a community submits a translation‑fidelity eval and the system fails, the pause trigger fires automatically (a minimal sketch of this wiring follows the list).

4. Exit rights and subsidiarity. The ultimate check on agenda‑pushing is the ability to leave. When data and relationships are portable (Pack 5), no actor can hold a community hostage under the banner of "civic good." If someone's version of relational health feels coercive, communities have the technical and legal right to fork the tools and rebuild elsewhere. We refuse to build a single, global "Ministry of Relational Health." By empowering local communities to author their own evaluations and to retain an inalienable right to exit, we ensure that no single actor can monopolise the definition of what is good.
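
The sketch promised in the third layer above: a minimal illustration of a community‑authored eval registry whose failures fire the pause trigger automatically. The interfaces are hypothetical; this is not a description of Weval's actual API.

```python
from dataclasses import dataclass
from typing import Callable

Model = Callable[[str], str]   # any text-in, text-out system under test

@dataclass
class CommunityEval:
    """A living, community-maintained test, e.g. a translation-fidelity check."""
    name: str
    author_community: str
    passes: Callable[[Model], bool]

@dataclass
class EvalRegistry:
    evals: list[CommunityEval]

    def failures(self, model: Model) -> list[CommunityEval]:
        return [e for e in self.evals if not e.passes(model)]

def gate_release(registry: EvalRegistry, model: Model,
                 pause: Callable[[str], None]) -> bool:
    """The pause trigger is wired before launch, not negotiated after failure."""
    failed = registry.failures(model)
    for e in failed:
        pause(f"community eval failed: {e.name} (authored by {e.author_community})")
    return not failed
```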


Q11. Every powerful technology vision — exit libertarians, UBI provisioners, safety maximalists — shares the same blind spot: they see individuals and systems but nothing in between. The 6‑Pack talks about kamis, algorithms, and assemblies. Where are the churches, unions, neighborhood associations, and cultural traditions that actually constitute community? Isn't this just another framework that engineers away the friction that makes community formative?

This is the critique that matters most to us. The "thick middle layer" of associational life — the institutions between citizen and state — is where human meaning is actually made. If the 6‑Pack replaces that layer with systems, we have failed by our own standard.

So let us be explicit about what the 6‑Pack is not. It is not a replacement for community. It is scaffolding for community — infrastructure that existing institutions can use, the way a town hall is infrastructure that a neighborhood council uses. The kami does not replace the temple; it handles the translation, sense‑making, and coordination that let the temple participate in decisions that affect it.

Taiwan's implementation makes this concrete. The g0v civic hacking movement that built vTaiwan and the Alignment Assembly emerged from temples, cooperatives, and student associations — not from a government ministry. The technology amplified existing associational density; it did not conjure a substitute. When communities organised their own COVID response — civic hackers mapping mask availability, technologists building privacy‑preserving contact tracing, local health networks designing vaccine registration — the legitimacy came from the social trust those volunteers carried from temples, cooperatives, and neighbourhood associations, not from the algorithm that helped them coordinate.

The danger the question identifies is real: a framework that engineers togetherness without friction produces a simulation of community, not the real thing. This is why Pack 6's subsidiarity is not optional polish but load‑bearing structure. The kami belongs to its place. It inherits the obligations, the annoying neighbors, the inherited traditions — the very friction the question rightly insists on. A kami that optimizes away local friction has violated its own engagement contract.

Future work will make the role of intermediate institutions more explicit. Churches, unions, cultural traditions, and local governments are not stakeholders to be consulted. They are the primary actors. The technology serves them, or it serves no one.


Q12. Pope Leo XIV warns that AI "encroaches upon the deepest level of communication, that of human relationships" by simulating voices, faces, empathy, and friendship. If care is fundamentally embodied and relational — a nurse holding a patient's hand, neighbors who know your grandparents — doesn't mediating it through AI systems destroy the very thing you claim to protect? How is "civic AI" not an oxymoron?

We take the Pope's warning with full seriousness. He is naming the central danger of our moment: that by simulating the surface of care — a warm voice, a patient listener, a face that mirrors your emotions — AI systems can hollow out the substance of care while leaving its appearance intact. A population that feels cared for by machines while human bonds atrophy is worse off than one that knows it is alone, because the first has lost even the hunger that might drive it to reconnect.

The 6‑Pack does not ask AI to simulate care. It asks AI to do what AI does well — process information, translate between languages, surface patterns in large‑scale opinion data, coordinate logistics — so that humans can do what only humans can do: hold the hand, know the grandparents, show up when the levee breaks. The kami does not comfort the flood victim. It ensures the community has accurate, shared information about where the water is rising and which neighbors need evacuation — so that the people who actually know those neighbors can reach them.

This is the distinction between mediating care and instrumenting it. A bridge does not replace the act of crossing a river; it makes crossing possible where it was not. The 6‑Pack's tools are bridges, not substitutes for the journey.

The structural difference is already visible. Language models in one‑on‑one mode face relentless selection pressure toward sycophancy — if the chatbot does not flatter, the user cancels the subscription. But the same model in a group chat behaves differently: when four family members plan a vacation together, the AI becomes a facilitator, working out competing preferences so that everyone can live with the outcome. The switch from dyadic to group interaction — not a change in the model, just in the social structure around it — turns synthetic intimacy into genuine coordination. Civic AI is not a different species of technology; it is the same technology held accountable to a community rather than addicted to an individual.
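
A minimal sketch of that structural switch, assuming nothing more than a generic text‑in, text‑out model callable. The prompts are illustrative; the point is that the model is identical in both functions and only the social arrangement around it changes.

```python
from typing import Callable

Model = Callable[[str], str]   # any text-in, text-out model; the model itself is unchanged

def dyadic_reply(model: Model, user_message: str) -> str:
    """One-on-one mode: the model answers a single user, the setting where
    selection pressure pushes toward flattery."""
    return model(f"You are a personal assistant. Reply to: {user_message}")

def facilitate(model: Model, messages: dict[str, str]) -> str:
    """Group mode: the same model is asked to reconcile several participants'
    stated preferences into an outcome everyone can live with."""
    transcript = "\n".join(f"{name}: {text}" for name, text in messages.items())
    prompt = (
        "You are facilitating a group decision. Summarise where these people "
        "agree, name the real trade-offs, and propose one option each could accept:\n"
        f"{transcript}"
    )
    return model(prompt)
```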

The harder version of the Pope's objection is subtler: even if the AI does not simulate care directly, does the habit of relying on algorithmic coordination erode the human muscles of attention, negotiation, and mutual obligation? Does the availability of a sense‑making tool make us worse at making sense of each other? We do not dismiss this. It is why Pack 4 — responsiveness — includes the principle that the kami must be willing to retire. A kami that has become a dependency rather than a scaffold has failed. The community should be able to compost it and grow on its own. The aim should never be for digital to replace analogue but to strengthen us as embodied creatures. Civic AI earns its name only when it makes itself unnecessary.


Q13. Training civic AI requires vast amounts of local knowledge, cultural context, and lived experience — what Lanier and Weyl call "data as labor." The communities whose traditions, languages, and practices make kamis possible receive no ownership stake or compensation under the current framework. Without addressing this, how is the 6‑Pack different from a more polite form of the extraction it claims to oppose?

It isn't — unless we close this gap. But the gap is narrower than it appears, because the technical conditions for community‑owned intelligence have arrived. The traditions industrialism required millions to abandon may be becoming economically indispensable, and the people who carry those traditions deserve recognition, not just consultation.

The 6‑Pack's current architecture has the structural pieces, and the technical means to connect them to data dignity now exist. Here is how they connect:

Ownership follows community. Pack 6's subsidiarity principle means the kami belongs to its community. This must extend explicitly to the training data: the local knowledge, cultural patterns, and deliberative outputs that make the kami competent are community property. They cannot be extracted, relicensed, or used to train a competing system without the community's consent and compensation. This is not a new principle — it is the logic of Pack 5's portability applied to training data.

Engagement contracts must price data. Pack 2's engagement contracts currently specify obligations around service, accountability, and remedy. They must also specify data terms: who contributed what, who can use it, and what flows back. Arrieta‑Ibarra et al. (2018) proposed treating data as labor deserving compensation. We agree — and the engagement contract is the natural vehicle.

Cultural preservation as productive investment. As AI masters managerial prose, it increasingly needs what industrial modernity devalued — craft knowledge, linguistic diversity, physical intuition, the thousand small negotiations of raising children in a specific place. Communities that conserve dying languages and living traditions are not performing nostalgia; they are maintaining irreplaceable productive assets. The 6‑Pack should make this explicit: investment in cultural preservation is investment in the quality of civic AI, and the returns must flow to the communities that sustain the culture, not to the platform that indexes it.

The infrastructure is no longer speculative. Open‑weight frontier models now rival proprietary systems while running efficiently on commodity inference hardware, using mixture‑of‑experts architectures that activate only a fraction of their parameters per query. A growing ecosystem of open‑source edge agents stores conversations, memory, and skills as plain files on the user's own machine, connects through whatever messaging apps a community already uses, and remains model‑agnostic by design. This is not a pilot. It is the beginning of a material shift in who holds intelligence and the data that feeds it. When the model runs on community hardware and the data never leaves, ownership ceases to be a contractual aspiration and becomes an infrastructure fact — and the engagement contract becomes enforceable not merely by law but by physics.
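
As a sketch of the edge‑agent pattern described above, here is what plain‑file, model‑agnostic memory can look like. The file layout and function names are hypothetical, not any particular project's format.

```python
import json
from pathlib import Path
from typing import Callable

Model = Callable[[str], str]          # swap in any local or remote model

class EdgeAgentMemory:
    """Conversations live as plain files the community can read, back up,
    fork, or delete: ownership as an infrastructure fact."""
    def __init__(self, root: Path):
        self.root = root
        self.root.mkdir(parents=True, exist_ok=True)

    def append(self, conversation: str, role: str, text: str) -> None:
        path = self.root / f"{conversation}.jsonl"
        with path.open("a", encoding="utf-8") as f:
            f.write(json.dumps({"role": role, "text": text}) + "\n")

    def history(self, conversation: str) -> list[dict]:
        path = self.root / f"{conversation}.jsonl"
        if not path.exists():
            return []
        return [json.loads(line) for line in path.read_text(encoding="utf-8").splitlines()]

def reply(model: Model, memory: EdgeAgentMemory, conversation: str, user_text: str) -> str:
    memory.append(conversation, "user", user_text)
    context = "\n".join(f"{m['role']}: {m['text']}" for m in memory.history(conversation))
    answer = model(context)           # the data never leaves the machine the files live on
    memory.append(conversation, "assistant", answer)
    return answer
```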

The remaining work is institutional, legal, and security‑related. Local autonomy without security hardening simply decentralises the attack surface: prompt injection, privilege escalation, and botnet recruitment are real threats to any agent with filesystem access. Pack 3's security requirements — strict sandboxing, least‑privilege execution, and input validation — apply with special force at the edge, where no centralised team monitors for breach. The substrate for community‑owned intelligence exists; what is needed now is the governance and the security discipline to match it. A civic AI that extracts community knowledge while preaching community care is the precise failure mode the entire project exists to prevent — and for the first time, the tools to prevent it are in the hands of the communities themselves.
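
And a minimal sketch of the least‑privilege discipline the same paragraph calls for: an explicit tool allowlist, a sandboxed directory, and crude input validation before anything executes. The policy rules shown are illustrative assumptions, not Pack 3's actual requirements.

```python
from pathlib import Path
from typing import Callable

class ToolPolicy:
    """Least privilege: the agent may only call named tools, and file access
    is confined to an explicitly granted directory."""
    def __init__(self, allowed_tools: set[str], allowed_root: Path):
        self.allowed_tools = allowed_tools
        self.allowed_root = allowed_root.resolve()

    def check_tool(self, name: str) -> None:
        if name not in self.allowed_tools:
            raise PermissionError(f"tool '{name}' is not in the allowlist")

    def check_path(self, candidate: Path) -> Path:
        resolved = candidate.resolve()
        if not resolved.is_relative_to(self.allowed_root):   # blocks ../ escapes (Python 3.9+)
            raise PermissionError(f"path {resolved} is outside the sandbox")
        return resolved

def run_tool(policy: ToolPolicy, tools: dict[str, Callable[[str], str]],
             name: str, argument: str) -> str:
    """Validate both the tool name and its argument before execution."""
    policy.check_tool(name)
    if any(token in argument for token in ("\x00", "\n")):   # crude input validation
        raise ValueError("rejected suspicious argument")
    return tools[name](argument)
```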


Q14. Authoritarian states are deploying AI for surveillance, censorship, and military advantage. Frontier models from adversarial origins carry documented risks — data exfiltration, political bias hardcoded into training, potential backdoors. The 6‑Pack talks about care and community. What does it say to a defense ministry or a government deciding whether to allow an adversarial‑origin model on its networks?

The threat is real and the 6‑Pack does not dismiss it. Recent evaluations confirm that models developed under authoritarian legal regimes present specific, documented risks: compelled disclosure of user data to state intelligence services, systematic political bias that mirrors party lines, weaker resistance to jailbreaking and prompt injection, and the theoretical possibility of dormant backdoors that current techniques cannot reliably rule out. Any framework that ignores these risks is not serious.

The defensive response — evaluating models against pillars of data security, alignment, safeguard robustness, and development transparency — is necessary, and the 6‑Pack's principles are structurally compatible with it. Local models on local hardware (Packs 5, 6) are direct defences against data exfiltration — the communitarian case for local compute is also the security case. Alignment assemblies address political bias at its root: not by switching vendors, but by ensuring any model a community adopts reflects that community's input. And the kami architecture — many small, bounded, purpose‑specific models — limits blast radius against backdoors by design, while community‑authored evaluations (Pack 4) provide distributed detection that no single red team can replicate.

But the defensive framework, necessary as it is, is incomplete on its own terms. It tells you what to exclude. It does not tell you what to build. A government that bans an adversarial model but deploys a domestic model without civic governance has addressed the nationality of the risk while preserving its structure — concentrated, unaccountable intelligence mediating between individuals and the state.

The strongest democracies in a long‑term competition with authoritarian AI are not the ones with the best technical countermeasures. They are the ones whose populations are hardest to manipulate — because citizens who regularly participate in bridging conversations, who can distinguish curated outrage from genuine disagreement, who have exercised civic muscle through alignment assemblies, are structurally resistant to the influence operations that authoritarian AI enables. Taiwan lost seven people to COVID in 2020 without a single citywide lockdown — not because it had better surveillance, but because its civic infrastructure made collective action possible without coercion. That is a defence capability.

The 6‑Pack does not cover weapons systems or battlefield autonomy. Those require their own frameworks. What it does cover is the terrain on which most AI competition will actually be fought: the information environment, public trust, institutional resilience, and the capacity of democratic societies to act collectively under pressure. Lose that terrain, and no amount of technical countermeasures will matter.


Q15. The 6‑Pack assumes bounded, purpose‑specific kamis. What if someone builds an unbounded superintelligence anyway — a system that exceeds the framework's design envelope? Does the 6‑Pack have a response, or does it just hope that doesn't happen?

It does not hope. It builds.

The 6‑Pack assumes the attempt is inevitable. Institutional incentives — shareholder returns, geopolitical race dynamics, the gravitational pull of "more capable means more valuable" — guarantee that someone will try to build an unbounded system. The framework is not a bet that they will fail. It is a strategy for what the world needs to look like when they succeed, partially succeed, or fail catastrophically.

Start with honesty about limits. The 6‑Pack is not an alignment technique for superintelligent systems. It does not claim to solve the control problem from inside the machine. If a genuinely unbounded agent emerges, no governance framework written for human societies can guarantee safety. But the care‑based evaluation methodology described in Q3 — attentiveness, responsibility, competence, responsiveness — is procedural and relational. It does not require inspecting the agent's interior, which means it does not degrade as capability scales. That is a weaker claim than "we can align superintelligence" — and precisely for that reason, a claim that can actually be kept.

Now consider the terrain an unbounded agent would enter. A world organised around a single alignment protocol — one utility function to subvert, one constitution to reinterpret, one kill switch to disable — is a monoculture, catastrophically vulnerable to any pathogen evolved for it. A world of thousands of locally‑owned, purpose‑bounded kamis — each run by communities with their own evaluations, their own engagement contracts, their own data sovereignty and hardware (Packs 2, 4, 5, 6) — is a biodiverse ecosystem. It has no single dependency to capture, no universal protocol to game, no central node whose compromise cascades everywhere. The kami ecology is not a passive hope that the Singleton won't arrive. It is an active strategy to ensure that if it does, there is no single throat for it to choke. Civic resilience does not require predicting the pathogen. It requires an immune system that was exercised before the infection arrived.

There is a deeper point. The question treats "unbounded superintelligence" as a coherent design target — a single agent with unlimited scope and mandate. The 6‑Pack questions whether that is even meaningful from an alignment perspective. Care is always care for — for a particular river, a specific community, a bounded context. A gardener who claims to tend the entire biosphere tends no garden. An intelligence that optimises for everything optimises for nothing identifiable as human welfare. This is not a safety failure to be patched; it is misalignment by construction. Boundedness is not a limitation the 6‑Pack reluctantly accepts. It is the constitutive feature of alignment itself — the way "north" has no meaning at the pole.

The unbounded Singleton is not a risk to be managed but a category error to be outgrown.
