
Chapter 7: Frequently Asked Questions

Question: Right now the AI market is locked in an arms race, with companies scrambling to build AIs that will bring commercial profit. That can create nasty incentives: for example, an AI working for a tax-software company could help it lobby the government to keep tax filing difficult, and much worse things can be imagined as well. If this continues, the whole vision of kami and so on will simply fail to materialize. What is to be done, in your opinion?

Answer: On bending the market away from arms-race incentives: here are some levers, inspired by Taiwan's tax-filing reform, that have worked to shift returns from lock-in to civic care:

  1. Interoperability: Make “data portability” the rule. Mandate fair protocol‑level interop so users and complements can exit without losing their networks. Platforms must compete on quality of care, not captivity.
  2. Public options: Offer simple public options (and shared research compute) so there’s always a baseline service that is easy, safe, and non‑extractive. Private vendors must beat it on care, not on lock‑in.
  3. Provenance for paid reach: For ads and mass reach in political/financial domains, require verifiable sponsorship and durable disclosure (see the sketch after this list). Preserve anonymity for ordinary speech via meronymity.
  4. Mission‑locked governance: Through procurement rules, ensure steward‑ownership/benefit structures and board‑level safety duties so “civic care” is a fiduciary obligation, not a marketing slogan.
  5. Alignment assemblies: Institutionalize alignment assemblies and localized evals; pre‑commit vendors to adopt the outcomes or explain deviations. Federate trust and safety so threat intelligence flows without central chokepoints.
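
To make the third lever concrete, here is a minimal sketch of a durable sponsorship-disclosure record. Everything in it (the field names, the registrar, the HMAC attestation) is an illustrative assumption, not an existing standard; a real registry would publish asymmetric signatures so anyone could verify a record without holding a shared key.

```python
import hashlib
import hmac
import json
from dataclasses import dataclass, asdict

# Hypothetical registrar key; in practice this would be an asymmetric
# keypair held by an independent disclosure registry, not a shared secret.
REGISTRAR_KEY = b"example-registrar-key"

@dataclass
class SponsorshipDisclosure:
    """A durable disclosure record attached to paid political/financial reach."""
    sponsor_id: str        # verified legal identity of the payer
    content_hash: str      # hash of the creative being amplified
    audience_scope: str    # e.g. "political", "financial"
    spend_usd: float       # disclosed spend behind this reach

def attest(record: SponsorshipDisclosure) -> str:
    """Return a registrar attestation binding all record fields together."""
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    return hmac.new(REGISTRAR_KEY, payload, hashlib.sha256).hexdigest()

def verify(record: SponsorshipDisclosure, attestation: str) -> bool:
    """Check that the disclosure was not altered after registration."""
    return hmac.compare_digest(attest(record), attestation)

ad = SponsorshipDisclosure(
    sponsor_id="registered-entity-123",
    content_hash=hashlib.sha256(b"ad creative bytes").hexdigest(),
    audience_scope="political",
    spend_usd=5000.0,
)
tag = attest(ad)
assert verify(ad, tag)  # the disclosure travels with the ad and stays checkable
```

The design point is durability: because the attestation binds sponsor, content, and spend together, the disclosure cannot be quietly detached from the reach it paid for.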

Question: Ambitious goals we point AI at (“cure cancer”) are always consequentialist in nature, which inevitably leads to unforeseen risks when pursued at superhuman speed. How could civic care help?

Answer: There are powerful examples of how civic care could help. One of them is Bolinas, a birthplace of integrative cancer care, which centers healing on communities and was catalyzed by people experiencing cancer. This care ethic prioritizes virtues like attentiveness and responsiveness to relational health over outcome optimization.

Asking a superintelligence to ‘solve cancer’ in one fell swoop — regardless of collateral disruptions to human relationships, ecosystems, or agency — directly contravenes this, as it reduces care to a terminal goal rather than an ongoing, interdependent process.

In a d/acc future, one tends to the research ecosystem so that progress emerges through horizontal collaboration: for example, one kami for protein‑folding simulation, another for cross‑lab knowledge sharing; none has the unbounded objective “cure cancer.” We still pursue cures, but with each kami holding a non‑fungible purpose. The scope, budget, and latency caps inherent in this configuration mean that capability gains don’t translate into open‑ended optimization.
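
As a minimal sketch of what such caps could look like in code (the `KamiCharter` and `BoundedKami` names, and the whole runtime around them, are hypothetical illustrations rather than an existing system): each kami declares one narrow purpose plus hard budget and latency ceilings, and refuses any task outside that envelope.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KamiCharter:
    """Declares a kami's non-fungible purpose and its hard resource envelope."""
    scope: str              # the one task domain this kami serves
    budget_usd: float       # total spend cap; no self-extension
    latency_cap_s: float    # max wall-clock seconds per request

class BoundedKami:
    def __init__(self, charter: KamiCharter):
        self.charter = charter
        self.spent = 0.0

    def accept(self, task_scope: str, est_cost: float, est_latency: float) -> bool:
        """Refuse anything outside scope or beyond the caps, by construction."""
        if task_scope != self.charter.scope:
            return False  # no objective drift toward "cure cancer" writ large
        if self.spent + est_cost > self.charter.budget_usd:
            return False  # budget cap: gains can't buy open-ended search
        if est_latency > self.charter.latency_cap_s:
            return False  # latency cap: no long-horizon autonomous optimization
        self.spent += est_cost
        return True

folding = BoundedKami(KamiCharter("protein-folding-simulation", 10_000.0, 60.0))
assert folding.accept("protein-folding-simulation", 25.0, 5.0)
assert not folding.accept("drug-market-strategy", 25.0, 5.0)  # out of scope
```

The point is structural: the cap lives in the charter, not in the model’s goodwill, so a more capable kami is still a bounded one.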

Question: Are you proposing Civic AI as a panacea to social division online?

Answer: We are not proposing that AI is a panacea, or that it can bridge all social divides online or in societies and communities more generally; those divides are of course often driven by complex social, political, economic, and historical factors. But past experience has taught us that AI, together with the necessary human mindset and drive, can enable systems built on common ground between different, often opposing (even polarised) voices. Civic AI is a relational theory, inspired by those experiences, offering a vision of how to integrate AI into lives and communities for relational health.

Question: There are many communities and people around the world who do not have access to technology, or who do not know how to use AI and engage with it. How do we make sure that such communities and individuals are also included and see value in being part of the 6-packs of care process?

Answer: There are three important points here. The first is ‘AI competency’ and the second is ‘access to AI’, both of which are foundational for people to be able to engage in civic AI at all. The third is about the way AI is developed: often behind closed doors, in computer labs mostly inaccessible to the broader public, and driven by philosophical approaches that do not locate the work of computer scientists in a wider web of relationships with other people and communities. AI has been quite an exclusive, elitist and ‘unrelational’ endeavour.

The three points are deeply interconnected, and civic AI can help make those connections. Examples of civic AI already working in practice can shed light on how it builds relational health in local contexts, making the framework less abstract and more practical. The approach is based on collaboration and ‘co-production’, in which all voices matter equally; it thereby challenges exclusivity and any ‘hierarchy’ in whose voices matter.

For people who do not use technology at all, or who do not feel able to engage with AI, there are great examples of activities and projects around the world that address these issues, again at a very local, community level. For example, in London, UK, some local community centres, such as libraries, host intergenerational learning hubs where people can come and learn about technology and AI. These hubs are mostly aimed at older people, who are then taught and guided by younger people in how to use various systems. This is a nice example of how AI and technology can bring people together, working towards relational health.

Comment: Your idea of ‘civic AI’, finding common ground, and building relational health is very hopeful. But you are working from various assumptions, importantly that people trust technology and AI and that they want to be ‘civic’ with each other. There are many communities suffering from inequality and conflict, in which people experience serious poverty, violence, and a lack of trust in government, in technology, and in each other. It is very difficult to build relational health in such contexts with technology that people see as a factor that, if anything, contributes to their suffering.

Answer: You’re absolutely right. Taiwan’s digital democracy didn’t emerge from trust; it emerged from profound distrust after decades of authoritarian rule. We built these systems precisely because people didn’t trust government or each other.

The key insight: start with the smallest viable bridges. It might mean something as modest as agreeing on basic facts about local water quality, or coordinating disaster response despite political differences.

Most importantly, communities own their own infrastructure. We learned from bitter experience that parachuting in with technology “solutions” deepens wounds. Instead, we need local facilitators who understand the specific traumas and conflicts. The technology becomes theirs to modify, fork, or compost.

The goal isn’t to make people “want to be civic”—it’s to create tools that remain useful even amid conflict. Over time, these small functional bridges can create space for larger ones — not harmony, but perhaps a pragmatic coexistence.

Question: How do we ensure that people are not pushing their own agendas under the guise of ‘civic AI’ and relational health?

Answer: This is precisely why we emphasize transparent mechanisms over good intentions. When someone proposes a bridging note or civic intervention, they must show their work. Not just “this promotes relational health” but specifically: which communities were consulted, what trade-offs were considered, whose voices might be missing.

Indeed, if a “civic” solution only works when your allies run it, it’s not civic—it’s partisan. True civic infrastructure remains robust even when operated by your opponents.

Exit rights and portability matter: The ultimate check on agenda-pushing is the ability to leave. When data and relationships are portable, no one can capture a community under the banner of “civic good.” If someone’s version of relational health feels coercive, communities can fork the tools and rebuild elsewhere.
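
To make “portable” concrete, here is a minimal sketch assuming a hypothetical plain-JSON export format (the `civic-export/0.1` tag and all field names are illustrative, not a standard): a community’s membership, relationships, and content are written to a self-describing file that any forked tool can re-import without the original operator’s permission.

```python
import json
from pathlib import Path

# Illustrative export: a community's graph and posts in plain JSON, so a
# forked tool can re-import them without the original operator's consent.
def export_community(members, follows, posts, dest: Path) -> None:
    dest.write_text(json.dumps({
        "format": "civic-export/0.1",   # hypothetical self-describing version tag
        "members": members,             # e.g. [{"id": "alice"}, ...]
        "follows": follows,             # e.g. [["alice", "bob"], ...]
        "posts": posts,                 # e.g. [{"author": "...", "text": "..."}]
    }, indent=2))

def import_community(src: Path) -> dict:
    data = json.loads(src.read_text())
    assert data["format"] == "civic-export/0.1"  # fail loudly on unknown formats
    return data

path = Path("community_export.json")
export_community(
    members=[{"id": "alice"}, {"id": "bob"}],
    follows=[["alice", "bob"]],
    posts=[{"author": "alice", "text": "water quality notes"}],
    dest=path,
)
assert import_community(path)["members"][0]["id"] == "alice"
```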

Finally, we track bridging metrics, not engagement. Do participants report finding common ground with those they previously disagreed with? Do polarization measures decrease? These are harder to game than likes or shares.
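
How might a bridging metric be harder to game than engagement? Here is a toy sketch, loosely in the spirit of bridging-based ranking in systems such as Polis; the exact formula here (minimum approval across opinion clusters) is an illustrative assumption, not any deployed system’s algorithm.

```python
# A toy bridging score: a statement's score is its *minimum* approval rate
# across opinion clusters, so only cross-cutting agreement ranks highly.
# Formula is illustrative, loosely inspired by bridging-based ranking.

def bridging_score(votes_by_cluster: dict[str, list[int]]) -> float:
    """votes_by_cluster maps cluster name -> list of votes (1 agree, 0 disagree)."""
    rates = [sum(v) / len(v) for v in votes_by_cluster.values() if v]
    return min(rates) if rates else 0.0

# A statement popular with one side only scores low...
partisan = {"cluster_a": [1, 1, 1, 1], "cluster_b": [0, 0, 1, 0]}
# ...while one that earns agreement across both ranks high.
bridging = {"cluster_a": [1, 1, 0, 1], "cluster_b": [1, 0, 1, 1]}

assert bridging_score(partisan) == 0.25
assert bridging_score(bridging) == 0.75
```

Because the score is gated by the least-agreeing cluster, flooding it with one faction’s likes does nothing; only genuinely cross-cutting statements rise.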
