่ฏๆ–‡

Podcast: The 6-Pack of Care

September 5, 2025

Originally recorded for Accelerating AI Ethics, University of Oxford Institute for Ethics in AI.

Audrey Tang and Caroline Green introduce the 6-Pack of Care โ€” live, in the week the site launched. Fifty-eight minutes covering all six packs, from attentiveness to symbiosis, with grounding in social care practice, co-production, the kami metaphor, and what to do after listening.

๐ŸŽ™

Accelerating AI Ethics ยท 5 September 2025 ยท 58 min


Caroline Green: Hello and welcome back to Accelerating AI Ethics. I'm Dr Caroline Green from the Institute for Ethics in AI. In August, we had an extraordinary conversation with Ambassador Audrey Tang about Plurality, a vision for AI that augments human cooperation rather than replacing it. The response to that episode highlighted a clear need for hopeful, practical alternatives to the dominant narratives of AI conflict or singularity.

Today, we're moving from that broad vision to the specific architecture. Audrey and I have been collaborating here at Oxford on a new book that outlines exactly how we build this future. It's titled "The 6-Pack of Care: Exercising Civic Care in AI Governance." Audrey, welcome back.

Audrey Tang: So happy to be back, and what a week working with you on this new microsite. It's up now. When you're hearing this, please check out "6", as in the number "6", 6pack.care and read our manifesto and the entire microsite.

Caroline Green: It's been such a pleasure working with you on this, and this episode is all about introducing the 6-Pack of Care and the Civic AI approach to the world.

Audrey Tang: Yeah, indeed. I think one response I got from our previous podcast was that people really loved the energy of Plurality, which is about turning conflict into co-creation or co-production. However, people also want a very concrete, very actionable framework that they can put into their AI systems or evaluate the AI system they're using today, to test whether they're actually paying respect to humanity as a whole, and so I think that is why we really went very, very nitty-gritty engineering when it comes to applied ethics here with 6-Pack of Care.

Caroline Green: Absolutely, I agree with you. And that's also what the Accelerator Fellowship Programme at the Institute is all about. It's about translating a philosophy into something very applied, that people can pick up and use in their own lives.

So, let's go to this central concept of civic care, which this whole approach and the 6-Pack of Care is built on. Tell me about this specific framing, and specifically care โ€” it's often a concept seen as very interpersonal between humans caring for each other. But how does it apply to the governance of powerful autonomous systems?

Audrey Tang: Yeah, definitely. When we talk about the ability for machines to achieve certain outcomes, most of those machine learning algorithms just maximise for that outcome. For example, if you teach a social media algorithm to keep people engaged, that is to say, glued to it, then as we mentioned in the previous podcast, most of those algorithms just show you the most conflicting, the most extreme, the most hateful content because it triggers the part of the brain that wants to keep arguing with it and therefore people become glued to their screen.

On the other hand, while maximising that outcome, it really puts in a lot of negative externalities, polluting the relational situation between people. What we used to do is follow the same people on social media, on blogs, and so on โ€” we can have a shared common ground to talk about. But now, when optimised for engagement, everybody lives in their own world and the social fabric became so strip-mined that there's no common ground to talk about.

So how do we describe the class of AI algorithm that does not just optimise for a particular outcome, but rather for something that more resembles healthiness in a society? I looked around in the various different ethics traditions and found that there is this tradition called "virtue ethics" that wants to imbue in an actor โ€” a human or machine โ€” some attributes, some properties. And care is the property that makes us listen at the speed of the actual needs of the people getting affected by those algorithms. So instead of the designer deciding everything, care says to us that it needs to be from the needs of the people affected. And "civic care" means that it's not just the individual needs, but also the needs of the healthy relations between people.

Caroline Green: Okay, so this is really interesting to me; as I told you in the last episode, my research background is in social care, which refers to a practice of supporting people with illness or living with disability or so on with activities of daily living. But it has got a lot of aspects to it**: it's really supporting people practically, but also emotionally on so many different levels in their lives. And within social care, the field here in the UK, people in that community**: people using care services, people offering care and so on**: they want to make the policy landscape much better for everybody, and quite recently, this new concept of co-production has emerged, and it reminds me of what you just said.

So, co-production really is a mindset, a way of working together where different actors come together, find a common ground, and then co-create and co-produce solutions. And it was born from the fact that often there are some groups โ€” specifically people using care services and offering care โ€” who are less powerful; their voices are not heard as much as more powerful groups, which may be care providers, policy makers, and so on. So, co-production is all about bringing people together, making them listen to each other, and then create solutions that work for each other.

So, it's a mindset that challenges power structures, then puts it into practice to create something that is not only really fitting for what people need and want, but it also creates a sense of community, that relational health. I'm thinking about how from my context it bridges to the civic care approach in quite a practical way.

Audrey Tang: Definitely. And I think the word "civic" here means that we treat this ability for people who are affected to involve themselves in the decision-making process as the main outcome, instead of maximising like profit, engagement, or things like that. If there's something we must optimise, then it must be this civic muscle, this ability for people to act in solidarity, to co-determine, co-produce the future, and also the service from the AI machines and vendors and so on, that is part of this future, but does not dictate the future.

Caroline Green: Wonderful. So yeah, going back to that example of social care**: social care and AI, this is very much a reality already. People are using AI systems in their everyday lives, whether it's people who are needing care and support, perhaps in their own homes, or it may be care providers in care homes, and we're seeing these AI systems there.

But I think there is a concern at the moment among various people; we've seen that in the work that we have done in social care, that these AI systems don't really fit people's realities, that they don't necessarily meet their needs. And there is that need and wish to hold on to the relationality of care. People don't want robots to take over care, but they want AI and technology to help them to live better lives, to help them care better.

So, I can see that the civic care approach here could be really helpful in making sure that we're leveraging the power of AI for the relationality of care, to help people in their care situation, but to also really build systems that actually fit people's needs.

Audrey Tang: Yeah, I mean, ethics to me is about telling a good system from a bad system, a morally right system from a morally wrong system. But many ethical traditions**: those starting with consequences, with outcomes**: start to produce a lot of side effects when we're facing a lot of power asymmetry. And power asymmetry is kind of the defining characteristic when we interact with AI systems.

Nowadays, people who want to talk to their therapist or their coach find that AI chatbots can produce 10,000 times more text at the same time unit at a fraction of the cost. And they all sound very convincing. But is it really care that it's providing? Or is it simply reliance and sycophancy? It's then very difficult to tell if you're just measuring by the outcome alone โ€” how many words of support can they generate, how do people self-report as feeling better after seeing the chatbots' conversations?

So then we need something else. At a speed of thousands of times the difference between a machine actor and a human actor, traditional ethics sometimes fail. But there's the framework of care which assumes this asymmetry from the start, because Joan Tronto โ€” who has worked on "civic care" or "caring with" โ€” the idea is that a good gardener must till to the tune of the garden, at the speed of the garden. And that is the fundamental of the 6-Pack of Care: for the robot that takes care of people's relationship with each other to work at the speed of those relations.

Caroline Green: Okay. One thing I'd really like to explore with you more is how the "civic care approach" in AI ethics challenges the way that we are looking at AI and the role of AI in our lives at the moment. And I've been fascinated working with you in the past week, just seeing how you integrate AI into your own life.

And that idea of co-presence โ€” that AI now has this seat at the table โ€” I find really interesting. You challenge the norms of a physical presence in your own work. Can you tell us about your experience at the UN Internet Governance Forum in Geneva back in 2017, where you participated via a telepresence robot?

Audrey Tang: Yeah, definitely. Back then in 2017, I was invited to talk in UN Geneva in the Internet Governance Forum about empowering people in small islands, in landlocked countries in which internet access isn't for granted. So, this is an area where Taiwan has extensive experience, but because Taiwan is not currently a member at the UN, at the door of the UN Geneva building, if you show a passport from Taiwan, you cannot get in. This is a classic case of something about us without us. And so, it just turns out that Double Robotics, which is a company that makes those machine doubles**: their machines do not need a passport to get into the door.

I was in Taipei and just remote controlling that machine. When somebody raised their hands, I could turn the machine to face at them. And it became the first time since 1971 where a representative from Taiwan and a representative from Beijing spoke on the record in the same UN meeting together.

So that created a diplomatic precedent that we then keep referring to, and that enabled a lot more participation. So, I think co-presence is about looking at who might be missing from the table, systematically, structurally, because of some weird political or geopolitical reasons, but then finding out ways for proxies to re-present their actual needs and their actual ideas from the place of need instead of just somebody else to represent them. This re-presence, this co-presence, is very important for co-production.

And so today in my life, I use AI agents all the time. In fact, as we're talking, we have this AI agent running on a local computer โ€” this is called Daylight Computer. But I think the way I think about this is that they're all assistive to the relational health. This has no camera, so I would not take pictures. This doesn't even have colour, so the reality around me is always more vivid than this screen. And it just sits in the background in a very calm way, but if I do need some reminder, looking things up, and so on, then it also has the embodied context of what I'm hearing and what I'm thinking so that it can offer me real-time assistance.

Caroline Green: I definitely need a robot like that. The "civic care approach," in relation to presence, is all about challenging the idea that it's too difficult to raise people's voices who are maybe not heard or who are hidden. And there are so many ways that we can do that these days to really strengthen that relational health.

Audrey Tang: Yeah, definitely. When I was Minister for Digital Affairs in Taiwan, I often say that I take all the different sides. And so, if there is a part in a multi-stakeholder conversation that I do not understand**: either because I don't have the lived experience, or I don't see things the same way, or simply because I don't speak that language**: it's my fault. It's not the fault of those stakeholders. And so, I found assistive intelligence, or AI use for civic care, really helpful in that, first of all, it can help topicalise and summarise what we have talked about, and it can also translate across cultures so that even in a place where I really don't have the lived experience, nowadays language models can find metaphors that help me to understand.

So, re-telling their story from something that I have personally experienced โ€” and now people have applied this method to what's called the sense-making tool, so that you can have like 10 people sitting around the table talking about their lived experience. And almost in real time, it can make sense of that conversation and reflect back to the people so that people who do not have the same lived experience can nevertheless understand what they're even talking about. This way, it is not to replace the people-to-people connection, but rather augmenting our civic muscle to make sure that we can understand and be more empathetic across longer cultural distances.

Caroline Green: Great. Now, let's dive into the book's framework, the 6-Pack of Care: Attentiveness, Responsibility, Competence, Responsiveness, Solidarity, Symbiosis.

Let's start with the foundation, Attentiveness. You write, "Before we optimise anything, we choose what to notice." Why is this the crucial first step?

Audrey Tang: Yeah, I think of attentive AI as an AI system that is curious about what the actual needs are. Instead of starting by optimising for certain things, like a score of engagement or something, it first asks, what is the local context? What are the marginalised groups? What are their lives like? So instead of prescribing anything, it just engages community using the methods we talked about**: the bridging maps, the uncommon ground, the idea of sense-making**: all of them are about creating a group selfie rather than just a statistical average.

And so, in the context of online social media, for example, there's now more and more social media trying out community notes, which instead of just showing the most viral post that stirred up outrage, it's now crowdsourcing what is the most helpful local context to attach to a viral post. And the most viral posts are then accompanied with something that's called the "uncommon ground" โ€” where people who usually disagree manage to agree that some notes are helpful. And this kind of bridging map is a much more accurate group selfie because it shows what differences there are in different groups, what splits us, but also what unifies us despite the splitting.

And so attentiveness is to both the unique needs of various different sub-groups, but also the overarching bridges that those groups may not themselves be aware of โ€” but after attending to those common needs, they all become aware: these are the common needs that, if the AI system goes and overcomes those issues with us together, it will not infringe on the important things that we also consider sacred and an AI system should not disrupt, like human dignity.

Caroline Green: So, when we do listen, but then there's no action after that, you say that is theatre, right? Would you say once a need is noticed, the next step**: Responsibility**: is how we move from recognising a problem to making a verifiable commitment?

Audrey Tang: Indeed. So nowadays, when people are developing AI, many frontier labs have this thing called "model specification" or "model spec." And such a spec is a way to make a public pledge to the relevant community about what this AI system is trying to do**: and also what it pledges not to do.

And a few years ago, we introduced this idea of alignment assemblies in which a statistically representative slice of the public come together, and in groups, they look at each other's suggestions for the AI system code of conduct, integrating those into their own lived experience and upvoting or downvoting them. After a few rounds of deliberation, they settle on these kinds of bridges of what AI systems should and should not do in their community.

If we can bring those attentive results into the pledge that the vendors make, then those vendors can be called responsible AI vendors because they take responsibility on fulfilling the parts of the alignment assembly co-production that they can do โ€” and also draw a very clear boundary saying, we're just fulfilling this much, and beyond this is not our scope.

Caroline Green: Going back to when I listen to you explain the 6-Pack of Care, I always go back to the work that I mentioned earlier that I've been doing in the past two years, trying to find out through a process of co-production what the responsible use of AI in social care means.

What we have done in those past two years is bring people from the care community โ€” people drawing on care, people providing care, care providers, policy makers โ€” around the table to get that common ground, to talk to each other, to make sense. And within that process, the responsibility was about the various groups of people really finding themselves within that landscape of responsibility, and what it means for them to be responsible.

We had tech providers in this co-production as well โ€” people who are building AI systems specifically for caregiving. And they said, well, we really want to show people that we are responsible. So based on our definition of the responsible use of AI in social care, which is all about supporting the values of care, not undermining or harming values of care and human rights, they said, yes, that's what we want to do, but we actually need to think what that means for us in practice. So they created a pledge. And now tech providers are signing up to that pledge, and not only signing up to it โ€” they also then need to be accountable to it.

Audrey Tang: Yeah, definitely. I think what you're describing is exactly the right process in which the pledge includes not just the scope of commitment, but also real timelines**: we will deliver this by this time**: the guardrails, remedies, accountability mechanisms, and hopefully also independent oversight. And the civic care mechanism I just described is more about automated translation. So it's not just that people come together and have a conversation, but rather in real time they see a bridging map from the attentive AI that reflects it back to them without having to wait for a few months for human sense-makers to code this up.

And once this is done, then we translate those very legible specifications into something that AI systems can already comprehend. So, instead of human programmers programming each of those pledges into their systems logic, it is becoming a curriculum, so that when AI systems train on those data and materials, the data that goes in has already been filtered by the lens of those pledges โ€” so that the AI, once trained, intuitively understands what is moral for this group of people.

Caroline Green: So, there's one question that I am burning to ask you. How could we re-envision the co-production process on AI in social care, which has happened mostly without AI**: how could we do it now using AI?

Audrey Tang: Yeah, definitely. So, one very quick introduction we can make is**: because you do have the transcript of all those deliberative polling sequences**: instead of just taking the quantitative data of the final poll, we can also take the qualitative data of that conversation and feed it into the sense-maker, which is open source. The sense-maker would then be able to annotate the votes that people give at the end of the deliberation and nuance those decisions into, for example, people agree that these are important principles, but people start saying: when deploying, you need to watch out for this, watch out for that.

So you generate a much richer, much more thick data set when it goes to those vendors. They still sign on to the same pledge that was voted in from the deliberative polling. But then when they're training their AI systems, those systems can additionally learn from the thick context provided by the deliberative process itself. I'd love to introduce the use of sense-making and the bridging maps โ€” and more importantly, reflect that back to the people so that people can maybe chat to a care assistant and ask what was agreed in the previous deliberation, and now I have this real situation, how do you compare against that?

Caroline Green: Okay, yeah, I'm really excited about the next step using the Stanford AI Deliberation Platform in this process.

Let's go on to the next pack, the third pack called Competence. Here it's really about engineering meeting ethics. Tell us more about competence.

Audrey Tang: Competence to me means empowering the people receiving the care, not just addressing their needs, but also making sure that their relations also flourish, also thrive. So that means, for example, that if you want to serve the needs of people when learning, it is not about introducing an AI that can do all their homework for them. A competent teacher is not just about solving all the homework for the students, but rather working at the speed of the student and making sure that the student is curious about these disciplines, these different fields, and also introducing the student to more communities of knowledge, of practice, introducing students to more peers for peer-based learning, more purposes for purpose-based learning, and so on.

A competent teacher, or a gardener, is a very good metaphor for a competent caregiver. When AI is in this situation, it's very important for us to see whether the AI is taking away the existing human relations because people come to depend on AI more, or whether it is encouraging the human to grow more other human-to-human relations.

Caroline Green: That makes me think of another example that I have recently come across**: systems that are supposed to support people with disability or people with frailty, or living with stages of dementia, to support them in their own homes so that they can stay more independent for longer. What I've seen here is that the intentions behind these systems are great, and generally people are very excited because people want to keep on living in their own homes as independently as possible for as long as possible. But concretely**: I'm thinking of a case where somebody on one day asked their AI for help remembering something, and the system helped them very well. But the other day, that same person fell, and they had someone there to support them. The supporter tried to get some help through the AI: what can I do to help someone up? And the AI told them, oh, you should call an ambulance. So they called the ambulance, but there was no ambulance to come because we've got a shortage of emergency services, especially during winter months.

So here, one day the system really helped the person, but on another day, in another context, it wasn't very helpful. And people are concerned that AI might mean that they will be cut off from important relationships they have with caregivers coming into their homes โ€” because now the policy is, oh, we only need a caregiver to come in once a day instead of twice a day because there's an AI system supporting. So, how important is it to really think about how these AI systems work for people in their contexts and their situations?

Audrey Tang: Yeah, definitely. And I think that is the fundamental difference from the previous age of reinforcement learning with human feedback in which ChatGPT was born, because they ranked the various possible outcomes by asking individuals to rank this response as better than that response. And once they worked with a lot of people to label good and bad responses, it produced something that would consistently get good scores from human labellers**: but it also made ChatGPT extremely flattering, extremely sycophantic, because most people, when they are in this kind of individual dyadic relationship, they optimise for this short-term satisfaction. And so, if something flatters me, of course I'm going to let it pass. But if something really checks my understanding, really pushes back against my hallucination, maybe I'll give it a thumbs down. Right? And so an AI that is trained from individual human feedback may not prioritise the relational health of that human embedded in a community.

So now we're seeing more advances in multi-agent settings, where the agent is learning not from individual feedback, but from community feedback โ€” and so the situation you describe, where a person is not just alone in their home, but rather has a community of helpers, prioritising these relationships of culture, family, and so on as the main unit of analysis, and then doing reinforcement learning to get the better score, not from a thumbs up or down, but from a mutual recognition and mutual assurance from those communities. This is a very active area of multi-agent research.

Caroline Green: Yeah, and I can also see here how we then need to adopt a mindset of building systems that is about people's needs and learning from their needs and wishes rather than based on assumptions**: for example, of particular groups. So, there's a growing community now interested in building systems specifically for older people. But we don't just have a group of older people where everybody is the same. If we're wanting to build systems that help older people in their situations, we should think more about those localised, individual needs, family health, relational health, and so on.

Audrey Tang: That's exactly right. Like if I am a very old person, but I do not have this particular frailty, for an AI system to simply assume that I have and provide all sorts of help is really taking away my agency**: and it is a kind of discrimination, really. And so, to overcome those very simple, very thin labels and stereotypes, it's exactly why we need this communal AI alignment, because that's the only way to provide the thick context from that particular community.

Caroline Green: Let's move on to the next pack. Once we know that a system isn't perfect**: and no system is perfect**: it needs to be changed. It needs to adjust. How do we design systems that are correctable and adaptable at the speed of society?

Audrey Tang: I think the most important thing here is to empower people closest to the pain. That is to say, people who are suffering the harm should be the people who define the harm. So instead of just the designers trying to figure out how to make sure that it does no harm to everyone**: it is simply not possible; all the unforeseen consequences will harm someone at some point**: what we want is that people do not mitigate against those harms thinking, "this is just how tech works."

The idea here is that we need to make something like a Wikipedia for evaluating those harms. So the Collective Intelligence Project, which I work with, has this website called weval.org, where communities can author localised benchmarks for harm. So there you will find, for example, people in Sri Lanka crowdsourcing what is important for them in a local context so that the chatbot can understand what's important to that community. There are communities that are suffering from psychosis, mental health issues, and they want to make sure that the AI chatbots do not exacerbate those issues.

And so this then closes the loop โ€” because once those individual harms are foregrounded and made into something that everybody can see, then we can continuously run those benchmarks against the frontier models so that when they release a new version, it doesn't only compete on the scoreboard of, for example, solving the international Olympiad on mathematics, but also makes sure that it doesn't regress on those benchmarks where it really did cause people harm.

Caroline Green: Okay, yeah**: that's actually our next step in the responsible AI and social care project, to create a database where people can report their experiences, be it harms but also what they find helpful. So I think we need a Weval here.

Audrey Tang: Yeah, we do need a Weval here so that when the next version of those technologies comes, you can run in a simulated environment**: a simulated person in exactly the same situation**: and then evaluate how the next version of the robots will respond to it, making sure they continuously run this test so that they don't cause more harm once they release the second version out to the wild.

Caroline Green: Okay, so let's move on to the last two packs, which really look at the ecosystem that we need for the civic care approach. Solidarity is the 5th pack, and it's about making cooperation the path of least resistance. How do we build this infrastructure to counter the trend of monopolistic power?

Audrey Tang: Yeah, so the first four packs form a loop, right? You become attentive to the group's needs. You respond to those needs. You're competent, so you deliver and fulfil some of those needs. And in fulfilling it, you maybe cause some harm, and then very quickly pivot and in a responsive way go back to attentiveness.

One thing though is that if the power asymmetry is too big โ€” if we do not have a choice between receiving care from a tech vendor, if there's no way to switch to some other vendor, no way to bring our data from this conversation to another โ€” then there's simply no incentive for the monopolistic provider to be attentive anymore. And this happens again and again: when a service is small, like when it's an upcoming social media platform, usually it's very attentive, it's very responsive. But at some point, when the majority of population is on that platform, suddenly it doesn't care about you anymore. And this is not something that can be overcome with engineering alone โ€” usually it takes policy.

Here, people already talk about "interoperability" a lot, for big tech systems. Imagine, without number portability mandates, it would simply be impossible for new telecoms to enter the market, because it's just too much inconvenience to switch from one vendor to another. And so nowadays, we're now seeing policymakers โ€” for example, in the state of Utah โ€” mandating that starting next July, you can switch from one social network to another and bring all your existing contacts with you.

And we also need the idea of selective disclosure, or meronymity โ€” partial anonymity โ€” because one part of sharing our lived experience is to identify the harm that is caused by those technologies. But if we have to sign each report with our real name, we're essentially doxing ourselves. And most people who are in a vulnerable situation would not want to contribute their lived experience to course correct those technologies simply because they would not trust the people who process those data.

But if we do meronymity, selective disclosure, then we can make sure that it comes from somebody who claims to be a resident in this country, who claims to be between 18 and 50 years old โ€” but then you do not need to learn anything more about it. Interoperability and meronymity together provide a good infrastructure for agentic AI, so that we can know each chatbot, each technology comes from some kind of vendor, what kind of pledge did it sign, which community is this pledge going into โ€” without revealing the private details of any community members participating in the deliberation or the wiki eval.

Caroline Green: So surely that would also incentivise providers to ensure that their systems work**: that they're competent?

Audrey Tang: The race to the top, right**: not to the bottom of the brainstem.

Caroline Green: So, the last pack is Symbiosis. And that one offers a direct challenge to the singularity narrative. You introduce a really powerful metaphor from Japanese Shinto**: the kami. What are kami in the context of AI, and why is this vision preferable to the singularity narrative?

Audrey Tang: Yeah, kami here is used metaphorically as a local steward**: specifically, an AI system that is bounded to something place-specific or community-specific, that cooperates with the relational health. So all it cares about is the relational health of this particular community. We talk about how community nodes can bridge a community together. And an AI that drafts such community nodes is this kind of kami. It simply does not worry about paving the galaxy with paper clips or with solar panels or anything like that. It's not trying to grab power by itself. All it cares about is the relational health of this community. And if the community has fully healed, if the community has moved on, if there's no such community anymore, then the kami can be switched off with no regret.

So again, in ethic terms, it means that it optimises for the virtue of relational civic care. It is not optimising for the consequential outcome of maximum happiness in the universe. This is the horizontal alternative that we call Plurality โ€” many local kamis, in an overlapping way, taking care of each garden instead of a vertical superintelligence supreme gardener that doesn't know what's enough. This "enoughness," this boundedness, is the defining characteristic of symbiosis. Otherwise, it's parasitic.

Caroline Green: So, before we started working with each other, the institute created a care assistant called Dedicate, rooted in many conversations with family caregivers to understand what it is that they need to feel supported in their everyday life as a caregiver. And that idea behind Dedicate translates quite well onto specifically that symbiosis pack, because it's all about creating connectedness**: using AI to create connectedness between organisations and people that are out there to support family caregivers with advice and information. We built a system where we have that content given to us by them, and then we have an AI on top of that which makes the information easily accessible to family caregivers in the moment when they need it most. But that chatbot also has its limits. So, if somebody types in a question around a very specific medical question, or something not commonly found within caregiving at all, that chatbot will say, oh, this is not my area.

Audrey Tang: Yeah, I tried it out. I went to dedicate.life and asked what equipment should I bring to my podcast. And it says it doesn't know.

Caroline Green: I would have been surprised if it had given you an answer there. But I'm really excited to have found that Dedicate and the idea behind it is so much aligned with the civic care approach**: the way that we went about it, raising people's voices to hear what it is that they need, and then building a system that connects them with what's out there in the world. So they actually also feel like, oh, I'm not alone with this**: and the kami really speaks to it there, because it's about that particular community.

And Dedicate right now is for people living in the UK, or England even specifically. But we could build systems like that in other contexts and other cultures where caregiving might be different.

Audrey Tang: Right. So it forms a federation of kamis, instead of trying to build one simple superintelligence that knows everything and can do everything. Usually when you try to build a super solution that works everywhere, it ends up just coercing every place to look just like the place where the designers lived**: and then just colonises the rest of the world. Starting from symbiosis means knowing that this boundedness, this "enoughness," is actually better for people's relational health.

Caroline Green: So Audrey, we have learned so much about the civic care approach. We have learned about the framework, the 6-Pack of Care, which is ready to be used**: which has been used and tested, actually, in various contexts. So it's something that's out there for people to pick up.

And I think there is a real urgency for this approach. I also work on rights โ€” human rights and older persons. And there's this really exciting new process, a powerful step towards a potential new convention on the rights of older persons within the UN. And working with you, I've been really excited about the possibility of using AI now to have global voices of people of all ages โ€” because we all age, ageing is a matter of every human being, right? Not just older persons โ€” to also raise older persons' voices from around the world to inform this process within the United Nations.

Where do you see urgency, and what would you tell our listeners to do now once they've listened to our podcast and learned more about the civic care approach?

Audrey Tang: Well, first of all, check out our microsite, 6pack.care, which links to our manifesto as well as concrete tools to start using. Anyone can run a local alignment assembly; anyone can tune a local AI model using the result of that alignment assembly, and see for themselves whether this kind of communal model responds better to their community needs. Anyone can advocate for social portability, for interoperability between the providers**: and none of this is theoretical.

In the Frontier Labs โ€” we work quite closely with parts of Google, for example, Jigsaw, and I just gave a talk at DeepMind. Parts of Microsoft Research focus on Plurality research in the collaboratory. OpenAI and Anthropic have both run alignment assemblies with the CIP.

So I think it's not that the Frontier Labs do not know of these approaches. It is just that the policymakers now need to understand that this is not science fiction. This is something that each lab in their research capacity has already done quite a few iterations of. And so now the policymakers' ask must be to make this the baseline. If the AI product is not attentive, then it's broken. If an AI product is not responsive, then it's broken. And if the entire ecosystem of AI agents does not work in solidarity but rather is engaged in a race to destroy each other, then that's not the best idea. So to have symbiosis and solidarity as the bedrock of our tech policy โ€” this is very important. And so I think it's great that we have this podcast, and great that we have this microsite that can refer the policymakers, engineers, researchers, and people who are affected by AI into this shared vocabulary that is at once philosophy, but also policy and engineering.

Caroline Green: And what gives me so much hope with this approach is just the idea that we can build better relationships again in the world where it seems that, often, we're just too polarised, where we don't work with each other, collaborate with each other, be with each other, the way that really makes us happy as humans, as relational beings.

Audrey Tang: Yeah, indeed. This lack of civic care manifests as polarisation. And this feeling that we're all polarised, that we cannot make coordinated actions, that people just hate each other on social media**: it's like a large web that coats the entire tree and denies the tree from sunlight. I think it's time to step out of the decade-long illusion caused by the polarising social media and then try civic care for real.

Caroline Green: Yeah, thank you so much, Audrey, for this podcast, for introducing the architecture of civic care. It really is a very compelling and very necessary vision. And in the spirit of relational health, as you already mentioned, people, anybody listening, coming across this work, is more than invited to engage with us through our microsite and the 6-Pack of Care. We'll also make sure that people have the link to that.

And again, thank you so much. It's so great working with you.

Audrey Tang: Yeah, same here. Live long and prosper.

Home