Diplomatic time runs in years. Algorithmic time runs in milliseconds. AI crisis diplomacy must close that gap.
Good local time. I am Audrey Tang, Taiwan's cyber ambassador, first digital minister, and 2025 Right Livelihood Laureate. My heartfelt thanks to AI Safety Asia for convening this important conversation at the India AI Impact Summit.
In diplomacy, we think in years. It takes time to draft text, build consensus, and ratify commitments. In the AI world, crises unfold in milliseconds. What I wish to discuss today is this mismatch between diplomatic and algorithmic time.
Crises in algorithmic time
These crises are not on the horizon. They are unfolding now. Deepfake videos featuring public figures. Synthetic voice scams spreading across borders. Automated systems amplifying harm before regulators can respond.
We have seen what algorithmic time does to markets — the 2010 flash crash, where U.S. markets plunged and recovered in minutes. We can see what it does to trust — Europol has warned that organised crime is using AI-driven impersonation to scale fraud and evade detection across jurisdictions.
And here is the shift that changes everything. AI is no longer just a tool. It is a participant. NIST's most recent guidance describes AI agent systems as capable of planning and taking autonomous actions that affect real-world systems, as OpenClaw recently demonstrated. Once incidents become agentic, response cannot depend on heroic improvisation. It requires institutionalised, cross-border mechanisms.
Three building blocks for AI crisis diplomacy
So what does AI crisis diplomacy look like? Three building blocks and one regional proposal.
Trust: a whitelist for public integrity
Taiwan's 111 government SMS is a dedicated short-code for official messages, so citizens can instantly verify what is real — a blue checkmark for public communication. Every message shows the agency's name and the last three digits of your phone number: proof that the sender knows who you are and the network guarantees who they are. When people trust the channel, phishing and impersonation lose steam. Every country needs its own version — a low-friction, verifiable trust channel that works even in crisis.
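To make the mechanism concrete, here is a minimal sketch of the client-side pattern, written in Python with hypothetical field names. The real guarantee comes from the carrier network reserving the short code for government use, not from any app-level check; this only illustrates the verification logic a citizen applies by eye.

```python
# Illustrative sketch only: a plausibility check for a trust-channel
# message, assuming hypothetical field names. The binding guarantee is
# provided by the carrier network, not by this client-side logic.

OFFICIAL_SHORT_CODE = "111"  # the dedicated government sender ID

def looks_like_official_message(sender: str, body: str,
                                my_number: str) -> bool:
    """Match the 111 channel's pattern: the message arrives from the
    dedicated short code and echoes the last three digits of the
    recipient's own number, proving the sender knows who you are."""
    if sender != OFFICIAL_SHORT_CODE:
        return False  # anything not from 111 is not the trusted channel
    return my_number[-3:] in body  # sender must echo your last three digits

# A spoofed message from an ordinary long number fails immediately.
print(looks_like_official_message("111", "MOHW notice ...789", "0912345789"))        # True
print(looks_like_official_message("+886912000111", "Click here", "0912345789"))      # False
```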
Consensus: AI that listens at scale
Tools like Polis pare interaction down to agreeing or disagreeing with short statements, removing the reply threads that amplify emotion. They surface bridging statements, ideas that people with opposing views still find reasonable, and make them visible. The vTaiwan process combines this with in-person dialogue, transforming polarised issues into workable policy. Tools like Talk to the City push the scale of listening further, with auditability at the core: every theme traces back to original participant statements, so society can verify whether the summary is faithful. In crisis diplomacy, legitimacy comes from both speed and verifiability.
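To illustrate the bridging idea, here is a minimal Python sketch. It is not Polis's actual algorithm, which derives opinion clusters from the vote matrix itself; it simply scores each statement by its agreement rate in the least agreeing group, so a statement ranks highly only if every group largely accepts it.

```python
# A minimal sketch of surfacing "bridging statements": ideas that
# opposing opinion groups still find reasonable. Not Polis's actual
# algorithm; the clusters are given here rather than computed.

from collections import defaultdict

def bridging_scores(votes, groups):
    """votes: (participant, statement) -> +1 agree / -1 disagree.
    groups: participant -> opinion cluster.
    Returns statement -> agreement rate in the least agreeing group."""
    agree = defaultdict(lambda: defaultdict(int))
    total = defaultdict(lambda: defaultdict(int))
    for (person, statement), vote in votes.items():
        g = groups[person]
        total[statement][g] += 1
        if vote > 0:
            agree[statement][g] += 1
    scores = {}
    for statement, per_group in total.items():
        # Taking the minimum across groups means a statement scores
        # highly only when every group largely agrees with it.
        rates = [agree[statement][g] / n for g, n in per_group.items()]
        scores[statement] = min(rates)
    return scores

votes = {("a", "s1"): 1, ("b", "s1"): 1, ("c", "s1"): 1, ("d", "s1"): 1,
         ("a", "s2"): 1, ("b", "s2"): 1, ("c", "s2"): -1, ("d", "s2"): -1}
groups = {"a": 0, "b": 0, "c": 1, "d": 1}
print(bridging_scores(votes, groups))  # s1 bridges (1.0); s2 divides (0.0)
```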
Safety: AI incidents as civil defence
Deepfakes do not stop at borders. Market cascades do not either. Work on defining and monitoring AI incidents is emerging, but we still lack the operational connective tissue for cross-border coordination.
A regional proposal
So here is my proposal: establish a regional AI crisis liaison network — a technical hotline for the algorithmic age.
We do not need to start from scratch. In cybersecurity, FIRST provides a global network for incident response teams. APCERT offers a trusted contact framework for the Asia-Pacific. What we need is to extend that capacity to cover AI-specific incidents — whether by deepening existing mandates, embedding AI expertise within the structures, or establishing complementary liaison points that plug into the networks we already have.
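As a sketch of what that connective tissue might exchange, consider a minimal incident record and a routing lookup. The field names are illustrative assumptions, not an existing FIRST or APCERT schema; the point is simply a small common format plus a directory of known liaison contacts.

```python
# A hypothetical, minimal record for cross-border AI incident liaison.
# Field names are illustrative assumptions, not an existing FIRST or
# APCERT schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncidentReport:
    incident_type: str        # e.g. "deepfake", "voice-scam", "agentic-action"
    severity: str             # e.g. "low" / "medium" / "high"
    reporting_team: str       # the national or sectoral response team
    jurisdictions: list[str]  # everywhere the incident has been observed
    summary: str              # plain-language description for liaisons
    observed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def route(report: AIIncidentReport, contacts: dict[str, str]) -> list[str]:
    """Return liaison addresses for every affected jurisdiction, so
    cross-border notification is a lookup, not an improvisation."""
    return [contacts[j] for j in report.jurisdictions if j in contacts]

contacts = {"TW": "ai-liaison@example.tw", "IN": "ai-liaison@example.in"}
report = AIIncidentReport("deepfake", "high", "national response team",
                          ["TW", "IN"], "Synthetic video of a public figure")
print(route(report, contacts))
```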
The goal is to ensure that when a millisecond-scale crisis hits, cross-border cooperation is not improvised. It is ready to roll. This does not require political alignment. It requires technical trust.
Let Asia supply safety infrastructure
Finally, a window of opportunity: let Asia be not just a rule-taker, but a supplier of safety infrastructure.
This summit is the first event of this scale hosted by the Global South, made possible by growing recognition of the institutional power of India's digital public infrastructure (DPI). Aadhaar and UPI are increasingly seen as models others can learn from. Asia can add an AI governance layer on top of DPI.
The 6-Pack of Care
Let me conclude with this. When the earthquake hits, nobody checks the organisation chart. What matters is whether the building codes were followed, the drills were run, and the neighbours knew how to reach each other.
AI safety is the same. At the Oxford Institute for Ethics in AI, we call these civic capacities the 6-Pack of Care: the core muscles a society needs before the earthquake hits. The key question is not who controls the organisation in the moment of crisis, but whether the infrastructure for cooperation was built before the crisis arrived.
Let us treat AI safety like civil defence: fast, partnered with civil society and capable of cross-border cooperation. Let us make sure the institutional rhythm of AI crisis diplomacy keeps pace with the speed of agentic AI. That is how we can free the future — together.
Thank you. Live long and … prosper.