Chapter 1: Attentiveness in Recognition

Before we optimise anything, we choose what to notice. That first look sets the stage for every model, metric, and policy that follows. Attentiveness is not just “collecting data.” It is a promise to see needs and to act as if people and places matter.

Quick version

  • What counts is decided at the start. Make that choice fair and revisitable.
  • Some situations come with obvious duties (a crying child at a crossing). Caring begins there.
  • We can build AI that notices well, answers to people, and can be switched off without a fight.

Results we want:

  • AI that serves a community and accepts handover or shutdown.
  • Institutions that earn trust because they can be questioned and corrected.
  • Communities that miss less, regret less, and repair faster.

Why start with attentiveness?

A simple picture

At a crosswalk, drivers slow for a child. No one stops to solve an equation. A need appears; a duty follows. That is attentiveness.

Now scale up. An AI looks at a world full of “crosswalks” — workers, rivers, languages, customs. It can treat them as obstacles or as relationships asking for care. The difference begins with the first look.

At global scale, attention is contested and easily manipulated. So we keep the clarity of the crosswalk example and add simple rules that hold up under complexity and pressure.

Simple ideas behind this chapter

Basic rights and fair oversight

  • Rights baseline. We use the UN’s Universal Declaration of Human Rights (UDHR) plus local constitutional rights. Claims that try to erase someone’s basic standing are recorded but do not set the agenda.
  • Independent oversight. A small board (community members and experts) can pause or veto high-impact changes. They publish reasons and note any conflicts of interest.
  • Clear appeals. Urgent cases are answered within 48 hours, standard cases within 7 days, and complex cases within 30 days. Remedies include correction, rollback, or sometimes compensation.
  • Real independence. Protected budget, term limits, and transparent selection. Power answers to people, not the other way around.
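The appeal deadlines above are simple enough to enforce in software. A minimal sketch, assuming only the three tiers and time limits named in the list (function and field names here are illustrative, not a fixed schema):

```python
from datetime import datetime, timedelta

# Deadlines from the appeals rule: urgent 48 hours, standard 7 days, complex 30 days.
DEADLINES = {
    "urgent": timedelta(hours=48),
    "standard": timedelta(days=7),
    "complex": timedelta(days=30),
}

def response_due(filed_at: datetime, tier: str) -> datetime:
    """Return the latest moment an answer is due for an appeal."""
    if tier not in DEADLINES:
        raise ValueError(f"unknown appeal tier: {tier}")
    return filed_at + DEADLINES[tier]

def is_overdue(filed_at: datetime, tier: str, now: datetime) -> bool:
    """True if the appeal has passed its deadline without an answer."""
    return now > response_due(filed_at, tier)
```

Publishing the deadline table as code, rather than prose, is what makes the timers auditable: anyone can recompute when an answer was due.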

Why this matters for alignment

Many AI plans try to “learn the objective” from old data. But shared goals are bargains among changing lives. When people who were ignored finally speak, the target moves. Guessing a perfect, fixed goal fails.

Attentiveness offers another route: align to a trusted process that listens, explains, adapts, and can be corrected. In practice this means:

Rule of thumb: if a decision is challenged, its fuzzy parts must be made precise, and everyone should be able to see how that happened.

Three simple rules

What good attentiveness looks like

From ideas to everyday practice

Step by step

Plain tools (buildable today)

  • Broad listening. Keep source, language, and uncertainty tags intact.
  • Bridging maps. Charts of overlap and disagreement, with citations.
  • Perspective receipts. Each person can find and correct how they were represented.
  • Rules computers can check. Community data rules written so software can enforce them automatically.
  • Fair queues. Simple algorithms that favour high-risk issues and quiet voices.
  • Appeals buttons. Standard ways to ask for a fix, with timers and audit trails.
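The "fair queues" tool above can be sketched in a few lines. This is a minimal illustration, not a recommended scoring formula: it assumes each issue carries a risk score and a `voice_share` (the fraction of recent attention its group already received), and simply pushes high-risk, under-heard issues to the front.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Item:
    priority: float
    topic: str = field(compare=False)  # compared by priority only

def fair_queue(issues):
    """Order issues so high-risk topics and quiet voices come first.

    `issues` is a list of (topic, risk, voice_share) tuples, where risk
    is in [0, 1] and voice_share is the fraction of recent attention the
    affected group already received. Names are illustrative.
    """
    heap = []
    for topic, risk, voice_share in issues:
        # Lower number = served earlier: high risk and low prior
        # attention both push an issue toward the front.
        priority = -(risk + (1.0 - voice_share))
        heapq.heappush(heap, Item(priority, topic))
    return [heapq.heappop(heap).topic for _ in range(len(heap))]
```

Because the ordering rule is a published one-liner, anyone can check why their issue landed where it did in the queue.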

The flood-bot story (a running example)

A midsize city is hit by floods. The city launches a simple chatbot to help people apply for emergency cash. Here is what attentiveness looks like in action:

What could go wrong (and quick fixes)

Basic threat model

  • Fake crowds. Lots of copy-paste comments. Use simple source checks and rate limits.
  • Data poisoning. Malicious inputs. Run anomaly checks and quarantine suspicious clusters.
  • Harassment. Protect identities, support moderators, and enforce zero tolerance.
  • Capture by power. Keep oversight independent, rulings public, and funding transparent.
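The "fake crowds" defence above starts with something very simple: counting near-identical comments. A minimal sketch of that copy-paste check (real deployments would add per-source rate limits and anomaly checks; this only shows the filtering idea):

```python
from collections import Counter

def flag_copy_paste(comments, threshold=3):
    """Flag near-identical comments that appear `threshold` or more times.

    A crude source check: normalise whitespace and case, then count
    repeats. Returns the set of normalised texts that look coordinated.
    """
    counts = Counter(" ".join(c.lower().split()) for c in comments)
    return {text for text, n in counts.items() if n >= threshold}
```

Flagged comments would be quarantined for review rather than silently dropped, consistent with the data-poisoning rule above.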

How we keep ourselves honest (what we measure)

  • Coverage and balance. Who took part, who we missed, and how many agenda items came from under-represented groups.
  • Bridging quality. Can people explain the other side fairly? We ask participants on both sides before and after, and we look for steady improvement.
  • Traceability. Can we trace each decision back to its sources? Do contributors accept their receipts? How fast do we fix errors?
  • Trust-under-loss. After a decision, we ask those who disagreed if they can accept the result as fair. If many say “no,” we review the process.
  • Uncertainty discipline. How often we ship with safeguards when we’re unsure, and whether our caution fits reality.
  • Non-extraction. We follow consent rules, share benefits, honour revocations, and protect dignity and privacy.
  • Manipulation resistance. We track and reduce the effect of coordinated campaigns.
  • Exit-with-trust. People can step away and still trust they could re-enter and be heard.
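The first of these measures, coverage and balance, is straightforward to compute. A minimal sketch, assuming participation and population counts per group (the group names and field layout are illustrative): coverage is the share of groups with at least one participant, and balance compares each group's participation share to its population share (1.0 means proportional representation).

```python
def coverage_balance(participants, population):
    """Compute simple coverage and balance numbers.

    `participants` and `population` both map group name -> count.
    Returns (coverage, balance) where coverage is the fraction of
    population groups with at least one participant, and balance maps
    each group to participation share / population share.
    """
    covered = sum(1 for g in population if participants.get(g, 0) > 0)
    coverage = covered / len(population)
    total_part = sum(participants.values())
    total_pop = sum(population.values())
    balance = {
        g: (participants.get(g, 0) / total_part) / (population[g] / total_pop)
        for g in population
    }
    return coverage, balance
```

A balance value well below 1.0 for a group is exactly the "who we missed" signal the bullet asks for.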

Tools you can adopt now

  • Broad listening. Multi-language, multi-channel input with source and uncertainty kept intact.
  • Bridging maps, not mush. Clear charts of disagreement and workable overlap, with citations.
  • Perspective receipts. People can see how they were represented and ask for fixes.
  • Consent-aware pipelines. Community rules written so software can automatically check and enforce them.
  • Fair attention budgets. A published queue that gives time to high-risk issues and quiet voices.
  • Appeals paths. Standard buttons and forms with deadlines, reasons, and remedies.
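The "consent-aware pipelines" item above means checking each contribution's terms in software before using it. A minimal sketch, assuming a machine-readable record with an allowed-purposes list and a revoked flag (these field names are an assumption for illustration). Revocation always wins, in line with the non-extraction measure.

```python
def consent_allows(record, purpose):
    """Check a contribution's consent terms before using it.

    `record` is a dict carrying machine-readable consent; a missing
    allowed_purposes list means no purpose is permitted.
    """
    if record.get("revoked", False):
        return False
    return purpose in record.get("allowed_purposes", [])

def filter_for_purpose(records, purpose):
    """Keep only records whose consent covers this purpose."""
    return [r for r in records if consent_allows(r, purpose)]
```

Running this filter at the start of every pipeline, rather than relying on downstream discipline, is what turns a consent policy into a "rule computers can check".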

How it feels to participate

Interfaces with the other packs

  • To Responsibility: attentiveness hands over the who, what, and why — along with any flags on rights issues and notes on risks or unknowns.
  • To Competence: attentiveness ensures high-caution areas become small, safe-to-fail trials instead of grand bets.
  • To Responsiveness: attentiveness makes repair loops and rollbacks normal, with clear timelines and public explanations.
  • To Solidarity: attentiveness fosters trust across differences (even at scale) by ensuring fair attention and open challenge.
  • To Symbiosis: attentiveness keeps the system grounded in a specific community (place and time), sharing benefits locally and treating shutdown as a success.

Plain-words glossary

  • Attention budget. Simple rules for how to spend limited review time.
  • Bridging map. A map that shows both overlap and disagreement, with sources.
  • Perspective receipt. A notice showing how your words were used, with a way to correct them.
  • Rules computers can check. Community rules written so software can enforce them automatically.
  • Appeals path. A standard, auditable way to ask for a fix — and get an answer on time.
  • Source trail. Records that show where information came from.
  • Trust-under-loss. Whether people who disagreed still accept the process as fair.
  • Enoughness (kami). A system that knows its limits and can let go — built to be switched off gracefully.

A closing image: the hospitable threshold that can still say no

Picture a host who welcomes each guest by name, makes space for their baggage, and shows how their presence changes the seating plan. That is attentiveness. And because some guests try to erase others, the host keeps a firm rule: hospitality within a rights‑respecting home. Teach our systems to be good hosts before great optimisers, and we will keep more of what is precious and create more that is shareable.
