
Building Trust Through Transparent AI: A Human-Machine Partnership for Civic Participation

· 9 min read
Jean-Noël Schilling
Locki one / French maintainer

How Ò Capistaine builds trust with citizens and civil servants through measurable AI reliability

Trust Is the Product

When we deploy AI in civic applications, we're not selling software—we're asking for trust. Trust from two groups with very different concerns:

  • Citizens need to trust that their voice matters, that an algorithm won't silently dismiss their contribution
  • Civil servants need to trust that AI is a colleague, not a threat to their expertise and employment

Both forms of trust share a common foundation: demonstrated reliability over time. Not promises. Not marketing. Measured, visible, continuous proof that the system works with humans, not instead of them.

The Fear We Must Address

Let's be direct about the elephant in the room: civil servants worry about AI taking their jobs. This fear is legitimate and we refuse to dismiss it.

Our answer isn't "AI won't replace you"—that's a promise we can't make for all contexts. Our answer is more honest:

In civic participation, AI cannot and should not replace human judgment. It can only earn the right to handle routine tasks by proving itself, freeing humans for the work that truly requires human wisdom.

The goal isn't automation. The goal is augmentation—a team where each member contributes their strengths.

The Human-AI Team Model

We don't implement "AI automation." We build a team with clear roles:

┌─────────────────────────────────────────────────────────────────┐
│ THE COLLABORATION MODEL │
├─────────────────────────────────────────────────────────────────┤
│ │
│ AI CONTRIBUTES │ HUMANS CONTRIBUTE │
│ ───────────────── │ ────────────────── │
│ • Speed (24/7 availability) │ • Judgment (context) │
│ • Consistency (same rules) │ • Empathy (citizen intent) │
│ • Pattern recognition │ • Accountability (decisions) │
│ • Tireless triage │ • Local knowledge (Audierne) │
│ │ • Democratic legitimacy │
│ │
│ AI NEVER DOES │ HUMANS ALWAYS DO │
│ ───────────── │ ───────────────── │
│ • Final rejection │ • Appeal review │
│ • Policy decisions │ • Edge case resolution │
│ • Citizen communication │ • Trust verification │
│ │
└─────────────────────────────────────────────────────────────────┘

This isn't a ladder where AI climbs toward replacing humans. It's a stable partnership where responsibilities are divided by competence, not convenience.

How Trust Is Built: Transparency at Every Step

Trust isn't declared—it's earned through transparency. Every AI decision in Ò Capistaine comes with full disclosure:

┌─────────────────────────────────────────────────────────────────┐
│ CITIZEN CONTRIBUTION: "Le port a besoin de modernisation" │
├─────────────────────────────────────────────────────────────────┤
│ │
│ AI ASSESSMENT (visible to civil servant): │
│ │
│ ✓ Charter Compliant: YES │
│ ✓ Category: économie │
│ ✓ Confidence: 92% │
│ │
│ Reasoning: "Proposition concrète concernant les │
│ infrastructures portuaires, sans attaque personnelle, │
│ en rapport direct avec Audierne." │
│ │
│ Positive aspects detected: │
│ • Concrete proposal │
│ • Local relevance │
│ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ [✓ Approve] [✗ Override] [? Request Review] │ │
│ └─────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘

The civil servant sees everything: the decision, the confidence, the reasoning, and the specific aspects that influenced the AI. They can approve with one click, override with an explanation, or escalate to a colleague.

The AI never hides its work. This transparency serves two purposes:

  1. Civil servants can verify AI judgment against their own expertise
  2. Over time, patterns emerge—humans learn when to trust the AI, and when to double-check
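The assessment card shown above can be modeled as a simple record. The sketch below is illustrative only: field and class names are assumptions, not the actual Ò Capistaine schema.

```python
from dataclasses import dataclass, field

# Hypothetical record of what the civil servant sees for each contribution.
# Field names are illustrative assumptions, not the project's real schema.
@dataclass
class Assessment:
    charter_compliant: bool
    category: str
    confidence: float                 # 0.0 - 1.0
    reasoning: str                    # plain-language explanation
    positive_aspects: list = field(default_factory=list)

    def summary(self) -> str:
        verdict = "YES" if self.charter_compliant else "NO"
        return (f"Charter compliant: {verdict} | "
                f"Category: {self.category} | "
                f"Confidence: {self.confidence:.0%}")

# The example from the card above, translated from French:
a = Assessment(True, "économie", 0.92,
               "Concrete proposal about port infrastructure, no personal "
               "attacks, directly relevant to Audierne.",
               ["Concrete proposal", "Local relevance"])
print(a.summary())
```

Everything the AI knows about its own decision travels with the decision; there is no separate hidden state for the reviewer to wonder about.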

The Trust-Building Cycle: Human Corrections Make AI Better

Here's where the team model becomes powerful: every human correction improves the AI.

┌─────────────────────────────────────────────────────────────────┐
│ THE CONTINUOUS LEARNING CYCLE │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ │
│ │ AI suggests │ │
│ └──────┬───────┘ │
│ │ │
│ ▼ │
│ ┌──────────────┐ ┌──────────────┐ │
│ │ Human reviews│─────▶│ Correction? │ │
│ └──────────────┘ └──────┬───────┘ │
│ │ │
│ ┌─────────────────────┼─────────────────────┐ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐│
│ │ Approve │ │ Override │ │ Escalate ││
│ │ (AI correct)│ │ (AI learned) │ │(Edge case) ││
│ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘│
│ │ │ │ │
│ └─────────────────────┼─────────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────┐ │
│ │ Training Data│ │
│ │ Updated │ │
│ └──────┬───────┘ │
│ │ │
│ ▼ │
│ ┌──────────────┐ │
│ │ AI improves │ │
│ │ next cycle │ │
│ └──────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘

When a civil servant overrides an AI decision, that correction becomes training data. The AI learns from human expertise. Over weeks and months:

  • Approval rate increases (AI gets better)
  • Override rate decreases (fewer corrections needed)
  • Civil servants spend less time on routine cases
  • Civil servants spend more time on genuinely complex cases

This is not job elimination—it's job elevation. The AI handles the obvious cases; humans focus on the ones that actually require human judgment.
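The feedback loop can be sketched as a function that turns each review outcome into a training example. This is a minimal illustration under assumed names; the project's actual data pipeline is not specified in this post.

```python
import datetime

# Hypothetical: record each human review outcome as a training example.
# Function and field names are assumptions, not the project's real pipeline.
def record_review(contribution_id: str, ai_verdict: str,
                  human_action: str, note: str = "") -> dict:
    """human_action is one of: approve, override, escalate."""
    assert human_action in {"approve", "override", "escalate"}
    return {
        "contribution_id": contribution_id,
        "ai_verdict": ai_verdict,
        "human_action": human_action,
        # An override means the human supplied the correct label:
        "label_source": "human" if human_action == "override" else "ai",
        "note": note,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

# An override becomes a human-labelled example for the next training cycle:
example = record_review("issue-42", "valid", "override",
                        "Off-topic: concerns a neighbouring commune")
print(example["label_source"])  # prints "human"
```

The design choice that matters is the `label_source` distinction: only human-corrected examples carry human authority into the next training cycle, which is exactly why overrides make the AI better.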

Measurable Trust: The Numbers Don't Lie

We track everything, and the metrics are visible to all team members:

Metric                   What It Measures                             Trust Signal
Approval Rate            % of AI decisions humans accept              AI reliability
Override Rate            % of AI decisions humans correct             Learning opportunities
Confidence Calibration   When AI says "90% sure," is it right         AI self-awareness
                         90% of the time?
Time Saved               Hours of routine work handled by AI          Team efficiency
Edge Cases               Complex cases requiring human judgment       Human expertise value
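Confidence calibration is the least obvious of these metrics, so here is a minimal sketch of how it can be computed: group predictions by confidence band and compare the band to the observed accuracy. The data and function name are illustrative, not the project's monitoring code.

```python
from collections import defaultdict

# Minimal confidence-calibration check: when the AI says "90% sure",
# is it right about 90% of the time? Data below is made up for the example.
def calibration(predictions):
    """predictions: list of (confidence, was_correct) pairs."""
    bins = defaultdict(list)
    for conf, correct in predictions:
        bins[round(conf, 1)].append(correct)   # bucket into 10% bands
    # observed accuracy per confidence band:
    return {band: sum(hits) / len(hits) for band, hits in sorted(bins.items())}

sample = [(0.9, True)] * 9 + [(0.9, False)]    # 90% band, 9 of 10 correct
print(calibration(sample))                     # prints {0.9: 0.9} (well calibrated)
```

A well-calibrated AI is one whose stated confidence matches its observed accuracy band for band; a gap in either direction is a trust problem worth surfacing.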

When In Doubt, Respect the Citizen

A fundamental principle that builds citizen trust: when the AI is uncertain, it defers to humans—never silently rejects.

┌─────────────────────────────────────────────────────────────────┐
│ THE "FAIL OPEN" PRINCIPLE │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Scenario: AI is unsure about a contribution │
│ │
│ ❌ WRONG APPROACH: │
│ AI confidence: 60% │
│ → Silently reject │
│ → Citizen never knows why │
│ → Trust destroyed │
│ │
│ ✓ OUR APPROACH: │
│ AI confidence: 60% │
│ → Flag for human review │
│ → Civil servant makes final call │
│ → Citizen gets fair hearing │
│ → AI learns from the decision │
│ │
└─────────────────────────────────────────────────────────────────┘

This principle has a technical implementation:

  • High confidence (>85%): AI recommendation shown prominently, one-click approval
  • Medium confidence (60-85%): AI shows reasoning, human decision required
  • Low confidence (<60%): Flagged as "needs review," no AI recommendation shown
  • Error state: Treated as low confidence, never as rejection

Why this matters for citizens: No algorithm silently dismisses their voice. If the AI can't confidently assess a contribution, a human will review it. Democracy requires this guarantee.

Why this matters for civil servants: They're not rubber-stamping AI decisions. On uncertain cases, they exercise genuine judgment. Their expertise matters.

Visible Actions, Traceable Decisions

Every automated action is visible and traceable. When the AI marks a contribution as "conforme charte" (charter compliant), it's not a hidden database flag—it's a visible label on GitHub that anyone can see.

┌─────────────────────────────────────────────────────────────────┐
│ GitHub Issue #42: "Modernisation du port de pêche" │
├─────────────────────────────────────────────────────────────────┤
│ │
│ Labels: │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ économie │ │conforme │ │ validé par │ │
│ │ │ │charte │ │ Forseti 92% │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │
│ Activity log: │
│ • 14:32 - Contribution submitted by citizen │
│ • 14:32 - Forseti assessed: valid (92% confidence) │
│ • 14:33 - Label "conforme charte" added automatically │
│ • 14:45 - Civil servant reviewed and confirmed │
│ │
└─────────────────────────────────────────────────────────────────┘

The confidence level is visible. Citizens can see "validé par Forseti 92%"—they know an AI helped, and they know how confident it was. No hidden algorithms. No black boxes.
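Because the labels live on GitHub, applying them is a single REST call (POST `/repos/{owner}/{repo}/issues/{number}/labels`). The sketch below only builds the request, using the repo and labels from the example above; the helper name is an illustrative assumption, not the project's code.

```python
# Hypothetical sketch: build the GitHub REST API request that would attach
# the visible labels shown above. Constructs the request only; sending it
# requires an authenticated HTTP client.
def label_request(owner: str, repo: str, issue: int,
                  category: str, confidence: float):
    url = f"https://api.github.com/repos/{owner}/{repo}/issues/{issue}/labels"
    labels = [category,
              "conforme charte",
              f"validé par Forseti {confidence:.0%}"]  # confidence is public
    return url, {"labels": labels}

url, payload = label_request("audierne2026", "participons", 42,
                             "économie", 0.92)
print(payload["labels"])
# Send with e.g. requests.post(url, json=payload, headers=auth_headers)
```

Putting the confidence figure in the label text itself is the point: the audit trail is the public interface, not a private log.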

Trust Metrics: Proving the Partnership Works

We don't ask anyone to trust us on faith. We publish metrics that prove the system works:

For Citizens: "Is my voice being heard fairly?"

Question                                  Metric                              Current Status
Are valid contributions being accepted?   Approval rate for compliant posts   Target: >98%
Are rejections explained?                 % of rejections with reasoning      100% (by design)
Can I appeal?                             Human review available?             Always

For Civil Servants: "Is this tool actually helping?"

Question                 Metric                                Current Status
Is the AI accurate?      Agreement rate with human decisions   Tracking
Am I saving time?        Hours saved per week                  Tracking
Am I still needed?       Edge cases requiring human judgment   Always present
Is AI getting better?    Trend of override rate                Should decrease

For Administrators: "Is this responsible AI deployment?"

Question                           Metric                          Current Status
Is AI self-aware of uncertainty?   Confidence calibration curve    Monitored via Opik
Are we catching problems?          Violation detection recall      >90% target
Is the system auditable?           Full decision trace available   100% (by design)
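The ">90% recall" target has a precise meaning: of all contributions humans ultimately judged non-compliant, what share did the AI also flag? A minimal sketch, with made-up counts for illustration:

```python
# Illustrative recall check for the ">90% violation detection" target:
# of all true violations, what fraction did the AI catch? The counts
# below are invented for the example, not real project data.
def recall(true_positives: int, false_negatives: int) -> float:
    return true_positives / (true_positives + false_negatives)

# e.g. 47 violations caught, 3 missed: 94% recall, above the 90% target
print(f"{recall(47, 3):.0%}")  # prints 94%
```

Recall is the right metric here because a missed violation reaches citizens, whereas a false alarm merely costs a civil servant one review click.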

The Trust Journey: Earning Responsibility Over Time

Trust isn't granted—it's earned. Here's how the AI earns more responsibility over time:

Stage 1: Observer (Current)

AI role: Watch and learn
Human role: Do everything, AI takes notes

  • AI analyzes every contribution silently
  • Humans make all decisions
  • AI predictions logged for accuracy tracking
  • No citizen-facing AI actions

Trust level: Zero. We're proving the AI can even understand the task.

Stage 2: Assistant (When: Accuracy >85%)

AI role: Prepare the work
Human role: Review and decide

  • AI pre-fills assessment forms
  • Humans review AI suggestions
  • One-click approve or override
  • Every override improves the AI

Trust level: Limited. AI saves time but humans remain in control.

Stage 3: Colleague (When: Accuracy >95% for 3 months)

AI role: Handle routine cases
Human role: Focus on complex cases

  • High-confidence cases auto-processed
  • Medium-confidence flagged for review
  • Low-confidence escalated to senior staff
  • Daily accuracy dashboards

Trust level: Substantial. AI has proven reliable on routine work.

Stage 4: Trusted Partner (When: 6 months stable at >95%)

AI role: First-line assessment
Human role: Quality assurance and appeals

  • AI handles most validations automatically
  • Humans audit random samples
  • Humans handle all appeals and edge cases
  • Circuit breakers if accuracy drops

Trust level: High, but never complete. Humans always have override authority.

Note: We may never reach Stage 4, and that's fine. The goal isn't maximum automation—it's optimal collaboration.
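The stage gates above can be expressed as a small rule: the AI only advances when accuracy stays above a threshold for long enough, and it falls back the moment the history stops supporting the higher stage. Thresholds come from the stage descriptions; the function itself is an illustrative sketch, not the project's promotion logic.

```python
# Sketch of the stage-gating rule described above. Thresholds are from the
# text (>85% for Assistant, >95% sustained for the higher stages); the
# function and stage names are illustrative assumptions.
def current_stage(monthly_accuracy: list) -> str:
    """monthly_accuracy: accuracy per month (0.0-1.0), most recent last."""
    def sustained(threshold: float, months: int) -> bool:
        recent = monthly_accuracy[-months:]
        return len(recent) == months and all(a > threshold for a in recent)

    if sustained(0.95, 6):
        return "trusted_partner"   # Stage 4: six stable months above 95%
    if sustained(0.95, 3):
        return "colleague"         # Stage 3: three months above 95%
    if monthly_accuracy and monthly_accuracy[-1] > 0.85:
        return "assistant"         # Stage 2: accuracy above 85%
    return "observer"              # Stage 1: still proving itself

assert current_stage([]) == "observer"
assert current_stage([0.90]) == "assistant"
assert current_stage([0.96] * 3) == "colleague"
assert current_stage([0.96] * 6) == "trusted_partner"
```

Because the rule is re-evaluated from the accuracy history, a drop in accuracy demotes the AI automatically: responsibility is rented month by month, never owned.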

Why Audierne-Esquibien?

A small Breton municipality might seem like an odd place for AI experimentation. But that's precisely why it works:

  • Human scale: We know the civil servants personally. We can have coffee with them when something goes wrong.
  • Real stakes: These are real citizen contributions about real issues—port infrastructure, housing, local businesses.
  • Manageable complexity: 7 categories, one charter, one community. Enough to be meaningful, small enough to iterate.

AI in civic applications isn't about replacing the human touch that makes local democracy work. It's about ensuring that human touch can scale. When Audierne receives 50 contributions in a week instead of 5, civil servants shouldn't drown in triage work. They should spend their time on what humans do best: understanding context, showing empathy, making judgment calls.

What We're Really Building

Ò Capistaine isn't an AI product. It's a trust-building system that happens to use AI.

The real deliverables are:

  1. For citizens: Confidence that their voice matters, that they'll get a fair hearing, that technology serves democracy rather than filtering it

  2. For civil servants: A colleague that handles routine work, learns from their expertise, and makes their job more meaningful—not obsolete

  3. For the municipality: Proof that responsible AI deployment is possible, with metrics to show it

  4. For other communities: A template for human-AI collaboration in civic applications, battle-tested in a real municipality

The Invitation

This is an open experiment. The code is public. The metrics will be published. The mistakes will be documented alongside the successes.

If you're a civil servant worried about AI in your workplace: we hear you. Come see how we're building this. Tell us what would make you trust it.

If you're a citizen skeptical of algorithmic moderation: we understand. Every AI decision is transparent, explained, and overridable. Your voice matters more than any model's confidence score.

If you're a technologist interested in responsible AI: join us. This is harder than building the model. This is building trust.


The journey toward trusted AI in civic applications starts with small steps, measured carefully, with humans always in the loop.

Follow our progress: audierne2026/participons