
10 posts tagged with "Encode hackathon"

Information from the Encode hackathon


The Impossible Sprint: Building Civic AI in 4 Weeks

· 5 min read
Jean-Noël Schilling
Locki one / French maintainer

"It looks impossible - but it's a hackathon. Cheers!"

This is the story of OCapistaine, a civic transparency AI built during the Encode "Commit to Change" Hackathon. It's a story of blocked pipelines, strategic pivots, 4,000 municipal PDFs, and the belief that AI can help citizens understand their local democracy.

Spoiler: we shipped it. Barely. Impossible is not OCapistaine.

Forseti461 Feature Architecture: Modular Prompts with Opik Versioning

· 4 min read
Jean-Noël Schilling
Locki one / French maintainer

Today we completed a major architectural milestone: modular prompt management for Forseti461. Each feature now has its own versioned prompt in Opik, enabling independent optimization and A/B testing.

tip

From a single monolithic prompt to a clean separation of concerns — each Forseti feature can now evolve independently while sharing a common persona.
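The "shared persona, independent features" split can be sketched in a few lines. This is an illustrative stand-in, not the project's actual code or Opik's prompt-library API; the names (`PERSONA`, `FEATURES`, `build_prompt`) and version tags are hypothetical.

```python
# Shared persona used by every Forseti461 feature.
PERSONA = (
    "You are Forseti461, a respectful moderator for a participatory "
    "democracy platform. Always explain your decisions."
)

# Each feature owns its own independently versioned instruction block,
# so one feature can be optimized or A/B tested without touching the others.
FEATURES = {
    "moderation": {
        "version": "v3",
        "text": "Approve only concrete, constructive, locally relevant ideas.",
    },
    "summarization": {
        "version": "v1",
        "text": "Summarize citizen contributions in neutral language.",
    },
}

def build_prompt(feature: str) -> str:
    """Compose the shared persona with one feature's versioned prompt."""
    f = FEATURES[feature]
    return f"{PERSONA}\n\n[{feature} {f['version']}]\n{f['text']}"
```

In a setup like this, bumping the moderation prompt to a new version changes only one entry, while the persona stays identical across features.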

Forseti461 Prompt v1: Charter-Compliant AI Moderation for Audierne2026

· 8 min read
Jean-Noël Schilling
Locki one / French maintainer

Forseti461 is an AI agent that automatically moderates citizen contributions on participatory democracy platforms: it approves only concrete, constructive, and locally relevant ideas; rejects personal attacks, spam, off-topic posts, and disinformation; and always explains its decisions with respectful, actionable feedback.
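The decision shape this policy implies (approve or reject, always with a reason) can be sketched as follows. The keyword checks are a toy stand-in for the actual LLM call, and every name here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    approved: bool
    reason: str  # the agent must always explain itself

def moderate(text: str) -> Decision:
    """Toy moderation pass: reject obvious spam and attacks, approve the rest."""
    lowered = text.lower()
    if "http://" in lowered or "buy now" in lowered:
        return Decision(False, "Looks like spam; please share a concrete local idea instead.")
    if any(w in lowered for w in ("idiot", "liar")):
        return Decision(False, "Personal attacks are not allowed; rephrase respectfully.")
    return Decision(True, "Concrete and respectful contribution, approved.")
```

The point is the contract, not the rules: whatever model sits behind `moderate`, its output always carries an explanation the citizen can act on.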

tip

This weekend, Facebook reminded us that democracy is fragile. Toxic comments, personal attacks, and off-topic rants flooded discussions about local issues. The signal drowns in the noise. Citizens disengage. Constructive voices give up.

What if we could protect civic debate at scale?

First Submission: Building a Charter Validation Testing Framework

· 3 min read
Jean-Noël Schilling
Locki one / French maintainer

Goal: Create a systematic approach to test and improve our AI-powered charter validation system.

For the Encode Hackathon first submission, we focused on building the infrastructure to ensure Forseti 461 (our charter validation agent) catches all violations reliably. The key insight: you can't improve what you can't measure.
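"You can't improve what you can't measure" suggests a labeled test set plus a metric over known violations. A minimal sketch, assuming a case format and judge signature that are illustrative rather than the project's real schema:

```python
def violation_recall(cases, judge):
    """Fraction of known charter violations the validator actually flags.

    cases: dicts with 'text' and 'is_violation' keys.
    judge: callable, text -> True if the validator flags the text.
    """
    violations = [c for c in cases if c["is_violation"]]
    if not violations:
        return 1.0  # nothing to catch
    caught = sum(1 for c in violations if judge(c["text"]))
    return caught / len(violations)
```

Tracking this number across prompt versions turns "catches all violations reliably" into a concrete regression target.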

Technical Strategy: Google Gemini Integration

· 5 min read
Jean-Noël Schilling
Locki one / French maintainer

Context: Audierne 2026 Election Platform

This document outlines how we will leverage the Google Gemini ecosystem (AI Studio, Flash models, and Agentic workflows) to accelerate the development of the Locki project. By utilizing these tools, we aim to bridge the gap between human ideation and automated N8N workflows, specifically for the Commit to Change hackathon and the subsequent election period.

Let us choose the stack

· 22 min read
Jean-Noël Schilling
Locki one / French maintainer

Project Context Update (Mid-to-Late January 2026)

Project: Locki / Ò Capistaine (audierne2026), an AI-powered civic transparency & participatory democracy platform for the Audierne 2026 local elections (France).

Core Mission: Build a neutral, source-based RAG chatbot to answer citizen questions about 4 municipal programs, automate contribution validation against charter rules, crawl and process municipal data (150+ links, 4,000+ PDFs), and showcase Opik integration for the "Commit to Change" Hackathon.

Critical Timeline:

  • Contributions deadline: ~January 31, 2026
  • Hackathon prototype delivery: ~mid-February 2026 (4-week sprint)
  • Election period: ~March 15-22, 2026
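The "neutral, source-based RAG chatbot" mission can be sketched with naive word-overlap retrieval standing in for real embeddings over the 4,000+ PDFs. All names here (`retrieve`, `build_answer_prompt`) are hypothetical, not the project's actual pipeline.

```python
def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank document chunks by word overlap with the question (embedding stand-in)."""
    q = set(question.lower().split())
    return sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)[:k]

def build_answer_prompt(question: str, chunks: list[str]) -> str:
    """Assemble a prompt that restricts answers to the retrieved municipal sources."""
    context = "\n---\n".join(retrieve(question, chunks))
    return f"Answer only from these municipal sources:\n{context}\n\nQuestion: {question}"
```

Grounding the prompt in retrieved excerpts, rather than the model's own knowledge, is what keeps the chatbot neutral and source-based.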

Let us code

· 3 min read
Jean-Noël Schilling
Locki one / French maintainer

Project Overview & Vision

  • The core idea is building an AI-powered civic transparency platform focused on local democracy, citizen engagement, and real-world issues (starting with inspiration from India, but with potential global scalability).
  • Goal: Create a standardized, bullshit-free alternative to Twitter/X discussions for civic topics — avoiding noise and enabling structured transparency for citizens, investors, and "democracy islands" (e.g., places like Audierne).
  • It's tied to the "Commit to Change" AI Agents Hackathon (powered by Opik / Comet), running ~4 weeks starting mid-January 2026, with categories like Community Impact.

Ò Capistaine Kick-off

· 3 min read
Jean-Noël Schilling
Locki one / French maintainer

AI-Powered Civic Transparency for Local Democracy

My 2026 Resolution

The Promise

This year, I will finally understand my local elections and get involved as a citizen.

Sound familiar? Every election cycle, millions of citizens want to participate but face the same wall: scattered documents, administrative jargon, and no time to dig through years of municipal decisions.

This January, I stopped just wishing — and started building.

OPIK: Agent & Prompt Optimization for LLM Systems

· 16 min read
Jean-Noël Schilling
Locki one / French maintainer

This training consolidates the operational and technical foundations needed to run and execute agent/prompt optimization in team settings (e.g., hackathons and internal workshops).

It includes:

  • eval-driven optimization of LLM agent prompts using measurable metrics and iterative loops,
  • techniques such as meta-prompting, genetic/evolutionary methods, hierarchical/reflective optimizers (HRPO), few-shot Bayesian selection, and parameter tuning.
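The eval-driven loop these bullets describe can be reduced to a generic skeleton: score candidate prompts, keep the best, optionally mutate it. This is plain hill-climbing as an illustration, not Opik's actual optimizer API; `optimize` and its parameters are hypothetical.

```python
import random

def optimize(candidates, score, mutate=None, rounds=20, seed=0):
    """Return the best candidate found by iterative scoring and optional mutation.

    candidates: initial prompt variants.
    score: callable mapping a candidate to a numeric quality metric.
    mutate: optional callable (candidate, rng) -> new candidate.
    """
    rng = random.Random(seed)  # fixed seed keeps runs reproducible
    best = max(candidates, key=score)
    for _ in range(rounds):
        trial = mutate(best, rng) if mutate else rng.choice(candidates)
        if score(trial) > score(best):
            best = trial
    return best
```

Meta-prompting, genetic methods, and Bayesian selection differ mainly in how `mutate` proposes the next trial; the measure-then-keep loop stays the same.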

OPIK: AI Evaluation and Observability

· 18 min read
Jean-Noël Schilling
Locki one / French maintainer

This lecture, led by Abby Morgan, an AI Research Engineer, introduces AI evaluation as a systematic feedback loop for transitioning prototypes to production-ready systems. It outlines the four key components of a useful evaluation: a target capability, a test set, a scoring method, and decision rules. The session differentiates between general benchmarks and specific product evaluations, emphasizing the need for observability in agent evaluation. It demonstrates using OPIK, an open-source tool, to track, debug, and evaluate LLM agents through features like traces, spans, LLM-as-a-judge, and regression testing datasets.
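The four components named above fit in a small harness: the target capability is encoded in the test set, a scorer grades each output, and a threshold acts as the decision rule. A minimal sketch; `run_eval` and the case schema are illustrative, not OPIK's API.

```python
def run_eval(test_set, system, scorer, threshold=0.8):
    """Score the system on every case and apply a ship/no-ship decision rule.

    test_set: dicts with 'input' and 'expected' keys (the target capability).
    system: callable under test, input -> output.
    scorer: callable (output, expected) -> score in [0, 1].
    """
    scores = [scorer(system(case["input"]), case["expected"]) for case in test_set]
    mean = sum(scores) / len(scores)
    return {"mean_score": mean, "ship": mean >= threshold}
```

Re-running the same harness on every prompt change is what turns it into the regression testing the lecture describes.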