The training cycle problem
Digital security training for civil society organisations typically follows a predictable rhythm. A workshop is scheduled, materials are prepared, participants travel, sessions run for a day or two, and participants return to their organisations with notes and slide decks. The model has produced real value over the years. It also has a structural limitation. The threats civil society organisations face do not arrive on the training calendar.
An activist receives a phishing email at three in the morning before an election. A women's rights organisation finds its social media account compromised on a weekend. A journalist needs to verify the authenticity of an unusual login attempt while travelling. None of these moments wait for the next workshop, and few organisations have the budget for a dedicated digital security advisor on call.
What grounded AI agents actually solve
An AI agent grounded in a curated digital security knowledge base addresses this gap directly. The agent is available continuously. It engages in conversation rather than menu navigation, so users can describe what they are seeing in their own words and receive specific, contextual guidance in return. It draws on a knowledge base curated for the realities of the organisations it serves, rather than generic advice scraped from anywhere on the internet.
The discipline that makes the difference is grounding. A consumer chatbot that occasionally fabricates an answer is a minor irritation. An AI agent advising a journalist on whether to enter their credentials in a suspicious login flow cannot tolerate fabrication. The agent has to draw its answers from the curated source material, return guidance that respects that source, and surface uncertainty when the situation is outside the corpus rather than guessing.
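The grounding discipline can be sketched in a few lines. This is a toy illustration of the pattern, not DigiGuard's implementation: the corpus entries, keyword scoring, and threshold are all illustrative stand-ins. The key behaviour is the last branch, where the agent surfaces uncertainty and escalates instead of guessing.

```python
import re

# Hypothetical curated corpus: topic keywords mapped to vetted guidance.
CORPUS = {
    "phishing email response": (
        "Do not click any links. Verify the sender through a separate "
        "channel before acting."
    ),
    "account recovery": (
        "Start recovery from the official provider page and enable "
        "two-factor authentication once access is restored."
    ),
}

ESCALATE = (
    "This question falls outside the knowledge base. "
    "Please contact a human responder."
)

def _words(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def score(query: str, topic: str) -> float:
    # Fraction of the topic's keywords that appear in the query.
    topic_words = _words(topic)
    return len(_words(query) & topic_words) / len(topic_words)

def answer(query: str, threshold: float = 0.3) -> str:
    best_topic = max(CORPUS, key=lambda t: score(query, t))
    if score(query, best_topic) < threshold:
        # Outside the corpus: surface uncertainty rather than fabricate.
        return ESCALATE
    return CORPUS[best_topic]
```

A production agent would replace the keyword overlap with semantic retrieval over the curated documents, but the shape is the same: every answer traces back to a vetted source, and a low retrieval score routes the user to a human.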
What we built for the CiviConnect community
PANEOTECH delivered DigiGuard, an AI digital security agent for the CiviConnect platform hosted by Jeunes Verts, supported by the Digital Defenders Partnership Sustainable Protection Fund. DigiGuard runs on Rafiki AI, PANEOTECH's proprietary AI agent platform, with a knowledge base curated for the realities of civil society organisations operating across francophone West Africa.
The agent covers the recurring questions the community faces: account protection and recovery, phishing identification and response, secure communications, data protection, and incident response in the moments after a compromise. Coverage extends to the practical decisions an activist or journalist actually has to make at speed, with contextual caveats where the situation requires escalation to a human responder.
The deployment ran into a real constraint worth noting honestly. The original plan included messaging channel deployment to WhatsApp, Facebook Messenger, and Instagram, where many civil society organisations actually communicate. Meta's business approval policy currently restricts chatbot integrations on these channels to for-profit entities, which prevented deployment for a non-profit civic space platform. The agent is fully operational on the web, with messaging channel deployment held until the policy environment shifts.
The institutional lesson
For civil society organisations operating in hostile digital environments, the always-available AI agent grounded in a curated knowledge base is not a replacement for training. It is the layer that catches the moments training cannot reach. Treat it that way, ground it properly, and it becomes an institutional asset the community returns to in the moments that actually matter.