The public sector legacy problem nobody discusses
A surprisingly common starting point on public sector digital projects is this. There is a live application. It is in production. Citizens or institutional users depend on it. And the source code is not available. The original development contract ended years ago, the original developer has moved on, the version control history is missing, and the only thing left is the deployed binary and whatever data still lives in the production database.
The temptation is to wait until the situation is "cleaner" before doing anything. The reality is that public sector platforms in this state are a permanent risk. Every hour they remain in production without recoverable source is an hour closer to the failure that takes them offline with no recovery path. Modernisation cannot wait for ideal conditions, because ideal conditions are not coming.
The discipline of reconstruction
Modernising a platform without source code is a craft of its own. The work has three concurrent streams.
The first is reconstructing the business logic. This is not reverse engineering in the academic sense. It is structured interview work with the institutional users, document analysis of the operational procedures the platform implements, and careful observation of the live system to map every workflow, validation rule, and decision gate. The output is a functional specification of what the platform actually does, which the institution may never have had in writing.
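One way to make that reconstructed specification durable is to capture each workflow rule as explicit data rather than leaving it in interview notes. The sketch below is a minimal, hypothetical illustration: the gate names, the dossier fields, and the provenance strings are invented for this example, not taken from any real platform.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: one reconstructed decision gate per record,
# with its provenance (interview, document, or observed behaviour).
@dataclass
class DecisionGate:
    name: str
    description: str                 # what institutional users said the rule does
    check: Callable[[dict], bool]    # the rule as executable logic
    source: str                      # where the knowledge came from

# Assumed rules, for illustration only.
gates = [
    DecisionGate(
        name="advisor_assigned",
        description="A dossier cannot be submitted without a named advisor",
        check=lambda d: bool(d.get("advisor_id")),
        source="structured interview, intake officer",
    ),
    DecisionGate(
        name="region_coded",
        description="A region code is mandatory for routing",
        check=lambda d: d.get("region") in {"maritime", "plateaux", "centrale", "kara", "savanes"},
        source="observed: live system rejects submission without it",
    ),
]

def evaluate(dossier: dict) -> list[str]:
    """Return the names of the gates a dossier fails."""
    return [g.name for g in gates if not g.check(dossier)]

print(evaluate({"advisor_id": "A-102"}))  # → ['region_coded']
```

Keeping the `source` field on every rule matters: when the new platform disagrees with the old one, the team can trace whether the rule came from an interview, a procedure document, or observed behaviour.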
The second is rescuing the data. Production databases of legacy public sector platforms are typically a mix of intentional schema and historical accident. Tables that started as one thing have become another. Columns are reused for purposes nobody documented. Reference data drifts. The work is to extract the data, normalise it against the reconstructed business logic, clean what can be cleaned, and present the rest with explicit caveats so the institution can make informed decisions about retention.
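The "clean what can be cleaned, flag the rest" discipline can be sketched as a triage pass that never silently drops a row. The column names, repair rules, and sample values below are illustrative assumptions, not the real schema.

```python
# Hypothetical sketch: triage legacy rows into clean / repaired / flagged
# buckets, keeping flagged rows with explicit caveats for institutional review.
LEGACY_ROWS = [
    {"id": "1", "crop": "maize", "area_ha": "2.5"},
    {"id": "2", "crop": "MAÏS ", "area_ha": "2,5"},  # casing and locale drift
    {"id": "3", "crop": "", "area_ha": "0.8"},       # missing reference data
]

def triage(row: dict) -> tuple[str, dict]:
    crop = (row["crop"] or "").strip().lower()
    area = row["area_ha"].replace(",", ".")          # "2,5" -> "2.5"
    repaired = crop != row["crop"] or area != row["area_ha"]
    if not crop:
        # Do not discard: surface the problem so the institution can decide.
        return "flagged", {**row, "caveat": "crop missing; retained for audit"}
    try:
        value = float(area)
    except ValueError:
        return "flagged", {**row, "caveat": f"unparseable area {row['area_ha']!r}"}
    record = {"id": row["id"], "crop": crop, "area_ha": value}
    return ("repaired" if repaired else "clean"), record

buckets = {"clean": [], "repaired": [], "flagged": []}
for row in LEGACY_ROWS:
    status, record = triage(row)
    buckets[status].append(record)

print({k: len(v) for k, v in buckets.items()})  # → {'clean': 1, 'repaired': 1, 'flagged': 1}
```

The point of the three buckets is accountability: the migration report can state exactly how many records were taken over unchanged, how many were mechanically repaired, and how many need a retention decision.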
The third is integrating heterogeneous content. Public sector platforms accumulate content in whatever formats were available when the content was created. PDFs, Word documents, scanned images, audio files in inconsistent formats. The work is to extract the content, normalise it against a coherent data model, and integrate it into the new platform so the institutional knowledge is preserved rather than lost in the transition.
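Structurally, this kind of ingestion is a dispatch problem: route each file to a format-specific extractor, then normalise the result into one record shape. The sketch below uses placeholder extractor bodies; in practice each would wrap a real tool (a PDF parser, a .docx reader, OCR, speech-to-text), which is an assumption here, as are the file names.

```python
from pathlib import Path
from typing import Callable, Optional

# Placeholder extractors: real implementations would call format-specific
# libraries. Here they just return a labelled stub string.
def extract_pdf(p: Path) -> str:   return f"[pdf text from {p.name}]"
def extract_docx(p: Path) -> str:  return f"[docx text from {p.name}]"
def extract_audio(p: Path) -> str: return f"[transcript of {p.name}]"

EXTRACTORS: dict[str, Callable[[Path], str]] = {
    ".pdf": extract_pdf,
    ".docx": extract_docx,
    ".mp3": extract_audio,
    ".wav": extract_audio,
}

def ingest(path: Path) -> dict:
    """Normalise one source file into a common record shape."""
    extractor = EXTRACTORS.get(path.suffix.lower())
    if extractor is None:
        # Unsupported formats are recorded, not dropped, so nothing is lost silently.
        return {"source": path.name, "status": "unsupported", "text": None}
    return {"source": path.name, "status": "ok", "text": extractor(path)}

print(ingest(Path("fiche_technique.PDF"))["status"])  # → ok
```

The `unsupported` status is deliberate: every source file ends up in the inventory with a status, so the transition cannot quietly lose a format nobody thought to handle.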
What we did for ICAT and FAO Togo
PANEOTECH faced exactly this situation on the e-Agriconseil+ platform, redeveloped for the Institut de Conseil et d'Appui Technique (ICAT) under the Pro-SADI programme commissioned by FAO Togo. The previous version was technologically obsolete and the source code was not recoverable. The mandate was to rebuild the platform from scratch while preserving the operational knowledge embedded in the existing system.
The reconstruction phase mapped the existing advisory workflows through structured interviews with ICAT institutional staff. The data engineering phase extracted, normalised, and integrated more than 300 technical sheets and over one gigabyte of source material from heterogeneous formats including PDFs, Word documents, and unstandardised audio files. The result is a coherent, structured knowledge base that the new platform serves through a content management system ICAT administrators now operate independently.
The institutional lesson
Lost source code is not the worst thing that can happen to a public sector platform. The worst thing is treating it as a reason to do nothing. Reconstruction is hard, slow, and unglamorous, but it is the work that turns an inherited liability into an asset the institution actually owns.