Our red lines.
These commitments shape every mission we accept and how we deliver it. We publish them so we can be held to them.
The systems we build are powerful and dual-use. We address that tension directly, by writing down what we will not do and committing to live by it.
Seven things we refuse, in advance.
01. No surveillance of journalists, opposition, civil society, or peaceful protest.
We design every system to make this kind of misuse harder rather than easier. Where a partner asks otherwise, we refuse and document the refusal.
02. No autonomous AI deciding citizens’ rights, benefits, or freedoms.
Every AI system we deploy must be auditable, explainable to the officials who use it, and able to be turned off if it fails or causes harm.
03. No predictive policing or social scoring.
No system that uses AI to assign risk scores in ways that determine access to public services or attention from security forces.
04. No generative AI that impersonates people or institutions.
No communications attributed to people who did not write them.
05. No systems that deny entitlements or enable extrajudicial action.
Nothing that denies citizens access to services they are entitled to under law, and nothing built for extrajudicial detention, deportation outside legal process, or the targeting of civilians in conflict.
06. No biometric data on children outside child-protection contexts.
Only with parental and legal authorization, only for the protection and welfare of those children.
07. No work for sanctioned regimes, armed groups, or surveillance vendors.
No missions from governments under active UN, AU, EU, or UK human-rights sanction. No missions from non-state armed groups, private military contractors, or commercial entities whose primary business is surveillance technology.
Digital Missions exists to help African states build the foundational infrastructure they need to serve their citizens. We take this work seriously, and that means being clear, in advance, about what we will and will not do.
The systems we build are powerful. The same data infrastructure that allows a Ministry of Health to deliver vaccines can be used to track the people who refuse them. The same border management system that disrupts smuggling can be used to harass travelers from a particular ethnic group. The same artificial intelligence that helps a tax authority detect fraud can be used to deny citizens services they are entitled to. We refuse to pretend this tension does not exist. We address it directly, by writing down what we will not do and committing to live by it.
What we will not build — in full
We will not build systems whose primary purpose is the surveillance of journalists, opposition figures, civil society organizations, religious minorities, or peaceful protest movements. We will design every system we build to make this kind of misuse harder rather than easier. Where a partner asks us to design in a way that would enable it, we will refuse the request and document the refusal.
We will not build autonomous AI systems that make decisions affecting citizens’ rights, benefits, or freedoms without meaningful human review. Every AI system we deploy must be auditable, explainable to the officials who use it, and able to be turned off if it fails or causes harm.
We will not build predictive policing systems, social scoring systems, or any system that uses AI to assign risk scores to individuals or groups in ways that determine access to public services or attention from security forces.
We will not deploy generative AI in any form that impersonates citizens, officials, or institutions, or that produces communications attributed to people who did not write them.
We will not build systems whose primary purpose is to deny citizens access to services they are entitled to under their country’s laws or constitution.
We will not build systems for use in extrajudicial detention, deportation outside legal process, or the targeting of civilians in conflict.
We will not build systems that collect biometric or behavioral data on children for any purpose other than the protection and welfare of those children, and then only with parental and legal authorization.
How we build
Every mission we accept must leave the partner state more capable, more sovereign, and more in control of its own infrastructure than we found it. This is not a soft commitment. It shapes which contracts we accept and how we deliver them.
Every mission we accept must include a deliberate plan for capability transfer to local engineers and institutions. Capability transfer is not a side activity. It is built into the mission design, staffed deliberately, and measured as a deliverable. The depth and pace of transfer will be calibrated to the nature of the mission. For long-term system implementations, the expectation is that, by the end of the engagement, named individuals inside the partner institution can operate, maintain, and extend the systems we have built without us. For shorter or emergency missions where this is not feasible, we will document the gap, the reason for it, and the follow-on work needed to close it, and we will commit to returning for that follow-on work if the partner wants us to.
Where the partner state has sovereign data infrastructure available and operational, we will host on it by default. Where sovereign infrastructure is not yet available, we will work with whatever hosting environment the partner and funder have chosen, but we will be transparent about where the data lives, who has access to it, and what the implications are. We will design every system we build to be portable, so the partner can move its data and systems to sovereign infrastructure when that capability becomes available. Where AI inference or training requires data to leave the partner’s borders, this will be named explicitly in the mission brief and the partner’s informed agreement will be documented before the work begins.
Where the technology stack involves proprietary tools, we will name them clearly in the mission brief and explain the trade-offs to the partner. Where Digital Missions has a meaningful choice in the stack, we will favor open standards and tools that the partner can maintain or replace without our continued involvement. Where the partner or funder has already committed to a proprietary stack before our engagement, we will work within that constraint, but we will document the lock-in implications and, where possible, design our own contributions to be portable so they can be carried forward if the underlying stack changes.
Who we will not work with
We will not accept missions from governments under active sanction by the United Nations, the African Union, the European Union, or the United Kingdom for human rights violations, while those sanctions remain in force.
We will not accept missions from non-state armed groups, private military contractors, or commercial entities whose primary business is surveillance technology sold to governments.
We will not accept missions whose principal funder is known to require, as a condition of funding, capabilities or access that would breach the commitments above.
How we hold ourselves accountable
We publish an annual transparency report. It names every mission accepted in the prior year and the partner involved. It names every mission declined and the reason for the refusal, with the partner anonymized only where naming would create a direct safety risk. It includes specific data on capability transfer outcomes, including how many local staff were trained and certified during each mission and what portion of system operation transferred to local control by the end of the engagement.
We commission an independent annual review of our compliance with these red lines, conducted by a reviewer chosen jointly with our advisory board. The review is published with the transparency report.
We maintain an internal escalation channel that allows any member of staff, at any level and in any base, to raise a concern that a mission is approaching one of our red lines. Concerns raised through this channel are reviewed by the mission lead and, if unresolved, by the founder. No staff member is penalized for raising a concern in good faith.
We retain the right to withdraw from a mission already in progress if the partner’s intent or use of the system shifts in a way that crosses one of our red lines. We will publish the reason for any withdrawal in the next transparency report.
Why these lines and not others
These red lines are not exhaustive. They will evolve as the work and the operating environment evolve. They are written to address the specific risks of building data infrastructure and AI systems for African states in this era, and they will be revised as those risks change. The AI clauses in particular are subject to formal review every twelve months.
We have chosen these lines because they are the ones we are most likely to face, and the ones whose violation would most directly damage the people the reimagined state is meant to serve. We hold ourselves to them not because doing so is required by any law, but because the legitimacy of the work depends on it. A reimagined state that cannot operate its own systems, cannot explain decisions made about its citizens, and cannot protect the people it governs is not a reimagined state. It is the same failed state with better software, owned by someone else.
If a mission crosses one of these lines, we walk away.