Stop Surveillance Humanitarianism
A standoff between the United Nations World Food Program and Houthi rebels in control of the capital region is threatening the lives of hundreds of thousands of civilians in Yemen.
Alarmed by reports that food is being diverted to support the rebels, the aid program is demanding that Houthi officials allow it to deploy biometric technologies like iris scans and digital fingerprints to monitor suspected fraud during food distribution.
The Houthis have reportedly blocked food delivery, painting the biometric effort as an intelligence operation, and have demanded access to the personal data of aid beneficiaries. The impasse led the aid organization to decide last month to suspend food aid to parts of the starving population — a step once thought of as a last resort — unless the Houthis allow biometrics.
With program officials saying their staff members are prevented from doing their essential jobs, turning to a technological solution is tempting. But biometrics deployed in crises can lead to a form of surveillance humanitarianism that can exacerbate risks to privacy and security.
By surveillance humanitarianism, I mean the enormous data collection systems deployed by aid organizations that inadvertently increase the vulnerability of people in urgent need.
Despite the best intentions, the decision to deploy technology like biometrics rests on a number of unproven assumptions: that technological solutions can fix deeply embedded political problems; that auditing for fraud requires tracking entire populations through their personal data; that experimental technologies will work as planned in a chaotic conflict setting; and, last, that the ethics of consent don't apply to people who are starving.
Biometric and digital identity technologies can seriously disrupt the lives of displaced people. Interviewing dozens of migrants and refugees in Europe who fled conflict in East Africa, I was told how minor discrepancies in identity databases can cause bureaucratic chaos. A misspelled name, for example, can be used as a threat to separate a child from her parents or reject an asylum application.
It is not a simple matter of getting the database to work correctly — some government officials used biometrics to carry out policies that discriminate against displaced people. Fearing surveillance technologies, some refugees avoided camps that required fingerprint scans in exchange for food and shelter.
For humanitarian organizations, monitoring and collecting data are essential for delivering the right amount of aid to the right people at the right place and time. When these organizations collect data, they are trusted more than companies or governments because their mandates include the principles of humanity, neutrality, impartiality and independence.
Yet to date, the humanitarian sector has not developed the calculus to weigh the benefits of digital identity systems against the costs to fundamental rights. A report commissioned by Oxfam released last year found a concerning lack of evidence to support the assumption that biometrics will reduce fraud at key points in the food distribution process. If most of the Yemenis burdened with proving their identity in the food lines are not the masterminds behind Houthi schemes to divert aid, is the biometric response proportional?
One might say that in a war zone, the risk to privacy is insignificant compared with the dangers of going without food. This may be true in the immediate moment. But potential harms related to data are often latent or shifted to a later time.
If an individual or group’s data is compromised or leaked to a warring faction, it could result in violent retribution for those perceived to be on the wrong side of the conflict. When I spoke with officials providing medical aid to Syrian refugees in Greece, they were so concerned that the Syrian military might hack into their database that they simply treated patients without collecting any personal data. The fact that the Houthis are vying for access to civilian data only elevates the risk of collecting and storing biometrics in the first place.
Data collectors and data brokers hold power. This is why surveillance technologies can be so alluring. International organizations that deploy large-scale identity collection systems can become the largest data brokers in a crisis region.
The responsibility these organizations have to those they serve goes beyond data protection and privacy. It’s about upholding the human dignity of those who have been stripped of the ability to provide food for themselves and their families. Forcing them to submit biometrics further erodes their sense of agency.
There are ways for organizations like the United Nations World Food Program to safeguard against surveillance humanitarianism. They can use identity systems that are less invasive than biometric iris and fingerprint scanners. Systems can minimize the collection of personally identifiable information. Databases can be not only encrypted but also engineered so that no one has access to the data.
More important, accountability mechanisms can be put in place, before deployment, to redress harms when they do occur. Sufficient staff can be allocated to sort out technical or bureaucratic errors. Data-sharing agreements and tech vendors can be vetted, and human rights impact assessments undertaken.
There need to be opt-out alternatives for individuals that will not result in the denial of aid. And organizations should consider phasing out the biometric system once the threats are gone. Having a theory of harm and acknowledging the risks of technology would go a long way in building trust.
When decision makers turn to technology as a solution, they need to be aware of both the immediate trade-offs and the unintended consequences. Without this awareness, the situation in Yemen could become an extreme example of a larger problem: the creation of a digital underclass forced to hand over personal data in exchange for basic needs like food, without dignity or choice.
Mr. Latonero is a research lead at Data & Society and a fellow at the Harvard Kennedy School’s Carr Center for Human Rights Policy.