Participatory AI: Ensuring Ethical Deployment in Humanitarian Crises and Forced Displacement
Explore the paradox of participatory AI in humanitarian contexts. Learn how power dynamics and "participation washing" risk harm, and discover pathways for accountable, human-centered AI deployment.
The Growing Call for Participatory AI
In an era increasingly shaped by artificial intelligence, a global consensus is emerging: public engagement is crucial for responsible AI development and deployment. Technology leaders, academics, governments, and multilateral agencies are actively exploring "participatory AI" – methodologies designed to involve citizens and communities whose lives are directly impacted by algorithmic tools. This approach seeks to embed diverse perspectives, identify potential risks and biases, and foster public trust in AI systems. Initiatives range from surveys and community consultations to co-designing AI models, all driven by the belief that inclusivity and transparency are foundational to ethical technology. The integration of public views is particularly vital in high-stakes environments, such as those where human rights are at risk or where AI directly influences access to essential services and the collection of sensitive personal data.
The Critical Gap: AI in Humanitarian Crises
While the Global North champions participatory AI, a significant gap exists in its application within the Global South, especially in contexts of humanitarian crises and forced displacement. Here, where over 180 million people urgently require life-saving aid, the deployment of AI and algorithmic tools is rapidly accelerating. This presents a profound paradox: AI solutions are being introduced to some of the world's most vulnerable populations, yet efforts to genuinely consult and involve these communities remain limited. The stakes in these settings could not be higher, as AI decisions directly affect individuals' access to humanitarian aid, public services, and the management of personally identifiable information, including biometric data. Without meaningful community engagement, AI innovations in these critical sectors risk becoming experimental and, worse, extractive.
A recent academic paper, "From experimentation to engagement: on the paradox of participatory AI and power in contexts of forced displacement and humanitarian crises," published as a preprint in March 2026 by Stella Suge et al., critically examines this challenge (arXiv:2604.06219). The paper highlights that, despite humanitarian agencies' long-standing commitments to community participation, accountability, and shifting power dynamics, only a handful are actively seeking the views of those directly affected by AI solutions. This oversight not only undermines ethical principles but also hinders the development of more effective and responsible AI.
Unpacking "Participation Washing": Risks and Realities
The research, based on a pilot exercise with communities in Kakuma Refugee Camp in northwestern Kenya, reveals critical limitations in current participatory AI approaches. These limitations could lead to what the authors term "participation washing" – a superficial engagement that provides an illusion of involvement without genuine influence or impact. Such tokenistic participation significantly increases the risk of algorithmic harm, where AI systems inadvertently or directly cause negative consequences for displaced and crisis-affected individuals. The study emphasizes that these risks are not primarily due to varying levels of community understanding or awareness of AI. Instead, they are deeply rooted in the fundamental power dynamics inherent within the humanitarian sector.
These power imbalances create an environment where genuine dialogue is difficult. When organizations prioritize rapid deployment over careful, collaborative design, they risk creating solutions that are not only less effective but actively harmful. For enterprises seeking to deploy AI in sensitive environments, understanding this risk is paramount. Solutions must be designed to prioritize data privacy and local control, reducing dependency on external cloud infrastructures where data sovereignty might be compromised. For example, ARSA Technology's AI Box Series and AI Video Analytics Software can be deployed on-premise, keeping data ownership and processing entirely within the client's own infrastructure, a critical consideration in privacy-sensitive applications.
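To make the data-flow point concrete, here is a minimal sketch of what an on-premise analytics loop can look like. It is illustrative only: the names (DetectionEvent, analyze_frame, events.db) are hypothetical and are not taken from any vendor SDK. The point it demonstrates is the boundary: inference and storage both run inside the deploying organization's own infrastructure, and nothing is sent to an external service.

```python
# Illustrative only: a minimal on-premise analytics loop in which detection
# events are processed and stored entirely on local infrastructure.
# All names (DetectionEvent, analyze_frame, events.db) are hypothetical.
import sqlite3
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DetectionEvent:
    camera_id: str
    label: str        # e.g. "person", "vehicle"
    confidence: float
    timestamp: str

def analyze_frame(camera_id: str, frame: bytes) -> list[DetectionEvent]:
    """Stand-in for a locally hosted model; no frame data leaves this process."""
    # A real deployment would call an on-device or on-premise model here.
    return [DetectionEvent(camera_id, "person", 0.93,
                           datetime.now(timezone.utc).isoformat())]

def store_events(db_path: str, events: list[DetectionEvent]) -> None:
    """Persist results to a local database so raw footage and metadata stay on-site."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("""CREATE TABLE IF NOT EXISTS events
                        (camera_id TEXT, label TEXT, confidence REAL, timestamp TEXT)""")
        conn.executemany("INSERT INTO events VALUES (?, ?, ?, ?)",
                         [(e.camera_id, e.label, e.confidence, e.timestamp) for e in events])

if __name__ == "__main__":
    events = analyze_frame("gate-01", b"\x00" * 1024)  # placeholder frame bytes
    store_events("events.db", events)
    print(f"Stored {len(events)} event(s) locally; nothing left the local network.")
```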
Navigating Power Dynamics in AI Governance
The core of the "participation washing" problem lies in the complex web of power differentials within the humanitarian ecosystem. This includes the unequal relationships among humanitarian aid recipients, service providers, donor governments, and host nations. Additionally, there are significant power disparities and conflicting incentives between commercial AI companies and humanitarian actors. AI companies might prioritize scale and technological advancement, while humanitarian organizations might be driven by efficiency targets set by donors, often overlooking the nuanced needs and perspectives of the communities they serve. This structural imbalance fundamentally obstructs meaningful community engagement.
Overcoming these deeply entrenched power dynamics requires more than just goodwill; it demands a deliberate architectural shift in how AI solutions are designed, implemented, and governed. It necessitates a commitment to transferring not just information about AI, but also agency over its application, to crisis-affected communities. This means moving beyond simple consultations to genuine co-creation and oversight, ensuring that the voices of those most impacted are heard, respected, and acted upon throughout the entire lifecycle of an AI system. For critical identity and access control systems, robust, on-premise biometric solutions such as ARSA's Face Recognition & Liveness SDK can help ensure data remains under the control of the deploying entity, bolstering trust and compliance in sensitive operations. Since 2018, ARSA Technology has developed production-ready systems that prioritize operational reliability and data privacy, serving various industries including government and public safety.
Towards Accountable and Ethical AI Deployment
To avoid the pitfalls of experimentation and tokenism, organizations deploying AI in humanitarian contexts must commit to long-term, deep, and purposeful engagement methods. This includes rigorous participatory processes that actively account for existing power dynamics and facilitate knowledge transfer about AI to crisis-affected communities, empowering them to shape the technology that impacts their lives. Critically, the paper advocates for an independent governance architecture capable of holding humanitarian AI accountable. Such a structure would ensure transparency, provide redress mechanisms, and allow AI decisions to be contested, mirroring global responsible-AI governance norms that are still largely absent in this vulnerable sector.
Such an independent framework could ensure that AI systems, whether used for resource allocation, identity verification, or security monitoring, adhere to the highest ethical standards. This aligns with ARSA Technology's philosophy of building practical, proven, and profitable AI solutions designed for real-world operations where accuracy, reliability, and data control are non-negotiable. Our focus on self-hosted deployment options, from AI software to turnkey edge systems, reflects a commitment to empowering clients with full control over their data and infrastructure, a principle that is especially vital in sensitive environments.
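As a rough illustration of what contestability can look like in practice, the sketch below shows one way to log an automated decision with enough context to audit it later and to let an affected person or independent reviewer file a challenge against a specific record. The field names and the append-only log format are assumptions for illustration, not a published standard or any particular organization's implementation.

```python
# Illustrative only: a minimal, self-contained record format for logging an
# automated decision so it can later be audited and contested.
# Field names and the JSON-lines log are assumptions, not a published standard.
import json
import uuid
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject_ref: str            # pseudonymous reference, never raw identity data
    decision: str               # e.g. "aid_allocation_deferred"
    model_version: str
    inputs_summary: dict        # features used, in human-readable form
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionRecord, path: str = "decisions.log") -> str:
    """Append the decision to a local, append-only log; return its ID for redress requests."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record.record_id

def file_contest(record_id: str, reason: str, path: str = "contests.log") -> None:
    """Register a challenge against a logged decision so an independent reviewer can act on it."""
    entry = {"record_id": record_id, "reason": reason,
             "filed_at": datetime.now(timezone.utc).isoformat(), "status": "open"}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    rid = log_decision(DecisionRecord("case-0042", "aid_allocation_deferred", "v1.3",
                                      {"household_size": 5, "priority_score": 0.61}))
    file_contest(rid, "Household size was recorded incorrectly during registration.")
```

The design choice that matters here is not the storage format but the guarantee: every automated decision leaves a traceable record, and the path to contest it is built into the system rather than bolted on afterwards.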
Conclusion: Building Trust Through True Engagement
The deployment of AI in humanitarian crises presents both immense opportunities and significant ethical challenges. While the potential for AI to improve efficiency and reach in aid delivery is undeniable, the risks of harm from poorly governed and non-participatory systems are equally substantial. The paradox of accelerating AI adoption without commensurate community engagement highlights a critical governance gap. By embracing genuine participatory methods, acknowledging and mitigating power imbalances, and establishing independent accountability frameworks, the humanitarian sector can move beyond mere experimentation to truly impactful and human-centered AI. For enterprises and institutions, embracing such a responsible approach is not just an ethical imperative but a strategic necessity for building trust and ensuring the long-term legitimacy and effectiveness of AI solutions.
To explore how ARSA Technology's AI and IoT solutions can be responsibly deployed to meet your organization's specific needs, we invite you to contact ARSA for a free consultation.
Source: Suge, S., Spencer, S. W., Moorosi, N., McElhinney, H., Loane, G., & Black, S. (2026). From experimentation to engagement: on the paradox of participatory AI and power in contexts of forced displacement and humanitarian crises. arXiv preprint arXiv:2604.06219.