The real problem is not being late but being unprepared
For many international NGOs, the first encounter with generative AI has taken the form of anxiety rather than strategy. The prevailing fear is that other organisations have already mastered the field while development actors are still trying to separate practical value from noise. Yet the picture that emerges from Matt Haikin’s account is more sober and more useful: most NGOs are not lagging behind a mature field; they are standing, like everyone else, at the start of the same uncertain curve. Beneath the rhetoric of transformation, the sector remains defined by scattered experimentation, informal use, and internal ambiguity about what AI is for, who should govern it, and where its risks begin.
That diagnosis matters because it shifts the leadership challenge. The question is not how an NGO can become “AI-first” before its peers. It is how it can respond to a fast-moving technology without mistaking urgency for clarity. Haikin’s comparison with earlier waves of digital adoption is persuasive up to a point. The underlying organisational issues are familiar: capability gaps, uneven infrastructure, ethics concerns, vendor dependence, and the temptation to pilot solutions before defining the actual problem. What makes this moment different is pace. Generative AI compresses decision cycles from years into months, leaving slow-moving institutions exposed not because they lack vision, but because their normal governance rhythms are poorly suited to the speed of the tools.
Shadow AI is already reshaping how NGOs work
One of the article’s sharpest observations is that the most significant AI activity in NGOs is often the least visible. Staff are already using ChatGPT, Claude, Gemini, Copilot, and similar tools to draft reports, translate text, summarise meetings, structure proposals, and clean data. Because this activity is informal and sometimes hidden from managers, organisations may already be capturing AI productivity gains while absorbing risks they have not named, measured, or managed. The sector’s first AI problem is therefore not invention but visibility.
That makes Haikin’s emphasis on “shadow AI” especially important. Informal experimentation is not inherently reckless; in many cases it reflects staff trying to solve real operational problems with accessible tools. The danger lies in secrecy. When use remains underground, institutions lose the chance to learn what is genuinely useful, where harms are emerging, and what kinds of guidance would help. In effect, AI begins to alter knowledge production, drafting practices, decision support, and internal workflows without ever appearing in formal strategy. For NGOs, which operate in complex and often high-stakes environments, that is not a minor governance gap. It is the starting point of a potentially serious accountability problem.
The safest first uses are internal, but the lessons are strategic
Haikin argues that NGOs should resist the urge to begin their AI journey with community-facing deployments. That is a sound judgment. External tools may promise visibility and impact, but they also bring the heaviest ethical burden, especially where language, literacy, consent, and local context affect how systems are understood and who gets excluded. By contrast, internal applications such as search across institutional documents, report synthesis, translation, and knowledge management offer a more controlled environment in which organisations can test both the usefulness of AI and the quality of their own foundations.
What makes these early internal uses valuable is not only that they carry comparatively low risk. It is that they reveal weaknesses NGOs have often deferred confronting: fragmented data, poor metadata, unclear ownership, inconsistent standards, and uncertain access controls. In that sense, a modest internal AI project can become a diagnostic tool for the institution itself. It forces leaders to ask whether the organisation’s information architecture is fit for any intelligent system, whether proprietary knowledge is being handled responsibly, and whether staff understand enough to interrogate outputs rather than merely consume them. Starting internally is therefore not a retreat from ambition. It is a way to build competence before exposing communities to the consequences of institutional immaturity.
Governance and literacy will matter more than any single tool
The article is strongest when it insists that the real work of AI adoption is organisational rather than technical. Governance cannot be treated as a static policy exercise updated every 18 months while models, risks, and practices change every quarter. Nor can leadership assume that AI is the domain of specialists alone. Haikin’s proposed division among users, integrators, and strategists usefully captures a point many NGOs still understate: AI literacy is becoming a baseline operational skill, not a niche capability. Staff do not need to become engineers, but they do need enough fluency to write effective prompts, spot hallucinations, question bias, and know when to escalate.
For leadership, the implication is clear. The first stage of an NGO’s generative AI journey should not be defined by a flagship product or a polished strategy deck. It should be defined by whether the institution can make sensible decisions under uncertainty, surface and learn from informal practice, govern pilots in real time, and build confidence across the organisation rather than concentrate it in a few vocal enthusiasts or external vendors. The NGOs that navigate generative AI best are unlikely to be those that move most theatrically, but those that learn fastest without losing institutional discipline. In a sector built around public trust and human consequence, that is not caution for its own sake. It is the only credible foundation for responsible adoption.
Source: Where Should International NGOs Start Their Generative AI Journey?
