
AI is changing our humanitarian work

by IFRC AI Community of Practice | Dec 12, 2023 | Innovation Stories

Amid the constant buzz around AI and generative AI, teams across the IFRC network are continually exploring: How can we adapt our work to these new tools? What are the needs and constraints around using AI in our work?

AI is changing our humanitarian work, and IFRC has an agreement with Microsoft to support our AI journey. PMER (Planning, Monitoring, Evaluation and Reporting) practitioners from the IFRC network organised a pilot training on generative AI. The primary objective of the two-day workshop was to build a shared understanding of how AI can be leveraged in PMER, demystify the complexity around AI, and foster cross-functional collaboration.

Participants from six different National Societies, two regional offices and the Secretariat got insights into AI’s potential to streamline PMER processes, improve data quality and analysis, enhance predictive capabilities, and transform reporting. The workshop also included practical exploratory learning; participants demonstrated how they use generative AI in different parts of their work, from report analysis to statistical analysis, and made recommendations to improve workflows and the quality of delivery. IFRC IT security and legal experts participated in the workshop to provide guidance and observe how staff innovated with these tools. 

What our leaders need to know about the future of work and AI

How can we keep up with the rapid technological changes as humanitarians? We outlined the following recommendations for you: 

Move forward on research, testing and upskilling.

  1. The future of work is now – Technologies are changing rapidly, and people will use them, so we need to prepare ourselves. Some of our workforce have never tried AI tools and are afraid of using them. There is a tremendous opportunity to inform our work with clear guidance and appropriate application. Your teams are likely already exploring, so it is best to learn with them. Staff will need to learn new skills, and we will need to evolve our processes continuously to be more effective. 
  2. Observe, Test and Discuss – Host internal discussions and demonstrations of these tools and practices so that staff can experience them and collaborate on best efforts. One exercise that could provide fruitful hands-on learning is a ‘prompt-a-thon’. Don’t just include your most technical staff. We need to find out how and when we should use these tools. Challenge the results, think about them critically, and always verify. Collaborate on generating use cases and then determine the appropriate workflows. The German Red Cross is conducting AI training classes at the federal level for leaders and practitioners to learn about these tools and discuss how to integrate them into their work appropriately. 
  3. Iterate and refine – Consider when and why these tools would be used. The ethical and security concerns are paramount. As we change and adapt our workflows, being proactive and informed with staff and volunteers can ensure we seize these opportunities in ways shaped by our values. 

National Societies and IFRC are innovating with AI and Generative AI  

Across the network, National Societies are innovating with AI and Generative AI. Here are some examples from the American Red Cross, Australian Red Cross, British Red Cross, the Asia Pacific Disaster Resilience Center (APDRC) (IFRC and Republic of Korea Red Cross), Japanese Red Cross, Netherlands Red Cross (510 team), and the IFRC GO team. There are many examples of data science across the network. Let us know how you are using these tools in your work!

What does your prompt look like?

Leaders asked the Asia Pacific PMER team to prepare an analysis of the past 5 years of evaluations and lessons learned from emergencies and operations in the region for the upcoming IFRC Asia Pacific Conference. They had hundreds of pages of analysis.

Using ChatGPT, they compiled the top insights along with verification steps. It took a few days of analysis but saved substantial time. The output even surfaced lessons from activities that had not originally been cited. Guided by skilled PMER experts, this shows how generative AI could change our workflows. Leaders can draw on the insights to inform overall decision-making. 
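One practical step in a workflow like this is splitting hundreds of pages into pieces that fit a model's context window before prompting. The sketch below is our own illustration of that idea, not the team's actual code, and the size parameters are placeholder values to be tuned to whichever model is used:

```python
def chunk_text(text, max_chars=8000, overlap=200):
    """Split a long document into overlapping chunks for an AI model.

    max_chars and overlap are illustrative defaults, not recommended
    values; the overlap keeps sentences that straddle a boundary from
    being lost between two chunks.
    """
    if max_chars <= overlap:
        raise ValueError("max_chars must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks
```

Each chunk would then be sent to the model with the analysis prompt, and the per-chunk answers compiled and verified by PMER staff, as the team describes.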

The team shared how they refined the Generative AI prompts to interrogate the evaluations content: 

Prompt: Review the provided material. Identify and articulate key insights, experiences, and practices that have emerged from these materials. Instead of framing these as recommendations for future actions, present them as lessons learned. Emphasize what has been understood and gained from past experiences, and how these lessons could inform and enlighten future efforts. 

Present in sequentially numbered bullet points that are concise, clear, contextual and readable. Make the list exhaustive: (or alternatively, summarize to x no. of points, depending on the level of analysis we were at).
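As a rough illustration of how a reusable prompt like this could be parameterised for the exhaustive-versus-summarised variants the team mentions (the function and names below are our own, not the team's):

```python
# The fixed part of the team's lessons-learned prompt, quoted from the article.
LESSONS_PROMPT = (
    "Review the provided material. Identify and articulate key insights, "
    "experiences, and practices that have emerged from these materials. "
    "Instead of framing these as recommendations for future actions, "
    "present them as lessons learned. Emphasize what has been understood "
    "and gained from past experiences, and how these lessons could inform "
    "and enlighten future efforts.\n\n"
    "Present in sequentially numbered bullet points that are concise, "
    "clear, contextual and readable. "
)

def build_lessons_prompt(max_points=None):
    """Return the prompt, either asking for an exhaustive list or for a
    summary capped at max_points bullet points (the variant in brackets
    in the article)."""
    if max_points is None:
        return LESSONS_PROMPT + "Make the list exhaustive."
    return LESSONS_PROMPT + f"Summarize to {max_points} points."
```

Keeping the template in one place makes it easy to refine the wording over successive runs, which is how the team describes arriving at this prompt.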

Training the Model

While there are amazing opportunities to make work more efficient, we must remember that AI tools work based on the data we input. Depending on the version of the AI platform and the settings applied, the data we input could be used to train the model and could appear in responses it gives to other individuals and organisations. So it is critical that we first consider whether the data we would like to use is confidential or contains personal or other sensitive data. And even if the data is safe to use in a model, will the results be accurate, fair, and the best for the scenario we have in mind? We must verify outputs and assess whether following an AI’s plan will put vulnerable individuals and communities at risk.
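As a small illustration of that first check, a script can mechanically flag the most obvious kinds of personal data before anything is pasted into an external tool. This is a hypothetical sketch, not IFRC guidance, and an empty result from it is no substitute for review by legal and data protection experts:

```python
import re

# Illustrative patterns only -- real data-protection review needs far
# more than regexes (names, locations, case details, and context all matter).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def flag_sensitive(text):
    """Return the kinds of obviously personal data found in text.

    An empty list does NOT mean the text is safe to share with an AI
    platform -- this only catches mechanical patterns.
    """
    return sorted(kind for kind, pat in PII_PATTERNS.items() if pat.search(text))
```

A check like this is best treated as a tripwire that forces a human pause, not as an automated green light.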

“Responsible experimentation also requires careful consultation with legal and data protection teams to develop clear staff guidelines. Don’t take a shortcut on governance and oversight of AI at this stage, with a view to implementing when your custom solutions are ready. Bad habits are hard to break, and behaviours that reinforce the responsible use of AI tools that put the onus of responsibility on employees must be established as early as possible.”

TechRadar, November 2023

Thanks to the AI community of practice and the global PMER teams for sharing their input. Stay tuned for more AI news!

Join the AI Community of Practice

We want to start by empowering the most digitally skilled in our organizations – data scientists and non-specialists alike – with the space to experiment with and learn these emergent technologies, so that we can ensure they are used safely, effectively, and to solve the right problems. We welcome you to join the AI Community of Practice to make change together. We are already planning activities for you.

Some useful Resources

Check out those links to explore more!

