In a recent workshop on generative artificial intelligence in the IFRC network, participants asked: "How can we ensure that AI-driven humanitarian tools adhere to ethical guidelines, avoid biases, and prioritize the well-being of the communities they aim to serve?" Picking up the conversation on bias, we share an unsettling experiment on ChatGPT's perception of humanitarian workers. It is an invitation to reflect on the systemic biases that generative AI tools surface, and on how we use these tools.
Navigating AI biases: ChatGPT, DALL-E, and humanitarian workers