Navigating AI biases: ChatGPT, DALL-E and humanitarian workers

In a recent workshop on Generative Artificial Intelligence in the IFRC network, participants asked: "How can we ensure that AI-driven humanitarian tools adhere to ethical guidelines, avoid biases, and prioritize the well-being of the communities they aim to serve?" Picking up that conversation on bias, we share a scary experiment on ChatGPT's perception of humanitarian workers. It is an invitation to reflect on the systemic biases that Gen AI tools surface, and on how we use these tools.

Innovation benefits and risks – balancing the aspects

Innovation, efficiency, cybersecurity and resilience are some of the values all humanitarian organisations strive to excel at. There is a price to pay: one cannot be good at everything, and the art is to find the right balance. Extreme efficiency tends to be vulnerable to disruption and may therefore be less resilient, just as an innovation workstream, which must accept a higher likelihood of failure, may by nature carry a higher cybersecurity risk.
