Why effective disaster management needs responsible AI

The use of artificial intelligence holds promise for helping avert, mitigate and manage disasters by analyzing vast swaths of data, but more effort is required to ensure that these technologies are deployed in a responsible, equitable manner.
According to the UN Office for Disaster Risk Reduction (UNDRR), disasters between 2000 and 2019 claimed about 1.2 million lives worldwide and affected more than 4 billion people.
Faster data labelling
Cameron Birge, Senior Program Manager for Humanitarian Partnerships at Microsoft, says the company's work using AI for humanitarian missions has been human-centric. "Our approach has been about helping the humans, the humans stay in the loop, do their jobs better, faster and more efficiently," he noted.
One of their projects in India uses roofing as a proxy indicator of lower-income households that are likely to be more vulnerable to extreme events like typhoons. Satellite imagery analysis of roofs is used to inform disaster response and resilience-building plans. A simple yet rewarding application of AI has been data labelling to train the models that assist disaster management.
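To make the labelling-to-model pipeline concrete, here is a minimal Python sketch: human-labelled roof tiles train a classifier whose per-district predictions could feed vulnerability maps. The tile size, label scheme, and model choice are assumptions for illustration, not Microsoft's actual system.

```python
# Hypothetical sketch: classifying roof types from satellite image tiles as a
# vulnerability proxy. Data, labels, and model are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Stand-in for labelled 16x16-pixel RGB roof tiles produced by human
# annotators (the "data labelling" step described above).
n_tiles = 1000
X = rng.random((n_tiles, 16 * 16 * 3))   # flattened pixel features
y = rng.integers(0, 2, n_tiles)          # 0 = durable roof, 1 = makeshift roof

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Tiles predicted as makeshift roofs can be aggregated per district to flag
# likely lower-income, more vulnerable households for response planning.
print(classification_report(y_test, model.predict(X_test)))
```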
One challenge, he noted, has been obtaining "unbiased, good, clean, trusted data". He also encouraged humanitarian organizations to understand their responsibilities when using AI models to support decision-making. "You have to ensure you sustain, train and monitor these models," he advised. Microsoft also promotes wider data sharing through its 'Open Data' campaign.
Precise decision support
AI is becoming increasingly important to the work of the World Meteorological Organization (WMO). Supercomputers crunch petabytes of data to forecast weather around the world. The WMO also coordinates a global programme of surface-based and satellite observations. Their models merge data from more than 30 satellite sensors, weather stations and ocean-observing platforms all over the planet, explained Anthony Rea, Director of the Infrastructure Department at WMO.
AI can help interpret resulting data and help with decision support for forecasters who receive an overwhelming amount of data, said Rea. "We can use AI to recognize where there might be a severe event or a risk of it happening, and use that in a decision support mechanism to make the forecaster more efficient and maybe allow them to pick up things that couldn't otherwise be picked up."
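One simple way to picture this kind of decision support is an anomaly detector that scans merged observations and surfaces the grid cells a forecaster should review first. The sketch below is a hypothetical illustration; the feature set and detector are assumptions, not WMO's operational tooling.

```python
# Illustrative sketch of AI-assisted decision support: flag grid cells whose
# observations look anomalous so a forecaster reviews them first.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Stand-in for merged multi-sensor observations at 500 grid cells:
# [surface pressure (hPa), 10 m wind speed (m/s), precipitation rate (mm/h)]
obs = np.column_stack([
    rng.normal(1013, 5, 500),
    rng.gamma(2.0, 3.0, 500),
    rng.exponential(1.5, 500),
])
# Inject a handful of storm-like outliers: low pressure, high wind and rain.
obs[:5] = [[960, 45, 30]] * 5

detector = IsolationForest(contamination=0.02, random_state=0).fit(obs)
flags = detector.predict(obs)   # -1 marks anomalous cells

# Surface only the flagged cells so the forecaster's attention goes to
# potential severe events rather than the full data stream.
print("Cells to review first:", np.where(flags == -1)[0])
```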
Understanding the potential impact of extreme weather events on an individual or a community and assessing their vulnerability requires extra information on the built environment, population, and health.
"We need to understand where AI and machine learning can help and where we are better off taking the approach of a physical model. There are many examples of that case as well. Data curation is really important," he added.
WMO also sets the standards for international weather data exchange, covering data identification, formats and ontologies. While advocating for the availability of data, Rea also highlighted the need to be mindful of privacy and ethical considerations when dealing with personal data. WMO is revising its own data policies ahead of its Congress later this year, committing to free and open exchange of data beyond the meteorological community.
'Not a magic bullet'
Rea believes that AI cannot replace the models built on physical understanding and decades of research into interactions between the atmosphere and oceans. "One of the things we need to guard against in the use of AI is to think of it as a magic bullet," he cautioned.
Instead of vertically integrating a specific dataset and using AI to generate forecasts, Rea sees a lot of promise in bringing together different datasets in a physical model to generate forecast information. "We use machine learning and AI in situations where maybe we don't understand the underlying relationships. There are plenty of places in our area of science and service delivery where that is possible."
Rakesh Bharania, Director of Humanitarian Impact Data at Salesforce.org, also sees the potential of artificial or augmented intelligence in decision support and in areas where little contextual knowledge is required. "If you have a lot of data about a particular problem, then AI is certainly arguably much better than having humans going through that same mountain of data. AI can do very well in answering questions where there is a clear, right answer," he said.
One challenge in the humanitarian field, Bharania noted, is scaling a solution from a proof of concept to something mature, usable and relevant. He also cautioned that data used for prediction is not objective and can skew results.
"It's going to be a collaboration between the private sector who typically are the technology experts and the humanitarians who have the mission to come together and actually focus on determining what the right applications are, and to do so in an ethical and effective and impactful manner," he said. Networks such as NetHope and Impactcloud are trying to build that space of cross-sectoral collaboration, he added.
Towards 'white box AI'
Yasunori Mochizuki, NEC Fellow at NEC Corporation, recalled how local governments in Japan relied on social networks and crowd-behaviour analyses for real-time decision-making in the aftermath of 2011’s Great East Japan Earthquake and resulting tsunami.
NEC's solution analyzed tweets to extract information and identify areas with heavy damage in need of immediate rescue, integrating it with information provided by public agencies. "Tweets are challenging for computers to understand as the context is heavily compressed and expression varies from one user to another. It is for this reason that the most advanced class of natural language processing AI in the disaster domain was developed," Mochizuki explained.
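The sketch below illustrates the basic idea at toy scale: classifying short, informally written messages by urgency. The tiny corpus and simple TF-IDF model are stand-ins for the far more advanced NLP Mochizuki describes, chosen only to make the pipeline concrete.

```python
# Minimal, hypothetical sketch of damage detection from tweets: classify
# short, informal messages as needing rescue or not.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "bridge collapsed people trapped near the station pls send help",
    "water rising fast 2nd floor cant get out",
    "house shaking but we are ok",
    "power is back on in our block",
    "roof gone family on the street need rescue now",
    "just felt a small aftershock nothing damaged",
]
needs_rescue = [1, 1, 0, 0, 1, 0]   # human-labelled training data

# Character n-grams help with the compressed, irregular spelling of tweets.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, needs_rescue)

print(model.predict(["ppl stuck under rubble near school send rescue"]))
```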
Mochizuki sees the need for AI solutions in disaster risk reduction to provide management-oriented support, such as optimizing logistics and recovery tasks. This requires 'white box AI', he said, also known as 'explainable AI'. "While typical deep learning technology doesn't tell us why a certain result was obtained, white box AI gives not only the prediction and recommendation, but also the set of quantitative reasons why AI reached the given conclusion," he said.
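As a simple illustration of the contrast with black-box models, the hypothetical sketch below fits a shallow decision tree whose prediction can be printed as explicit, auditable rules. The features and data are invented for the example and do not represent NEC's model.

```python
# Hedged illustration of "white box" output: a shallow decision tree whose
# prediction comes with explicit, quantitative rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
features = ["rainfall_mm", "river_level_m", "elevation_m"]

X = np.column_stack([
    rng.uniform(0, 200, 800),   # 24 h rainfall
    rng.uniform(0, 6, 800),     # river gauge level
    rng.uniform(0, 50, 800),    # elevation above river
])
# Synthetic ground truth: flooding when rain and river are high and ground low.
y = ((X[:, 0] > 120) & (X[:, 1] > 4) & (X[:, 2] < 20)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Unlike a deep network, the fitted rules can be printed and audited directly,
# giving the quantitative reasons behind each recommendation.
print(export_text(tree, feature_names=features))
```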
Webinar host and moderator Muralee Thummarukudy, Operations Manager, Crisis Management Branch at the United Nations Environment Programme (UNEP), also acknowledged the value of explainable AI. "It will be increasingly important that AI is able to explain the decisions transparently so that those who use or are subject to the outcome of these black box technologies would know why those decisions were taken," he said.
[Source: ITU]