Standardisation of Cybersecurity for Artificial Intelligence

The European Union Agency for Cybersecurity (ENISA) publishes an assessment of standards for the cybersecurity of AI and issues recommendations to support the implementation of upcoming EU policies on Artificial Intelligence (AI).

This report focuses on the cybersecurity aspects of AI, which are integral to the European legal framework regulating AI proposed by the European Commission last year, known as the "AI Act".

What is Artificial Intelligence?

The draft AI Act provides a definition of an AI system as "software developed with one or more (…) techniques (…) for a given set of human-defined objectives, that generates outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with." In a nutshell, these techniques mainly include machine learning (drawing on methods such as deep learning), logic- and knowledge-based approaches, and statistical approaches.

Agreeing on what falls within the definition of an 'AI system' is essential for the allocation of legal responsibilities under a future AI framework.

However, the exact scope of an AI system is constantly evolving, both in the legislative debate on the draft AI Act and in the scientific and standardisation communities.

Although broad in scope, this report focuses on machine learning (ML) due to its extensive use across AI deployments. ML has come under scrutiny for vulnerabilities that particularly affect the cybersecurity of an AI implementation.

AI cybersecurity standards: what’s the state of play?

As standards help mitigate risks, this study identifies existing general-purpose standards for information security and quality management that are readily available and applicable in the context of AI. To mitigate some of the cybersecurity risks affecting AI systems, further guidance could be developed to help the user community apply these existing standards to AI.

This suggestion is based on an observation about the software layer of AI: what is applicable to software can, to a large extent, also be applied to AI. However, the work does not end there. Other aspects still need to be considered, such as:

  • a system-specific analysis to cater for security requirements deriving from the domain of application;
  • standards to cover aspects specific to AI, such as the traceability of data and testing procedures.

Further observations concern the extent to which the assessment of compliance with security requirements can be based on AI-specific horizontal standards, and the extent to which it can be based on vertical, sector-specific standards.

Key recommendations include:

  • Using a standardised AI terminology for cybersecurity;
  • Developing technical guidance on how existing standards related to the cybersecurity of software should be applied to AI;
  • Reflecting on the inherent features of ML in AI; in particular, risk mitigation should be considered through standards covering the hardware/software components supporting AI, reliable metrics, and testing procedures;
  • Promoting the cooperation and coordination across standards organisations’ technical committees on cybersecurity and AI so that potential cybersecurity concerns (e.g., on trustworthiness characteristics and data quality) can be addressed in a coherent manner.

Regulating AI: what is needed?

As for many other pieces of EU legislation, compliance with the draft AI Act will be supported by standards. When it comes to compliance with the cybersecurity requirements set by the draft AI Act, additional aspects have been identified. For example, standards for conformity assessment, in particular related to tools and competences, may need to be further developed. Also, the interplay across different legislative initiatives needs to be further reflected in standardisation activities – an example of this is the proposal for a regulation on horizontal cybersecurity requirements for products with digital elements, referred to as the “Cyber Resilience Act”.

Building on the report and other desk research as well as input received from experts, ENISA is currently examining the need for and the feasibility of an EU cybersecurity certification scheme on AI. ENISA is therefore engaging with a broad range of stakeholders including industry, ESOs and Member States, for the purpose of collecting data on AI cybersecurity requirements, data security in relation to AI, AI risk management and conformity assessment.

ENISA advocated the importance of standardisation in cybersecurity today at the RSA Conference in San Francisco, in the 'Standards on the Horizon: What Matters Most?' panel alongside the National Institute of Standards and Technology (NIST).

Autonomous driving systems: A long road ahead

Substantive regulatory progress has been made since last year, despite the global COVID-19 pandemic that paralyzed supply chains in some industries around the world and shifted the mobility landscape considerably.
Still, progress towards fully autonomous driving has been slow. The industry has established the SAE levels of driving automation, spanning assisted, automated and autonomous driving; fully autonomous driving is represented only by the highest level, Level 5.
[Figure: SAE levels of automation]
Here are the top three takeaways from the recent Symposium on the Future Networked Car 2021:
1. Regulatory efforts are advancing in preparation for Autonomous Driving Systems (ADS)
The past year has seen considerable progress at the global, regional and national levels. The shared nature of most transport infrastructure and automotive supply chains means that common standards and interoperability in the manufacture and communication capabilities of different types of vehicles will be vital.
At the global level, two new regulations were recently introduced by the United Nations Economic Commission for Europe (UNECE) on Cybersecurity (UN Regulation 155) and Software Updates (UN Regulation 156). A new UN Regulation 157 on Automated Lane Keeping Systems, covering highly automated driving up to 60 km/h on motorways, was also recently approved.
Regulatory preparedness is mostly being developed at the regional level, with vehicle type approval, product liability and general product safety, and roadworthiness tests developed by the European Union and also in the Asia-Pacific region.
At the national level, developments include liability, traffic rules, regulatory mandates, trials, and infrastructure. For example, Finland has authorized Level 5 driving, and Germany has already authorized the use of automated vehicles on its motorways.
2. Fully Autonomous Driving Systems (ADS) are still a long way off
Currently, only Level 2 vehicles are widely available on the market (aside from autonomous shuttles and an autonomous taxi service operating in Phoenix, Arizona, in the United States since October 2020). However, Honda recently announced its first Level 3 driving system, due to be launched later this year.
The car industry, highways agencies and transport regulators are working together to overcome the significant challenges introduced by autonomous driving. Chief among these are safety considerations – and what constitutes ‘acceptable risk’ for car occupants, as well as the broader public.
Data challenges also persist, from the capture and preservation of data to its interpretation and protection. Improving the physical environment with markers to make a more intelligent environment for automated, let alone autonomous, vehicles is another challenge, as well as collaboration that would enable intelligent vehicles to function across borders.
Other major challenges include the introduction of self-learning artificial intelligence (AI) systems in automated driving systems, as well as cybersecurity considerations – how to prevent unauthorized or illegal intrusions into connected cars or their networks.
3. The communication and data demands of ADS will be enormous
The changes driven by the advent of ADS are many and large. Even cars already on the road today are said to be running over 150 million lines of code. Many participants emphasized the changes needed in physical infrastructure, such as 5G masts and improved road markings, as well as the information needs and data demands, for mapping and object identification, for instance.
5G will be instrumental in meeting the communication needs of automated driving, from smart parking to V2V (vehicle-to-vehicle) and V2I (vehicle-to-infrastructure) communications. A host of innovations and improvements are needed throughout the vehicle ecosystem to help create an optimal real-world environment for automated driving systems. ITU is working with all stakeholders to help realize these innovations in the interests of smarter and safer mobility.
[Source: ITU]

How AI will shape smart cities

Cities worldwide are not just growing, but also trying to reconfigure themselves for a sustainable future, with a higher quality of life for every citizen. That means capitalizing on renewable power sources, maximizing energy efficiency and electrifying transport on an unprecedented scale.
In parallel, artificial intelligence (AI) and machine learning are emerging as key tools to bring that future into being as global temperatures creep upward.
The 2015 Paris Agreement called for limiting the rise in average global temperatures to 1.5°C compared to pre-industrial levels, implying a massive reduction of greenhouse gas (GHG) emissions.
Meeting the ambitious climate goal would require a near-total elimination of emissions from power generation, industry, and transport by 2050, said Ariel Liebman, Director of Monash Energy Institute, at a recent AI for Good webinar convened by an ITU Focus Group studying AI and environmental efficiency.
A key role in renewables
Renewable energy sources, including the sun, wind, biofuels and renewable-based hydrogen, make net-zero emissions theoretically possible. But solar and wind facilities – whose output varies with seasons, the weather and time of day – require complex grid management and real-time responsiveness to work 24/7.
Smart grids incorporating data analytics, however, can operate smoothly with high shares of solar and wind power.
"AI methods – particularly optimization, machine learning, time series forecasting and anomaly detection – have a crucial role to play in the design and operation of this future carbon-free electricity grid," explained Liebman.
One power grid in Indonesia could reach 50 per cent renewables by 2030 at no extra cost compared to building new coal- and gas-fired plants, according to a modelling tool used at Monash. Renewable power generation costs have plummeted worldwide in recent years.
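To make the forecasting idea Liebman mentions more concrete, the sketch below fits a simple autoregressive model that predicts the next hour of solar output from the previous day of observations. The synthetic data, lag features and model choice are illustrative assumptions only, not the Monash modelling tool.

```python
# Illustrative sketch: short-term solar output forecasting with a simple
# autoregressive model. The synthetic data and feature choices are assumptions
# for demonstration, not the Monash Energy Institute tool.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic hourly solar output: a daily cycle modulated by weather-like noise.
hours = np.arange(24 * 60)  # 60 days of hourly data
daily_cycle = np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0, None)
output = daily_cycle * (0.7 + 0.3 * rng.random(len(hours)))

def make_lagged_features(series, n_lags=24):
    """Build a design matrix whose columns are the previous n_lags values."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

X, y = make_lagged_features(output)
split = len(X) - 24  # hold out the final day
model = Ridge(alpha=1.0).fit(X[:split], y[:split])

pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"Mean absolute error on the held-out day: {mae:.3f}")
```

In practice, grid operators would use richer inputs (weather forecasts, satellite data) and dedicated forecasting models, but the lag-based setup above captures the basic structure of the task.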
Anticipating future needs
Shifts in consumer demand for heat, light, or mobility can create further uncertainties, especially in urban environments. But reinforcement learning, combined with neural networks, can aid the understanding of how buildings consume energy, recommend adjustments and guide occupant behaviour.
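As a rough illustration of that idea, the sketch below uses tabular Q-learning, a simple stand-in for the neural-network-based reinforcement learning mentioned above, to learn a heating policy that balances comfort against energy use. The toy building dynamics, states and reward are assumptions made purely for demonstration.

```python
# Minimal tabular Q-learning sketch for building-energy control.
# The toy "building" model, state space and reward are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

temps = np.arange(17, 26)            # discretised indoor temperature states (deg C)
actions = np.array([-1, 0, 1])       # lower, hold, or raise the heating setpoint
q_table = np.zeros((len(temps), len(actions)))

def step(state_idx, action_idx):
    """Toy dynamics: the action shifts indoor temperature by one bin, with random heat loss."""
    drift = rng.choice([-1, 0])                       # heat loss to the outside
    next_idx = int(np.clip(state_idx + actions[action_idx] + drift, 0, len(temps) - 1))
    comfort_penalty = abs(temps[next_idx] - 21)       # distance from a 21 deg C comfort point
    energy_penalty = 0.5 if actions[action_idx] == 1 else 0.0
    return next_idx, -(comfort_penalty + energy_penalty)

alpha, gamma, epsilon = 0.1, 0.9, 0.1
state = 0
for _ in range(20_000):
    if rng.random() < epsilon:                        # explore
        action = int(rng.integers(len(actions)))
    else:                                             # exploit the current estimate
        action = int(np.argmax(q_table[state]))
    nxt, reward = step(state, action)
    # Standard Q-learning update.
    q_table[state, action] += alpha * (reward + gamma * q_table[nxt].max() - q_table[state, action])
    state = nxt

print("Greedy action per temperature state:", actions[np.argmax(q_table, axis=1)])
```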
"AI can make our existing assets more effective and efficient, but also help us in developing new business models, both in terms of cleaner technology, and also for our customers," said Dan Jeavons, General Manager, Data Science, at Shell.
The global energy giant put over 65 AI applications into service last year, enabling the company to monitor 5,700 pieces of equipment and generate real-time data feeds from across its asset base.
A data-driven approach
Digital consultancy Capgemini uses satellite data to understand fire risks and devise rescue plans. Another project uses data from Copernicus satellites to detect plastic waste in our natural environment.
“Deep learning algorithms simulate the shape and movement of plastic waste in the ocean and then train the algorithm to efficiently detect plastic waste," said Sandrine Daniel, head of the company’s scientific office.
Electric vehicle start-up Arrival takes a data-driven approach to decisions over the entire product lifecycle. Produced in micro-factories with plug-and-play composite modules, its vehicle designs reduce the environmental impact of manufacturing and use.
"We design things to be upgradable," said Jon Steel, Arrival’s Head of Sustainability. Functional components facilitate repair, replacement, or reuse, while dedicated software monitors energy use and performance, helping to extend each vehicle’s useful life.
Digital twins for urban planning
Real-time virtual representations – known as digital twins – have been instrumental in envisioning smart, sustainable cities, said Kari Eik, Secretary General of the Organization for International Economic Relations (OiER).
Under the global United for Smart Sustainable Cities (U4SSC) initiative, a project with about 50 cities and communities in Norway uses digital twins to evaluate common challenges, model scenarios and identify best practices.
"Instead of reading a 1,000-page report, you are looking into one picture,” Eik explained. “It takes five seconds to see not just a challenge but also a lot of the different use cases."
For digital twins, a privacy-by-design approach with transparent, trusted AI will be key to instil trust among citizens, said Albert H. Seubers, Director of Global Strategy IT in Cities, Atos. He hopes the next generation of networks in cities is designed to protect personal data, reduce network consumption, and make high-performance computing more sustainable. "But this also means we have to build a data management function or responsibility at the city level that really understands what it means to deploy data analytics and manage the data."
Seubers called for open standards to enable interoperability, a key ingredient in nurturing partnerships focused on sustainable city building. "Implementing minimal interoperability mechanisms means that from design, we have private data security and explainable AI. In the end, it's all about transparency and putting trust in what we do," he said.
[Source: ITU]

Using AI to better understand natural hazards and disasters

As the realities of climate change take hold across the planet, the risks of natural hazards and disasters are becoming ever more familiar. Meteorologists, aiming to protect increasingly populous countries and communities, are tapping into artificial intelligence (AI) to gain an edge in early detection and disaster relief.
AI shows great potential to support data collection and monitoring, the reconstruction and forecasting of extreme events, and effective and accessible communication before and during a disaster.
This potential was in focus at a recent workshop feeding into the first meeting of the new Focus Group on AI for Natural Disaster Management. The group is open to all interested parties, supported by the International Telecommunication Union (ITU) together with the World Meteorological Organization (WMO) and UN Environment.
“AI can help us tackle disasters in development work as well as standardization work. With this new Focus Group, we will explore AI’s ability to analyze large datasets, refine datasets and accelerate disaster-management interventions,” said Chaesub Lee, Director of the ITU Telecommunication Standardization Bureau, in opening remarks to the workshop.
New solutions for data gaps
"High-quality data are the foundation for understanding natural hazards and underlying mechanisms providing ground truth, calibration data and building reliable AI-based algorithms," said Monique Kuglitsch, Innovation Manager at Fraunhofer Heinrich-Hertz-Institut and Chair of the new Focus Group.
In Switzerland, the WSL Institute for Snow and Avalanche Research uses seismic sensors in combination with a supervised machine-learning algorithm to detect the tremors that precede avalanches.
“You record lots of signals with seismic monitoring systems,” said WSL researcher Alec van Herwijnen. “But avalanche signals have distinct characteristics that allow the algorithm to find them automatically. If you do this in continuous data, you end up with very accurate avalanche data."
Real-time data from weather stations throughout the Swiss Alps can also feed a new snowpack stratigraphy simulation model to monitor danger levels and predict avalanches.
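A minimal sketch of the supervised approach described above is shown below: fixed-length windows of a continuous seismic trace are summarised by a few features and classified as avalanche-like or background. The synthetic waveforms, features and classifier are illustrative assumptions, not the WSL pipeline.

```python
# Hedged sketch: classify windows of a continuous seismic trace as
# "avalanche-like" or background. Synthetic data for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def make_window(is_event, n=512):
    """Background noise, or noise plus a low-frequency emergent burst."""
    x = rng.normal(0, 1.0, n)
    if is_event:
        t = np.arange(n)
        x += np.sin(2 * np.pi * 5 * t / n) * np.hanning(n) * 4.0
    return x

def features(window):
    """Simple per-window summaries: energy, peak amplitude, spectral centroid."""
    spec = np.abs(np.fft.rfft(window))
    centroid = (spec * np.arange(len(spec))).sum() / spec.sum()
    return [np.sum(window ** 2), np.max(np.abs(window)), centroid]

labels = rng.integers(0, 2, 1000)
X = np.array([features(make_window(bool(y))) for y in labels])

X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```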
Modelling for better predictions
Comparatively rare events, like avalanches, offer limited training data for AI solutions. How models trained on historical data cope with climate change remains to be seen.
At the Pacific Northwest Seismic Network, Global Navigation Satellite System (GNSS) data is monitored in support of tsunami warnings. With traditional seismic systems proving inadequate for very large earthquakes, University of Washington research scientist Brendan Crowell wrote an algorithm, G-FAST (Geodetic First Approximation of Size and Timing), which estimates earthquake magnitudes within seconds of an earthquake's time of origin.
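To show the flavour of such geodetic magnitude estimation, the sketch below inverts an empirical peak-ground-displacement scaling law of the form log10(PGD) = A + B*Mw + C*Mw*log10(R) for magnitude, station by station. The coefficients and GNSS readings are placeholder assumptions, not the published G-FAST calibration.

```python
# Worked sketch of magnitude-from-geodesy alerting: invert a PGD scaling law
# for moment magnitude Mw. Coefficients and readings are illustrative only.
import numpy as np

A, B, C = -4.43, 1.05, -0.14   # placeholder scaling coefficients (not calibrated)

def magnitude_from_pgd(pgd_cm, dist_km):
    """Invert the scaling law for one station: Mw = (log10(PGD) - A) / (B + C*log10(R))."""
    return (np.log10(pgd_cm) - A) / (B + C * np.log10(dist_km))

# Hypothetical GNSS observations: peak displacement (cm) and epicentral distance (km).
stations = [(12.0, 80.0), (6.5, 150.0), (3.2, 300.0)]
estimates = [magnitude_from_pgd(p, r) for p, r in stations]
print("Per-station Mw estimates:", [round(m, 2) for m in estimates])
print("Network estimate (median):", round(float(np.median(estimates)), 2))
```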
In north-eastern Germany, deep learning of waveforms produces probabilistic forecasts and helps to warn residents in affected areas. The Transformer Earthquake Alerting Model supports well-informed decision-making, said PhD Researcher Jannes Münchmeyer at the GeoForschungsZentrum Potsdam.
Better data practices for a resilient future
How humans react in a disaster is also important to understand. Satellite images of Earth at night, called "night lights", help to track the interactions between people and river resources. The dataset for Italy helps to manage water-related natural disasters, said Serena Ceola, Senior Assistant Professor at the University of Bologna.
Open data initiatives and public-private partnerships are also using AI in the hope of building a resilient future.
The ClimateNet repository promises a deep database for researchers, while the CLINT (Climate Intelligence) consortium in Europe aims to use machine learning to detect and respond to extreme events.
Some practitioners, however, are not validating their models with independent data, reinforcing perceptions of AI as a “black box”, says Carlos Gaitan, Co-founder and CTO of Benchmark Labs and a member of the American Meteorological Society Committee on AI Applications to Environmental Science. "For example, sometimes, you have only annual data for the points of observations, and that makes deep neural networks unfeasible."
A lack of quality-controlled data is another obstacle in environmental sciences that continue to rely on human input. Datasets come in different formats, and high-performing computers are not available to all, Gaitan added.
AI to power community-centred communications
Communications around disasters require a keen awareness of communities and the connections that comprise them.
"Too often when we are trying to understand the vulnerability and equity implications of our work, we are using data from the census of five or ten years ago,” said Steven Stichter, Director of the Resilient America Program at the US National Academies of Science (NAS). “That's not sufficient as we seek to tailor solutions and messages to communities."
A people-centered mechanism is at the core of the Sendai Framework for Disaster Risk Reduction, a framework providing countries with concrete actions that they can take to protect development gains from the risk of disaster.
If AI can identify community influencers, it can help to target appropriate messages to reduce vulnerability, Stichter said.
With wider internet access and improved data speeds, information can reach people faster, added Rakiya Babamaaji, Head of Natural Resources Management at Nigeria’s National Space Research and Development Agency and Vice Chair of the Africa Science and Technology Advisory Group on Disaster Risk Reduction (Af-STAG DRR).
AI can combine Earth observation data, street-level imagery, data drawn from connected devices, and volunteered geographical details. However, technology alone cannot solve problems, Babamaaji added. People need to work together, using technology creatively to tackle problems.
With clear guidance on best practices, AI will get better and better in terms of accessibility, interoperability, and reusability, said Jürg Luterbacher, Chief Scientist & Director of Science and Innovation at WMO. But any AI-based framework must also consider human and ecological vulnerabilities. "We have also to identify data biases, or train algorithms to interpret data within an ethical framework that considers minority and vulnerable populations," he added.

Latest issue of World Security Report has arrived

The Spring 2021 issue of World Security Report, with the latest industry views and news, is now available to download.
In the Spring 2021 issue of World Security Report:
- Phenomena or Just a ‘Bad Karma’
- Towards 2021 – Upcoming Organisation Risk & Resiliency Trends
- Maritime Domain Awareness - An Essential Component of a Comprehensive Border Security Strategy
- Security and Criminology- Risk Investigation and AI
- Resilience and Social Unrest
- State Sponsored Terror
- IACIPP Association News
- Industry news
Download your copy today at www.cip-association.org/WSR

NSCAI Report presents strategy for winning the artificial intelligence era

The 16 chapters in the National Security Commission on Artificial Intelligence (NSCAI) Main Report provide topline conclusions and recommendations. The accompanying Blueprints for Action outline more detailed steps that the U.S. Government should take to implement the recommendations.
The NSCAI acknowledges how much remains to be discovered about AI and its future applications. Nevertheless, enough is known about AI today to begin with two convictions.
First, the rapidly improving ability of computer systems to solve problems and to perform tasks that would otherwise require human intelligence—and in some instances exceed human performance—is world altering. AI technologies are the most powerful tools in generations for expanding knowledge, increasing prosperity, and enriching the human experience. AI is also the quintessential “dual-use” technology. The ability of a machine to perceive, evaluate, and act more quickly and accurately than a human represents a competitive advantage in any field—civilian or military. AI technologies will be a source of enormous power for the companies and countries that harness them.
Second, AI is expanding the window of vulnerability the United States has already entered. For the first time since World War II, America’s technological predominance—the backbone of its economic and military power—is under threat. China possesses the might, talent, and ambition to surpass the United States as the world’s leader in AI in the next decade if current trends do not change. Simultaneously, AI is deepening the threat posed by cyber attacks and disinformation campaigns that Russia, China, and others are using to infiltrate our society, steal our data, and interfere in our democracy. The limited uses of AI-enabled attacks to date represent the tip of the iceberg. Meanwhile, global crises exemplified by the COVID-19 pandemic and climate change highlight the need to expand our conception of national security and find innovative AI-enabled solutions.
Given these convictions, the Commission concludes that the United States must act now to field AI systems and invest substantially more resources in AI innovation to protect its security, promote its prosperity, and safeguard the future of democracy.
Full report is available at https://reports.nscai.gov/final-report

ITU to advance AI capabilities to contend with natural disasters

The International Telecommunication Union (ITU) – the United Nations specialized agency for information and communication technologies – has launched a new Focus Group to contend with the increasing prevalence and severity of natural disasters with the help of artificial intelligence (AI).
In close collaboration with the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP), the ITU Focus Group on 'AI for natural disaster management' will support global efforts to improve our understanding and modelling of natural hazards and disasters. It will distill emerging best practices to develop a roadmap for international action in AI for natural disaster management.
"With new data and new insight come new powers of prediction able to save countless numbers of lives," said ITU Secretary-General Houlin Zhao. "This new Focus Group is the latest ITU initiative to ensure that AI fulfils its extraordinary potential to accelerate the innovation required to address the greatest challenges facing humanity."
Clashes with nature impacted 1.5 billion people from 2005 to 2015, with 700,000 lives lost, 1.4 million injured, and 23 million left homeless, according to the Sendai Framework for Disaster Risk Reduction 2015-2030 developed by the UN Office for Disaster Risk Reduction (UNDRR).
AI can advance data collection and handling, improve hazard modelling by extracting complex patterns from a growing volume of geospatial data, and support effective emergency communications. The new Focus Group will analyze relevant use cases of AI to deliver technical reports and accompanying educational materials addressing these three key dimensions of natural disaster management. Its study of emergency communications will consider both technical as well as sociological and demographical aspects of these communications to ensure that they speak to all people at risk.
"This Focus Group looks to AI to help address one of the most pressing issues of our time," noted the Chair of the Focus Group, Monique Kuglitsch, Innovation Manager at ITU member Fraunhofer Heinrich Hertz Institute. “We will build on the collective expertise of the communities convened by ITU, WMO and UNEP to develop guidance of value to all stakeholders in natural disaster management. We are calling for the participation of all stakeholders to ensure that we achieve this."
Muralee Thummarukudy, Operations Manager for Crisis Management at UNEP explained: "AI applications can provide efficient science-driven management strategies to support four phases of disaster management: mitigation, preparedness, response and recovery. By promoting the use and sharing of environmental data and predictive analytics, UNEP is committed to accelerating digital transformation together with ITU and WMO to improve disaster resilience, response and recovery efforts."
The Focus Group's work will pay particular attention to the needs of vulnerable and resource-constrained regions. It will make special effort to support the participation of the countries shown to be most acutely impacted by natural disasters, notably small island developing states (SIDS) and low-income countries.
The proposal to launch the new Focus Group was inspired by discussions at an AI for Good webinar on International Disaster Risk Reduction Day, 13 October 2020, organized by ITU and UNDRR.
"WMO looks forward to a fruitful collaboration with ITU and UNEP and the many prestigious universities and partners committed to this exciting initiative. AI is growing in importance to WMO activities and will help all countries to achieve major advances in disaster management that will leave no one behind," said Jürg Luterbacher, Chief Scientist & Director of Science and Innovation at WMO. "The WMO Disaster Risk Reduction Programme assists countries in protecting lives, livelihoods and property from natural hazards, and it is strengthening meteorological support to humanitarian operations for disaster preparedness through the development of a WMO Coordination Mechanism and Global Multi-Hazard Alert System. Complementary to the Focus Group, we aim to advance knowledge transfer, communication and education – all with a focus on regions where resources are limited."

How artificial intelligence can help transform Europe’s health sector

A high-standard health system, rich health data and a strong research and innovation ecosystem are Europe’s key assets that can help transform its health sector and make the EU a global leader in health-related artificial intelligence applications.
The use of artificial intelligence (AI) applications in healthcare is increasing rapidly.
Before the COVID-19 pandemic, challenges linked to our ageing populations and shortages of healthcare professionals were already driving up the adoption of AI technologies in healthcare.
The pandemic has only accelerated this trend. Real-time contact tracing apps are just one example of the many AI applications used to monitor the spread of the virus and to reinforce the public health response to it.
AI and robotics are also key for the development and manufacturing of new vaccines against COVID-19.
A fresh JRC analysis shows that European biotech companies relying on AI have been strong partners in the global race to deliver a COVID-19 vaccine.
Based on this experience, the analysis highlights the EU’s strengths in the “AI in health” domain and identifies the challenges it still has to overcome to become a global leader.
High standard health system safeguards reliability of AI health applications
Europe’s high standard health system provides a strong foundation for the roll out of AI technologies.
Its high quality standards will ensure that AI-enabled health innovations maximise benefits and minimise risks.
The JRC study suggests that, similarly to the General Data Protection Regulation (GDPR), which is now considered a global reference, the EU is in a position to set the benchmark for global standards of AI in health in terms of safety, trustworthiness, transparency and liability.
The European Commission is currently preparing a comprehensive package of measures to address issues posed by the introduction of AI, including a European legal framework for AI to address fundamental rights and safety risks specific to the AI systems, as well as rules on liability related to new technologies.
Strong European research ecosystem supported by EU funding
At the moment, the EU is already well positioned in the application of AI in the healthcare domain - slightly behind China but on par with the US.
But judging from the EU’s research capacities, there is more potential.
The JRC analysis notes the strong investment of European biotech companies in research: in the EU, almost two thirds of all medical AI players are involved in research, against approximately one-third in China.
Consequently, Europe has a strong and diversified research and innovation ecosystem in the area of AI in health.
European companies are particularly strong in health diagnostics, health technology assessment, medical devices and pharmaceuticals.
The EU’s research framework programmes play an important role in the European research and innovation landscape in this domain.
A JRC report published in 2020 indicates that 146 projects linked to AI in health have been launched under the Horizon 2020 framework programme.
Funding for AI-in-health projects has been increasing over time, reaching over €100 million in 2020.

Why effective disaster management needs responsible AI

The use of artificial intelligence holds promise in helping avert, mitigate and manage disasters by analyzing swaths of data, but more efforts are required to ensure that technologies are deployed in a responsible, equitable manner.
According to UNDRR, about 1.2 million lives were lost worldwide and more than 4 billion people were affected by disasters that took place between 2000 and 2019.
Faster data labelling
Cameron Birge, Senior Program Manager, Humanitarian Partnerships, at Microsoft, says the company's work in using AI for humanitarian missions has been human-centric. "Our approach has been about helping the humans, the humans stay in the loop, do their jobs better, faster and more efficiently," he noted.
One of their projects in India uses roofing as a proxy indicator for lower-income households that are likely to be more vulnerable to extreme events such as typhoons. Satellite imagery analysis of roofs is used to inform disaster response and resilience-building plans. A simple yet rewarding use of AI has been data labelling to train the AI models that assist disaster management.
One challenge, he noted, has been around "unbiased, good, clean, trusted data". He also encouraged humanitarian organizations to understand their responsibilities when making use of AI models to support decision-making. "You have to ensure you sustain, train and monitor these models," he advised. Microsoft also wants to promote more sharing of data with its 'Open Data' campaign.
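The sketch below illustrates the label-then-train workflow Birge describes: labelled satellite tiles of roofs feed a simple classifier that flags the roofing types used as a vulnerability proxy. The synthetic tiles, colour features and labels are assumptions for illustration, not Microsoft's pipeline.

```python
# Minimal "label, then train" sketch: classify roof tiles from crude colour
# features. Synthetic tiles and labels stand in for real annotated imagery.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def synthetic_tile(is_metal_roof):
    """A 16x16 RGB tile: brighter, bluish pixels stand in for metal sheeting."""
    base = np.array([0.7, 0.7, 0.8]) if is_metal_roof else np.array([0.5, 0.35, 0.25])
    return np.clip(base + rng.normal(0, 0.08, (16, 16, 3)), 0, 1)

def tile_features(tile):
    """Mean and standard deviation per colour channel: a crude spectral signature."""
    return np.concatenate([tile.mean(axis=(0, 1)), tile.std(axis=(0, 1))])

labels = rng.integers(0, 2, 600)          # 1 = roofing type used as a vulnerability proxy
X = np.array([tile_features(synthetic_tile(bool(y))) for y in labels])

clf = LogisticRegression(max_iter=1000).fit(X[:500], labels[:500])
print(f"Accuracy on held-out tiles: {clf.score(X[500:], labels[500:]):.2f}")
```

A production system would work from real annotated imagery and a convolutional model, but the labelling step, the train/test split and the monitoring of accuracy over time are the same kind of housekeeping Birge refers to when he urges organizations to "sustain, train and monitor these models".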
Precise decision support
AI is becoming increasingly important to the work of the World Meteorological Organization (WMO). Supercomputers crunch petabytes of data to forecast weather around the world. The WMO also coordinates a global programme of surface-based and satellite observations. Their models merge data from more than 30 satellite sensors, weather stations and ocean-observing platforms all over the planet, explained Anthony Rea, Director of the Infrastructure Department at WMO.
AI can help interpret resulting data and help with decision support for forecasters who receive an overwhelming amount of data, said Rea. "We can use AI to recognize where there might be a severe event or a risk of it happening, and use that in a decision support mechanism to make the forecaster more efficient and maybe allow them to pick up things that couldn't otherwise be picked up."
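As a minimal sketch of that decision-support idea, the example below flags unusual station observations with an off-the-shelf anomaly detector so that a forecaster's attention can go there first. The synthetic readings and detector settings are illustrative assumptions, not WMO's systems.

```python
# Sketch: flag anomalous multivariate weather observations for forecaster review.
# Synthetic station data; IsolationForest settings chosen purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(4)

# Hypothetical hourly readings: pressure (hPa), wind speed (m/s), rain rate (mm/h).
normal = np.column_stack([
    rng.normal(1013, 4, 500),
    rng.gamma(2.0, 2.0, 500),
    rng.exponential(0.5, 500),
])
# A handful of severe-looking readings: pressure drop, high wind, heavy rain.
severe = np.column_stack([
    rng.normal(985, 3, 5),
    rng.normal(28, 3, 5),
    rng.normal(25, 5, 5),
])
observations = np.vstack([normal, severe])

detector = IsolationForest(contamination=0.01, random_state=0).fit(observations)
flags = detector.predict(observations)          # -1 marks an anomaly
print("Flagged rows for forecaster review:", np.where(flags == -1)[0])
```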
Understanding the potential impact of extreme weather events on an individual or a community and assessing their vulnerability requires extra information on the built environment, population, and health.
"We need to understand where AI and machine learning can help and where we are better off taking the approach of a physical model. There are many examples of that case as well. Data curation is really important," he added.
WMO also sets the standards for international weather data exchange, including factors such as identifying the data, formats, and ontologies. While advocating for the availability of data, Rea also highlighted the need to be mindful of privacy and ethical considerations when dealing with personal data. WMO is revising its own data policies ahead of its Congress later this year, committing to free and open exchange of data beyond the meteorological community.
'Not a magic bullet'
Rea believes that AI cannot replace the models built on physical understanding and decades of research into interactions between the atmosphere and oceans. "One of the things we need to guard against in the use of AI is to think of it as a magic bullet," he cautioned.
Instead of vertically integrating a specific dataset and using AI to generate forecasts, Rea sees a lot of promise in bringing together different datasets in a physical model to generate forecast information. "We use machine learning and AI in situations where maybe we don't understand the underlying relationships. There are plenty of places in our area of science and service delivery where that is possible."
Rakesh Bharania, Director of Humanitarian Impact Data at Salesforce.org, also sees the potential of artificial or augmented intelligence in decision support and areas where a lot of contextual knowledge is not required. "If you have a lot of data about a particular problem, then AI is certainly arguably much better than having humans going through that same mountain of data. AI can do very well in answering questions where there is a clear, right answer," he said.
One challenge in the humanitarian field, Bharania noted, is scaling a solution from a proof of concept to something mature, usable, and relevant. He also cautioned that data used for prediction is not objective and can impact results.
"It's going to be a collaboration between the private sector who typically are the technology experts and the humanitarians who have the mission to come together and actually focus on determining what the right applications are, and to do so in an ethical and effective and impactful manner," he said. Networks such as NetHope and Impactcloud are trying to build that space of cross-sectoral collaboration, he added.
Towards 'white box AI’
Yasunori Mochizuki, NEC Fellow at NEC Corporation, recalled how local governments in Japan relied on social networks and crowd-behaviour analyses for real-time decision-making in the aftermath of 2011’s Great East Japan Earthquake and resulting tsunami.
Their solution analyzed tweets to extract information, identify areas with heavy damage and in need of immediate rescue, and integrate this with information provided by public agencies. "Tweets are challenging for computers to understand as the context is heavily compressed and expression varies from one user to another. It is for this reason that the most advanced class of natural language processing AI in the disaster domain was developed," Mochizuki explained.
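A toy sketch of that tweet-triage step is shown below: short, compressed messages are classified as needing immediate rescue or not. The example messages, labels and the TF-IDF-plus-linear-model choice are assumptions for illustration, not NEC's production natural language processing system.

```python
# Toy text-triage sketch: flag short messages that suggest an immediate rescue need.
# The training messages and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "water rising fast, family trapped on second floor, please send help",
    "roof collapsed near the station, people stuck inside",
    "no power here but everyone is safe",
    "roads closed downtown, traffic is heavy",
    "need rescue, elderly neighbour cannot leave the building",
    "just checking in, we evacuated early and are fine",
]
train_labels = [1, 1, 0, 0, 1, 0]   # 1 = likely needs immediate rescue

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

new_posts = ["trapped on the bridge, water everywhere", "power is back on in our street"]
print(dict(zip(new_posts, model.predict(new_posts))))
```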
Mochizuki sees the need for AI solutions in disaster risk reduction to provide management-oriented support, such as optimizing logistics and recovery tasks. This requires "white box AI", also known as 'explainable AI', he said. "While typical deep learning technology doesn't tell us why a certain result was obtained, white box AI gives not only the prediction and recommendation, but also the set of quantitative reasons why AI reached the given conclusion," he said.
Webinar host and moderator Muralee Thummarukudy, Operations Manager, Crisis Management Branch at the United Nations Environment Programme (UNEP), also acknowledged the value of explainable AI. "It will be increasingly important that AI is able to explain the decisions transparently so that those who use or are subject to the outcome of these black box technologies would know why those decisions were taken," he said.
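As a minimal illustration of what such 'quantitative reasons' can look like, the sketch below uses a linear model, whose per-feature contributions to a prediction are exact, to accompany each decision with the factors behind it. The features and data are assumptions made for the example and do not represent NEC's white box AI.

```python
# Minimal "white box" illustration: report each feature's contribution alongside
# the prediction. Features and synthetic data are assumptions for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
feature_names = ["damaged_buildings", "road_blockages", "rescue_requests"]

# Synthetic area-level counts and a "needs immediate response" label.
X = rng.poisson(lam=[3, 2, 1], size=(400, 3)).astype(float)
y = (X @ np.array([0.6, 0.3, 1.2]) + rng.normal(0, 1, 400) > 4).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

area = np.array([[8.0, 1.0, 4.0]])          # one hypothetical area to explain
contributions = model.coef_[0] * area[0]     # per-feature contribution to the decision score
print("Prediction (1 = prioritise):", int(model.predict(area)[0]))
for name, value in zip(feature_names, contributions):
    print(f"  {name}: {value:+.2f} contribution to the decision score")
```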
[Source: ITU]