Spanish EU Council Presidency: CoESS and APROSER make proposals for a future-oriented, more resilient European Union

On 1 July 2023, Spain took over the rotating Presidency of the Council of the EU. It will thereby be responsible for leading the work in Brussels on important matters such as negotiations on the EU Artificial Intelligence (AI) Act and initiatives in the context of the European Year of Skills.

In a Joint Statement, CoESS and APROSER declare the commitment of the European security industry to support the efforts of the Spanish Presidency on a wide range of matters impacting not only the security services, but public security overall.

The Spanish Presidency comes at a particularly decisive moment. First, EU lawmakers will have to find agreement on a large range of open dossiers before the European elections in 2024, notably the EU AI Act. At the same time, European businesses and societies are confronted with a range of challenges, such as labour shortages and increasing threats to the protection of Critical Infrastructure and supply chains – to name only a few.

In their Joint Statement, the representatives of the European and Spanish private security industry, CoESS and APROSER, confirm their commitment to support the Spanish Presidency in its efforts to build a more future-oriented and resilient EU, and set out proposals for the way forward. These are grouped under four key messages:

- Recognising the value of private security services to European citizens and the economy
- Adapting legislation to the realities of a changing security landscape
- Empowering public security through qualified workers
- Enforcing the provision of high-quality security services to European citizens

Important recommendations include the hosting of a private security roundtable in Brussels, principles of human-centred AI and legal certainty in the context of the future EU AI Act, and a call for a revision of the EU Public Procurement Directives.

UK cyber chief: "AI should be developed with security at its core"

Security must be the primary consideration for developers of artificial intelligence (AI) in order to prevent designing systems that are vulnerable to attack, the head of the UK’s cyber security agency, the National Cyber Security Centre (NCSC), has today warned.

In a major speech, Lindy Cameron highlighted the importance of security being baked into AI systems as they are developed and not as an afterthought. She also emphasised the actions that need to be taken by developers to protect individuals, businesses, and the wider economy from inadequately secure products.

Her comments were delivered to an audience at the influential Chatham House Cyber 2023 conference, which sees leading experts gather to discuss the role of cyber security in the global economy and the collaboration required to deliver an open and secure internet.

She said:

“We cannot rely on our ability to retro-fit security into the technology in the years to come nor expect individual users to solely carry the burden of risk. We have to build in security as a core requirement as we develop the technology.

“Like our US counterparts and all of the Five Eyes security alliance, we advocate a ‘secure by design’ approach where vendors take more responsibility for embedding cyber security into their technologies, and their supply chains, from the outset. This will help society and organisations realise the benefits of AI advances but also help to build trust that AI is safe and secure to use.

“We know, from experience, that security can often be a secondary consideration when the pace of development is high.

“AI developers must predict possible attacks and identify ways to mitigate them. Failure to do so will risk designing vulnerabilities into future AI systems.”

The UK is a global leader in AI and has an AI sector that contributes £3.7 billion to the economy and employs 50,000 people. It will host the first ever summit on global AI Safety later this year to drive targeted, rapid, international action to develop the international guardrails needed for safe and responsible development of AI.

Reflecting on the National Cyber Security Centre’s role in helping to secure advancements in AI, she highlighted three key themes that her organisation is focused on. The first of these is to support organisations to understand the associated threats and how to mitigate against them. She said:

“It’s vital that people and organisations using these technologies understand the cyber security risks – many of which are novel.

“For example, machine learning creates an entirely new category of attack: adversarial attacks. As machine learning is so heavily reliant on the data used for the training, if that data is manipulated, it creates potential for certain inputs to result in unintended behaviour, which adversaries can then exploit.

“And LLMs pose entirely different challenges. For example - an organisation's intellectual property or sensitive data may be at risk if their staff start submitting confidential information into LLM prompts.”
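
To make the first of these risks concrete, the data-manipulation attack Cameron describes can be illustrated in a few lines. The sketch below is our illustration rather than NCSC material: an adversary flips a fraction of training labels, and the trained model's accuracy degrades. The dataset, model and poisoning rate are arbitrary assumptions.

# Minimal sketch of training-data poisoning (illustrative assumptions only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The "adversary" silently flips 30% of the training labels.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print(f"clean accuracy:    {clean_model.score(X_test, y_test):.3f}")
print(f"poisoned accuracy: {poisoned_model.score(X_test, y_test):.3f}")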

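The LLM risk is organisational as much as technical, and one common mitigation is to screen prompts before they leave the business. The sketch below is a hypothetical, minimal redaction filter; the patterns and the send-side hook are our assumptions, not guidance from the speech.

# Hypothetical pre-submission filter for LLM prompts: strip obvious
# confidential material before a prompt is sent to an external model.
# The patterns below are illustrative, not exhaustive.
import re

REDACTIONS = [
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,19}\b"), "[CARD_NUMBER]"),
    (re.compile(r"(?i)\b(api[_-]?key|secret|password)\b\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]

def redact(prompt: str) -> str:
    """Replace likely confidential substrings before the prompt leaves the organisation."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(redact("Summarise this: contact jane.doe@example.com, api_key = sk-12345"))
# -> "Summarise this: contact [EMAIL], api_key=[REDACTED]"
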
The second key theme Ms Cameron discussed was the need to maximise the benefits of AI to the cyber defence community. On the third, she emphasised the importance of understanding how our adversaries – whether they are hostile states or cyber criminals – are using AI and how they can be disrupted. She said:

“We can be in no doubt that our adversaries will be seeking to exploit this new technology to enhance and advance their existing tradecraft.

“LLMs also present a significant opportunity for states and cyber criminals too. They lower barriers to entry for some attacks. For example, they make writing convincing spear-phishing emails much easier for foreign nationals without strong linguistic skills.”

Standardisation of Cybersecurity for Artificial Intelligence

The European Union Agency for Cybersecurity (ENISA) publishes an assessment of standards for the cybersecurity of AI and issues recommendations to support the implementation of upcoming EU policies on Artificial Intelligence (AI).

This report focuses on the cybersecurity aspects of AI, which are integral to the European legal framework regulating AI, proposed by the European Commission last year and dubbed the “AI Act“.

What is Artificial Intelligence?

The draft AI Act provides a definition of an AI system as “software developed with one or more (…) techniques (…) for a given set of human-defined objectives, that generates outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” In a nutshell, these techniques mainly include: machine learning resorting to methods such as deep learning, logic, knowledge-based and statistical approaches.

It is essential for the allocation of legal responsibilities under a future AI framework to agree on what falls within the definition of an 'AI system'.

However, the exact scope of an AI system is constantly evolving, both in the legislative debate on the draft AI Act and in the scientific and standardisation communities.

Although broad in scope, this report focuses on machine learning (ML) due to its extensive use across AI deployments. ML has come under scrutiny for vulnerabilities that particularly impact the cybersecurity of an AI implementation.

AI cybersecurity standards: what’s the state of play?

As standards help mitigate risks, this study identifies existing general-purpose standards that are readily available for information security and quality management in the context of AI. To mitigate some of the cybersecurity risks affecting AI systems, further guidance could be developed to help the user community benefit from the existing standards on AI.

This suggestion is based on an observation about the software layer of AI: what is applicable to software can, to a large extent, be applied to AI. The work does not end there, however. Other aspects still need to be considered, such as:

  • a system-specific analysis to cater for security requirements deriving from the domain of application;
  • standards to cover aspects specific to AI, such as the traceability of data and testing procedures.

Further observations concern the extent to which compliance with security requirements can be assessed against AI-specific horizontal standards, and the extent to which such assessment can instead rely on vertical, sector-specific standards.

Key recommendations include:

  • Resorting to a standardised AI terminology for cybersecurity;
  • Developing technical guidance on how existing standards related to the cybersecurity of software should be applied to AI;
  • Reflecting the inherent features of ML in AI standards; in particular, risk mitigation should be addressed through standards covering the association of hardware/software components with AI, reliable metrics, and testing procedures;
  • Promoting the cooperation and coordination across standards organisations’ technical committees on cybersecurity and AI so that potential cybersecurity concerns (e.g., on trustworthiness characteristics and data quality) can be addressed in a coherent manner.

Regulating AI: what is needed?

As with many other pieces of EU legislation, compliance with the draft AI Act will be supported by standards. When it comes to compliance with the cybersecurity requirements set by the draft AI Act, additional aspects have been identified. For example, standards for conformity assessment, in particular those related to tools and competences, may need to be further developed. The interplay across different legislative initiatives also needs to be reflected in standardisation activities – an example is the proposed regulation on horizontal cybersecurity requirements for products with digital elements, referred to as the “Cyber Resilience Act”.

Building on the report and other desk research as well as input received from experts, ENISA is currently examining the need for and the feasibility of an EU cybersecurity certification scheme on AI. ENISA is therefore engaging with a broad range of stakeholders, including industry, European Standardisation Organisations (ESOs) and Member States, for the purpose of collecting data on AI cybersecurity requirements, data security in relation to AI, AI risk management and conformity assessment.

ENISA advocated the importance of standardisation in cybersecurity today at the RSA Conference in San Francisco, speaking in the ‘Standards on the Horizon: What Matters Most?’ panel alongside the National Institute of Standards and Technology (NIST).

Autonomous driving systems: A long road ahead

Substantive regulatory progress has been made since last year, despite the global COVID-19 pandemic that paralyzed supply chains in some industries around the world and shifted the mobility landscape considerably.
Still, progress towards fully autonomous driving has been slow. The industry has established six levels (0-5) of assisted, automated and autonomous driving; only Level 5 represents fully autonomous driving.
[Figure: SAE levels of automation]
Here are the top three takeaways from the recent Symposium on the Future Networked Car 2021:
1. Regulatory efforts are advancing in preparation for Autonomous Driving Systems (ADS)
The past year has seen considerable progress at the global, regional and national levels. The shared nature of most transport infrastructure and automotive supply chains means that common standards and interoperability in the manufacture and communication capabilities of different types of vehicles will be vital.
At the global level, two new regulations were recently introduced by the United Nations Economic Commission for Europe (UNECE) on Cybersecurity (UN Regulation 155) and Software Updates (UN Regulation 156). A new UN Regulation 157 on Automated Lane Keeping Systems for highly automated driving up to 60 km/h on motorways was also recently approved.
Regulatory preparedness is mostly being developed at the regional level, with vehicle type approval, product liability and general product safety, and roadworthiness tests developed by the European Union and also in the Asia-Pacific region.
At the national level, developments include liability, traffic rules, regulatory mandates, trials, and infrastructure. For example, Finland has authorized Level 5 driving, and Germany has already authorized the use of automated vehicles on its motorways.
2. Fully Autonomous Driving Systems (ADS) are still a long way off
Currently, only Level 2 vehicles are generally available on the market (aside from autonomous shuttles and an autonomous taxi service that has operated in Phoenix, Arizona, in the United States since October 2020). However, Honda recently announced its first Level 3 driving system, due to be launched later this year.
The car industry, highways agencies and transport regulators are working together to overcome the significant challenges introduced by autonomous driving. Chief among these are safety considerations – and what constitutes ‘acceptable risk’ for car occupants, as well as the broader public.
Data challenges also persist, from the capture and preservation of data to its interpretation and protection. Equipping the physical environment with markers to create a more intelligent setting for automated, let alone autonomous, vehicles is another challenge, as is the collaboration needed to enable intelligent vehicles to function across borders.
Other major challenges include the introduction of self-learning artificial intelligence (AI) systems in automated driving systems, as well as cybersecurity considerations – how to prevent unauthorized or illegal intrusions into connected cars or their networks.
3. The communication and data demands of ADS will be enormous
The changes driven by the advent of ADS are many and large. Even cars already on the road today are said to be running over 150 million lines of code. Many participants emphasized the changes needed in physical infrastructure, such as 5G masts and improved road markings, as well as the information needs and data demands, for mapping and object identification, for instance.
5G will be instrumental in meeting the communication needs of automated driving, from smart parking to V2V (vehicle-to-vehicle) and V2I (vehicle-to-infrastructure) communications. A host of innovations and improvements are needed throughout the vehicle ecosystem to help create an optimal real-world environment for automated driving systems. ITU is working with all stakeholders to help realize these innovations in the interests of smarter and safer mobility.
[Source: ITU]

How AI will shape smart cities

Cities worldwide are not just growing, but also trying to reconfigure themselves for a sustainable future, with higher quality of life for every citizen. That means capitalizing on renewable power sources, maximizing energy efficiency and scaling up electrified transport on an unprecedented scale.
In parallel, artificial intelligence (AI) and machine learning are emerging as key tools to bring that future into being as global temperatures creep upward.
The 2015 Paris Agreement called for limiting the rise in average global temperatures to 1.5°C compared to pre-industrial levels, implying a massive reduction of greenhouse gas (GHG) emissions.
Meeting the ambitious climate goal would require a near-total elimination of emissions from power generation, industry, and transport by 2050, said Ariel Liebman, Director of Monash Energy Institute, at a recent AI for Good webinar convened by an ITU Focus Group studying AI and environmental efficiency.
A key role in renewables
Renewable energy sources, including the sun, wind, biofuels and renewable-based hydrogen, make net-zero emissions theoretically possible. But solar and wind facilities – whose output varies with seasons, the weather and time of day – require complex grid management and real-time responsiveness to work 24/7.
Smart grids incorporating data analytics, however, can operate smoothly with high shares of solar and wind power.
"AI methods – particularly optimization, machine learning, time series forecasting and anomaly detection – have a crucial role to play in the design and operation of this future carbon-free electricity grid," explained Liebman.
One power grid in Indonesia could reach 50 per cent renewables by 2030 at no extra cost compared to building new coal- and gas-fired plants, according to a modelling tool used at Monash. Renewable power generation costs have plummeted worldwide in recent years.
Anticipating future needs
Shifts in consumer demand for heat, light, or mobility can create further uncertainties, especially in urban environments. But reinforcement learning, combined with neural networks, can aid the understanding of how buildings consume energy, recommend adjustments and guide occupant behaviour.
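As a toy illustration of the reinforcement-learning idea (a tabular stand-in for the neural-network-based approaches mentioned above, with dynamics and rewards invented for the example), a small Q-learning agent can learn when to heat a single room so as to balance comfort against energy use:

# Toy Q-learning "thermostat": all states, dynamics and rewards are assumptions.
import numpy as np

rng = np.random.default_rng(0)
TEMPS = np.arange(15, 26)   # discretised indoor temperatures, 15-25 °C
ACTIONS = [0, 1]            # 0 = idle, 1 = heat
Q = np.zeros((TEMPS.size, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(t_idx: int, action: int) -> tuple[int, float]:
    """Toy dynamics: heating warms by 1 °C, idling cools by 1 °C."""
    drift = 1 if action == 1 else -1
    nxt = int(np.clip(t_idx + drift, 0, TEMPS.size - 1))
    comfort = -abs(TEMPS[nxt] - 21)   # penalty for distance from a 21 °C setpoint
    energy = -0.5 * action            # penalty for heating
    return nxt, comfort + energy

t = int(rng.integers(TEMPS.size))
for _ in range(20000):
    a = int(rng.integers(2)) if rng.random() < eps else int(Q[t].argmax())
    t_next, r = step(t, a)
    Q[t, a] += alpha * (r + gamma * Q[t_next].max() - Q[t, a])
    t = t_next

print("learned policy per temperature (0=idle, 1=heat):", Q.argmax(axis=1))
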
"AI can make our existing assets more effective and efficient, but also help us in developing new business models, both in terms of cleaner technology, and also for our customers," said Dan Jeavons, General Manager, Data Science, at Shell.
The global energy giant put over 65 AI applications into service last year, enabling the company to monitor 5,700 pieces of equipment and generate real-time data feeds from across its asset base.
A data-driven approach
Digital consultancy Capgemini uses satellite data to understand fire risks and devise rescue plans. Another project uses data from Copernicus satellites to detect plastic waste in our natural environment.
“Deep learning algorithms simulate the shape and movement of plastic waste in the ocean and then train the algorithm to efficiently detect plastic waste," said Sandrine Daniel, head of the company’s scientific office.
Electric vehicle start-up Arrival takes a data-driven approach to decisions over the entire product lifecycle. Produced in micro-factories with plug-and-play composite modules, its vehicle designs reduce the environmental impact of manufacturing and use.
"We design things to be upgradable," said Jon Steel, Arrival’s Head of Sustainability. Functional components facilitate repair, replacement, or reuse, while dedicated software monitors energy use and performance, helping to extend each vehicle’s useful life.
Digital twins for urban planning
Real-time virtual representations – known as digital twins – have been instrumental in envisioning smart, sustainable cities, said Kari Eik, Secretary General of the Organization for International Economic Relations (OiER).
Under the global United for Smart Sustainable Cities (U4SSC) initiative, a project with about 50 cities and communities in Norway uses digital twins to evaluate common challenges, model scenarios and identify best practices.
"Instead of reading a 1,000-page report, you are looking into one picture,” Eik explained. “It takes five seconds to see not just a challenge but also a lot of the different use cases."
For digital twins, a privacy-by-design approach with transparent, trusted AI will be key to instil trust among citizens, said Albert H. Seubers, Director of Global Strategy IT in Cities, Atos. He hopes the next generation of networks in cities is designed to protect personal data, reduce network consumption, and make high-performance computing more sustainable. "But this also means we have to build a data management function or responsibility at the city level that really understands what it means to deploy data analytics and manage the data."
Seubers called for open standards to enable interoperability, a key ingredient in nurturing partnerships focused on sustainable city building. "Implementing minimal interoperability mechanisms means that from design, we have private data security and explainable AI. In the end, it's all about transparency and putting trust in what we do," he said.
[Source: ITU]

Using AI to better understand natural hazards and disasters

As the realities of climate change take hold across the planet, the risks of natural hazards and disasters are becoming ever more familiar. Meteorologists, aiming to protect increasingly populous countries and communities, are tapping into artificial intelligence (AI) to get them the edge in early detection and disaster relief.
AI shows great potential to support data collection and monitoring, the reconstruction and forecasting of extreme events, and effective and accessible communication before and during a disaster.
This potential was in focus at a recent workshop feeding into the first meeting of the new Focus Group on AI for Natural Disaster Management. The group is open to all interested parties, supported by the International Telecommunication Union (ITU) together with the World Meteorological Organization (WMO) and UN Environment.
“AI can help us tackle disasters in development work as well as standardization work. With this new Focus Group, we will explore AI’s ability to analyze large datasets, refine datasets and accelerate disaster-management interventions,” said Chaesub Lee, Director of the ITU Telecommunication Standardization Bureau, in opening remarks to the workshop.
New solutions for data gaps
"High-quality data are the foundation for understanding natural hazards and underlying mechanisms providing ground truth, calibration data and building reliable AI-based algorithms," said Monique Kuglitsch, Innovation Manager at Fraunhofer Heinrich-Hertz-Institut and Chair of the new Focus Group.
In Switzerland, the WSL Institute for Snow and Avalanche Research uses seismic sensors in combination with a supervised machine-learning algorithm to detect the tremors that precede avalanches.
“You record lots of signals with seismic monitoring systems,” said WSL researcher Alec Van Hermijnen. “But avalanche signals have distinct characteristics that allow the algorithm to find them automatically. If you do this in continuous data, you end up with very accurate avalanche data."
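In the same spirit, though with entirely synthetic waveforms, invented features and an arbitrary model choice rather than the WSL pipeline, a supervised detector over spectral features of seismic windows might look like this:

# Illustrative avalanche detector: classify seismic windows by band energies.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

def synth_window(is_avalanche: bool, n: int = 512) -> np.ndarray:
    """Fake seismic window: 'avalanches' get an emergent low-frequency onset."""
    trace = rng.normal(0, 1, n)
    if is_avalanche:
        t = np.linspace(0, 1, n)
        trace += 3 * t * np.sin(2 * np.pi * 5 * t)  # growing 5 Hz energy
    return trace

def features(window: np.ndarray) -> np.ndarray:
    """Simple spectral features: mean energy in eight frequency bands."""
    spectrum = np.abs(np.fft.rfft(window))
    return np.array([band.mean() for band in np.array_split(spectrum, 8)])

X = np.array([features(synth_window(i % 2 == 1)) for i in range(400)])
y = np.array([i % 2 for i in range(400)])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
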
Real-time data from weather stations throughout the Swiss Alps also feed a new snowpack stratigraphy simulation model used to monitor danger levels and predict avalanches.
Modelling for better predictions
Comparatively rare events, like avalanches, offer limited training data for AI solutions. How models trained on historical data cope with climate change remains to be seen.
At the Pacific Northwest Seismic Network, Global Navigation Satellite System (GNSS) data is monitored in support of tsunami warnings. With traditional seismic systems proving inadequate for very large magnitude earthquakes, University of Washington research scientist Brendan Crowell wrote an algorithm, G-FAST (Geodetic First Approximation of Size and Timing), which estimates earthquake magnitudes within seconds of an earthquake's time of origin.
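G-FAST's internals are not reproduced here, but GNSS-based magnitude estimators of this kind commonly invert a peak ground displacement (PGD) scaling law of the form log10(PGD) = A + B·M + C·M·log10(R). The sketch below illustrates only that general idea; the coefficients and station readings are placeholders, not the published values.

# Hedged illustration of PGD-based magnitude estimation (placeholder values).
import numpy as np

A, B, C = -4.434, 1.047, -0.138  # illustrative scaling coefficients

def magnitude_from_pgd(pgd_cm: float, hypo_dist_km: float) -> float:
    """Invert log10(PGD) = A + B*M + C*M*log10(R) for the magnitude M."""
    return (np.log10(pgd_cm) - A) / (B + C * np.log10(hypo_dist_km))

# Hypothetical stations: (PGD in cm, hypocentral distance in km).
stations = [(35.0, 80.0), (12.0, 150.0), (5.5, 250.0)]
estimates = [magnitude_from_pgd(p, r) for p, r in stations]
print(f"median magnitude estimate: {np.median(estimates):.2f}")
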
In north-eastern Germany, deep learning of waveforms produces probabilistic forecasts and helps to warn residents in affected areas. The Transformer Earthquake Alerting Model supports well-informed decision-making, said PhD Researcher Jannes Münchmeyer at the GeoForschungsZentrum Potsdam.
Better data practices for a resilient future
How humans react in a disaster is also important to understand. Satellite images of Earth at night, known as "night lights", help to track the interactions between people and river resources. The dataset for Italy helps to manage water-related natural disasters, said Serena Ceola, Senior Assistant Professor at the University of Bologna.
Open data initiatives and public-private partnerships are also using AI in the hope of building a resilient future.
The ClimateNet repository promises a deep database for researchers, while the CLINT (Climate Intelligence) consortium in Europe aims to use machine learning to detect and respond to extreme events.
Some practitioners, however, are not validating their models with independent data, reinforcing perceptions of AI as a "black box", said Carlos Gaitan, Co-founder and CTO of Benchmark Labs and a member of the American Meteorological Society Committee on AI Applications to Environmental Science. "For example, sometimes you have only annual data for the points of observation, and that makes deep neural networks unfeasible."
A lack of quality-controlled data is another obstacle in environmental sciences that continue to rely on human input. Datasets come in different formats, and high-performing computers are not available to all, Gaitan added.
AI to power community-centred communications
Communications around disasters require a keen awareness of communities and the connections within them.
"Too often when we are trying to understand the vulnerability and equity implications of our work, we are using data from the census of five or ten years ago,” said Steven Stichter, Director of the Resilient America Program at the US National Academies of Science (NAS). “That's not sufficient as we seek to tailor solutions and messages to communities."
A people-centered mechanism is at the core of the Sendai Framework for Disaster Risk Reduction, a framework providing countries with concrete actions that they can take to protect development gains from the risk of disaster.
If AI can identify community influencers, it can help to target appropriate messages to reduce vulnerability, Stichter said.
With wider internet access and improved data speeds, information can reach people faster, added Rakiya Babamaaji, Head of Natural Resources Management at Nigeria’s National Space Research and Development Agency and Vice Chair of the Africa Science and Technology Advisory Group on Disaster Risk Reduction (Af-STAG DRR).
AI can combine Earth observation data, street-level imagery, data drawn from connected devices, and volunteered geographical details. However, technology alone cannot solve problems, Babamaaji added. People need to work together, using technology creatively to tackle problems.
With clear guidance on best practices, AI will get better and better in terms of accessibility, interoperability, and reusability, said Jürg Luterbacher, Chief Scientist & Director of Science and Innovation at WMO. But any AI-based framework must also consider human and ecological vulnerabilities. "We have also to identify data biases, or train algorithms to interpret data within an ethical framework that considers minority and vulnerable populations," he added.

Latest issue of World Security Report has arrived

The Spring 2021 issue of World Security Report, with the latest industry views and news, is now available to download.
In the Spring 2021 issue of World Security Report:
- Phenomena or Just a ‘Bad Karma’
- Towards 2021 – Upcoming Organisation Risk & Resiliency Trends
- Maritime Domain Awareness - An Essential Component of a Comprehensive Border Security Strategy
- Security and Criminology- Risk Investigation and AI
- Resilience and Social Unrest
- State Sponsored Terror
- IACIPP Association News
- Industry news
Download your copy today at www.cip-association.org/WSR

NSCAI Report presents strategy for winning the artificial intelligence era

The 16 chapters in the National Security Commission on Artificial Intelligence (NSCAI) Main Report provide topline conclusions and recommendations. The accompanying Blueprints for Action outline more detailed steps that the U.S. Government should take to implement the recommendations.
The NSCAI acknowledges how much remains to be discovered about AI and its future applications. Nevertheless, enough is known about AI today to begin with two convictions.
First, the rapidly improving ability of computer systems to solve problems and to perform tasks that would otherwise require human intelligence—and in some instances exceed human performance—is world altering. AI technologies are the most powerful tools in generations for expanding knowledge, increasing prosperity, and enriching the human experience. AI is also the quintessential “dual-use” technology. The ability of a machine to perceive, evaluate, and act more quickly and accurately than a human represents a competitive advantage in any field—civilian or military. AI technologies will be a source of enormous power for the companies and countries that harness them.
Second, AI is expanding the window of vulnerability the United States has already entered. For the first time since World War II, America’s technological predominance—the backbone of its economic and military power—is under threat. China possesses the might, talent, and ambition to surpass the United States as the world’s leader in AI in the next decade if current trends do not change. Simultaneously, AI is deepening the threat posed by cyber attacks and disinformation campaigns that Russia, China, and others are using to infiltrate our society, steal our data, and interfere in our democracy. The limited uses of AI-enabled attacks to date represent the tip of the iceberg. Meanwhile, global crises exemplified by the COVID-19 pandemic and climate change highlight the need to expand our conception of national security and find innovative AI-enabled solutions.
Given these convictions, the Commission concludes that the United States must act now to field AI systems and invest substantially more resources in AI innovation to protect its security, promote its prosperity, and safeguard the future of democracy.
Full report is available at https://reports.nscai.gov/final-report

ITU to advance AI capabilities to contend with natural disasters

The International Telecommunication Union (ITU) – the United Nations specialized agency for information and communication technologies – has launched a new Focus Group to contend with the increasing prevalence and severity of natural disasters with the help of artificial intelligence (AI).
In close collaboration with the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP), the ITU Focus Group on 'AI for natural disaster management' will support global efforts to improve our understanding and modelling of natural hazards and disasters. It will distill emerging best practices to develop a roadmap for international action in AI for natural disaster management.
"With new data and new insight come new powers of prediction able to save countless numbers of lives," said ITU Secretary-General Houlin Zhao. "This new Focus Group is the latest ITU initiative to ensure that AI fulfils its extraordinary potential to accelerate the innovation required to address the greatest challenges facing humanity."
Disasters triggered by natural hazards impacted 1.5 billion people from 2005 to 2015, with 700,000 lives lost, 1.4 million injured, and 23 million left homeless, according to the Sendai Framework for Disaster Risk Reduction 2015-2030 developed by the UN Office for Disaster Risk Reduction (UNDRR).
AI can advance data collection and handling, improve hazard modelling by extracting complex patterns from a growing volume of geospatial data, and support effective emergency communications. The new Focus Group will analyze relevant use cases of AI to deliver technical reports and accompanying educational materials addressing these three key dimensions of natural disaster management. Its study of emergency communications will consider both technical as well as sociological and demographical aspects of these communications to ensure that they speak to all people at risk.
"This Focus Group looks to AI to help address one of the most pressing issues of our time," noted the Chair of the Focus Group, Monique Kuglitsch, Innovation Manager at ITU member Fraunhofer Heinrich Hertz Institute. “We will build on the collective expertise of the communities convened by ITU, WMO and UNEP to develop guidance of value to all stakeholders in natural disaster management. We are calling for the participation of all stakeholders to ensure that we achieve this."
Muralee Thummarukudy, Operations Manager for Crisis Management at UNEP explained: "AI applications can provide efficient science-driven management strategies to support four phases of disaster management: mitigation, preparedness, response and recovery. By promoting the use and sharing of environmental data and predictive analytics, UNEP is committed to accelerating digital transformation together with ITU and WMO to improve disaster resilience, response and recovery efforts."
The Focus Group's work will pay particular attention to the needs of vulnerable and resource-constrained regions. It will make special effort to support the participation of the countries shown to be most acutely impacted by natural disasters, notably small island developing states (SIDS) and low-income countries.
The proposal to launch the new Focus Group was inspired by discussions at an AI for Good webinar on International Disaster Risk Reduction Day, 13 October 2020, organized by ITU and UNDRR.
"WMO looks forward to a fruitful collaboration with ITU and UNEP and the many prestigious universities and partners committed to this exciting initiative. AI is growing in importance to WMO activities and will help all countries to achieve major advances in disaster management that will leave no one behind," said Jürg Luterbacher, Chief Scientist & Director of Science and Innovation at WMO. "The WMO Disaster Risk Reduction Programme assists countries in protecting lives, livelihoods and property from natural hazards, and it is strengthening meteorological support to humanitarian operations for disaster preparedness through the development of a WMO Coordination Mechanism and Global Multi-Hazard Alert System. Complementary to the Focus Group, we aim to advance knowledge transfer, communication and education – all with a focus on regions where resources are limited."

How artificial intelligence can help transform Europe’s health sector

A high-standard health system, rich health data and a strong research and innovation ecosystem are Europe’s key assets that can help transform its health sector and make the EU a global leader in health-related artificial intelligence applications.
The use of artificial intelligence (AI) applications in healthcare is increasing rapidly.
Before the COVID-19 pandemic, challenges linked to our ageing populations and shortages of healthcare professionals were already driving up the adoption of AI technologies in healthcare.
The pandemic has only accelerated this trend. Real-time contact tracing apps are just one example of the many AI applications used to monitor the spread of the virus and to reinforce the public health response to it.
AI and robotics are also key for the development and manufacturing of new vaccines against COVID-19.
A fresh JRC analysis shows that European biotech companies relying on AI have been strong partners in the global race to deliver a COVID-19 vaccine.
Based on this experience, the analysis highlights the EU’s strengths in the “AI in health” domain and identifies the challenges it still has to overcome to become a global leader.
High-standard health system safeguards reliability of AI health applications

Europe’s high-standard health system provides a strong foundation for the roll-out of AI technologies.
Its high quality standards will ensure that AI-enabled health innovations maximise benefits and minimise risks.
The JRC study suggests that, similarly to the General Data Protection Regulation (GDPR), which is now considered a global reference, the EU is in a position to set the benchmark for global standards of AI in health in terms of safety, trustworthiness, transparency and liability.
The European Commission is currently preparing a comprehensive package of measures to address issues posed by the introduction of AI, including a European legal framework for AI to address fundamental rights and safety risks specific to the AI systems, as well as rules on liability related to new technologies.
Strong European research ecosystem supported by EU funding
At the moment, the EU is already well positioned in the application of AI in the healthcare domain – slightly behind China but on par with the US.
But judging from the EU’s research capacities, there is more potential.
The JRC analysis notes the strong investment of European biotech companies in research: in the EU, almost two-thirds of all medical AI players are involved in research, against approximately one-third in China.
Consequently, Europe has a strong and diversified research and innovation ecosystem in the area of AI in health.
European companies are particularly strong in health diagnostics, health technology assessment, medical devices and pharmaceuticals.
The EU’s research framework programmes play an important role in the European research and innovation landscape in this domain.
A JRC report published in 2020 indicates that 146 projects linked to AI in health have been launched under the Horizon 2020 framework programme.
The funding of AI in health related projects has been increasing over time, reaching over €100 million in 2020.