
An EU Approach to AI: from Ethics Guidelines to Policy and Investment Recommendations

Author: Deirdre Ní Cheallacháin


On 26 June 2019, the EU’s High-Level Expert Group on Artificial Intelligence (AI HLEG), consisting of fifty-two experts drawn from industry, academia and civil society, published its Policy and Investment Recommendations for Trustworthy Artificial Intelligence. The Recommendations set out the group’s broad vision for a regulatory and financial framework conducive to the development and deployment of trustworthy AI.

The Recommendations reflect and expand on the three tenets of the European Commission’s AI strategy, Artificial Intelligence for Europe (25 April 2018), and the seven key requirements for trustworthy AI listed in the Ethics Guidelines for Trustworthy Artificial Intelligence (8 April 2019).

Trust as a Prerequisite – Seven Key Requirements

The Ethics Guidelines are based on the EU’s founding principles and values and list seven key requirements which AI systems should meet in order to be deemed trustworthy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; environmental and societal well-being; and accountability.

Trustworthiness is at the core of the approach proposed in the Ethics Guidelines and is characterised as essential for the societal uptake, acceptability and sustainability of AI. The Guidelines also contain a non-exhaustive list of questions under each requirement to help ensure their application. A pilot programme to trial the Ethics Guidelines has been launched and will run until December 2019, after which the AI HLEG will review the Guidelines.

The Recommendations on Policy and Investment in Trustworthy AI

The seven requirements above are woven through the subsequent ‘Recommendations on Policy and Investment in Trustworthy AI’, which, in turn, build on the three pillars of the original strategy paper (April 2018), namely:

  • to establish a legal and ethical framework;
  • to prepare for socio-economic changes; and
  • to ramp up technological and financial capacity.

The Recommendations underline the importance of stimulating the use of AI in industrial and commercial operations, and identify the uptake of AI by SMEs as crucial, as the sector accounts for 56% of Europe’s turnover. An in-depth analysis of the ecosystems of AI in different sectors of the economy is also proposed. Overall, thirty-three recommendations are outlined. They are analysed under the following seven headings:

Fostering Industry/Research Partnerships

The EU’s strong B2B (business-to-business) and B2C (business-to-consumer) markets are highlighted as fertile areas for researching AI applications. The AI HLEG recommends incentivising research partnerships between industry and research institutions by easing the establishment of such partnerships and by facilitating access to funding and technical support. It proposes the development of a research roadmap on AI and sketches out ways in which to further foster a common European data space.

Modernising Public Services

The Recommendations also underline the potential of AI to modernise the provision of public services and general public administration systems, and point to the need to harness this potential. Member States are also encouraged to digitise public data and to improve the data literacy levels of government bodies, as set out in the Tallinn Declaration of 2017, the Ministerial Declaration on eGovernment.

Increasing Digital Literacy Levels

The Recommendations consolidate the European Commission’s call to increase digital literacy levels and a general understanding of AI across the EU, in order to equip current and future workforces with the required skillset, by establishing an AI Competence Framework and organising an annual AI Awareness Day. The development of further up-skilling strategies under the European Social Fund (ESF) and the European Globalisation Adjustment Fund (EGF) is also recommended.

Securing Sustainable Investment

With regard to investment, the Recommendations propose: (i) establishing a European Transition Fund; (ii) drawing on the InvestEU programme; and (iii) reinforcing the European Globalisation Adjustment Fund to support reskilling initiatives. On 7 December 2018, the European Commission, in collaboration with Member States, published a Coordinated Plan on AI to step up investment in AI to a total (public and private sectors combined) of at least €20 billion per year over the next ten years. Furthermore, the AI HLEG urges Member States to swiftly adopt the Commission’s proposed initiatives in the ongoing negotiations on the EU’s long-term budget, the Multiannual Financial Framework (MFF).

Towards a Human-Centric AI

The overarching principle of a human-centric approach is reinforced throughout the Recommendations, which state that the proposed reskilling initiatives are designed to foster inclusiveness and reduce existing socio-economic inequalities. Furthermore, they urge that safeguards against discriminatory bias due to limited data sets should be in place to minimise the risk to individuals, especially those belonging to vulnerable groups; and that data used in AI systems should not compromise the right of children to a “clean data slate” when they transition to adulthood.

A Principles-based Approach versus Prescriptive Regulation

The AI HLEG proposes a “principle-based approach” and warns against “prescriptive regulation” on the basis that overregulation could stymie innovation. However, a thorough mapping and evaluation of all existing AI-relevant EU laws is recommended, while the attribution of legal personality to AI systems or robots is discouraged.

A note of caution is sounded on the disproportionate use of AI-enabled surveillance and the mass-scale scoring of individuals, with the express call for the latter to be banned. This could be read as an implicit disapproval of the Chinese government’s current trial of such a scoring system for its citizens.

Striking the Balance between Regulation and Risk

Lethal autonomous weapons systems (LAWS) and cyber-attack tools are also flagged as applications of AI with “adverse impacts” on humans and society as a whole. In this regard, the Recommendations suggest the development of “risk classes”, so that regulatory responses can be developed that are proportionate to risk. The Recommendations also call for stakeholders to “define red lines” to ensure that fundamental rights and EU founding values are safeguarded, although they caution against excessive regulation.

Conclusion: Prospects for EU leadership in AI

Although the Recommendations are neither binding nor granular, they provide a solid foundation for the European Commission to develop an EU policy on AI, to identify funding reservoirs, and to evaluate and, where necessary, update legislation. The European Commission has stated that over 300 organisations have committed to trialling the Recommendations thus far.

While the EU currently trails the US and China in terms of investment in AI and technology, the concept of ‘Trustworthy AI’, anchored in both the Ethics Guidelines (April 2019) and the Recommendations for Policy and Investment (June 2019), has the potential to set the EU apart as a global standard-setter in AI ethics.