Artificial Intelligence and civil liability. Who pays the damages?
The progressive spread of technologies, and of AI tools in general, in the business world is an undoubted step forward, and in the perspective of economic recovery, particularly after Covid-19, their use seems unavoidable. However, their use raises several problems and risks. One of the main challenges concerns liability for the damage these tools may cause. The European institutions have recently addressed the issue and tried to provide some guidelines, but legislative gaps remain to be filled.
Especially after Covid-19, and with a view to relaunching the economy, the use of artificial intelligence in both the public and private sectors seems unavoidable. The industries likely to benefit most from AI are manufacturing (industrial robots), transport (self-driving vehicles), financial markets, health and medical care (medical robots and diagnostic-assistive technology), and general care services (for instance, the automated cleaning of public places). However, the use of AI entails several problems and risks. Indeed, as we shall see, what makes this technology unique (and risky) is its "opacity" or, as it is often called, its "black box" nature.
The European institutions have been addressing this and related issues for quite a few years now, but attention to AI has recently increased significantly. Among the many documents, it is worth recalling the European Commission's White Paper on Artificial Intelligence of 19 February 2020; the Draft Report with recommendations to the Commission on a civil liability regime for artificial intelligence, issued by the European Parliament on 27 April 2020, which includes a motion for a European Parliament resolution and detailed recommendations for drawing up a European Parliament and Council Regulation on liability for the operation of artificial intelligence systems; and, finally, the Study on Artificial Intelligence and Civil Liability, commissioned by Policy Department C at the request of the Committee on Legal Affairs and published on 14 July 2020.
In all these documents, the European institutions and expert groups stress that one of the key issues raised by the use of artificial intelligence (whether by public administrations or by private entities, such as businesses) is liability for the damage that may arise from AI tools, and in particular from their defects. As mentioned, however, one of the main problems with AI is its "opacity": because AI systems may depend on external data, be vulnerable to cybersecurity breaches, and operate with an increasingly high degree of autonomy, it may be difficult or even impossible to identify the liable person, and the injured party may consequently face difficulties in obtaining compensation.
The general framework currently available for governing such liability is the Product Liability Directive (85/374/EEC), as implemented in the national laws of the Member States. As is well known, this Directive makes the producer liable for damage caused by a defect in the product. The consumer, and more generally the injured person, must prove the causal link between the defect and the damage; in the case of damage caused by an artificial intelligence tool, this is, of course, not easy to establish.
However, the Commission and the expert groups point out that a complete overhaul of the general European legal framework on civil liability is not necessary; rather, the legislation in force should be adapted and a few new provisions introduced.
In this light, the European Parliament's Draft Report includes a Proposal for a Regulation of the European Parliament and of the Council on liability for the operation of artificial intelligence systems. If approved, this proposed Regulation would introduce a new form of liability for the deployer of an AI system, defined as the person who decides on the use of the AI system, exercises control over the associated risks, and benefits from its operation.
In particular, the proposed Regulation provides for strict liability for high-risk AI systems, namely those that display intelligent behaviour (see Articles 3 and 4). In line with other legislation on civil liability in critical, high-risk sectors, the proposed Regulation requires compulsory insurance cover. Relatedly, it also sets maximum amounts of compensation for damages.
By contrast, under Article 8 of the proposed Regulation, the deployer of an AI system that is not classified as high-risk under the Regulation is subject to fault-based liability for any harm or damage caused by a physical or virtual activity, device, or process driven by the AI system. There would thus be a dual track of liability, depending on the riskiness of the activity.
Building on the traditional principles of civil liability, the proposed Regulation introduces further provisions on damages, limitation periods, multiple tortfeasors, and so on. Like the other documents addressing the issue of liability, the Regulation seeks to strike a balance between protecting the rights of users and the public at large, and preserving incentives for the creation of new and innovative technologies.
However, technology moves much faster than legislation, and it cannot be ruled out that even the newest legislation will fail to cover all the challenges that AI poses. Where it does not, the only solution is to apply the general rules and principles in force in each legal system.