Journal of Legal, Ethical and Regulatory Issues (Print ISSN: 1544-0036; Online ISSN: 1544-0044)

Research Article: 2022 Vol: 25 Issue: 2

The Extent of the Civil Liability of Artificial Intelligence Technologies for the Infection and the Spread of Covid-19

Raed Mohammed Flieh Alnimer, Royal University for Women

Eman Naboush, Qatar University

Citation Information: Alnimer, R.M.F., & Naboush, E. (2022). The extent of the civil liability of artificial intelligence technologies for the infection and the spread of covid-19. Journal of Legal, Ethical and Regulatory Issues, 25(2), 1-16.

Abstract

With the ongoing global coronavirus (COVID-19) pandemic and the implementation of social-distancing measures by the higher authorities in countries around the world, the use of artificial intelligence (AI) technologies has increased, especially given rapid technological developments. AI technologies are not harm-free, and various questions have therefore been raised by legal practitioners. The first question concerns the classification of the legal status of AI, bearing in mind that national and international legislation does not yet consider AI a subject of law. The second question is who should be legally liable when harm is caused by an AI technology, for instance an injury caused by a robot to an individual. Hence, this paper will shed light on the concept of AI by providing a definition of AI and exploring its legal status, in addition to analysing the basis and rules of liability and the defences available to avoid liability for injury caused by AI technologies.

Keywords

Artificial Intelligence (AI), Legal Status, Liability, Technology.

Introduction

During the COVID-19 pandemic, and in order to maintain social distancing to protect humans from being infected by the virus, the use of artificial intelligence (AI) technologies has increased. The enormous benefits of using such technologies come with a risk of causing harm to users. In 2016, the first case of an injury caused to an audience member by a robot (an AI technology) occurred at the 18th China International Hi-Tech Fair, and a series of questions on the legal status and legal liabilities of intelligent robots arose (Li et al., 2019). Mainly: what is the legal status of AI; what kind of liability would a robot bear if it caused an injury to a human; and who is to blame for such injuries, the producer, the operator or the user? The emergence of AI raises new challenges in terms of product safety and liability. The EU Expert Group on Liability and New Technologies warned that new challenges may arise in relation to the safety of, and liability for, AI. In particular, the level of protection of victims of AI should be similar to the protection of victims of traditional technologies, while there remains a need to encourage technological innovation and to create investment stability. ‘According to the Report from the New Technologies formation of the Expert Group on Liability and New Technologies, the operation of some autonomous AI devices and services could have a specific risk profile in terms of liability, because they may cause significant harm to important legal interests like life, health and property, and expose the public at large to risks. This could mainly concern AI devices that move in public spaces’ (European Commission, 2020).

In this research, we will explain the concept of AI by defining it and exploring its legal status, and then study the basis and rules for establishing liability and the defences available to avoid liability for injury caused by AI technologies.

The Concept of Artificial Intelligence

To understand the notion of AI, it is important to identify what the phrase artificial intelligence means, what its scope is, and which products and services it includes. It is also important to discuss the legal status of AI and whether it should enjoy legal personality and be treated as an entity, or whether it is just a machine or a product. These points will be analysed in the following sections.

Definition of Artificial Intelligence

Defining what AI means is not an easy task. There is no single agreed definition of this kind of technology or product; instead, there are many definitions from different perspectives. From a computer science perspective, Amazon Web Services defines AI as the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving and pattern recognition (Amazon Web Services). Indeed, AI, at present at least, is closely connected with computer science; however, not all computer science is considered AI. Machine-learning services for problems related to human intelligence are the essence of AI. Thus, AI is also defined as a computer technology that allows something to be done in a way that is similar to the way a human would do it. The Cambridge Dictionary adds further features to the definition of AI, extending it to the use of such computer programs and the study of how to produce such machines. The main feature of AI is therefore its similarity to the human mind in recognising pictures, solving problems, learning from experience, and so on.

On a regulatory level, defining what exactly artificial intelligence means is more complicated (Scherer, 2016). In May 2016, the European Union recognised the need to adopt a generally accepted and flexible definition of robots and AI that does not hinder innovation (Parliament, 2017). In December 2018, the Independent High-Level Expert Group on Artificial Intelligence set up by the European Commission updated the definition of AI to refer to systems designed by humans that act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions (Smuha, 2019). In Singapore, the Model Artificial Intelligence Governance Framework defined AI as a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning and, depending on the AI model, produce an output or decision (Singapore, 2020). AI is used to describe computer systems which display certain capabilities associated with human intelligence, such as perception, learning, understanding, reasoning and problem solving (Singapore, 2020). One of the unique characteristics of AI technology is that the level of control a human being has over the tasks it performs is much lower than over other machines and technologies: ‘Where AI applications are able to act autonomously, they perform a task without every step being pre-defined and with less or eventually entirely without immediate human control or supervision. Algorithms based on machine-learning can be difficult, if not impossible, to understand (the so-called “black-box effect”)’ (European Commission, 2020).

AI involves both products and services, bearing in mind that, due to drastic developments in technology and industry, the dividing line between products and services may no longer be as clear-cut as it once was. For example, computers and smartphones would not function in the way they do without software (European Commission, 2020). Ultimately, the purpose of AI is to provide new goods and services which lead to economic growth and a better quality of life. Promoting the protection of the interests of human beings and their well-being and safety is the primary consideration in developing AI (Singapore, 2020).

Although there is no agreed, unified definition of AI technologies, the main feature that must exist in a technology classified as AI is that the level of human control over its functions is very low. The question, therefore, is to what extent this independence from the human factor affects its legal nature and makes it a separate legal person in the eyes of the law.

The Legal Nature of Artificial Intelligence

In order to enjoy rights and duties, a person should be recognised in the eyes of the law. The concept of an artificial, juristic or legal person was adopted to enable entities such as corporations to be treated as persons by granting them the right to sue and be sued, own property, and enter into contracts (Legal Information Institute). The crucial question here is what makes a person a person? The least controversial and most intuitive answer might be that personhood means having a sense of a mental life, selfhood or an “I,” making meaningful choices, and some notion of interiority, an internal mental theatre (Sanders & Wood, 2019). Being independent of its creators might be the vital feature in recognising a legal person. In Trustees of Dartmouth Coll. v. Woodward, 17 U.S. 518, 667 (1819), the U.S. Supreme Court held that a corporation is an artificial person, existing in contemplation of law and endowed with certain powers and franchises which, though they must be exercised through the medium of its natural members, are yet considered as subsisting in the corporation itself as distinctly as if it were a real personage. Applying this decision to AI, and taking into consideration that the main feature of AI is the independence of its functions from the control of human beings, might support the conclusion that AI technologies should be recognised as legal persons. However, the independence here is not in making decisions, as the AI technology cannot think, nor does it have a representative to express its will; the independence is merely in performing the tasks assigned to it through programming by a human being. On the other hand, the lack of human control over the functions of AI technologies is not enough to make them eligible for legal personhood. The formation of a legal entity requires several elements, such as the free will of the entity, the acquisition of substantive rights, the existence of an economic interest, and acting as a holder of the powers or duties for which a legal person is liable in a legal relationship (Adriano, 2015). Indeed, enjoying legal personality requires more than independence of function from human control. In our opinion, the ability to express free will and accountability for one's own actions are the main features that create a legal person. To what extent can these features be applied to AI technologies? Regarding free will, it is difficult to apply it to AI technologies, which are programmed and follow the algorithms that created them. Further, if an injury were caused by an AI technology, who would be responsible for indemnifying the injured party? In other words, does an AI technology own assets or have financial credit to pay compensation for injuries it causes? In order to be eligible to own money, it would first have to be recognised as a person by the law.

Another open question is whether the concept of a legal person refers only to corporations and legal entities, or whether its recognition has been extended further. Based on recent advances in this field, the answer seems to be that legal personhood extends beyond entities and corporations. For example, article 14/1 of the Te Awa Tupua (Whanganui River Claims Settlement) Act 2017 in New Zealand granted legal personhood to Te Awa Tupua, giving it all the rights, powers, duties and liabilities of a legal person (New Zealand Legislation, 2017).

In support of the view that AI technology fulfils the requirements of being a legal person, the European Parliament Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics asked the Commission to consider creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently (Library of Congress, 2020). Earlier, in May 2016, the Committee on Legal Affairs of the EU Parliament had submitted to the European Commission a “Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics”, which appealed for the conferment of legal status as “electronic persons” on automatic machines, as well as of “specific rights and obligations” on robots which acquire the qualification of legal subject by law. The motion also suggested that separate accounts be opened for intelligent robots so that legal liabilities could be prescribed for them and so that they could pay taxes, submit fees and draw pensions. In January 2017, the Committee on Legal Affairs adopted a resolution requiring the European Commission to legislate for robots and AI, and one month later the EU Parliament passed this motion. The statement published by the National Highway Traffic Safety Administration (NHTSA) of the USA and the EU's resolution encourage AI technology development and will surely shake the traditional system of legal subject qualification (Li et al., 2019). The Resolution also suggested a system of registration for specific categories of robots (Resolution, 2017). In the U.S., however, the existing legal regulations suggest that the intelligent robot is not yet a subject of legal relations (Li et al., 2019). The matter has already manifested itself in relation to pilotless cars: in February 2016, the NHTSA recognised the AI system of a Google pilotless car as a virtual legal subject and granted it a driver certificate (Li et al., 2019). This decision could be applied to other intelligent machines which are subject to human commands but at the same time independent of man, qualifying them to become legal persons.

One might ask whether we need to apply the classical rules of legal personhood to AI technologies, or whether it is better to introduce a new principle of electronic personality suited to the nature and functions of AI technologies. A way out of this dilemma might be to introduce a mandatory insurance system for AI technologies, like the one applied to cars and vehicles. Such a system would overcome the absence of some of the elements required to grant legal personhood and, at the same time, would enable injured persons to claim damages for injuries caused by AI technologies. The suggested system provides a direct link between the AI technology and the insurance contract, rather than linking the operator to the insurance contract. Another suggestion is to apply the vicarious liability regime to AI technologies by treating these technologies as virtual employees and, therefore, holding the employer, here the operator, accountable for the damage they cause. Doing so would guarantee compensation to the injured party in cases of injuries caused by AI technologies. Vicarious liability is usually imposed on the employer for the faults committed by employees (or servants), since the employee is under the control of the employer in matters related to his or her job; AI technologies are indeed under the control of the operator through programming. Adopting this regime would, in our opinion, create certainty for all parties: the operator of AI technologies, who would have the chance to arrange insurance in advance, and the injured party, who would know whom to sue.

The Basis of Civil Liability for the Damage Caused by Artificial Intelligence Technologies

There are three regimes of civil liability, whether based on tort or on contract, depending on the requirement of fault: fault-based liability, presumed-fault liability and strict liability. Deciding which liability regime applies helps persons (natural or legal) to know their liability risks, to reduce or prevent them, and even to insure themselves effectively against them. While the main goal of civil liability rules is to ensure compensation to the injured party, they also encourage the liable party to avoid causing damage. Liability rules always have to strike a balance between protecting citizens from harm and enabling businesses to innovate (European Commission, 2020). In this section, we briefly explain the three regimes of civil liability and then investigate which regime of liability is, or should be, adopted for injuries and damage caused by AI technologies.

Theories of Liability

In a fault-based liability regime, victims of damage need to prove the fault of the liable person, the injury or damage, and the causal link between the fault and the damage in order to establish a successful liability claim. The liable person is allowed to avoid the consequences of liability by disproving fault, in addition to other defences such as contributory negligence and force majeure. The second regime is presumed-fault liability, where fault is still a required element; however, the burden of proof is shifted from the victim to the defendant, who must disprove any fault in his acts. Strict liability, or risk-based liability, on the other hand, is liability independent of fault: it ensures that whenever the relevant risk materialises, the victim is compensated regardless of fault on the part of the liable person.

A strict liability regime is applied in the EU Product Liability Directive to the producer for damage caused by a defect in his products. The injured party is entitled to compensation if he or she proves the physical or material damage, a defect in the product (i.e., that it did not provide the safety that the public is entitled to expect) and a causal link between the defective product and the damage. In a strict liability regime, the risk is attributed to a specific person without the need for the victim to prove fault/defect, or causality between fault/defect and the damage (European Commission, 2020). Liability in this regime could be reduced if the injured party fails to perform safety-relevant updates, which may be considered contributory negligence on his part.

Liability Regime for Damages Caused by AI Technologies

The European Commission aims to examine whether and how to adapt civil liability rules to the needs of the digital economy by evaluating the Product Liability Directive and exploring risk-based liability regimes (Library of Congress, 2020). All products put on the market should be safe throughout their lifecycle and for any use of the product that can reasonably be expected. Therefore, the manufacturer has to make sure that a product using AI respects certain safety parameters. The features of AI do not preclude an entitlement to safety expectations for products, whether they are automatic lawnmowers or surgery robots. It remains a question under what conditions self-learning features prolong the liability of the producer and to what extent the producer should have foreseen certain changes (European Commission, 2020).

In order to protect the interests of humans against injuries caused by AI technologies, a clear and predictable legal liability framework should exist. The establishment of an adequate legal framework for AI-related issues, including accountability, was encouraged by the European Union on April 10, 2018, in the Declaration of Cooperation on Artificial Intelligence, which aims to develop a European approach to AI based on EU fundamental rights and values (Library of Congress, 2020). The impact on the development and uptake of AI of choosing who should be strictly liable for AI operations would need to be carefully assessed, and a risk-based approach considered (European Commission, 2020).

Several issues should be taken into account when deciding the proper framework for the liability of AI technologies. Firstly, to what extent would the concept of fault apply effectively to damage caused by AI? The burden of proof is another important factor that needs to be considered. The requirement of insurance, in turn, would have a huge impact on AI liability-related issues. Finally, if the AI itself is not recognised as a legal person, who is liable for the damage caused by AI technologies? These issues are analysed in the following sections.

Fault Requirement and the Burden of Proof

The burden of proof, and how to demonstrate the fault of AI technologies, are key issues. Undoubtedly, AI technologies are complex and, therefore, it is costly and difficult for victims of damage caused by AI to prove all the conditions necessary for a successful claim. This complexity might discourage victims from claiming compensation in such cases. The key point is whether, and to what extent, alleviating or reversing the burden of proof borne by the injured party for damage caused by the operation of AI applications would mitigate the consequences of that complexity. If a risk-based liability regime were adopted for AI liability, similar to the regime applied in product liability, it would shift the burden of proof to the liable person and, at the same time, limit the defences available to exonerate him from liability.

The Importance of Insurance Scheme for AI Technologies

It is important that victims of AI enjoy a level of protection similar to that available for other products and services. Adopting a strict liability regime for damage caused by AI technologies, and imposing a mandatory insurance scheme to ensure compensation to the injured party, would help reduce the costs of damage (European Commission, 2020). Besides, imposing clear liability rules would help insurance companies to calculate their risks and, therefore, to provide smooth compensation for victims of AI technologies. However, the characteristics of emerging digital technologies like AI can make it hard to trace the damage back to a human behaviour that could give grounds for a fault-based claim under national rules; such claims may be difficult or overly costly to prove, and consequently victims may not be adequately compensated. To conclude, creating a mandatory insurance system applicable to whatever is classified as an AI technology would be an efficient mechanism to protect persons injured by AI technologies. Besides, this system would create more legal certainty and would encourage the development of AI technologies.

Rules for the Establishment of Civil Liability for Damage Caused by AI Technologies

Artificial intelligence (AI) and other new digital innovations, such as the Internet of Things (IoT), have the ability to transform our communities and economies into smart communities, including in the fight against infection and the spread of COVID-19 during this pandemic. Their rollout must, therefore, come with appropriate protections to mitigate the chance of these technologies causing harm, for example personal injury, the spreading of COVID-19, or other damage. In the EU, product safety regulations ensure this is the case (Aidukas). However, such regulations cannot completely exclude the possibility of damage resulting from the operation of these technologies. It should also be kept in mind that which liability regime applies will vary with the cause of the damage: an error (by the maker, the operator or the consumer, and the form of that error) or an AI malfunction (due to the maker, or due to the AI's newly learned skills), the level of the AI, the type of AI (especially open or closed AI), and so on. That is why various possibilities should be studied, and why there are so many potential hypotheses (Van Rossum).

In practical terms, the increased independence of artificial intelligence, and the resulting difficulty of assigning responsibility for harm occurring during the fight against COVID-19, makes us ask whether an insurance system could be established to cover the damage caused by artificial intelligence, taking into account all potential responsibilities across the chain of actors. This is reinforced in these difficult times: with an increasing consensus on the importance of using AI to counter COVID-19, we need predictive technology to help us determine how the virus spreads, what its mutation trends are, and who is most at risk, without causing damage to others.

Today's AI, as path breaking as they are, all have a common feature which is critical in the liability assessment. In any case, the computer works and making decisions in ways that can be directly traced back to the human design, programming, and information embedded in the computer Either explicitly or because of the system override and capture control capability. They are, as sophisticated as these machines are, semi-autonomous at best. They are instruments used by humans, albeit extraordinarily sophisticated devices (Beiker, 2012).

Early warning and alerts, prediction and identification of disease outbreaks, real-time disease tracking worldwide, analysis and visualisation of spreading patterns, prediction of infection rates and patterns, quick decision-making to identify successful drugs, pathogen research and analysis, and drug discovery are the various roles played by AI during pandemics. All of these are done with AI at greater speed. Also, due to the lack of historical training data, some AI models are hit-and-miss. Although AI has not fully evolved to overcome a pandemic, its role during COVID-19 is noticeably greater than in previous pandemics, and it is rightly used as a tool to complement human intelligence.

As already mentioned in the preceding sections, artificial intelligence (AI) systems are human-designed software capable of collecting and elaborating data to make decisions on the best steps to be taken to achieve the intended target. AI systems require large quantities of data to be processed and observations to be extracted; these sources are then combined with complex learning algorithms which recognise images, track risks, identify trends, and so on. All these roles can contribute decisively to the treatment of the virus. Yet while AI can be of great benefit, it can be dangerous too: the risks may concern health, privacy and/or the infringement of fundamental rights and freedoms (Wischmeyer & Rademacher, 2020).

It is worth noting that, particularly in relation to the development of AI technologies to fight the pandemic, the legislator is required to pay great attention to principles and security systems. The risks associated with AI relate both to rights and to technical functionalities. EU member states intending to use AI against COVID-19 will also need to ensure that any AI technology is ethical and is designed and operated in a safe way.

With the aim of ensuring that fundamental rights are complied with, the legislator should consider whether an AI system will maintain respect for human dignity, equality, non-discrimination and solidarity. Some of these rights may be restricted for extraordinary and overriding reasons, such as fighting a pandemic, but this should take place under specific legal provisions and only so far as is necessary to achieve the main purpose. Indeed, the use of tracking apps and systems that profile citizens in order to determine which ones may suffer from COVID-19 entails the risk that an individual's freedom and democratic rights could be seriously restricted. Since December 2019, artificial intelligence technologies have entered the front line against the emerging coronavirus outbreak "COVID-19": AI to identify, track and forecast outbreaks; AI to help diagnose the virus; AI to process healthcare claims; drones to deliver medical supplies; robots to sterilise, to deliver food and supplies and to perform other tasks; AI to develop drugs; AI to identify non-compliance or infected individuals; and chatbots to share information.

The most significant results from this study on how liability laws can be structured – and, where possible, updated – to address the challenges that new digital technologies carry with them are described below (Van Rossum).

A person who operates a permissible technology that nevertheless carries an increased risk of harm to others, such as AI-driven robots in public spaces performing spraying or sterilisation operations against the spread of COVID-19, or digital diagnosis, should be subject to strict liability for damage arising from its operation (Elvy, 2016).

In cases where the service provider supplying the requisite technical infrastructure has a higher degree of control than the owner or user of an actual AI-equipped product or service, this should be taken into consideration when deciding who primarily operates the technology.

An individual using a technology that does not pose an increased risk of harm to others should still be required to properly choose, operate, monitor and maintain the technology in use and, failing that, should be liable for breach of those duties if at fault.

Manufacturers of products or digital content utilising new digital technologies should be liable for harm caused by defects in their goods, even if the defect was caused by changes made to the product, under the producer's control, after it was placed on the market.

When facing COVID-19 and seeking to prevent infection or spread, the adequacy and comprehensiveness of liability regimes in the face of technological change is critical for society. If the system is inadequate or flawed in dealing with damage caused by emerging digital technologies, victims may end up totally or partially uncompensated, even if an overall equitable analysis would favour indemnifying them.

The overall impact of a potential inadequacy in existing legal regimes to acknowledge the risk factors created by emerging digital technologies in dealing with the outbreak of COVID-19 could compromise the benefits expected of them. Some factors, such as the ever-increasing prevalence of new digital technologies in all facets of social life, can also intensify the damage done by these technologies (Scherer, 2016).

The principle of strict producer liability for personal injury and damage to consumer property caused by defective products has been an important part of the consumer protection system for more than three decades. At the same time, the harmonisation of strict liability laws has helped to create a level playing field for manufacturers who sell their goods in various countries (Dignum, 2019).

The scope of the product liability regime rests on the concept of the product. For the purposes of the Directive, products are defined as movables, even when incorporated into another movable or an immovable, and include electricity. Until now, the distinction between products and services has not encountered insurmountable challenges, but emerging digital technologies, in particular AI systems, challenge simple distinctions and pose unanswered questions. Products and services interact inextricably in AI systems, and a clear distinction between them is impracticable. It is also questionable whether software is covered by the legal concept of a product or a product component (Micheler & Whaley, 2020). Whether the answer should differ for embedded and non-embedded software, including over-the-air software updates or other data feeds, is particularly discussed. Where such updates or other data feeds are provided from abroad, the victim may have no one in his own country to pursue, because in the case of direct downloads there would usually be no intermediary importer domiciled within the country (Smith, 2016).

The notion of defect is the second key element of the product liability system. Defectiveness is measured against the safety expectations of the typical consumer, taking into account all relevant circumstances. The interconnectivity of materials and devices can make it very difficult to isolate a defect (Pasquale, 2017).

Autonomous AI systems with self-learning capabilities raise the issue of whether unexpected anomalies in the decision-making process can be viewed as defects. Even where they constitute a defect, the state-of-the-art defence may apply. The complexity and opacity of emerging digital technologies further complicate the victim's chances of discovering and proving the defect and proving causation (Dignum, 2019).

Directive 85/374 concerning liability for defective products establishes that "the producer shall be liable for damage caused by a defect in his product" (Directive, 1985), and that the claimant seeking compensation "shall be required to prove the damage, the defect and the causal relationship between defect and damage" (Directive, 1985), hence not fault or negligence on the side of the defendant (Bertolini, 2018).

Despite often being described as a regime of strict liability, the PLD in reality sets up a scheme of semi-strict liability, given the defences laid down in Article 7 PLD, in particular the development risk defence, whereby the producer may avoid liability if it proves that the state of scientific and technical knowledge at the time when he put the product into circulation was not such as to enable the existence of the defect to be discovered (Directive, 1985).

Moreover, a product is defective when it "does not offer the safety that a person is entitled to expect, considering all circumstances such as the presentation of the product, its reasonably expected use and the time at which it was put into circulation" (Directive, 1985).

A product could be deemed defective in three separate sets of circumstances: a single specimen could deviate from the intended design, and thus from the other mass-produced specimens, constituting a "manufacturing defect"; warnings about possible hazards resulting from the use of the product might not be properly conveyed or indicated, as a result of which an "information defect" could be found; lastly, the very design of the product could be faulty, because it does not provide the requisite protection and is unacceptably dangerous by nature, constituting a "design defect" (Crowe, 2002).

The US case of Kociemba v. Searle held a pharmaceutical company liable for failing to warn consumers that the use of a specific medication was associated with pelvic inflammatory disease, even though the Food and Drug Administration had approved the substance as "safe and effective". It therefore seems that the threshold at which a warning is reasonably required depends on knowledge rather than regulatory approval (Kingston, 2016).

Kingston also addresses liability problems for AI systems that are associated with the recognition and selection of human experts. He cites two cases in which hospitals were found liable for failing to select doctors with adequate competence to deliver the medical services they were required to provide; by analogy, AI developers may be held liable unless they select experts with sufficient competence in the chosen domain or warn users that the expert's competence does not extend to other domains in which the system is likely to be used (Kingston, 2016). Kingston's suggested approach is thus to engage qualified and trained experts, pointing out that the requirements set by licensing bodies are often used to assess whether a professional's performance is up to the expected level, and that having the AI program itself certified may also be beneficial (Kingston, 2016).

Software can make a tangible product defective and lead to physical damage. This could eventually result in the liability of the producer of the product under the Product Liability Directive (European Commission, 2020). For instance, the European legislature recognised several actors who may bear responsibility for the operation of a robot:

Manufacturer

Here, the robot manufacturer is answerable for the machine's faults resulting from defective manufacturing that leads to the robot's breakdown or to actions outside the framework of its normal use. For example, a defect in a medical care robot (Sullivan & Schweikart, 2019) may cause the patient to move in a wrong way and worsen his health; infection may be transmitted to others while the robot is used in sterilisation operations in public places during the COVID-19 pandemic; or harm may be caused to a patient with COVID-19 due to poor communication between the medical robot and the analysis laboratory, or due to the manufacturer's neglect of the robot's maintenance (European Parliament, 2017). The French judiciary is strict about waiting for the results of medical analyses before conducting any treatment, and any complacency in this matter will give rise to liability in compensation for negligence, according to a 2018 decision of the French Court of Cassation. An example of a user's lawsuit against the negligence of a robot operator in the US courts is the case of Cristono Almonte v. Averna Vision & Robotics, Inc.

Current liability laws may well lead to inefficient delays in manufacturers' introduction of AI technologies. The gradual shift in responsibility for automobile operation from the user to the AI may lead to a similar shift in liability for crashes from users to manufacturers (Anderson et al., 2016).

Operator: We can define the operator here as the professional person who exploits the AI while fighting the spread of COVID-19, such as a fintech company or a drone operator.

Owner: The person who deploys the robot to serve him and his customers, such as hospitals that now use robots to deal with Corona patients, or to sterilise places that have been exposed to the disease, where the robot poses a threat to the lives of patients or transmits the infection to others.

User: The person who uses the AI without being the owner or the operator, and who is responsible for the behaviour of the AI where it causes harm to people, such as the transmission of COVID-19 infection.

Under the proposed AIDA framework, courts' responsibility would be to adjudicate individual tort claims arising from harm caused by AI, harnessing courts' institutional strength and experience in fact-finding. In accordance with AIDA's liability framework, courts would apply the rules governing negligence claims to cases involving certified AI and the rules of strict liability to cases involving uncertified AI; in the latter category of cases, the most important part of this task will be allocating responsibility between the designers, manufacturers, distributors and operators of the harm-causing AI (Leroux et al., 2012). For multiple-defendant cases and actions for indemnity or contribution, the allocation of responsibility should be determined in the same manner as in ordinary tort cases. It seems almost certain that, certification processes and licensing requirements notwithstanding, parties in many cases will dispute whether the version of the AI system at issue was one that had been certified by the agency, or will dispute at what point modifications took the AI outside the scope of the certified versions. In such cases, the court would hold a pre-trial hearing to determine whether the product conformed to a certified version of the system at the time it caused harm and, if it did not, the point at which the product deviated from the certified versions. That modification point would then serve as the dividing line between the defendants who enjoy limited liability and the defendants who are subject to strict liability (Bertolini, 2018).

The question comes into play if, and only if, fully autonomous AI causes injury in ways wholly untraceable and unattributable to the hand of man. In our opinion, this question crystallises the HAL issue. It is safe to expect that, if AI becomes the standard, there will be incidents, maybe few and far between, that cannot reasonably be attributed to a design, manufacturing or programming defect, and where it may be difficult even to presume a defect. What should the rule be at that stage, particularly when the AI is behaving in a manner that is at odds with its creators' instructions? Tort law is generally reluctant to require injured persons to bear expenses incurred through no fault of their own. So the question becomes: who pays? Apparently, the only feasible approach would be to infer a defect of some sort on the theory that the accident itself is proof of a defect, even if there is compelling evidence that cuts against a theory of defect (Shifton, 2002). There is precedent for courts making such an inference, which is essentially a restatement of res ipsa loquitur. If this is the right choice to make, then there is the secondary question of how, if at all, the law should assign liability among the designers, programmers, manufacturers and others involved in the development of the AI. Or should the responsibility be transferred strictly to the AI itself, as mentioned above?

The remedy proposed by current legislation would, of course, be to hold the manufacturer of the AI responsible and to leave the manufacturer to seek compensation or contribution from other equally liable actors, where possible. But that approach may be just an empty gesture (Barry, 2020). If the cause of the accident cannot be determined, the manufacturer will not have fair grounds for a claim or an action for contribution and will thus be saddled with the whole judgment (Shifton, 2002). This could be sensible if the manufacturer is in the best position to bear the loss. Alternatively, it could be fairer to allocate liability to all the parties involved in the development and maintenance of the AI system, on the grounds that the cost of error is best distributed among all the potentially responsible parties, or among those parties that could most effectively guard against the risk or insure against it. The other approach would be to hold the AI itself accountable, implying, of course, that the legislature is willing to grant the AI legal "personality" and to allow the AI to obtain sufficient insurance cover (Hoenig, 1981; Bonadio & Mcdonagh, 2020).

Defences Available to Avoid the Civil Liability for Damages Caused by AI Technologies

Causes within the victim's own sphere: If a cause of harm is attributable to the victim, the reasons for holding another person liable should apply correspondingly in determining whether, and to what extent, the victim's claim for compensation may be reduced.

Although jurisdictions across Europe already recognise that conduct or any other danger within the victim's own domain can reduce or even preclude her claim for compensation vis-à-vis another, it seems necessary to note that whatever the New Technologies Formation (NTF) of the Expert Group proposes to strengthen the rules on liability for emerging digital technologies should apply correspondingly where such technologies are used within the victim's own sphere (European Commission, 2020). This is in line with the "mirror image" approach to contributory conduct (Čerka et al., 2015). Therefore, if, for example, two autonomous vehicles (AVs) collide, the above criterion for determining the responsible operator will be used to assess the effect of the victim's own vehicle on the other AV operator's liability for its damage (Leroux et al., 2012).

Contributory Negligence and Assumption of Risk

This is the most strenuously litigated category of defence in strict tort liability litigation. While there is a significant amount of doubt in the opinions, it is becoming increasingly clear that contributory negligence, as understood in the law of negligence, is not applicable to cases of strict tort liability, since negligence is not the basis of the strict tort liability claim (Boohar, 1970).

The better-reasoned formulation of this defence appears in the Williams v. Brown Manufacturing Company decision of the Illinois Supreme Court, in which it was held that a claimant who uses a product for a purpose that is neither intended nor objectively reasonably foreseeable may be barred from recovery, and that the principle of contributory negligence in the sense of mere careless conduct does not extend to strict product liability. The court held that "assumption of risk" is, however, an affirmative defence which does bar recovery. The test to be applied in deciding whether a user has assumed the risk of using a product alleged to be dangerously defective is essentially subjective, in the sense that it is the user's own knowledge, understanding and appreciation of the danger which must be assessed, rather than that of the reasonably cautious individual. In other words, a person who is aware of an unreasonably dangerous defect in a product and who continues to use the product in spite of that knowledge will be barred from recovery (Yanke, 2020).

As a sidelight, it should be noted that the common law cases and the Uniform Sales Act hold that implied warranties are excluded where the defect would have been discovered by inspection and the customer failed to inspect, or to inspect properly. The Uniform Commercial Code similarly excludes the implication of a warranty as to defects which an inspection ought to have revealed, but acknowledges that the expected level of inspection is less rigorous for non-commercial individual buyers than for commercial buyers. This is, however, contrary to strict tort liability, in which it is the complainant's subjective knowledge that governs, and not what reasonable persons might have done in similar circumstances (Sullivan & Schweikart, 2019).

Several cases hold that the failure of a product user to inspect for, or guard against, the possibility of a product defect is not a defence. It has also been held that if the complainant was present when the product failed on previous occasions, there can be no defence unless it is shown that the complainant observed and appreciated the failure and that the failure was the same as the one that injured the plaintiff (Barry, 2020).

Force Majeure

Force majeure (although it may have varying interpretations) is a standard defence recognised in almost all liability systems. From an economic point of view, one can easily argue that there should be no liability in the case of force majeure. Force majeure is usually a defence not only in fault-based or strict liability, but in any regime of liability in tort. It relates to the requirement of blameworthiness, which is what exposes the wrongdoer to potential tortious liability: under most legal systems, tort law can only hold a wrongdoer responsible if the wrongful act is imputable to him (Pasquale, 2017).

This condition of blameworthiness relates to the tortfeasor's free will and discretionary capacity, and it also has a strong economic justification. If the injurer does not act out of free will, his incentives to take precautions may not be influenced by liability, and liability thus has no economic value. A finding of liability that does not affect the motivation of the tortfeasor would only generate administrative costs (caused by the transfer of the loss) without any compensating advantage in the form of additional incentives to take care (Vladeck, 2014).
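This economic argument can be made precise with the standard law-and-economics model of precaution; the notation below is our own illustrative sketch, not drawn from the cited sources. Let $x$ denote the injurer's level of care, $c(x)$ the cost of care, $p(x)$ the probability of an accident, and $L$ the loss if the accident occurs. An efficient liability rule induces the injurer to minimise total expected social cost:

$$\min_{x \geq 0} \; c(x) + p(x)\,L, \qquad \text{with the optimum } x^{*} \text{ satisfying } c'(x^{*}) = -p'(x^{*})\,L.$$

In a force majeure scenario the injurer cannot affect the danger, so $p'(x) = 0$ for every $x$: no level of care reduces the expected harm, the efficient level of care is $x^{*} = 0$, and imposing liability merely shifts the loss at an administrative cost without improving anyone's incentives.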

Here we refer to the criterion of blameworthiness simply as requiring that the injurer contributed to the loss in some way. The "blame" requirement typically blends into a conception of fault or negligence; under strict liability, mere causation suffices. But the injurer cannot be held (strictly) liable if he did not "cause" the accident. Force majeure should therefore remain a shield, even under strict liability, because if the injurer could not have altered the danger, a finding of liability makes no sense (Dignum, 2019).

Conclusion

We are fortunate to have AI in our lives that is advanced enough to be used as an alternative to human labour, especially during the COVID-19 pandemic, in which AI is utilised to identify, track and forecast outbreaks, diagnose the virus, process healthcare claims, deliver medical supplies by drones, sterilise areas by robots, deliver food and supplies, and perform other tasks. The risk associated with its application is that it may cause injuries to individuals, so issues of compensation and damages arise. It should be noted that existing legal regulations suggest that AI is not yet a subject of legal relations, which exempts it from personal liability; rather, liability may be borne by the maker or the operator in cases of AI malfunction, or by the consumer in cases of misuse. The authors suggest imposing a mandatory insurance scheme on every AI technology, which would create more certainty for future victims of these technologies. It is equally important to ensure that every AI technology is ethical and is operated in a safe manner to avoid injuries to others. It is therefore important to strike a balance between relying on AI technologies in modern life, particularly during the COVID-19 pandemic, and protecting society from the injuries that might result from their use. Society must answer for itself whether investment in the chances of a better life should be rewarded with an exemption from responsibility for some of the risks involved, and what those risks are, or whether instead an adequate legal framework for AI-related issues, including accountability, should be established.

References

Adriano, E.A.Q. (2015). Natural persons, juridical persons and legal personhood. Mexican Law Review, 8(1), 101-118.

Anderson, J., Kalra, N., & Stanley, K. (2016). Autonomous vehicle technology: A guide for policymakers. Rand Corporation.

Barry, S.D.B. (2020). DEF-definitions: computer, internet, and electronic commerce law.

Beiker, S.A. (2012). Legal aspects of autonomous driving. Santa Clara Law Review, 52(1), 1145-1156.

Bertolini, A. (2018). Artificial intelligence and civil law: Liability rules for drones. Study commissioned by the European Parliament’s Policy Department for citizens’ rights and constitutional affairs at the request of the JURI Committee.

Bonadio, E., & Mcdonagh, L. (2020). Artificial intelligence as producer and consumer of copyright works: Evaluating the consequences of algorithmic creativity. Intellectual Property Quarterly, 2(1), 112-137.

Boohar, C.W. (1970). Products liability: The blood transfusion as a sale. Cunningham v. MacNeal Memorial Hospital, _ Ill. App. 2d _, 251 N.E.2d 733 (1969). William & Mary Law Review, 11(4), 1004-1014.

Čerka, P., Grigiene, J., & Sirbikyte, G. (2015). Liability for damages caused by artificial intelligence. Computer Law & Security Review, 31(3), 376-389.

Crowe, R. (2002). Case law of the community courts. In ERA Forum.

Dignum, V. (2019). Professionally responsible artificial intelligence. Arizona State Law Journal, 51(1), 1057-1122.

Directive, C. (1985). Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products. Official Journal Law, 210(1), 29-33.

Elvy, S.A. (2016). Contracting in the age of the internet of things: Article 2 of the UCC and beyond. Hofstra Law Review, 44(2), 839-932.

European Commission. (2020). Liability for artificial intelligence: Report from the Expert Group on Liability and New Technologies – New Technologies Formation.

European Parliament. (2017). Civil law rules on robotics: European Parliament resolution (2015/2103(INL)).

Hoenig, M. (1981). Resolution of crashworthiness design claims. St. John's Law Review.

Kingston, J.K.C. (2016). Artificial intelligence and legal liability. In International Conference on Innovative Techniques and Applications of Artificial Intelligence.

New Zealand Legislation. (2017). Te Awa Tupua (Whanganui River Claims Settlement) Act 2017 No 7, Public Act. Retrieved from http://www.legislation.govt.nz/act/public/2017/0007/latest/whole.html#DLM6831460

Leroux, C., Labruto, R., & Boscarato, C. (2012). Suggestion for a green paper on legal issues in robotics.

Li, J., Liu, Y., & Yue, L. (2019). Artificial intelligence governed by laws and regulations. In Reconstructing Our Orders: Artificial Intelligence and Human Society. Springer Singapore.

Library of Congress. (2020). Regulation of artificial intelligence. Retrieved from https://www.loc.gov/law/help/artificial-intelligence/europe-asia.php

Micheler, E., & Whaley, A. (2020). Regulatory technology: Replacing law with computer code. European Business Organization Law Review, 21(1), 349-377.

Parliament, E. (2017). P8_TA(2017)0051: Civil law rules on robotics. European Parliament resolution of 16 February 2017 with recommendations to the Commission on civil law rules on robotics (2015/2103(INL)).

Pasquale, F. (2017). Toward a fourth law of robotics: Preserving attribution, responsibility, and explainability in an algorithmic society.

Resolution, E.P. (2017). Texts adopted - Civil law rules on robotics. Civ. Law Rules Robot. Retrieved from https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html

Sanders, N.R., & Wood, J.D. (2019). The humachine. Routledge.

Scherer, M.U. (2016). Regulating artificial intelligence systems: Risks, challenges, competencies and strategies. Harvard Journal of Law & Technology, 29(2), 1-9.

Shifton, M.D. (2002). The Restatement (Third) of Torts: Products liability. The ALI's cure for prescription drug design liability. Fordham Urban Law Journal, 29(1), 2343-2386.

Singapore. (2020). Artificial intelligence governance framework. World Economic Forum. Retrieved from https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf

Smith, B.W. (2016). Automated driving and product liability. Michigan State University Law Review, 20(1), 1-9.

Smuha, N. (2019). A definition of AI: Main capabilities and disciplines. European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG).

Sullivan, H.R., & Schweikart, S.J. (2019). Are current tort liability doctrines adequate for addressing injury caused by AI? AMA Journal of Ethics, 21(1), 160–166.

Vladeck, D. (2014). Machines without principals: Liability rules and artificial intelligence.

Wischmeyer, T., & Rademacher, T. (2020). Regulating artificial intelligence. Springer.

Yanke, G. (2020). Tying the knot with a robot: Legal and philosophical foundations for human–artificial intelligence matrimony.
