Journal of Legal, Ethical and Regulatory Issues (Print ISSN: 1544-0036; Online ISSN: 1544-0044)

Review Article: 2022 Vol: 25 Issue: 2S

Health Insurance and Algorithms: An Ethical Overview

Marcos Alonso, Universidad Adolfo Ibáñez

Citation Information: Alonso, M. (2022). Health Insurance and Algorithms: An Ethical Overview. Journal of Legal, Ethical and Regulatory Issues, 25(S2), 1-10.

Abstract

This article discusses the ethical implications of the use of algorithms and artificial intelligence in the field of health insurance. Big Data and Machine Learning have been revolutionizing various areas of human life in the last decade. The job market has been one of the first to incorporate these new technologies that promise to improve our practices and make them more efficient. Insurance providers have also joined this trend, albeit with precautions related to the prevention of bias and discrimination. The healthcare field could also benefit from the implementation of these technologies, although here the ethical issues are even more pressing. We will dwell on pressing issues of privacy, confidentiality, prevention of bias, transparency and accountability, among others. Finally, we advocate for a cautious and gradual implementation of these new technologies, placing great emphasis on the monitoring and reviewing of the results yielded by algorithms.

Keywords

Bioethics, AI Ethics, Algorithmic Ethics, Medical Ethics, Health Insurance

Introduction

There is little doubt now that artificial intelligence is at the heart of the new great revolution of the 21st century (Floridi, 2014). After the dazzling development of informatics and computing in the second half of the 20th century, the first decades of our century have witnessed an uninterrupted boom in digital technologies, with the creation of the smartphone, the improvement of Internet services and the progressive transfer of a large part of our lives to the cloud. In this context, humans have created an amount of recorded data unparalleled in history. The consequences of this are still being defined, but its importance cannot be overstated.

Data has thus become very attractive to different actors. Companies that sell goods and services undoubtedly have a great interest in knowing the habits and practices of their consumers. But they are not the only ones, because in many cases this data can also help the consumers or users themselves. However, in either case, the raw material that data has become is of little use without tools to sift and analyze it. This is where artificial intelligence tools, Big Data, Machine Learning -and, in short, all technologies developed on an algorithmic basis- come into play to speed up and substantially increase the efficiency of data management.

Algorithms are essentially nothing more than formal instructions logically structured so that certain "inputs" are converted, after processing, into certain "outputs". In this sense, one of the most interesting characteristics of algorithms is that, from a certain initial configuration, their information processing is done autonomously; that is, once these instructions or rules are established, the human does not intervene in the creation of this "output". It is also key to understand that algorithms, especially those that can improve themselves through recent Machine Learning techniques, reach a predictive level that brings up a whole new level of discussion (Jackson, 2018).
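A minimal sketch in Python can make this input-output picture concrete. All names and numbers here are hypothetical, chosen only for illustration: the first function is a fixed rule that, once written, runs with no further human intervention; the second adjusts its own parameter from data, in the self-improving spirit of Machine Learning.

```python
# Hypothetical sketch: an algorithm as a fixed mapping from inputs to outputs,
# and a "learning" variant that adjusts its own rule from data.

def risk_score(age: int, smoker: bool) -> float:
    """A fixed rule: once written, it turns inputs into an output
    with no further human intervention."""
    score = 0.01 * age
    if smoker:
        score += 0.2
    return score

def fit_weight(ages, outcomes, epochs=1000, lr=1e-4):
    """A 'learning' rule: the algorithm tunes its own parameter from
    examples instead of having it hand-coded."""
    w = 0.0
    for _ in range(epochs):
        for age, y in zip(ages, outcomes):
            w += lr * (y - w * age) * age  # nudge w toward lower error
    return w

print(risk_score(45, smoker=True))                 # fixed rule: 0.65
print(fit_weight([20, 40, 60], [0.2, 0.4, 0.6]))   # learned rule: ~0.01
```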

It is for this reason, because of their autonomous and much more powerful nature, that algorithms have gradually established themselves to assist, and in some cases even replace, humans in various tasks that are considered automatable. For example, we currently have countless algorithms in areas such as stock trading, human resource management, transportation, medicine or justice, to name a few. Each of these areas, and many others where algorithms are gaining an increasingly notable presence, raise different ethical issues of varying intensity. The legal or health fields, for example, are among the most delicate and problematic, and that is why the study of an area such as health insurance seems particularly relevant.

While it is always wise to take these predictions with a grain of salt, it is estimated that by 2030 insurance companies will have widely adopted AI-based pricing and premium setting (Rodríguez-Pardo del Castillo, 2018). Indeed, a field such as insurance, where information is absolutely decisive, was bound to embrace this algorithmic revolution. For insurance companies, all the information about their policyholders or potential policyholders seems invaluable, as it allows them to price much more accurately and successfully. Indeed, according to some enthusiasts, these new tools could even make it possible to move from the current emphasis on "detect and repair" to a "predict and prevent" paradigm.

Health insurance, in particular, is an even more interesting case in that it involves an intersection between the legal, commercial and health fields. Commercial health insurance is a relatively recent development, as it was not until the mid-twentieth century that it began to appear in the context of the services that certain companies offered to their employees (Jones & McCullough, 2016, 1108). After a first stage of enormous growth in this area, in the last third of the twentieth century concerns about the sustainability of these insurance schemes (and especially, in the public sphere, pensions) led to the emergence of a great concern with the delimitation of prices related to these services (Jones & McCullough, 2016).

Countless and intense discussions about the right to health insurance are currently taking place, to the point that, for some, the possession of health insurance -which from this perspective would be the basis of one's own security and stability as a citizen- "is a prerequisite of full engagement in the community" (Mclean & Gannon, 1998, 97). For these authors, health insurance is not just one possible service among others; its significance for today's Western societies is much greater.

Algorithmic Ethics' Main Issues

Given the importance of health insurance, it seems reasonable to approach the introduction of algorithmic tools with extreme care and caution. A substantial literature has emerged in recent years around the ethics of artificial intelligence, algorithmic ethics, data ethics, and related topics (Mittelstadt et al., 2016). In the following lines I will succinctly outline some of the ethical issues that have been most prominent in these debates, so that we can better understand the landscape of the discussion.

Privacy and Confidentiality

Of course, a point that must inevitably come up in these discussions is privacy. Privacy is the right to limit other people's access to one's own body or thoughts. Since the new data-based technologies hold so much data about us -about our behaviors, habits, preferences and characteristics- it is clear that privacy is one of the central issues these technologies raise. The solution often proposed is to anonymize this data. However, this is not always truly feasible, and the trace of the data back to its owner can never be completely eliminated. Some authors such as Véliz have gone so far as to claim that data is a toxic substance (2021, 44). This author explains that, much like asbestos (a building material now in disuse), data can do many things that greatly improve our lives. But, in a similar way to asbestos, the use of data ends up intoxicating all those involved in it (people, institutions, societies).

Closely related to privacy, although not exactly the same thing, is confidentiality. In this case, the problem does not lie in limiting access to the information, but in the trust that two or more different actors are assumed to have with respect to this information. Confidentiality is a problem that arises most obviously in the dealings between doctors and patients and in the relationship between attorney and client, among others. As can be guessed, confidentiality is also a factor to be taken into account when it comes to health insurance.

Transparency and Explainability

In contrast to the privacy and confidentiality points made just above, algorithmic ethics has also insisted on the need for algorithms to be transparent (Burrell, 2016). This problem is even more specific to algorithmic ethics, and it has to do with the scientific, mathematical and technologically complex nature of these systems. The point here is directly related to the problem of bias that we will see below. The idea is that only if we know how an algorithm works -how it arrives at its conclusions, at its "output"- can we remedy the discriminations and injustices that these algorithmic results may present.

The demand for transparency sometimes takes a more concrete form as a demand for explainability. Algorithms, according to this logic, should not only be transparent but explainable -a thesis that the European Union's Regulation on the Protection of Individuals with regard to the processing of personal data conceptualized as the "right to explanation" (Rodríguez-Pardo del Castillo, 2018, 3). The corollary of this is that no algorithm that we cannot explain should be used. However, artificial intelligence, in some concrete and specific contexts, has reached a level of performance that human minds cannot really follow. A notable example is chess, where new programs based on Machine Learning have reached a level of play far superior to that of humans. To fail to benefit from the advances provided by artificial intelligence -as chess players, for example, do benefit- just because we cannot give a perfect account of how these advances have been achieved seems, nonetheless, problematic.

Bias

The problem of bias is perhaps the most discussed problem in the field of algorithmic ethics. Algorithms can be understood as pattern finders. These patterns tend to correlate with social variables such as race or gender (Jackson, 2018; Ferguson, 2012; Popp, 2017). As several studies have shown, our biases become embedded in the algorithms themselves, where they are rendered invisible (O'Neil, 2016). And, since algorithm developers, in turn, are a very non-diverse population (primarily white males), it has been argued that their values and beliefs are also inevitably inscribed in the algorithms and the results they yield.

It has sometimes been argued that, to avoid these biases, we should remove from the data all references to characteristics that could lead to discrimination, such as gender, age or race. The problem is that even if this information is explicitly removed at the outset, many other characteristics serve as proxies (Greenwald, 2017; Williams et al., 2018). Even if we do not want to take a person's purchasing power into account in assessing the price of their life insurance, an algorithm might conclude that individuals whose home has a certain zip code are more prone to accidents or premature death. Thus, even if we "blind" the algorithm, it ends up unearthing certain characteristics, resulting in discrimination by socio-economic status -as shown by Cathy O'Neil in Weapons of Math Destruction (2016) and Virginia Eubanks in Automating Inequality (2017). The synthetic sketch below illustrates this proxy effect.
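The following is a hedged, synthetic illustration of proxy discrimination: the protected attribute is removed from the pricing rule, yet a zip-code proxy lets the rule reconstruct it. All distributions, zip codes and amounts are invented for illustration.

```python
# Synthetic sketch: "blinding" a pricing rule to a protected attribute
# does not help when a proxy (here, zip code) correlates with it.
import random

random.seed(0)

# Hypothetical world: group membership drives where people live, and the
# historical claims data is biased against one group.
population = []
for _ in range(10_000):
    group = random.random() < 0.5                        # protected attribute
    zip_code = "10001" if (group ^ (random.random() < 0.1)) else "20002"
    claim = random.gauss(1200 if group else 1000, 100)   # biased history
    population.append((group, zip_code, claim))

def avg(xs):
    return sum(xs) / len(xs)

# "Blinded" pricing: the insurer never sees `group`, only zip_code.
price_by_zip = {
    z: avg([c for _, zc, c in population if zc == z])
    for z in ("10001", "20002")
}

# Yet average premiums by *group* still differ: the proxy leaks the attribute.
for g in (True, False):
    paid = avg([price_by_zip[zc] for grp, zc, _ in population if grp == g])
    print(f"group={g}: average premium {paid:.0f}")
```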

Some authors have argued that this strategy of "blinding" algorithms, even if feasible, would be unacceptable from an ethical point of view (Kirkpatrick, 2016, 2). Zimmermann and Lee-Stronach argue that leaving issues such as gender or race out of the algorithm's consideration could leave the conditions of structural injustice present in society unchanged (Zimmermann & Lee-Stronach, 2021). According to these authors, we should rather make these characteristics visible so that the algorithm can incorporate some kind of compensation or correction into its calculation, so that the consequences of this structural and generalized discrimination are lessened; a minimal sketch of such an explicit correction follows.
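The sketch below is one hedged reading of the corrective approach these authors gesture at: rather than blinding the model, the protected attribute is used explicitly to offset a disparity that audits have revealed. The group labels, the 15% figure and the raw premium are all hypothetical; a real correction would require careful empirical and normative justification.

```python
# Hypothetical sketch: an explicit, auditable per-group correction,
# as opposed to a silently biased "blinded" model.

def corrected_premium(raw_premium, group, group_adjustment):
    """Apply an explicit, contestable correction factor per group."""
    return raw_premium * group_adjustment.get(group, 1.0)

# Suppose audits showed the raw model overprices group "A" by ~15% because of
# biased historical claims; the 1/1.15 offset is then visible and open to
# challenge, unlike the hidden leakage of the previous sketch.
adjustments = {"A": 1 / 1.15, "B": 1.0}
print(corrected_premium(1150.0, "A", adjustments))  # ~1000.0
```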

Lack of Control

However, being aware of all these problems may not be enough to deal with them. Another major problem related to the implementation of algorithms in different spheres of life, and to the use of data by these algorithms, is the little control we have over these data and algorithms. Despite the fact that most national regulations on these issues require consent forms before any company can collect and use our data, the truth is that most of the time citizens do not pause over these forms, do not have time to read and understand them, or are simply not able to really grasp what is at stake. And even when all these conditions are met, the long life of data means that we can hardly be aware of how our data will be used in the long term (Williams et al., 2018). As Jackson explains, there have already been cases of data that was initially collected for strictly commercial purposes being used over time for police tracking, border searches, etc. (Jackson, 2018).

Doxastic Negligence

All these issues converge in another problem that is common to the use of various technologies. Here I refer to "laziness", negligence or simple carelessness in the use of technology. It is widely documented how the use of technology in various fields (as happens, for example, with airline pilots) can make us dependent and less autonomous. In the case of algorithms, a particularly relevant problem is what Zimmermann and Lee-Stronach call "Doxastic Negligence" (2021). According to these authors, "A is doxastically negligent if A, purely on the basis of an algorithmic output concerning B, adopts a belief about what kind of treatment of B is warranted" (Zimmermann & Lee-Stronach, 2021). One of the serious problems in our relationship with algorithms and artificial intelligence is the aura of scientific certainty that accompanies the results of these systems. This blind trust in machines leads us to uncritically assume that the results of these mathematical models are objective and need no interpretation. As Jackson explains, "Algorithms reduce decision making to a number. This reliance on numbers suggests an objective, unbiased approach to decision making based on the assumption that numbers, unlike people, do not lie" (Jackson, 2018).

But, as we have already seen, algorithms present different flaws. Some of them have to do with the permeation of biases and preferences present in the developers or in the very society from which the data is extracted. But their shortcomings also come from their own limitations, from the fact that the algorithm is not, in a human sense, an intelligent entity, and therefore lacks the overall or holistic vision that prevents certain misunderstandings. In this sense, seeing the results that Big Data sometimes reaches, stakeholders have spoken of the advent of "the voodoo economy" (Rodríguez-Pardo del Castillo, 2018). For example, a case from the insurance world shows how certain insurers were charging more to policyholders who had a "Hotmail" email account instead of a "Gmail" one (Rodríguez-Pardo del Castillo, 2018).

Responsibility, Accountability, Liability

Finally, and although to a certain extent it is a point present in all the issues we have already discussed, algorithmic ethics also raises important problems around the concepts of responsibility, accountability and liability. The aforementioned autonomy of these artificial intelligence systems makes the responsibility for their decisions blurred or altogether invisible. If an algorithm decides that your health insurance should be twice as expensive as your partner's, who is responsible for this? Sometimes it is assumed that the responsibility should fall back on the programmer or programmers who designed the algorithm. But this seems excessive, since it is difficult, if not impossible, for a programmer to foresee all the applications and concrete results that a given algorithm could produce. It is not possible in the context of this article to delve into this important issue, but it is clear that algorithmic responsibility is one of the most important ethical issues in the implementation of algorithms and artificial intelligence; an aspect that, most likely, involves not only developers or companies, but society as a whole.

Ethical Issues of Health Insurance

The healthcare field has not been oblivious to the algorithmic revolution (Mittelstadt & Floridi, 2016; Coeckelbergh, 2013). As expected, this area has been somewhat resistant to the introduction of Big Data tools and algorithms. A recent survey in the UK shows that "63% of the adult population is uncomfortable with allowing personal data to be used to improve healthcare and is unfavorable to artificial intelligence (AI) systems replacing doctors and nurses in tasks they usually perform" (Fenech, Strukelj & Buston, 2018). However, these reluctances (understandable when dealing with such a delicate and personal field as healthcare) fail to overshadow the enormous advantages that the use of Big Data and artificial intelligence promises for the practice of medicine. Especially in the case of the USA, where the predominance of private healthcare allows for innovation that is less subject to public scrutiny and censorship, multiple collaborations between hospitals and artificial intelligence companies have already begun to emerge. Mayo Clinic, Ascension and, recently, HCA Healthcare have signed contracts with Google to develop healthcare algorithms using patient records and information. Some hospital systems have created companies, such as Truveta, dedicated to selling their patients' anonymized data. Other companies, such as Health Catalyst, have been created specifically to fill this medical records analytics niche (La Tercera, 2021).

This trend toward making use of new algorithmic technologies has also begun to be applied in the more specific area of health insurance. As we are going to see, health insurance also has a number of specific ethical implications that need to be dwelt upon. As we have already mentioned, health insurance brings together several areas -namely the legal, commercial and medical fields- each of which carries a significant ethical burden. It would simply be impossible to deal with all these ethical aspects here. What we shall do in the following lines is highlight those ethical elements that are particularly relevant when assessing the incorporation of artificial intelligence in this field.

Information Asymmetry

As explained above, privacy and confidentiality are two basic issues of algorithmic ethics. They are even clearer concerns regarding health insurance, as we have already commented. We won't dwell further on this, but will instead focus on another problem, even more specific to health insurance.

One of the most basic premises of any commercial deal or contract is that the contracting parties must agree on the nature and scope of the contract. In the case of health insurance, the object of the contract is something very delicate and problematic: a person's health. By taking out insurance, the insured protects themselves against possible illnesses and other eventualities by paying a premium, the value of which depends on an estimate of the probability of actually suffering such illnesses or misfortunes. The insurer makes a profit by correctly calculating the probability of illness or catastrophe and assigning a price accordingly. However, at least until now, the insured has had much more information about his or her health than the insurance company. Therefore, in health insurance contracts another important principle appears, the principle of "uberrima fides" ('utmost good faith'), whereby the insured undertakes to report all relevant information about his or her health (Leigh, 1998). This pricing logic can be summarized in a simple expected-value calculation, sketched below.
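A simplified sketch of the pricing logic just described: the premium is the expected payout plus a loading for costs and profit. The probabilities and amounts are hypothetical.

```python
# Hypothetical sketch: premium as expected payout plus a loading.

def premium(p_illness, expected_cost, loading=0.25):
    """Actuarially fair price (p * cost) plus a loading for expenses/profit."""
    return p_illness * expected_cost * (1 + loading)

# Better information narrows the estimate of p_illness, which is exactly why
# policyholder data is so valuable to the insurer.
print(premium(0.05, 20_000))  # vague risk estimate:  1250.0
print(premium(0.02, 20_000))  # refined estimate:      500.0
```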

As can be guessed, this adds another twist to the privacy and confidentiality issues that the use of Big Data and the implementation of algorithms already bring. In the case of health insurance, the customer's right to privacy comes into direct collision with the insurer's right to be fully informed of the data relevant to the insurance. Health insurance information is highly personal, sometimes embarrassing, information that some people might understandably want to hide. But that same information -ranging from past accidents, family conditions and eating habits to sexual practices- is key for insurance companies in order to set the risk premium for a given insured.

However, this question branches out into many other potential problems. Is it legitimate for insurers to obtain this information about potential customers from any source? Does the insured have the right to withhold certain information? If so, what information, for how long, and why? Could the information asymmetry be reversed, with the insurer coming to know more than the insureds themselves about their habits, physical condition, family history, etc.? In the late 1990s, with the advent of the genetic revolution, the "right not to know" (Mclean & Gannon, 1998, 91) on the part of the insured was widely discussed. The idea was that knowledge of one's genetic makeup could act to the detriment of the insured, forcing them to share information that was detrimental to them when taking out insurance. In the case at hand, would it be possible to invoke this right not to know when in many cases the data is already in the public domain? Even if anonymization techniques were effective and the specific subject behind some piece of data could not be traced, population data pools already provide very reliable information that insurers could use.

Medical Bias and Stigmatization

The problem of algorithmic bias which, as we saw earlier, is considered one of the most pressing problems in this field, has its correlate in the area of health insurance. However, here we find a paradox similar to the one we saw with respect to access to information. Although "discrimination" is a word that in recent decades has become loaded with clearly pejorative connotations, the basic meaning of discrimination is simply that of differentiation. And, regarding insurance, this differentiation is key to assessing risks and deciding on the premiums associated with those risks. As Sorell explains, "If all applications for insurance had to be approved no-questions-asked, or if high-risk applicants had to be treated the same as everyone else, payments to cover losses might quickly bankrupt firms" (1998).

If discrimination is an inherent feature of insurance, can we ask that algorithms applied to this area avoid discrimination altogether? The answer is most probably no. Can we, perhaps, distinguish between fair and unfair discrimination? Some authors such as Leigh explain that fair discrimination is discrimination that treats equals equally (Leigh, 1998). But this remains too vague and formal, leaving unresolved the central problem of how to identify which cases are really equal or sufficiently similar.

It is precisely here that much of the discussion of bias in algorithms is embedded. As we have seen, one of the fundamental criticisms of the use of algorithms in different areas is the fact that these algorithms reproduce biases existing in the developers and in the society the data comes from. Related to this is the fact that the patterns identified by the algorithms may treat as equal cases that really are not. For example, there has been the case of African-American executives whose car insurance went up substantially after they paid with their credit card in an establishment usually frequented by African-Americans; a fact that the algorithm correlated, surprisingly, with traffic accidents. This could end up in a scenario similar to China's Social Credit System, in which social "scores" are given to inhabitants on the basis of different variables. While some companies are already doing something like this with consumer scores, here we would be taking a non-trivial step towards the comprehensive pricing of individuals (and, indirectly, of social groups), which could mark and determine countless lives in an unfair way (Dixon & Gellman, 2014).

Transparency and Explainability in Medicine

This last point regarding bias is obviously connected to the problem of transparency and explainability mentioned above. This ethical aspect is even more problematic in health insurance and, more generally, in the healthcare field. This is because trust between doctor and patient (and also, to a certain extent, between insurer and insured) is a central and decisive element of this context and its associated practices. If we cannot interpret and account for the reasons for accepting or refusing a particular treatment, this important relationship is completely undermined (Vayena, Blasimme & Cohen, 2018).

As Vayena, Blasimme and Cohen explain, "as more diagnostic and therapeutic interventions become based on MLm [Machine Learning Medicine], the autonomy of patients in decisional processes about their health and the possibility of shared decision-making may be undermined" (2018, 3). The importance of personalized and human treatment in the medical field should not be underestimated, and health insurance is no exception. When the well-being and health of oneself and one's loved ones is at stake, we need more than mere algorithmic calculation to be truly satisfied. Even in cases where algorithmic decisions might work in our favor, the dissatisfaction that such a decision has been made mechanically can itself be relevant.

Private or Public?

One of the most notorious ethical controversies in the field of health insurance is whether this service can be private, or whether its importance justifies it being guaranteed by the state. Modern philosopher Immanuel Kant famously expounded the idea that the human being (as a rational being capable of autonomy) has dignity -that is, that he has immeasurable value, that he cannot be treated as a means to an end, but that he has intrinsic value, as an end in himself. Yet life insurance, and even more overtly algorithms applied to life insurance, assign a specific economic amount to a person's life. Is this constitutively immoral? Is this something that should make us abandon these practices as a matter of principle?

Such a position seems too maximalist and radical. But what has been extensively discussed is whether health insurance, precisely because it insures something as valuable and important as human life, should necessarily be entrusted to the state. The assumption here is that, if health insurance were left exclusively in private hands, this could pose a serious danger to economically depressed sectors of the population. This economically unfavorable situation could thus translate into direct damage to the welfare and health of a significant part of the population. Hence, for many, there is a moral need for state health insurance. Belonging to a political community depends to a large extent on having health that allows citizen participation; and for this to be effectively possible, some consider that health insurance should be public (Weale, 1998). However, other authors believe that the best way to make such insurance viable and prevent its collapse is to leave it in private hands, allowing the self-regulation of the market to create the optimal conditions for its development. Otherwise, according to these authors, we could run into different problems. One of the most significant is that, unlike private insurance, public insurance does not usually involve contracts that can be enforced through claims for breach of contract (Booth & Dickinson, 1998). If a society democratically decided to change the conditions of the insurance, or to eliminate it, we would have no recourse and could be left helpless.

The inclusion of algorithmic tools could pose an additional problem both for advocates of public health insurance and for proponents of its privatization. Given the amount of data that most states hold on their citizens, it does not seem that state health insurance could afford to discriminate without engaging in abuse of this data. Therefore, they would seemingly have to offer universal and full health insurance indiscriminately. But this does not seem entirely feasible. Regarding the private model, could insurance companies be competitive without having access to data relating to their policyholders or potential policyholders? Probably not. It seems, thus, that the algorithmic revolution will only increase the problems associated with this important question about the public or private nature of health insurance.

The Paradox of Uncertainty

A final point worth mentioning is another of the paradoxes that arise in the context of insurance, particularly health insurance. Paradoxically, the unprecedented increase in information about people, their habits, practices and characteristics could pose a problem for insurance insofar as it diminishes the element of uncertainty on the basis of which these companies generate their profit (Leigh, 1998).

While this is something that, at least in the short term, is unlikely to happen, the reality is that if both policyholders and insurers had near-perfect knowledge of the risks of a given insurance policy, it would be very difficult to find a profit margin. While it has been argued that, at least in the private sphere, competition between companies could boost their prices (Sorell, 1998), it is not clear that this would fully solve the problem. People with a very high probability of falling ill or suffering serious adversity would have to pay premiums so high as to be unaffordable; while people with a very low probability of falling ill could have very low premiums, and might even end up preferring to contract ad hoc medical services, thus avoiding paying for insurance altogether. The toy calculation below illustrates this polarization.
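The following toy calculation, with invented numbers, illustrates the paradox: under uncertainty, everyone pays a pooled premium, but as risk knowledge sharpens, premiums polarize in exactly the way just described.

```python
# Hypothetical sketch of the "paradox of uncertainty": sharper risk
# knowledge polarizes premiums. All figures are invented for illustration.

COST = 50_000    # hypothetical cost of a serious illness
LOADING = 0.2    # insurer's margin over expected payout

def fair_premium(p):
    return p * COST * (1 + LOADING)

# Pooled pricing under uncertainty: everyone pays the population average.
pool = [0.01, 0.02, 0.05, 0.40]             # individual risks, unknown to all
print(fair_premium(sum(pool) / len(pool)))  # single pooled premium: 7200.0

# Near-perfect knowledge: each person is priced on their own risk.
for p in pool:
    print(fair_premium(p))  # 600, 1200, 3000 ... and an unaffordable 24000
```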

Final Remarks

As we have seen throughout the article, the implementation of Big Data, Machine Learning and, more generally, algorithm-based technologies in different aspects of human life is an inevitable trend. However, we must be very aware of the important ethical issues that the introduction of these systems brings with it. The medical field, and particularly the area of health insurance, presents some specific challenges that should also be borne in mind. It is vital that issues such as privacy, confidentiality and the explainability of algorithms are always taken into account when considering the implementation of these kinds of technological solutions.

However, the main problem with the use of algorithms and artificial intelligence, even more so in the field of health insurance, is the prevention of bias and the discrimination associated with it. As we saw, there is a basic level of discrimination that is very difficult to overcome. Likewise with biases. Algorithms look for patterns, and in that process can throw up improper connections or erroneous generalizations. Insurance, for its part, is based on discriminating between aspects and characteristics of people, often sensitive from an ethical perspective, that are crucial for establishing risk premiums and remaining profitable. In this sense, it is very difficult to completely eliminate the aforementioned biases and discriminations.

However, this should not lead us to simplistic solutions, such as blindly accepting the use of algorithms or, conversely, rejecting them outright and losing all the advantages that their use entails. A partial and imperfect solution, but one that will undoubtedly help to alleviate the aforementioned shortcomings, is to give great importance to the duty of reviewing and monitoring these algorithms, even deploying ad hoc guidance of the algorithms' work when circumstances warrant it. As different authors advocate (Kirkpatrick, 2016; Vayena, Blasimme & Cohen, 2018; Zimmermann & Lee-Stronach, 2021), algorithmic tools, in their processes and particularly in their results, must be continuously reviewed and interrogated. In this sense, the proposal of Zimmermann and Lee-Stronach seems very well suited. These authors propose that, especially when the use of algorithms affects highly relevant issues such as health insurance, we should approach the results offered by algorithms with extreme caution, asking at least three questions before making any decision: 1. Have we considered all possible options?; 2. How much is at stake for the subject involved?; and 3. How much is at stake for the agent making the decision? (Zimmermann & Lee-Stronach, 2021).

As is evident from our entire discussion, technologies based on artificial intelligence and algorithms are already a reality, and it would be impossible, and probably undesirable, to reject them or to attempt to live with our backs turned to them. What we can try to do is understand the nature of algorithms, their dangers and their possibilities, so that we enhance their beneficial effects and eliminate, as far as possible, their pernicious ones. In the case of health insurance, this also implies taking into account its particular importance and problematic nature; which, as we have argued, should probably mean a more gradual and cautious introduction of artificial intelligence and algorithmic tools.

Funding

"This article is funded by FONDECYT Iniciación nº 11200050, held by Marcos Alonso. The article is also part of the project “Oportunidades de Mercado para las Empresas de Tecnología – Compras Públicas de Algoritmos Responsables, Éticos y Transparentes" Code: ATN/ME-18240-CH."

References

Booth, P.M., & Dickinson, G.M. (1998). "Public or private? Insurance and pensions". In Tom Sorell (Ed.), Health Care, Ethics and Insurance. Professional Ethics. London: Routledge, 181-215.

Burrell, J. (2016). "How the machine 'thinks': Understanding opacity in machine learning algorithms". Big Data & Society, 3(1), 1-12.

Coeckelbergh, M. (2013). "E-care as craftsmanship: Virtuous work, skilled engagement, and information technology in health care". Medicine, Health Care and Philosophy, 16(4), 807-816.

Dixon, P., & Gellman, R. (2014). The Scoring of America: Secret Consumer Scores Threaten Your Privacy and Your Future. World Privacy Forum.

Eubanks, V. (2017). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, NY: St. Martin's Press.

European Commission. (2012). Regulation of the European Parliament and of the Council on the Protection of Individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation).

Fenech, M., Strukelj, N., & Buston, O. (2018). Ethical, Social and Political Challenges of Artificial Intelligence in Health.

Ferguson, A.G. (2012). "Predictive policing and reasonable suspicion". The Emory Law Journal, 62, 259-325.

Floridi, L. (2014). The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford: OUP.

Greenwald, A. (2017). "An AI stereotype catcher". Science, 356(6334), 133-134.

Hasselberger, W. (2019). "Ethics beyond computation: Why we can't (and shouldn't) replace human moral judgment with algorithms". Social Research, 86(4), 977.

Hildebrandt, M. (2011). "Who needs stories if you can get the data? ISPs in the era of big number crunching". Philosophy & Technology, 24(4), 371-390.

Jackson, J.R. (2018). "Algorithmic bias". Journal of Leadership, Accountability and Ethics, 15(4), 55-65.

Jones, J.W., & McCullough, L.B. (2016). "The ethics of insurance limiting institutional medical care: It's all about the money". Journal of Vascular Surgery, 63(4), 1108-1109.

Kirkpatrick, K. (2016). "Battling algorithmic bias". Communications of the ACM, 59(10), 16.

La Tercera. (2021). "Google closes a deal with a hospital chain to develop algorithms for medical care", 6-27-2021 (Accessed 9-12-2021).

Leigh, S. (1998). "The freedom to underwrite". In Tom Sorell (Ed.), Health Care, Ethics and Insurance. Professional Ethics. London: Routledge, 11-53.

Mclean, S.A.M., & Gannon, P. (1998). "Genetics and insurance". In Tom Sorell (Ed.), Health Care, Ethics and Insurance. Professional Ethics. London: Routledge, 87-100.

Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). "The ethics of algorithms: Mapping the debate". Big Data & Society, 3(2), 1-21.

Mittelstadt, B.D., & Floridi, L. (2016). "The ethics of big data: Current and foreseeable issues in biomedical contexts". Science and Engineering Ethics, 22(2), 303-341.

O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. U.S.: Penguin Books.

Popp, T. (2017). "Black box justice". The Pennsylvania Gazette, 38-47.

Rodríguez-Pardo del Castillo, J.M. (2018). Ethical Bias in Artificial Intelligence Algorithms Applied to Insurance.

Sorell, T. (1998). "Freedom within limits: Underwriting and ethics". In Tom Sorell (Ed.), Health Care, Ethics and Insurance. Professional Ethics. London: Routledge, 54-72.

Vayena, E., Blasimme, A., & Cohen, I.G. (2018). "Machine learning in medicine: Addressing ethical challenges". PLoS Medicine, 15(11), e1002689.

Véliz, C. (2021). Privacy is Power: Why and How You Should Take Back Control of Your Data. London.

Weale, A. (1998). "Ethical issues in social insurance for health". In Tom Sorell (Ed.), Health Care, Ethics and Insurance. Professional Ethics. London: Routledge, 137-150.

Williams, B.A., Brooks, C.R., & Shmargad, Y. (2018). "How algorithms discriminate based on data they lack: Challenges, solutions and policy implications". Journal of Information Policy, 8, 78-115.

Zimmermann, A., & Lee-Stronach, C. (2021). "Proceed with caution". Canadian Journal of Philosophy, 1-20.

Received: 03-Dec-2021, Manuscript No. JLERI-21-8148; Editor assigned: 04-Dec-2021, PreQC No. JLERI-21-8148 (PQ); Reviewed: 09-Dec-2021, QC No. JLERI-21-8148; Revised: 21-Dec-2021, Manuscript No. JLERI-21-8148 (R); Published: 04-Jan-2022
