Ethical Concerns with Medical AI – Should We Care? 

Artificial intelligence (AI) has played an increasingly important role across many fields of research, business, and industry over the past few decades. One area experiencing especially rapid growth is medicine, where AI has made its way into critical domains such as diagnostics, early risk identification, and treatment management. Compared with traditional decision-making techniques, AI can often perform analytic and predictive tasks more efficiently and accurately. With this rapid expansion of AI into medicine, ethical concerns have arisen.

Currently, ethical concerns related to medical AI are focused on machine learning (ML), a branch of AI in which computer systems learn from large amounts of data (known as training data), identify patterns, and derive a decision rule to categorize or explain the data. The decision rule derived from the training data can then be used to make predictions on previously unseen data. For example, a machine learning model trained on an extensive data set containing mammograms of patients who were later diagnosed as having (or not having) breast cancer can help identify abnormalities in breast tissue and predict whether a new patient is likely to develop breast cancer in the future.
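
As a concrete (and heavily simplified) sketch of this train-then-predict workflow, the example below uses scikit-learn with synthetic features standing in for real imaging data; the dataset, model choice, and variable names are illustrative only.

```python
# Minimal sketch of the train-then-predict workflow described above,
# using scikit-learn and synthetic data in place of real mammogram features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a labeled training set (features + diagnosis labels).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_unseen, y_train, y_unseen = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# "Learn a decision rule" from the training data...
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...then apply it to previously unseen cases.
predicted_risk = model.predict_proba(X_unseen)[:, 1]  # probability of the positive class
print(predicted_risk[:5])
```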

In this article, we’ll elaborate on some of the leading ethical concerns related to medical AI: transparency and explainability, responsibility and accountability, and justice.

Transparency and explainability: Many ML models are considered black boxes because it is difficult to understand or explain their inner workings. Consider one form of ML, deep neural networks (DNNs). A DNN can comprise multiple hidden layers, each containing a vast number of neurons, and it is practically impossible for humans to deconstruct how the many interconnected parameters combine to form a decision. When presented with a decision made by a DNN, whether a prediction, a diagnosis, or a treatment recommendation, we may find it difficult to trust that the decision is correct if we cannot understand the basis on which it was made.
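
To get a feel for why this is so, the toy sketch below (written with PyTorch; the layer sizes are arbitrary and chosen purely for illustration) builds a small fully connected network and counts its trainable parameters. Even this miniature model has roughly twenty thousand interacting weights, none of which carries an interpretable clinical meaning on its own.

```python
# Toy illustration of why DNNs are hard to inspect: even a small fully
# connected network has thousands of interacting parameters.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(30, 128), nn.ReLU(),   # hidden layer 1
    nn.Linear(128, 128), nn.ReLU(),  # hidden layer 2
    nn.Linear(128, 2),               # output: e.g., benign vs. malignant
)

n_params = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {n_params}")  # ~20,000 for this tiny network
```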

The good news is that a growing field, explainable artificial intelligence (XAI), has developed successful approaches to explaining and interpreting the predictions of complex machine learning models such as DNNs (Holzinger et al., 2022). With a vibrant research community, this field can be expected to produce more valuable tools that help human users comprehend and trust the results and outcomes produced by AI.
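
As one concrete illustration of the kind of model-agnostic method covered in such overviews, the sketch below applies permutation feature importance, which scores each input feature by how much shuffling it degrades held-out performance. It uses scikit-learn's bundled breast cancer tabular dataset as a convenient stand-in; this is not the specific method or data of Holzinger et al., just one representative technique.

```python
# One model-agnostic explanation technique from the XAI literature:
# permutation feature importance. Illustrative sketch only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Rank features by how much the model relies on them for held-out predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top5 = sorted(zip(X.columns, result.importances_mean),
              key=lambda pair: pair[1], reverse=True)[:5]
for name, score in top5:
    print(f"{name}: {score:.3f}")
```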

Responsibility and accountability: The chain of actors involved in a medical AI solution can be rather complex, including parties such as system designers and developers, computer programmers, data providers, data scientists, and clinicians. If a negative outcome results from a decision made by medical AI, it is important to identify the responsible parties so that errors can be fixed and prevented in the future. However, due to the opacity of AI systems, it can be challenging to trace the origin of a failure and assign responsibility or liability. Additionally, for AI algorithms that have the capacity to learn, tracing responsibility or liability is particularly challenging because humans do not have control over the actions of the algorithms and cannot “assume responsibility for them” (Matthias, 2004, p. 177).

Justice: The principle of justice requires that medical AI benefit all populations, not just a privileged few, and that no particular group be disadvantaged in terms of healthcare services (Johnson, 2019). One primary ethical concern with medical AI stems from the fact that certain social groups, such as minorities, immigrants, and individuals with low socioeconomic status, are vulnerable to the missing-data problem. For various reasons, including lack of insurance, lack of a primary care physician, and difficulty accessing healthcare, individuals in these groups tend to receive healthcare infrequently (Hoffman and Podgurski, 2020), resulting in missing data about them and their underrepresentation in mainstream healthcare information systems. An AI algorithm trained on such data can disadvantage members of marginalized groups because 1) the algorithm’s predictions may not generalize to these groups, and 2) the algorithm may misinterpret their infrequent use of healthcare services as “a lack of disease burden” (Hoffman and Podgurski, 2020, p. 15) and consequently generate inaccurate predictions for them.
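
The sketch below is a deliberately contrived, fully synthetic illustration of this effect (no real health data; the group sizes and the reversed feature-outcome relationship are invented): a model trained on data dominated by one group can report high overall accuracy while performing far worse for the underrepresented group, which is why per-group evaluation matters.

```python
# Synthetic illustration of the underrepresentation problem. All numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, flip):
    """Synthetic features and labels; `flip` reverses the feature-outcome relationship."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] > 0).astype(int)
    return X, (1 - y) if flip else y

# 950 majority-group patients vs. only 50 minority-group patients in training.
X_maj, y_maj = make_group(950, flip=False)
X_min, y_min = make_group(50, flip=True)
model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

# Evaluate separately on fresh data from each group.
for label, (X_test, y_test) in {"majority": make_group(500, False),
                                "minority": make_group(500, True)}.items():
    print(label, accuracy_score(y_test, model.predict(X_test)))
# Typical output: high accuracy for the majority group, poor accuracy for the minority group.
```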

It is essential that clinicians be aware of the technological limitations of AI systems. To recognize when an AI system produces flawed outputs, clinicians need a clear understanding of the range of plausible outputs that can be expected for a given input (Sand et al., 2022). To determine whether an algorithm’s predictions apply to all patients of interest, clinicians must also have a comprehensive and meaningful understanding of the origin of the training data, including the purposes for which it was collected, curated, and analyzed (Sand et al., 2022).

In conclusion, while medical AI holds the promise of revolutionizing healthcare by improving accuracy, efficiency, and accessibility, it also poses several ethical concerns that must be addressed. These include issues of transparency and explainability, responsibility and accountability, and justice and fairness. It is crucial that these ethical considerations be taken into account and that appropriate safeguards be put in place so that the development and deployment of medical AI systems align with the values and needs of all stakeholders, including patients, healthcare providers, and society at large. Ultimately, by promoting responsible and ethical practices, we can harness the potential of medical AI to achieve better health outcomes for all.

References:

Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., & Tsaneva-Atanasova, K. (2019). Artificial intelligence, bias and clinical safety. BMJ Quality & Safety, 28(3), 231–237. https://doi.org/10.1136/bmjqs-2018-008370

Emanuel, E. J., & Emanuel, L. L. (1996). What is accountability in health care? Annals of Internal Medicine, 124(2), 229–239. https://doi.org/10.7326/0003-4819-124-2-199601150-00007

Hoffman, S., & Podgurski, A. (2020). Artificial intelligence and discrimination in health care. Yale Journal of Health Policy, Law, and Ethics, 19(3), 1. Case Legal Studies Research Paper No. 2020-29. Available at SSRN: https://ssrn.com/abstract=3747737

Holzinger, A., Saranti, A., Molnar, C., Biecek, P., & Samek, W. (2022). Explainable AI methods – A brief overview. In A. Holzinger, R. Goebel, R. Fong, T. Moon, K.-R. Müller, & W. Samek (Eds.), xxAI – Beyond Explainable AI. Lecture Notes in Computer Science, vol. 13200. Springer, Cham. https://doi.org/10.1007/978-3-031-04083-2_2

Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175–183. https://doi.org/10.1007/s10676-004-3422-1

Héder, M. (2020). A criticism of AI ethics guidelines. Információs Társadalom, XX(4), 57–73. https://dx.doi.org/10.22503/inftars.XX.2020.4.5

Sand, M., Durán, J. M., & Jongsma, K. R. (2022). Responsibility beyond design: Physicians’ requirements for ethical medical AI. Bioethics, 36(2), 162–169. https://doi.org/10.1111/bioe.12887
