Ethical Issues of applying Artificial Intelligence in Health Care

Introduction

Considerable advances in Artificial Intelligence (AI) are bound to drastically change the current situation in health care. AI’s efficiency and performance in domains such as image classification (Krizhevsky, Sutskever, and Hinton, 2012) and self-driving cars (“Self-Driving Cars Take the Wheel - MIT Technology Review”) have fueled the rapid development of medical tools that range from diagnosis (Cohen, Bertin, and Frappier, 2019), clinical decision making (Yang, Steinfeld, and Zimmerman, 2019), personalized medicine (Schork, 2019), drug discovery (Zhavoronkov et al., 2019), and robotic prostheses (Handford and Srinivasan, 2018) to psychological chatbots (Sheth et al., 2019). Some methods, for example, have achieved dermatologist-level accuracy in the diagnosis of skin cancer (Esteva et al., 2017) and enabled early prediction of diabetic retinopathy progression in individual patients (Arcadu et al., 2019). Moreover, AI-based medical devices have already been approved by the FDA (“FDA Permits Marketing of Artificial Intelligence-Based Device to Detect Certain Diabetes-Related Eye Problems”; “FDA Approvals for Smart Algorithms in Medicine in One Giant Infographic - the Medical Futurist”). People can also receive mental health support online from Woebot, an automated conversational agent (“Woebot - Your Charming Robot Friend Who Is Ready to Listen, 24/7”); researchers reported that it significantly reduced symptoms of depression in young adults over the course of a randomized controlled trial (Fitzpatrick, Darcy, and Vierhile, 2017).

Drawbacks

Despite these benefits and successes, several drawbacks and sensitive questions have been raised. One concern relates to the methods themselves: state-of-the-art methods based on neural networks can be easily fooled by adversarial attacks (Goodfellow, Shlens, and Szegedy, 2014). This can lead to incorrect diagnoses (Finlayson et al., 2018) or to deliberate abuse of an AI system to obtain social benefits. Another issue is that neural networks are considered black-box models (London, 2019), while for medical diagnosis it is crucial to understand what led to a decision. A further challenge is the fairness of the predictions (“How Bias Distorts AI (Artificial Intelligence)”). Chen, Szolovits, and Ghassemi (2019) found that machine learning models learned a bias with respect to gender and health insurance status (public or private) in mortality prediction, which can lead to misdiagnosis within underrepresented groups. An additional issue is the lack of reproducibility in digital medicine (Stupple, Singerman, and Celi, 2019), which the hype around AI’s everyday successes may exacerbate and which poses risks to the robustness of the resulting models.
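To make the adversarial-attack concern concrete, the fast gradient sign method from Goodfellow, Shlens, and Szegedy (2014) can be sketched on a toy model. This is a minimal illustration, not any real diagnostic system: the logistic-regression “classifier,” its weights, and the input values below are all assumptions chosen for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift x by eps in the direction that increases the loss for true label y."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])    # illustrative model weights
b = 0.0
x = np.array([0.3, -0.2, 0.1])    # a "benign" input, correctly classified
y = 1.0                            # its true label

p_clean = sigmoid(w @ x + b)       # confidence in class 1 on the clean input
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
p_adv = sigmoid(w @ x_adv + b)     # confidence after the adversarial shift
# The structured perturbation flips the predicted class from 1 to 0.
```

Even in this toy setting, a small perturbation crafted from the loss gradient flips the model’s prediction, which is exactly the failure mode that makes adversarial robustness a safety concern for diagnostic models.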

Ethical AI in Health Care

It is essential to ensure that AI-based medical applications respect individuals’ safety, privacy, confidentiality, and autonomy. Data privacy concerns can be addressed through network security, data management, and compliance with HIPAA; as an example, Google Cloud offers such services to the American Cancer Society and the Colorado Center for Personalized Medicine (“Healthcare Solutions | Google Cloud”). Respecting an individual’s autonomy, in turn, raises questions about the role of AI in the act of care, the patient’s understanding of AI, and responsibility for the AI’s errors. Schiff and Borenstein (2019) suggest that, in addition to general informed consent, a person should be provided with an explanation of how an AI system works and an interpretation of its decision-making process. Responsibility lies on the shoulders of the system’s designers (products liability), physicians (malpractice), hospitals (vicarious liability), public policymakers, and other actors; it includes careful documentation, training, and protocols to keep the technology safe for patients. Sullivan and Schweikart (2019) propose alternative frameworks: AI personhood, enterprise liability, and modification of the standard of care. The first would make AI an insured quasi-juridical person; the second would spread responsibility across all groups involved; the third would hold responsible the health care providers who offer AI-based diagnoses.
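As a minimal illustration of what an “interpretation of the decision-making process” could look like, the sketch below reports per-feature contributions for a linear model. The feature names, weights, and patient values are hypothetical, and a real clinical explanation would require far more care than this:

```python
import numpy as np

def explain_linear_decision(x, w, b, feature_names):
    """Return (score, contributions): for a linear model, feature i
    contributes exactly w[i] * x[i] to the decision score."""
    contributions = {name: float(w_i * x_i)
                     for name, w_i, x_i in zip(feature_names, w, x)}
    score = float(w @ x + b)
    return score, contributions

features = ["age (normalized)", "blood pressure (normalized)", "biomarker level"]
w = np.array([0.8, 1.2, -0.4])   # illustrative model weights
x = np.array([0.5, 0.25, 0.1])   # one patient's (normalized) inputs
score, contribs = explain_linear_decision(x, w, 0.0, features)
# The largest positive contribution identifies the input driving the decision,
# which can be surfaced to clinician and patient alongside the prediction.
```

Additive contribution reports of this kind are one simple way to pair a prediction with a human-readable rationale; deep models would need dedicated interpretability methods rather than this direct decomposition.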

Conclusions

Regardless of the drawbacks and sensitive questions surrounding current methods, AI will likely come to dominate the health care domain; it is a question of time, public policy, and research. Specifically, the weaknesses of deep neural networks, such as adversarial vulnerability and limited interpretability, are active research topics that still require a great deal of work. The reproducibility issue calls for good research practices such as data and code sharing, openness, and commitment to science, and the public policy dilemmas still need to be resolved. Above all, it is vital to ensure that, even at the rapid pace of AI development, everything is done ethically.

Acknowledgments

Thanks to Eloy Geenjaar for reviewing earlier drafts of this post and providing valuable feedback. These thoughts are part of the paper assignment for the PHIL6000 class by Dr. Jason Borenstein.

Citation Info (BibTex)

If you found this blog post useful, please consider citing it as:

@misc{fedorov2021ethicalaihealthcare,
  title   = {Ethical Issues of applying Artificial Intelligence in Health Care},
  author  = {Alex Fedorov},
  year    = 2021,
  note    = {\url{https://entodi.github.io/thoughts/ethical-ai-healthcare/}}
}

References

  • Arcadu, Filippo, Fethallah Benmansour, Andreas Maunz, Jeffrey Willis, Marco Prunotto, and Zdenka Haskova. 2019. “Deep Learning Algorithm to Predict Diabetic Retinopathy (DR) Progression on the Individual Patient-Level.” Investigative Ophthalmology & Visual Science 60 (11): PB093.

  • Chen, Irene Y, Peter Szolovits, and Marzyeh Ghassemi. 2019. “Can AI Help Reduce Disparities in General Medical and Mental Health Care?” AMA Journal of Ethics 21 (2): 167–79.

  • Cohen, Joseph Paul, Paul Bertin, and Vincent Frappier. 2019. “Chester: A Web Delivered Locally Computed Chest X-Ray Disease Prediction System.” arXiv Preprint arXiv:1901.11210.

  • Esteva, Andre, Brett Kuprel, Roberto A Novoa, Justin Ko, Susan M Swetter, Helen M Blau, and Sebastian Thrun. 2017. “Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks.” Nature 542 (7639): 115.

  • “FDA Approvals for Smart Algorithms in Medicine in One Giant Infographic - the Medical Futurist.” https://medicalfuturist.com/fda-approvals-for-algorithms-in-medicine/.

  • “FDA Permits Marketing of Artificial Intelligence-Based Device to Detect Certain Diabetes-Related Eye Problems | FDA.” https://www.fda.gov/news-events/press-announcements/fda-permits-marketing-artificial-intelligence-based-device-detect-certain-diabetes-related-eye.

  • Finlayson, Samuel G, Hyung Won Chung, Isaac S Kohane, and Andrew L Beam. 2018. “Adversarial Attacks Against Medical Deep Learning Systems.” arXiv Preprint arXiv:1804.05296.

  • Fitzpatrick, Kathleen Kara, Alison Darcy, and Molly Vierhile. 2017. “Delivering Cognitive Behavior Therapy to Young Adults with Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial.” JMIR Mental Health 4 (2): e19.

  • Goodfellow, Ian J, Jonathon Shlens, and Christian Szegedy. 2014. “Explaining and Harnessing Adversarial Examples.” arXiv Preprint arXiv:1412.6572.

  • Handford, Matthew L, and Manoj Srinivasan. 2018. “Energy-Optimal Human Walking with Feedback-Controlled Robotic Prostheses: A Computational Study.” IEEE Transactions on Neural Systems and Rehabilitation Engineering 26 (9): 1773–82.

  • “Healthcare Solutions | Google Cloud.” https://cloud.google.com/solutions/healthcare.

  • “How Bias Distorts AI (Artificial Intelligence).” https://www.forbes.com/sites/tomtaulli/2019/08/04/bias-the-silent-killer-of-ai-artificial-intelligence/#3e27ffd7d87e.

  • Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E Hinton. 2012. “ImageNet Classification with Deep Convolutional Neural Networks.” In Advances in Neural Information Processing Systems, 1097–1105.

  • London, Alex John. 2019. “Artificial Intelligence and Black-Box Medical Decisions: Accuracy Versus Explainability.” Hastings Center Report 49 (1): 15–21. https://doi.org/10.1002/hast.973.

  • Schiff, Daniel, and Jason Borenstein. 2019. “How Should Clinicians Communicate with Patients About the Roles of Artificially Intelligent Team Members?” AMA Journal of Ethics 21 (2): 138–45.

  • Schork, Nicholas J. 2019. “Artificial Intelligence and Personalized Medicine.” In Precision Medicine in Cancer Therapy, 265–83. Springer.

  • “Self-Driving Cars Take the Wheel - MIT Technology Review.” https://www.technologyreview.com/s/612754/self-driving-cars-take-the-wheel/.

  • Sheth, Amit, Hong Yung Yip, Arun Iyengar, and Paul Tepper. 2019. “Cognitive Services and Intelligent Chatbots: Current Perspectives and Special Issue Introduction.” IEEE Internet Computing 23 (2): 6–12.

  • Stupple, Aaron, David Singerman, and Leo Anthony Celi. 2019. “The Reproducibility Crisis in the Age of Digital Medicine.” NPJ Digital Medicine 2 (1): 2.

  • Sullivan, Hannah R, and Scott J Schweikart. 2019. “Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI?” AMA Journal of Ethics 21 (2): 160–66.

  • “Woebot - Your Charming Robot Friend Who Is Ready to Listen, 24/7.” https://woebot.io/.

  • Yang, Qian, Aaron Steinfeld, and John Zimmerman. 2019. “Unremarkable AI: Fitting Intelligent Decision Support into Critical, Clinical Decision-Making Processes.” In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 238. ACM.

  • Zhavoronkov, Alex, Yan A Ivanenkov, Alex Aliper, Mark S Veselov, Vladimir A Aladinskiy, Anastasiya V Aladinskaya, Victor A Terentiev, et al. 2019. “Deep Learning Enables Rapid Identification of Potent DDR1 Kinase Inhibitors.” Nature Biotechnology 37 (9): 1038–40.
