EDITORIAL


The Pulse of AI: Implementation of Artificial Intelligence in Healthcare and its Potential Hazards



Syeda Farheen Zaidi1, Asim Shaikh2, Salim Surani3, *

1 Queen Mary University of London, London E1 4NS, United Kingdom
2 Department of Medicine, The Aga Khan University, Karachi 74800, Pakistan
3 Department of Medicine & Pharmacology, Texas A&M University, College Station, Texas 77840, USA





© 2024 The Author(s). Published by Bentham Open.

open-access license: This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International Public License (CC-BY 4.0), a copy of which is available at: https://creativecommons.org/licenses/by/4.0/legalcode. This license permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

* Address correspondence to this author at the Department of Medicine & Pharmacology, Texas A&M University, College Station, Texas 77840, USA; E-mail: srsurani@hotmail.com



Keywords: AI in healthcare, Bias, AI risks, Data inconsistencies, Healthcare disparities, AI privacy, AI in diagnostics, Democratization of AI, AI impact on healthcare professionals, AI framework, Patient data security.



Artificial intelligence (AI) refers to the application of technology to emulate critical thinking and intelligent behavior akin to that of a human being [1]. The term AI was coined by John McCarthy, building on earlier foundations laid by Alan Turing, who developed the Turing test: if an evaluator is unable to reliably distinguish machine responses from human responses, the machine is said to have passed. This work spurred further development across various AI domains [1, 2]. The utility of machine learning in biotechnology, pharmaceuticals, and the broader science, technology, engineering, and mathematics (STEM) domains has undeniably played a pivotal role in the advancements we see today. At the molecular level, research groups have optimized their results using generative and computational AI, facilitating the replication and scaling of complex biochemical models [3]. These models have found application in drug-delivery research and in personalized patient therapies, such as in oncology [4]. Naturally, the healthcare industry is keen to unearth AI's further potential. Yet, while the benefits and capabilities of large-scale AI use have been discussed in great detail, the potential hazards, which could be significant, have been comparatively overlooked.

Let us begin with some important considerations. Introducing AI into a system is a highly resource-intensive feat. At the grassroots level, harnessing AI necessitates data mining and processing and the deployment of machine learning algorithms, such as decision trees, boosting, and deep learning. Thereafter comes the cycle of testing, trials, troubleshooting, and ongoing oversight of the technology. This process requires a large investment of resources and may disrupt the systems already in place. Moreover, AI is only as good as the data it has access to. For an AI model to integrate into a healthcare ecosystem, it must process an enormous volume of data. Unfortunately, electronic health records, registries, and biobank data are considerably limited: they are often riddled with inconsistencies, subjectivity, bias, and incorrect inputs by healthcare workers, leading to operator-dependent variability. Under these conditions, the resulting black-box model would build its foundations on the assumptions, correlations, and reductive shortcuts it has learned, yielding counterintuitive results and spurious associations. To account for this gap in data, we need to explore baseline disparities, identify their drivers, elicit a bigger picture of bias, and establish ways to mitigate it.
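
To make this concrete, the following is a minimal sketch of the kind of tabular machine-learning pipeline described above, using gradient boosting in scikit-learn. The features, synthetic data, and missing-value rate are all hypothetical; a real clinical pipeline would add validation, monitoring, and governance on top of this.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-ins for demographic and biochemical features
# (e.g., age, BMI, HbA1c, creatinine -- all hypothetical here).
X = rng.normal(size=(n, 4))
y = (X[:, 2] + 0.3 * rng.normal(size=n) > 0.5).astype(int)
X[rng.random(size=X.shape) < 0.05] = np.nan  # simulate missing/garbled entries

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Histogram-based gradient boosting handles missing values natively,
# which matters for records riddled with inconsistencies.
model = HistGradientBoostingClassifier(max_iter=200, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")
```

Even in this toy setting, note that the model trains without complaint on incomplete data; whether its learned correlations are clinically meaningful is exactly the question such pipelines cannot answer on their own.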

Recently, AI-driven screening and surveillance have shown considerable promise, particularly in chronic diseases, such as diabetes. By assembling multidimensional datasets from demographic and biochemical data, researchers have been able to build predictive models of considerable accuracy [5]. However, it is essential to remember that medical technology has historically been designed with the Caucasian population in mind; some metrics are therefore inaccurate for a broader, more diverse population. As an algorithm keeps building on the data it has, we must consider how its conclusions may vary when screening and surveilling underrepresented groups. AI could inadvertently prompt invasive testing or procedures or, conversely, dismiss critical health concerns. For example, African American women often report difficulty communicating their health concerns and feeling ignored by healthcare professionals [6]. An AI model trained on such records may thus inherit a bias to downplay or overlook their concerns, potentially delaying intervention and further eroding trust in the healthcare system.
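
One practical way to surface such disparities is a subgroup audit: computing a model's sensitivity separately for each demographic group. The sketch below assumes predictions and a group label are already available per patient; the arrays and group names are purely illustrative.

```python
import numpy as np
from sklearn.metrics import recall_score

# Hypothetical ground-truth labels, model predictions, and group labels.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])
group  = np.array(["A", "A", "B", "B", "A", "A", "B", "B"])

# Sensitivity (recall) per group: a large gap suggests the model is
# systematically missing disease in one population.
for g in np.unique(group):
    mask = group == g
    sens = recall_score(y_true[mask], y_pred[mask])
    print(f"group {g}: sensitivity = {sens:.2f}")
```

In this toy example, group A achieves perfect sensitivity while group B's is 0.33; an audit like this, run on real cohorts, is one way to quantify the "bigger picture of bias" argued for above.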

Furthermore, a key determinant of AI implementation will be readiness and acceptability among healthcare workers. Many providers are too constrained by time to facilitate new changes, fearing that doing so may compromise their perceived quality of care; considerable effort will therefore be needed for key individuals to convene and make the process work. Another important factor is the accessibility of such systems on a broader scale. If AI technology is available to only a microscopic stratum of healthcare systems, then when patients present from low-resource areas with limited data, the AI will be unreliable and, if deeply integrated within the system, difficult to override. This is why the democratization of AI is highly relevant here: developer tools, libraries, and datasets would be made accessible so that AI can be developed readily in underrepresented areas [7, 8].

In certain scenarios, we must contemplate the risks associated with reliance on AI in healthcare. AI's capabilities and intelligence have advanced at a rapid pace, from passing the United States Medical Licensing Examination (USMLE) [9] to efficiently executing hospital administrative tasks, as seen with systems like BotMD [10]. Perhaps the most significant growth is seen in radiomics and pathomics, quantitative approaches to medical imaging analysis. Through deep learning techniques and feature engineering, an AI model assimilates vast datasets of histomorphometry, intensities, colors, structures, textures, and spatial relationships, enabling high levels of interrogation and computer-vision processing. It is now plausible that AI can deliver both quantitative and qualitative findings with extreme accuracy in its interpretations and may detect minuscule changes beyond human capability [11, 12].
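
As an illustration of the deep-learning machinery behind radiomics and pathomics, the following PyTorch sketch shows a small convolutional network that maps an image patch to a feature vector for downstream interrogation. The architecture, patch size, and feature dimension are hypothetical stand-ins for the far larger, clinically validated models in actual use.

```python
import torch
import torch.nn as nn

class PatchFeatureExtractor(nn.Module):
    """Maps an image patch to a fixed-length feature vector."""
    def __init__(self, n_features: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level texture/edge filters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-order structure
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # one global summary per map
        )
        self.fc = nn.Linear(32, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv(x).flatten(1)   # (batch, 32)
        return self.fc(h)             # (batch, n_features)

# One 64x64 RGB patch from a (hypothetical) histology slide.
patch = torch.randn(1, 3, 64, 64)
features = PatchFeatureExtractor()(patch)
print(features.shape)  # torch.Size([1, 64])
```

Feature vectors of this kind are what downstream classifiers or risk models actually consume; the opacity of how they are computed is precisely why the explainability measures discussed later matter.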

One study demonstrated that AI diagnosed colorectal cancer, lung cancer, and liver cirrhosis more accurately than a board-certified pathologist, achieving 98% accuracy versus 96.9% [13]. Similarly, AI predictions of breast cancer risk have proven superior to those of radiologists [14]. AI's influence in diagnostics is also growing: through convolutional neural networks, an AI model learned to diagnose Kawasaki disease [15], which has traditionally posed diagnostic challenges. While these capabilities are useful, we must consider their impact on healthcare professionals. The utility of AI could lead to a strong reliance on the machine as a tool and a degradation of expertise or, in a dystopian scenario, to a fundamental shift in roles, particularly in fields with less patient interaction, such as radiology and pathology. If AI-generated findings were established as superior in reporting accuracy, diagnostic conflicts could arise in which healthcare professionals struggle to challenge incorrect AI findings and fear litigation should they oppose them.

Finally, one of AI's main vulnerabilities lies in privacy and data breaches. Given the political climate and the frequency with which cyber-attacks penetrate even the most robust strongholds of society, the healthcare industry may prove a comparatively easy target. Patient records would then be susceptible to distortion or leaks, and intelligence derived from the data could be sold insidiously for profit. Such scenarios are highly plausible and could shut down an entire infrastructure, incurring a hefty cost burden on the industry. In 2014, Google acquired DeepMind, which became a prominent player in healthcare AI. The United Kingdom's National Health Service transferred data from 1.6 million patients onto DeepMind servers without obtaining formal consent for AI research. This patient data was then used to develop Streams, an application featuring predictive algorithms for acute kidney injury; while well intentioned, the arrangement breached European data laws, and the application was eventually scrapped by Google following criticism [16, 17]. This is just one example of how sensitive patient data can be siphoned off to third parties.

The more popular concerns in the media currently present doomsday scenarios. Philosophical threats about sentient AI, intelligence explosion, self-governance, and instrumental convergence conjure a future of Skynet-level AI [18]. Such claims are admittedly far-fetched and do not warrant realistic concern. Nevertheless, legitimate debate about the appropriate application of AI, and what it means for the healthcare industry, is highly valid. A robust framework needs to be established to ensure the accuracy and reliability of AI-driven assessments. Some regulatory features of such a framework have already been proposed: ethical governance, which tackles fairness, privacy, and transparency; explainability and interpretability, so that algorithmic decisions can be understood at a layperson level and AI findings can be challenged; and ethical auditing, which examines the inputs and outputs of algorithms to identify prevalent biases or harms in the system [19-21]. These features can be put into practice through technical approaches, such as algorithmic impact assessments, improved client-side data encryption, and wider AI education and training [16, 21], as sketched below. Corporations, government bodies, and policymakers all need to be part of the broader discussion, with medical experts leading the charge in developing the framework and addressing potential concerns with AI implementation in healthcare systems.
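
As one concrete example of these technical measures, the sketch below illustrates client-side encryption using the Fernet recipe (authenticated symmetric encryption) from the Python cryptography package: the record is encrypted before it leaves the client, so a server-side breach exposes only ciphertext. The record contents are hypothetical, and real deployments would hinge on key management, which is elided here.

```python
from cryptography.fernet import Fernet

# In practice, the key is held by the client (or a hardware security
# module), never by the server storing the records.
key = Fernet.generate_key()
cipher = Fernet(key)

# A hypothetical patient record, serialized before encryption.
record = b'{"patient_id": "hypothetical-123", "eGFR": 54}'

token = cipher.encrypt(record)   # what the server actually stores
assert cipher.decrypt(token) == record  # only the key-holder can read it
```

A breach of the server in this scheme yields tokens, not records; the residual risks shift to key custody and endpoint security, which is why encryption is one measure within a framework rather than a complete answer.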

CONFLICT OF INTEREST

Dr. Salim Surani is the co-editor of The Open Respiratory Medicine Journal.

ACKNOWLEDGEMENTS

Declared none.

REFERENCES

[1] Mintz Y, Brodie R. Introduction to artificial intelligence in medicine. Minim Invasive Ther Allied Technol 2019; 28(2): 73-81.
[2] Turing AM. Computing machinery and intelligence. Mind 1950; 59(236): 433-60. [DOI: 10.1093/mind/LIX.236.433].
[3] Paul D, Sanap G, Shenoy S, Kalyane D, Kalia K, Tekade RK. Artificial intelligence in drug discovery and development. Drug Discov Today 2021; 26(1): 80-93.
[4] Wang L, Song Y, Wang H, et al. Advances of artificial intelligence in anti-cancer drug design: A review of the past decade. Pharmaceuticals 2023; 16: 253.
[5] Aminian A, Zajichek A, Arterburn DE, et al. Predicting 10-year risk of end-organ complications of type 2 diabetes with and without metabolic surgery: A machine learning approach. Diabetes Care 2020; 43(4): 852-9.
[6] Washington A, Randall J. “We’re not taken seriously”: Describing the experiences of perceived discrimination in medical settings for black women. J Racial Ethn Health Disparities 2023; 10(2): 883-91.
[7] Rubeis G, Dubbala K, Metzler I. “Democratizing” artificial intelligence in medicine and healthcare: Mapping the uses of an elusive term. Front Genet 2022; 13: 902542.
[8] Garvey C. A framework for evaluating barriers to the democratization of artificial intelligence. Proc Conf AAAI Artif Intell 2018; 32(1): 8079-80. [DOI: 10.1609/aaai.v32i1.12194].
[9] Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health 2023; 2(2): e0000198.
[10] Basu K, Sinha R, Ong A, Basu T. Artificial intelligence: How is it changing medical sciences and its future? Indian J Dermatol 2020; 65(5): 365-70.
[11] van Timmeren JE, Cester D, Tanadini-Lang S, Alkadhi H, Baessler B. Radiomics in medical imaging—“how-to” guide and critical reflection. Insights Imaging 2020; 11(1): 91.
[12] Gupta R, Kurc T, Sharma A, Almeida JS, Saltz J. The emergence of pathomics. Curr Pathobiol Rep 2019; 7(3): 73-84. [DOI: 10.1007/s40139-019-00200-x].
[13] Oka A, Ishimura N, Ishihara S. A new dawn for the use of artificial intelligence in gastroenterology, hepatology and pancreatology. Diagnostics 2021; 11(9): 1719.
[14] Yala A, Lehman C, Schuster T, Portnoi T, Barzilay R. A deep learning mammography-based model for improved breast cancer risk prediction. Radiology 2019; 292(1): 60-6.
[15] Xu E, Nemati S, Tremoulet AH. A deep convolutional neural network for Kawasaki disease diagnosis. Sci Rep 2022; 12: 1-6.
[16] Khan B, Fatima H, Qureshi A, et al. Drawbacks of artificial intelligence and their potential solutions in the healthcare sector. Biomed Mater Devices 2023; 1: 1.
[17] Powles J, Hodson H. Google DeepMind and healthcare in an age of algorithms. Health Technol 2017; 7(4): 351-67.
[18] Federspiel F, Mitchell R, Asokan A, Umana C, McCoy D. Threats by artificial intelligence to human health and human existence. BMJ Glob Health 2023; 8(5): e010435.
[19] Taddeo M, Floridi L. How AI can be a force for good. Science 2018; 361(6404): 751-2.
[20] Floridi L. Soft ethics, the governance of the digital and the General Data Protection Regulation. Philos Trans- Royal Soc, Math Phys Eng Sci 2018; 376(2133): 20180081.
[21] Cath C. Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philos Trans- Royal Soc, Math Phys Eng Sci 2018; 376(2133): 20180080.