Healthcare’s digital transformation is a key driver behind the integration of artificial intelligence (AI) applications into modern medicine. As healthcare data become more shareable, concerns grow about patient privacy and confidentiality, one of the core ethical pillars of the medical field. The risks increase if AI medical applications acquire biases related to patient characteristics such as gender, age, or race during training, or if they can make independent medical decisions. Nevertheless, the expected benefits of AI in healthcare generally outweigh the potential security, legal, and ethical risks; indeed, depriving patients of AI-powered capabilities could itself be considered a flaw in healthcare delivery.
When starting a new project that includes AI medical applications, a thorough legal review is crucial. Key questions to address include the following: who bears legal responsibility in the event of a medical error involving an AI medical application, the doctor, the AI manufacturing company, or both? And do AI applications have a legal personality that can be held accountable? [1].
The legal evaluation of new AI medical applications is a critical first step before introducing AI and robotics into a healthcare facility. Health practitioners may also need to provide evidence of their ability to use AI applications effectively, such as certificates or documented experience, within their clinical privileges approved by the medical administration. Unauthorized access to, and potential theft of, patient medical data also poses a significant legal risk [2]. This risk is heightened by the large number of stakeholders sharing those data – including healthcare providers, researchers, programmers, analysts, and engineers – who may have access to sensitive information. The use of AI medical applications linked to cloud storage or networks outside healthcare facilities, or even outside the country, is especially concerning [3].
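Although the cited sources describe this risk rather than prescribe a safeguard, one common mitigation when data pass through many hands is pseudonymization: replacing direct identifiers with keyed, irreversible pseudonyms before records are shared. The following minimal Python sketch uses only the standard library; the key handling and record fields are illustrative assumptions, not a recommended production design.

```python
import hashlib
import hmac
import os

# Hypothetical secret key held only by the healthcare facility; in practice
# it would live in a hardware security module or key vault, not in code.
PSEUDONYM_KEY = os.urandom(32)

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed, irreversible pseudonym.

    Using HMAC (rather than a plain hash) prevents outside parties from
    re-identifying patients by brute-forcing known ID formats.
    """
    digest = hmac.new(PSEUDONYM_KEY, patient_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"patient_id": "MRN-004217", "hba1c": 7.2, "age_band": "40-49"}
shared = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(shared)  # identifier is now a stable pseudonym rather than the raw MRN
```

Because the same key always yields the same pseudonym, linked analyses across datasets remain possible while re-identification requires access to the facility’s key.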
AI applications may carry criminal risks as well; while such scenarios may sound like science fiction, they are not entirely implausible. For instance, a healthcare provider could misuse AI-powered medical systems to perform illegal procedures, such as unlawful abortions, and criminals could exploit AI-enabled medical robots for malicious purposes. Criminal acts involving AI medical applications can be classified into three categories based on legal responsibility. In the first category, the AI is merely a tool, and the user bears full criminal and legal responsibility. The second category involves criminal incidents resulting from software or technical malfunctions, for which the company producing or maintaining the AI system is responsible. The third category covers criminal incidents caused by autonomous AI systems capable of making and implementing decisions without human intervention and without evidence of malfunction. Here, assigning criminal responsibility is impossible, as these AI applications lack a legal personality that can be prosecuted [1, 2].
When using AI in healthcare, it is crucial to address ethical concerns as well, such as obtaining informed patient consent to access and use personal data. Informed consent is a cornerstone of the patient-provider relationship, demonstrating the patient’s willingness to share private health information to benefit their care. Given the high-risk nature of healthcare, it is essential to ensure the safety and transparency of any AI medical application used for patient care. Ensuring justice and freedom from algorithmic bias is a critical ethical principle for healthcare AI applications. To sustain these principles, developers should transparently disclose how an AI system works and thoroughly review its training data and algorithms to identify and mitigate potential biases, such as skewed representation of race, age, or socioeconomic status [4, 5].
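The cited works call for reviewing training data for bias but do not specify a procedure. As a first, minimal step, one can compare subgroup shares in a training set against reference population shares; large gaps flag sampling bias before any model is trained. The Python sketch below is illustrative only, and the attribute names and reference shares are assumptions rather than values from the cited sources.

```python
from collections import Counter

def representation_report(records, attribute, reference_shares):
    """Compare subgroup shares in a training set against reference shares.

    A large observed-vs-expected gap flags potential sampling bias.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "gap": round(observed - expected, 3),
        }
    return report

# Toy training records; a real audit would cover race, age, sex, and more.
train = ([{"age_band": "18-39"}] * 700
         + [{"age_band": "40-64"}] * 250
         + [{"age_band": "65+"}] * 50)
print(representation_report(train, "age_band",
                            {"18-39": 0.38, "40-64": 0.40, "65+": 0.22}))
```

In this toy example, patients aged 65 and over make up 5% of the training set against an assumed 22% population share, exactly the kind of underrepresentation the review is meant to catch.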
All AI developers must adhere to the highest safety and security standards when creating AI medical applications, whether software or robotic. They should provide users with transparent information and with the training and competencies necessary for proper use. Additionally, developers must obtain accreditation certificates and authorization for AI-enabled medical products or devices, specify the potential hazard level of their use, and obtain the required permits before deployment in healthcare facilities. It is also crucial to analyze the algorithms powering both decision-support AI applications and autonomous AI medical devices, such as surgical robots, to ensure the accuracy and appropriateness of the medical decisions they make [4, 6].
When utilizing AI-powered healthcare applications, protecting patient data privacy is paramount. According to Article 21 of Saudi Arabia’s Health Professions Practice bylaw, healthcare providers must safeguard any confidential information obtained about patients during their professional duties and may only disclose such information under specific, limited circumstances. Given that AI applications rely heavily on sensitive patient data, it is critical to ensure the highest standards of privacy, confidentiality, and cybersecurity are met. This requires applying the strictest security protocols, obtaining all necessary licenses, and adhering to the most rigorous data protection requirements [7].
Keep in mind that AI applications often lack cultural sensitivity, leading to inaccurate diagnoses and recommendations, especially when certain community groups are underrepresented in the training data. Differences in religion, ideology, customs, and traditions – particularly in mental health – may cause AI applications to misinterpret or misunderstand patients’ experiences and needs, potentially exacerbating their mental suffering [8]. AI applications span four key digital layers, each with distinct cyber risks: the digital perception layer, the communication network layer, the cloud layer, and the AI application layer [3, 9, 10].
The digital perception layer includes tools that act as sensors and have a direct physical link to the patient, such as temperature monitors, blood pressure cuffs, electrocardiographs, and brain monitoring devices. This layer is vulnerable to cybersecurity threats, including eavesdropping, jamming, and injection attacks. These attacks could cause device failure and disrupt critical surgical workflows, with potentially catastrophic consequences for patient care [3].
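By way of illustration only (the cited source describes these threats, not a specific countermeasure), one standard defense against injection and tampering at this layer is to authenticate every sensor message with a keyed message authentication code, so the receiving layer rejects readings that were forged or altered in transit. The Python sketch below uses only the standard library; the device key and message fields are hypothetical.

```python
import hashlib
import hmac
import json
import time

# Hypothetical per-device key; per-device keys limit the blast radius
# if a single sensor is compromised. Use a securely generated key in practice.
DEVICE_KEY = b"\x13" * 32

def sign_reading(reading: dict) -> dict:
    """Attach an HMAC tag so the receiver can detect injected/altered data."""
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"reading": reading, "tag": tag}

def verify_reading(message: dict) -> bool:
    payload = json.dumps(message["reading"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels during verification
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reading({"device": "bp-cuff-12", "systolic": 118, "ts": time.time()})
assert verify_reading(msg)          # genuine reading is accepted
msg["reading"]["systolic"] = 220    # an injected/tampered value...
assert not verify_reading(msg)      # ...fails verification and is rejected
```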
The communication network layer receives and processes data from the digital perception layer and then transfers the validated data directly to the cloud layer above it. This makes the network layer vulnerable to cybersecurity risks such as denial-of-service, forgery, and man-in-the-middle attacks. These attacks can slow down or crash the system and allow unauthorized access to, or modification and manipulation of, sensitive medical information [3].
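As a hedged illustration of a standard mitigation (not a prescription from the cited source), man-in-the-middle interception at this layer is typically countered with transport-layer security and certificate verification between the gateway and the cloud. The endpoint below is hypothetical; the sketch uses only Python’s standard library.

```python
import socket
import ssl

# The default context verifies the server's certificate chain and hostname,
# which is the basic defense against man-in-the-middle interception.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols

def send_record(host: str, port: int, payload: bytes) -> None:
    """Send one medical record over an authenticated, encrypted channel."""
    with socket.create_connection((host, port), timeout=5) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(payload)

# Hypothetical gateway endpoint; a real deployment would typically add
# mutual TLS (client certificates) so the cloud also authenticates devices.
# send_record("gateway.hospital.example", 8443, b'{"hr": 72}')
```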
The cloud layer securely backs up medical data to protect patient confidentiality. However, these data may still be vulnerable to certain cybersecurity attacks, such as flood attacks or injection attacks. These attacks could overload servers and disrupt cloud services or allow unauthorized parties to access or modify sensitive health information, potentially compromising national health security [3].
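One simple and widely used defense against the flood attacks mentioned above is per-client rate limiting at the cloud boundary. The token-bucket sketch below is an illustrative baseline rather than a complete defense; the rate and capacity values are assumptions.

```python
import time

class TokenBucket:
    """Per-client rate limiter: a cheap first line of defense against
    flood attacks that try to overload a cloud endpoint."""

    def __init__(self, rate: float, capacity: int) -> None:
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # request dropped or queued instead of hitting the server

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(30)]
print(results.count(True), "of", len(results), "burst requests admitted")
```

A burst of 30 rapid requests admits roughly the bucket’s capacity of 10, throttling the rest; legitimate clients sending at the sustained rate are unaffected.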
The AI application layer sits at the top of the digital healthcare pyramid; this is where data are analyzed and health solutions and decisions are generated. This layer may also be exposed to cybersecurity risks such as phishing, malware, and denial-of-service attacks. Fraudsters could potentially hack connected medical devices to conduct phishing against the application layer and disrupt its services. The 2021 cyberattack on Ireland’s Health Service Executive was a sobering example, costing hundreds of millions, with its negative impact still felt across the Irish healthcare system today [3]. Finally, to enhance the user experience of AI medical applications, the following considerations are recommended:
1. Improving the quality of training data and AI algorithms and verifying their validity, ensuring they fairly represent the broadest possible segment of society.
2. Requiring developers of AI applications and systems to transparently disclose system information and to submit legal and ethical accountability documents before deploying their products in healthcare facilities.
3. Integrating human oversight and mandatory human intervention into AI-based health workflows at levels appropriate to the hazard involved, to ensure patient safety and accountability (a minimal gating sketch follows this list).
4. Establishing legal and judicial regulations to determine liability when medical errors occur due to the use of AI applications, and including coverage for such errors in medical malpractice insurance for healthcare professionals.
5. Implementing the highest standards of digital and cybersecurity protection for AI medical applications and their connected networks, inside or outside the healthcare facility.
6. Educating healthcare practitioners on the capabilities, limitations, and risks of AI medical applications, and granting clinical privileges to use AI tools only when evidence of qualification is present.
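To make recommendation 3 concrete, the Python sketch below gates AI autonomy by hazard level: high-hazard actions always require clinician sign-off, while moderate-hazard actions escalate to a human when model confidence is low. The hazard tiers, example actions, and confidence threshold are illustrative assumptions, not values from the cited sources.

```python
from dataclasses import dataclass
from enum import Enum

class Hazard(Enum):
    LOW = 1       # e.g., scheduling suggestions
    MODERATE = 2  # e.g., triage prioritization
    HIGH = 3      # e.g., dosing or surgical actions

@dataclass
class AIRecommendation:
    description: str
    hazard: Hazard
    confidence: float  # model's self-reported confidence in [0, 1]

def requires_clinician_signoff(rec: AIRecommendation) -> bool:
    """Gate autonomy by hazard level: high-hazard actions always need a
    human decision; moderate ones escalate when the model is unsure."""
    if rec.hazard is Hazard.HIGH:
        return True
    if rec.hazard is Hazard.MODERATE and rec.confidence < 0.90:
        return True
    return False

rec = AIRecommendation("adjust insulin dose", Hazard.HIGH, confidence=0.97)
print(requires_clinician_signoff(rec))  # True: a clinician must approve
```

The key design choice is that no confidence score, however high, can waive human review for high-hazard actions, which aligns accountability with the hazard level as the recommendation proposes.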
Conflict of Interest Statement
The author has no conflicts of interest to declare.
Funding Sources
This study was not supported by any sponsor or funder.
Author Contributions
Faisal A. Al-Suwaidan prepared the manuscript and approved the final manuscript.