Potential effects of implementing artificial intelligence in the medical field.
According to Julia (2020), the need for artificial intelligence in the medical field has been sparked by the growing volume and complexity of medical data. Several types of AI are already in use by healthcare providers; the key categories of applications are diagnosis and treatment recommendations, patient engagement and adherence, and administrative activities (Julia, 2020). Because artificial intelligence is a recent development, it is likely to affect healthcare in many different ways. It is therefore important to explore its potential effects in healthcare to determine the associated risks and how those risks can be mitigated or reduced before the technology is fully adopted. This paper explores the effects of artificial intelligence in healthcare in depth.
Evidence 1 – Less important effects
The less important effects of artificial intelligence in the medical field include the risk of injuries and errors caused by the technology, privacy concerns, and bias and inequality. To begin with, technology, like humans, is never perfect, so artificial intelligence systems will sometimes be wrong, resulting in patient injury or other health-related problems. Injury is likely to occur when an artificial intelligence (AI) system prescribes the wrong medication, fails to locate a tumor on a radiological scan, or allocates a bed to one patient over another based on an incorrect prediction of which patient would benefit more (Elizabeth and Krupinski 5-10). Although medical errors leading to injuries are common in healthcare even without AI involvement, injuries caused by AI systems will differ in two ways. First, they will be more widespread: an error in a single AI system can affect thousands of patients, whereas human error usually affects only the limited number of patients attended by a specific healthcare provider. Second, patients and healthcare providers may react differently to injuries caused by AI systems than to those resulting from human error.
Another imminent risk associated with AI systems concerns privacy. It stems from the fact that AI systems require large datasets, which prompts their developers to collect data from many patients (Kluge and Eike-Henner 47-49). Some patients regard this as a violation of their privacy, and it can lead to lawsuits over data-sharing between health systems and the developers of AI systems. An AI system can also violate privacy by inferring private information that was never provided to the algorithm. For instance, the system might predict that a patient has Parkinson's disease from the trembling of a computer mouse, even though the patient never disclosed such information to anyone or was not even aware of having the condition. Patients are likely to perceive this as a violation of their privacy, particularly where the AI system's data are shared with third parties such as banks and life insurance companies (Kluge and Eike-Henner 47-49).
According to Nicholson, healthcare AI systems are prone to bias and inequality in several ways. First, AI systems learn from the data on which they are trained, so they incorporate whatever biases that data contains. For example, an AI system trained on data collected at academic medical centers is likely to be less effective for patients from populations that do not frequent such centers. Even when AI systems learn from comprehensive and accurate data, problems remain if the data reflect underlying biases and inequalities in the health system. In the current healthcare setting, for example, African-American patients on average receive less treatment for pain than white Americans. This disparity is a systemic bias, as it has no proven biological basis; yet an AI system trained on data exhibiting this discrepancy is likely to prescribe lower doses of pain medication to African-Americans. A related inequality is likely to arise in resource allocation, as AI systems may allocate fewer resources to patients deemed less profitable by healthcare systems for problematic reasons.
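The mechanism behind this kind of bias can be illustrated with a minimal, purely hypothetical sketch: a toy "model" that simply learns the average historical pain-medication dose per patient group. The group labels, dose values, and function names below are invented for illustration and do not represent any real clinical system; the point is only that a model fitted to biased records reproduces the bias.

```python
# Toy illustration of training-data bias (not a real clinical model).
# A "model" that learns the mean historical dose per patient group
# will faithfully reproduce any disparity present in the records.
from collections import defaultdict

def train_dose_model(records):
    """Learn the mean historical dose for each group from (group, dose) pairs."""
    totals = defaultdict(lambda: [0.0, 0])  # group -> [sum of doses, count]
    for group, dose in records:
        totals[group][0] += dose
        totals[group][1] += 1
    return {group: dose_sum / count for group, (dose_sum, count) in totals.items()}

# Hypothetical biased history: group B was systematically under-treated.
history = [("A", 10), ("A", 12), ("A", 11), ("B", 6), ("B", 7), ("B", 5)]
model = train_dose_model(history)
# The learned recommendations mirror the historical disparity
# (about 11 units for group A, 6 for group B), even though the
# difference has no biological basis.
```

The sketch shows why "accurate" training data is not enough: the model has faithfully learned the historical record, and the bias is in the record itself.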
Evidence 2 – Major effects
The major effects of artificial intelligence in the medical field are increased human performance and the automation of drudgery in medical practice. To begin with, AI systems benefit healthcare by helping to push the boundaries of human performance, owing to their ability to perform tasks that exceed human capability. For instance, Google Health has devised a program that can predict the onset of acute kidney injury up to two days before it occurs (Houlton 13-17). This outperforms current medical practice, in which the injury becomes known only after it happens. Such algorithms can improve care beyond the reach of human performance.
Another major effect of artificial intelligence in the medical field is the automation of computer tasks that consume a great deal of time in current medical practice. Healthcare providers spend many hours dealing with electronic medical records, reading screens and typing on keyboards. If AI systems can queue up the most relevant information in patient records and distill recordings of appointments and conversations into structured data, they could save providers substantial time, increase the amount of face-to-face time between providers and patients, and improve the quality of the medical encounter for both (Bernard).
Lastly, there is an effect of artificial intelligence in the medical field that has raised many questions, fears, and suspicions: the possibility that AI systems could replace clinicians. From my perspective, however, this scenario is unlikely, because clinicians are more flexible in their approach to problems than AI systems. Even if AI makes, or will make, diagnosis more accurate and reliable, diagnosis is only one stage of the treatment process, and clinical expertise and human intervention will remain invaluable throughout the diagnostic and treatment pathway. Moreover, human beings have broader general intelligence than AI systems, which in their current state have a narrower, more task-specific focus (Alquézar et al. 55). In addition, healthcare is a care-oriented service industry focused on holistic care rather than merely treating symptoms, and it requires healthcare providers to build therapeutic relationships with patients that foster empathy (Koch et al. 27). AI systems cannot achieve this, which implies that they will never replace clinicians.
In conclusion, AI is of great importance in healthcare, as it supports efficient diagnosis and data management. It increases clinicians' performance and reduces the time spent on patient data analysis. However, there are imminent risks associated with AI systems that need to be mitigated before they are fully implemented. AI systems can assist in both administration and treatment as important tools for clinicians, but they can never replace clinicians, given their limited intelligence and rigid nature.
Alquézar, René et al. Artificial Intelligence Research and Development. IOS Press, 2010.