Will AI replace or supplement human doctors?

For several years, artificial intelligence (AI) has played an increasingly important role in medicine, including risk stratification, imaging and diagnosis, genomics, precision medicine, and drug discovery.

Although the introduction of AI in surgery is more recent, it is gradually changing surgical practice through advanced technological developments in imaging and navigation. Early techniques focused on feature detection and computer-assisted intervention for both pre-operative planning and intra-operative guidance.

Over the years, many supervised algorithms and deep learning methods have been developed to replace ad hoc, hand-crafted methods in healthcare and to learn automatically from vast amounts of medical data. These methods can significantly improve the management of both acute and chronic diseases, prolonging life and continually extending the boundaries of survival.
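
To make the contrast between a hand-crafted rule and a supervised model that learns from labelled data more concrete, the sketch below trains a small neural-network classifier on synthetic "clinical" features. It is purely illustrative: the data, the high-risk-flagging task, and the threshold rule are invented for this example, and scikit-learn is used only as a convenient, widely available library; it is not tied to any system discussed in this article.

```python
# Illustrative sketch only: a supervised model learning its decision rule from
# labelled examples, compared with a hand-crafted threshold rule. All data are
# synthetic and the "high-risk patient" task is hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000

# Synthetic features: age, systolic blood pressure, and a generic biomarker.
age = rng.normal(60, 12, n)
sbp = rng.normal(130, 18, n)
marker = rng.normal(1.0, 0.4, n)
X = np.column_stack([age, sbp, marker])

# Synthetic ground truth: risk rises non-linearly with the three features.
logit = 0.04 * (age - 60) + 0.03 * (sbp - 130) + 1.5 * (marker - 1.0) ** 2 - 0.2
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Hand-crafted rule for comparison: flag anyone over 65 with SBP above 140.
rule_pred = ((X_test[:, 0] > 65) & (X_test[:, 1] > 140)).astype(int)

# Small neural network that learns its decision boundary from the data.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)

print("hand-crafted rule accuracy:", (rule_pred == y_test).mean())
print("learned model accuracy:   ", model.score(X_test, y_test))
```

Real clinical models are trained on curated patient data under regulatory oversight; this sketch only shows the mechanical difference between encoding a rule by hand and learning one from examples.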

With AI integration, surgical robots could perceive and understand complicated surroundings, make decisions in real time, and perform surgical tasks with greater precision, automation, safety, and efficiency. For instance, current robots can already perform simple surgical tasks, such as suturing and knot tying, automatically.

With the increasing use of robotics in surgery, AI can also transform the future of the field through more sophisticated sensorimotor functions with varying levels of autonomy, giving systems the ability to adapt to a constantly changing, patient-specific environment while leveraging parallel advances in early detection and targeted therapy. It can simplify complex surgical navigation, reduce surgical trauma, improve post-operative care and assistance, and enhance patients’ recovery and the early detection of post-surgical complications.

Deep learning methods, such as neural network models, now underpin the most advanced forms of AI and are designed to mimic aspects of human thought. AI can learn and acquire knowledge at a rate, and in a manner, that far exceeds that of a human, and without the need for years of training. Unlike humans, AI is immune to many of the factors that degrade doctors’ performance, such as burnout, fatigue, and heavy workloads. Given the significant shortage of healthcare workers and doctors, AI could far exceed human capacity to meet the demand for care; and because it has the potential to provide safer, higher-quality services for certain aspects of medical care, there is also a moral imperative to support such provision.

Despite these advancements, this research contends that, while AI may alter the responsibilities of a doctor’s role, and sooner than we think, a machine could never completely replace a human doctor, for a variety of reasons. First and foremost, patients will never attribute to a machine all of the basic characteristics necessary to form a patient-doctor relationship. This matters because the patient-doctor relationship serves as a gateway to all doctoring activities, ensuring that patients seek medical care, disclose sensitive information during assessments, and turn to medical professionals for psychological support during periods of poor health and wellbeing. This is perhaps most evident in primary care, where the human interaction between doctor and patient has a wide range of consequences.

For example, an ever-growing body of research shows that a patient’s perception of the quality of the doctor-patient relationship mediates their compliance with self-management and health-seeking behaviors. In turn, self-management and health-seeking behaviors are two of the most pervasive determinants of wider health and wellbeing outcomes. The four fundamental attributes of the doctor-patient relationship are loyalty, trust, knowledge, and regard. This work therefore posits that these are intrinsically human principles that most patients would find difficult to ascribe to a non-human entity, even if AI could act as a doctor. To establish a therapeutic patient-doctor relationship, many patients report that they need physicians to be empathetic, which by its very nature requires that a doctor be able to relate to human experiences.

In addition, evidence suggests that patients have a high level of distrust when AI is involved in any aspect of their care. Despite the numerous advantages of robotic surgery, survey participants maintain a preference for human doctors. An international survey of 12,000 patients from Europe, the Middle East, and Africa found that 63 percent of those surveyed would refuse to undergo major invasive surgery if robotic-assisted techniques were used, and only 53 percent said they would consent to a minimally invasive procedure involving robotic-assisted surgery. Indeed, some argue that AI’s complementary role has already had a negative impact on patients, causing physicians to interact less with patients as algorithms take the place of doctors’ involvement in clinical processes.

If AI were to replace human doctors, it would also raise ethical concerns about other aspects of the doctor’s role. Ethical practice means that doctors are fully accountable for their actions and are held morally responsible in the event of a medical error. What would it mean to assign moral responsibility for the consequences if an AI doctor made a medical error? Because AI is typically developed by multiple people across multiple agencies, assigning blame would be extremely difficult, a situation known as the “problem of many hands.”

Furthermore, the very benefit of AI lies in its ability to outperform human intellectual ability, which makes it difficult for humans to accurately assess its performance. In other words, it may be impossible to assess whether an AI system complies with the governance standards that shape medical practice and the design and execution of clinical processes. As a result, the second, and perhaps most important, factor that should prevent AI from displacing human doctors comes into play.