Artificial Intelligence in Healthcare – Hype vs. Hope


One of the most significant near-term risks in the current development of AI tools in medicine is not that they might cause unintended, serious harm to humans, but that they cannot meet the outsized expectations that excessive hype keeps inflating.

Indeed, as charted by the Gartner Hype Cycle, which tracks the relative maturity of emerging technologies, so-called AI technologies such as deep learning and machine learning now ride atop the Peak of Inflated Expectations. Without an appreciation of both the capabilities and the limitations of AI technology in medicine, we will predictably crash into a “trough of disillusionment.” The greatest risk of all may be a backlash severe enough to impede real progress toward using AI tools to improve human lives.

Several factors have driven the growing interest in, and escalation of, AI hype over the last decade. There have been legitimate, discontinuous leaps in computational capacity, in the availability of electronic data (e.g., ImageNet and the digitization of medical records), and in the capacity for machine perception (e.g., image recognition). Just as algorithms can now automatically identify a dog’s breed in a photo and generate the caption “a dog catching a frisbee,” we see automated recognition of malignant skin lesions and pathology specimens.
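To make that perception leap concrete, here is a minimal sketch of image classification with a pretrained model. It assumes the torchvision library and a hypothetical input file dog.jpg; the same pattern, retrained on labeled dermatology images, underlies skin-lesion classifiers.

```python
import torch
from torchvision import models
from PIL import Image

# Load a classifier pretrained on ImageNet (weights download on first use).
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

# Apply the standard ImageNet preprocessing bundled with the weights.
preprocess = weights.transforms()
image = Image.open("dog.jpg").convert("RGB")  # hypothetical input photo
batch = preprocess(image).unsqueeze(0)

# Score the image and report the single most probable label.
with torch.no_grad():
    probs = model(batch).softmax(dim=1)
top_prob, top_idx = probs.max(dim=1)
labels = weights.meta["categories"]
print(f"{labels[top_idx]}: {top_prob.item():.2%}")  # e.g., a specific dog breed
```

Note what the sketch does not do: it maps pixels to one of a fixed list of labels, and nothing more, which is precisely the narrowness the next paragraph describes.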

Such functionality is impressive, but it can easily lead one to mistakenly assume that the computer “knows” what skin cancer is and what a surgical excision needs to consider. People expect that an intelligent human who can recognize an object in a photo can also naturally understand and explain the context of what they see. But no such general understanding exists in the narrow, applied AI algorithms atop the current hype cycle. Instead, each of these algorithms is designed to accomplish a specific task, such as answering well-formed multiple-choice questions.

Given Moore’s law of exponential growth in computing power, the question arises whether it is reasonable to expect machines to soon possess greater computational power than human brains. The comparison may not even make sense given the fundamentally different architectures of computer processors and biological brains; indeed, computers already exceed human brains on measures of raw storage and speed. Does this mean that humans are heading for a technological singularity that will spawn fully autonomous AI systems that continuously improve themselves beyond the confines of human control?

Roy Amara, co-founder of the Institute for the Future, reminds us that “we tend to overestimate the short-term effect of technology and underestimate the long-term effect.” Among other reasons, intelligence is not merely a function of computing power: increasing the speed and storage of a computer makes a better calculator, but not a better thinker. At least in the near future, this leaves us with fundamental questions of design and concept in (general) AI research that have remained unresolved for decades (e.g., common sense, framing, abstract reasoning, and creativity).

Explicitly hyperbolic advertising may be one of the most direct triggers of hype’s unintended consequences. While such promotion is essential for driving interest and motivating progress, it can become counterproductive. Ultimately, hyperbolic marketing of AI systems that will “outthink cancer” can set the field back when it collides with the painful realities of trying to change actual patient lives. Modern advances reflect remarkable progress in AI software and data, but they may shortsightedly discount the health care delivery system’s “hardware” (the people, policies, and processes) needed to deliver care.

Limited AI systems may fail to provide clinicians with insights beyond what they already knew, undercutting many hopes for early-warning systems and for screening asymptomatic patients for rare diseases. Ongoing research tends to promote the latest technology as a cure-all, even as a “regression to regression” reminds us that well-worn methods backed by a suitable source of data can be as useful as, or more useful than, “advanced” AI techniques in many applications, as the sketch below illustrates.
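As a concrete illustration, here is a minimal sketch of the kind of well-worn baseline meant by “regression to regression”: a plain logistic regression on curated tabular clinical data. The file name, feature columns, and outcome label are hypothetical, and scikit-learn is an assumed dependency.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical extract of clinical records with a binary outcome column.
df = pd.read_csv("cohort.csv")
X = df[["age", "creatinine", "prior_admissions"]]  # hypothetical features
y = df["readmitted_30d"]                            # hypothetical outcome

# Hold out a test set so the baseline is evaluated honestly.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Fit the decades-old method and report discrimination (AUROC).
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, baseline.predict_proba(X_test)[:, 1])
print(f"Logistic regression AUROC: {auc:.3f}")
```

Any more elaborate model should be judged against a baseline like this one before its extra complexity is declared worthwhile.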

Recognizing the credible potential of AI systems, and avoiding the backlash that will come from overselling them, requires a combination of technical and subject-domain expertise. Yet if our benchmark is improving on the current state of human health, there is no cause for pessimism. Algorithms and AI systems cannot provide “guarantees of fairness, equity, or even truthfulness,” but neither can humans.

The “Superhuman Human Fallacy” is to dismiss computerized (or human) systems that do not attain an unrealizable standard of perfection or outperform the best-performing human. Accidents attributed to self-driving cars, for example, receive outsized media attention even though they occur far less frequently than accidents attributed to human drivers. The potentially outsized impact of automated technologies does make it reasonable to demand a higher standard of reliability, even if the required degree is unclear and the opportunity cost of awaiting perfection may itself be measured in lives.

In health care, we can identify where even imperfect augmentation by clinical AI can improve care and reduce variation in practice. For instance, there are known gaps where humans commonly misjudge the accuracy of screening tests for rare diagnoses, grossly overestimate patient life expectancy, and deliver high-intensity care through the last six months of life. The potential of AI in medicine does not need to be overhyped when there is ample opportunity to address existing problems of undesirable variability, crippling costs, and impaired access to quality care.

To identify opportunities for automated predictive systems, stakeholders should consider where important decisions hinge on predictions that people currently make imprecisely. Human intuition is powerful, but without a support system it is inevitably variable. One approach is to identify scarce interventions that are known to be valuable and use AI tools to help identify the patients most likely to benefit. For example, not everyone needs an intensive outpatient care team, so an AI system can target only those patients predicted to be at high risk of morbidity, as in the sketch below. Additionally, there are numerous opportunities to deploy AI workflow support that helps people respond quickly or complete repetitive information tasks (e.g., documentation, scheduling, and other back-office management).
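As a minimal sketch of that targeting pattern, the following ranks a patient panel by predicted morbidity risk and refers only as many patients as the care team can absorb. The file names, columns, model choice, and capacity figure are all hypothetical.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Train on a historical cohort with known outcomes (files/columns hypothetical).
history = pd.read_csv("historical_cohort.csv")
feature_cols = ["age", "num_chronic_conditions", "ed_visits_12m"]
risk_model = GradientBoostingClassifier().fit(
    history[feature_cols], history["morbidity_event"]
)

# Score the current panel with predicted probability of a morbidity event.
panel = pd.read_csv("active_panel.csv")
panel["risk"] = risk_model.predict_proba(panel[feature_cols])[:, 1]

# Refer only the highest-risk patients, up to the team's capacity.
TEAM_CAPACITY = 50  # hypothetical number of intensive-care-team slots
referrals = panel.nlargest(TEAM_CAPACITY, "risk")
print(referrals[["patient_id", "risk"]])
```

The design choice matters more than the model: the scarce intervention sets the capacity, and the prediction merely decides who fills those slots, which is exactly the kind of bounded, decision-linked use the paragraph above advocates.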