Expectations for the use of artificial intelligence in healthcare are high, perhaps even inflated. In fact, a report published in the New England Journal of Medicine by Jonathan Chen and Steven Asch put it this way: “in the ‘hype cycle’ of emerging technologies, machine learning now rides atop the ‘peak of inflated expectations.’”
AI’s list of possible applications in medicine is long, from IoT medical devices, to diagnostic tools that can learn and predict how diseases will progress, to smarter voice-to-text recorders tailored for medical jargon. However, for every incredible pro there may be an equally serious con that must be dealt with before AI can be seamlessly and securely implemented into healthcare routines.
The Potential for Greatness
A quick search for “AI in Healthcare” will show anyone that the industry is, on the whole, excited about this myriad of labor-saving, cost-saving, and life-extending applications for AI. There is certainly data to support the good side of AI, and no one can deny that a computer that diagnoses lung cancer 35 percent more accurately than radiologists is a good thing.
For nearly every kind of cancer there’s a corresponding AI system being developed to detect and diagnose it, some with thrilling levels of success. According to the National Cancer Institute, about 20 percent of mammograms present a false negative, meaning many women carry cancer that goes untreated for years. French startup Therapixel has designed an AI algorithm to read these scans, and it has become successful at detecting breast cancer, cutting down on false positives by 5 percent.
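The two failure modes mentioned here are worth keeping straight: a false negative misses a real cancer, while a false positive flags a healthy patient. A minimal sketch of how each rate is computed, using purely hypothetical screening counts (none of these numbers come from the NCI beyond the roughly 20 percent false-negative figure):

```python
# Toy screening numbers (hypothetical, for illustration only):
# out of 1,000 women screened, 100 actually have breast cancer.
true_positives = 80    # cancers the mammogram catches
false_negatives = 20   # cancers it misses (~20%, per the NCI figure)
false_positives = 90   # healthy women flagged anyway
true_negatives = 810   # healthy women correctly cleared

# False-negative rate: missed cancers out of all actual cancers
fnr = false_negatives / (false_negatives + true_positives)
# False-positive rate: false alarms out of all healthy women
fpr = false_positives / (false_positives + true_negatives)
print(f"false-negative rate: {fnr:.0%}")  # 20%
print(f"false-positive rate: {fpr:.0%}")  # 10%
```

A screening AI is judged on both rates at once, which is why a reduction in false positives only matters if false negatives don’t rise in exchange.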
And that’s just breast cancer. Chinese researchers at Nanjing University have designed an algorithm that detects prostate cancer as accurately as pathologists, and a multinational team from Germany, France, and the U.S. developed an AI that detects skin cancer in 95 percent of scans, compared to a team of dermatologists who detected only 87 percent.
The medical field is even using AI to tackle rarer diseases like malignant pleural mesothelioma. A Scottish research firm was granted £140,000 to begin developing an algorithm specifically to detect this disease, which affects seniors disproportionately and carries a very short life expectancy. An earlier diagnosis for any of these fast-moving cancers can greatly improve a patient’s mental well-being, chances of a positive treatment outcome, and life expectancy.
Another example of this type of software is Google’s Inception. Aristotelis Tsirigos, a pathologist at NYU School of Medicine, explained that the system is built on an open source algorithm, allowing programmers to add information on cancer detection and treatment and modify it as they see fit. Open source algorithms are edited and refined in a sort of marketplace of ideas before being finalized and implemented.
Thousands of images of cancerous and healthy tissues from the Cancer Genome Atlas were uploaded to the software and programmers directed the technology to learn the difference. From there, Inception was able to make its own inferences, to learn and grow with the information provided, exhibiting deep learning.
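The workflow described above is standard supervised learning: label examples, train a model to separate the classes, then check how well it generalizes to examples it hasn’t seen. A minimal sketch of that loop using scikit-learn, with random synthetic feature vectors standing in for the Cancer Genome Atlas images (the data, network size, and labels here are all illustrative, not NYU’s actual pipeline):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Synthetic stand-ins for tissue feature vectors:
# class 0 = "healthy", class 1 = "cancerous" (toy data, not real scans)
healthy = rng.normal(0.0, 1.0, size=(500, 64))
cancerous = rng.normal(1.0, 1.0, size=(500, 64))
X = np.vstack([healthy, cancerous])
y = np.array([0] * 500 + [1] * 500)

# Hold out a test set so accuracy is measured on unseen examples
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network "learns the difference" between the two classes
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Systems like Inception apply the same idea at far greater scale, with deep convolutional networks operating on the pixels of pathology slides rather than on precomputed feature vectors.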
Inception can also dig deep into the DNA of cells and determine where mutations might happen in the genetic coding. This may help doctors get ahead of the illness and help provide patients with more tailored treatments, based on their own genetic predispositions.
The uses of AI aren’t confined to disease detection and diagnosis. The Harvard Business Review analyzed the top 10 promising applications of AI in healthcare and estimated that by 2026, AI could save the industry about $150 billion annually. The biggest saver was robot-assisted surgery, with an estimated $40 billion in potential savings.
Other uses of AI include virtual medical assistants, streamlined administrative workflows, fraud detection in medical billing, and dosage-error reduction. These applications drive the overall value of developing AI into the stratosphere, but they also feed the overinflation of expectations.
The Vulnerabilities of AI
The Gartner Hype Cycle describes the lifetime of emerging technologies from inception to general adoption. Some technologies fall off the curve, never making it into mainstream usage and success. Right now, AI is teetering on the verge of developing more problems than it’s worth.
Though disease detection and diagnosis are considered successful applications of AI capabilities, the later end of the process, treatment, is not quite on par yet. Preliminary attempts at using AI to tailor treatment plans have been mixed, some favorable and some disastrous. An AI developed to write treatment plans based on past medical data gave recommendations that were deemed “unsafe.” IBM’s Watson AI devised a treatment plan for a chemotherapy patient with a history of extensive bleeding that included Avastin, a drug known for the side effect of bleeding, which could lead to hemorrhage.
However, the creators blamed this shortcoming on their use of fabricated patient data to train the algorithm, which raises another potential issue with data-driven AI applications. Data can be biased or incomplete, especially medical data, as Jonathan Chen and Steven Asch reported in The New England Journal of Medicine.
“There are problems with real-world data sources,” reads the report. “Emerging data sources are typically less structured, since they were designed to serve a different purpose (e.g., clinical care and billing). Issues ranging from patient self-selection to confounding by indication to inconsistent availability of outcome data can result in inadvertent bias, and even racial profiling, in machine predictions.”
This creates a conundrum: AI can only succeed when fed real data points, but those real data points are riddled with human biases. At worst, we could create a mistake-prone, racist medical tool; at best, an expensive and unreliable system that doesn’t really save practitioners any time.
If AI is implemented at large scale in these patient-facing disciplines, there’s also the issue of who incurs liability. Many sophisticated machine learning algorithms make decisions about disease treatment in a vacuum, without a human confirming that their assumptions are correct. A black box AI making treatment recommendations opens up the question of where fault lies.
If a patient is treated using these recommendations and the outcome is poor, is it the patient’s fault for accepting the plan? The doctor’s, for not catching the error or offering an alternative? Perhaps the fault lies with the hospital for deploying machinery that shouldn’t be autonomous. Or maybe it goes all the way back to patient zero, and the programmer is to blame for creating the machine.
These questions of fault certainly open the door for regulatory agencies like the FDA to step in, but even well-planned rules won’t be able to stop every problem in its tracks.
Another issue that advanced AI machines will present is the simple impact on a hospital’s bottom line. Though the global healthcare industry is expected to reap savings, smaller hospitals may not be able to swing the initial cost. A Signify Research report projected that by 2023, hospitals will be spending $2 billion annually on AI for medical imaging alone.
The hardware will not only be expensive for hospitals (imagine the cost of CT imaging equipment with a hyper-intelligent, cancer-detecting AI built in), it could eventually drive the cost of healthcare up. Cancer care typically takes months, from diagnosis to treatment plan formulation to trial and error in surgery and chemotherapy, and if AI cuts down on that lead time, patients may be faced with paying a premium for their swift treatment.
Balancing the Scale
Gartner’s assessment of AI’s place in its current life cycle can give some insights on what to expect from the future of this technology. The life cycle begins with swift upward growth during the Innovation Trigger phase, then crests at the Peak of Inflated Expectations. The third part of the cycle is a steep downward turn and bottoming out called the Trough of Disillusionment, followed by slow and steady growth referred to as the Slope of Enlightenment. The final stage of the life cycle is the end goal, the Plateau of Productivity.
For healthcare’s use of AI to reach that plateau, the many kinks in usage will need to be worked out, and implementation will need to be carefully thought through. Gartner estimates that deep learning AIs could reach the plateau in about three to five years, which lines up with some of the projected savings timelines, so we’ll just have to wait and see.