AI In Healthcare: 5 Considerations for Decision-Makers

by Kamran Ali | Jul 19, 2022

Artificial intelligence, or AI, is gaining importance in healthcare settings. However, it’s a technology surrounded by misunderstandings and confusion. AI is not a panacea or a quick fix; it’s a tool – and just one of many that providers and organizations can use to facilitate better patient care and operations.

When I’m talking with executives about the potential of AI for their organizations, the analogy I often use is that of a child. Like children, AI technologies are continually gathering data about the world around them and making intuitive leaps based on what they learn. That data may be structured, like the lessons children learn in school, or unstructured, like the way children gradually grasp the concept of object permanence.

While it’s not perfect, this analogy often helps healthcare executives to understand what they need to think about when choosing and deploying AI solutions in their organization.

Change Management

Welcoming a new child into a family requires a lot of planning. Homes need to be childproofed, routines changed, spaces adapted, expectations communicated. Similarly, adding an AI technology to a healthcare organization requires careful planning and communication.

Healthcare organizations are among the most complex enterprises on earth when it comes to workflows and processes. A snag in a process here or a misstep there can have fatal consequences. An AI solution has enormous potential to improve things in healthcare organizations, but also to upend carefully balanced processes and procedures – and not just early on, but as it grows and adapts.

Like AI tech, children become more complex as they learn and grow. School runs, football practice, weekend language classes, play dates – all of these can easily throw a spanner into the rhythm of parents’ lives.

As AI tech ingests more data and “learns” from the information it’s given, it holds the potential to challenge the everyday decisions being made by team members. From identifying an image as cancerous, to predicting bed capacity in the ICU, to processing doctors’ notes and extracting medical ontological information from them, AI can easily start stepping on people’s toes.

Healthcare leaders must prepare for change management as their teams start implementing AI solutions. Make sure all stakeholders affected by the AI solution’s decisions are involved right from the start, and that a clear-cut communication plan is in place as the deployment progresses.

Ethical considerations of AI in healthcare

Like children, AI technologies are a product of the data inputs they collect. And, like children, AI solutions extrapolate what will happen based on what has happened. Children learn what works by trial and error: they apply something that produced a desired result in one situation to a different situation and discover that it doesn’t carry over. In healthcare, this type of trial and error is not acceptable.

AI models developed for a given geography, for example, are unlikely to fit another one. In the same way, models developed on data from patients of a particular background will likely start giving wrong predictions when applied to a different set of patients. This is why leadership must embed the concept of Responsible AI in the culture of their organizations.
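For the data science teams supporting these decisions, the point can be made concrete with one simple check: before a model trained on one population is deployed, evaluate it separately on each cohort it will actually serve. The sketch below is a minimal illustration using scikit-learn; the column names, cohort labels, and the readmission outcome are hypothetical examples, not a prescription.

```python
# Minimal sketch: evaluate a single model separately for each patient cohort.
# Column names ("cohort", "readmitted") and the FEATURES list are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def auc_by_cohort(model, df: pd.DataFrame, feature_cols,
                  label_col="readmitted", cohort_col="cohort") -> pd.Series:
    """Return the model's AUROC for each cohort (e.g., region or patient background)."""
    scores = {}
    for cohort, group in df.groupby(cohort_col):
        probs = model.predict_proba(group[feature_cols])[:, 1]
        scores[cohort] = roc_auc_score(group[label_col], probs)
    return pd.Series(scores, name="auroc")

# Usage (assuming train_df comes from one site and eval_df spans several):
# model = LogisticRegression(max_iter=1000).fit(train_df[FEATURES], train_df["readmitted"])
# print(auc_by_cohort(model, eval_df, FEATURES))
# A sharp drop in AUROC for any cohort is a signal the model should not be
# deployed there without retraining or recalibration.
```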

The methodologies being worked out by the Responsible AI Institute, by Google, and by Microsoft are just a few examples of how organizations and vendors are trying to ensure that fairness, interpretability, and privacy become a core part of any AI solution.

Specifically for the healthcare industry, Actium Health provides a great resource to help executives understand Responsible AI.

ROI takes time

Children are an 18-year time investment. While seeing the fruits of your labor as your kids settle down in their lives isn’t truly analogous to achieving return on investment (ROI), the understanding that long-term outcomes outweigh short-term rewards is.

You won’t have to wait 18 years to realize a return on your AI investment, but the timeline for any AI tool is rarely shorter than two years (and longer for more complex projects). This means you need a long-term strategy in place rather than looking for quick returns from your AI investments.

Read more about ROI on Artificial Intelligence initiatives from Accenture and Deloitte.

Monitor constantly

Back in the day, you would buy software on a CD, put the CD in a disk drive, click setup, and wait for the program to install. These days, the same thing is done via online downloads. What is important to understand is that once you install the software, you can start using it right away, and it behaves more or less the same way for as long as you use it.

Not so for AI technology. It is rarely a separate piece of software with its own icon on the desktop. Rather, it is a complex set of algorithms that sits quietly in a “corner” and learns from the data it reads. This means that if the nature of the data changes, the output of the AI will also change. In other words, any AI solution requires constant monitoring.

Just as you would monitor what media a child is consuming or the types of food they’re eating to support their mental, emotional, and physical development, it’s critical to understand what data your AI is ingesting. Limited data sets will lead to inappropriate conclusions, as will bad data.
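One lightweight way to “watch what the model is eating” is to compare the distribution of incoming data against a reference sample from training time. The sketch below uses a two-sample Kolmogorov–Smirnov test from SciPy; the feature names and the p-value threshold are illustrative assumptions.

```python
# Minimal sketch: flag input drift by comparing live data to a training-time reference.
# Feature names and the 0.01 p-value threshold are illustrative assumptions.
import pandas as pd
from scipy.stats import ks_2samp

def drifted_features(reference: pd.DataFrame, live: pd.DataFrame,
                     features, p_threshold: float = 0.01):
    """Return (feature, KS statistic) pairs for numeric features whose live
    distribution differs significantly from the reference distribution."""
    flagged = []
    for col in features:
        stat, p_value = ks_2samp(reference[col].dropna(), live[col].dropna())
        if p_value < p_threshold:
            flagged.append((col, round(stat, 3)))
    return flagged

# Usage: run on a schedule (e.g., weekly) and alert the owning team.
# drift = drifted_features(train_sample, last_week_of_inputs, ["age", "creatinine", "los_days"])
# if drift:
#     notify_model_owners(drift)  # hypothetical alerting hook
```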

In one recent case, an algorithm intended to minimize racial disparities ended up allocating care disproportionately away from individuals who self-identified as Black. The algorithm relied on total accrued healthcare costs, which were similar across racial lines even though the Black patient population was substantially sicker. Researchers caught this instance, but it’s a clear call for health systems to devote resources to continually monitoring both the inputs and outputs of their AI solutions.
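Monitoring outputs is just as tractable as monitoring inputs. A recurring report of how the model’s recommendations break down by patient group, set against an independent measure of need, would surface a pattern like the one above much earlier. A minimal sketch, with hypothetical column names:

```python
# Minimal sketch: recurring audit of model outputs by patient group.
# Column names ("group", "referred_to_program", "chronic_conditions") are hypothetical.
import pandas as pd

def output_audit(preds: pd.DataFrame, group_col: str = "group") -> pd.DataFrame:
    """Summarize, per group, how often the model recommends extra care
    and how sick those patients are by an independent measure of need."""
    return preds.groupby(group_col).agg(
        patients=("referred_to_program", "size"),
        referral_rate=("referred_to_program", "mean"),
        avg_chronic_conditions=("chronic_conditions", "mean"),
    )

# If one group shows a similar or lower referral rate despite a markedly higher
# burden of chronic conditions, that mismatch is exactly the signal an
# unmonitored system will miss.
```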

Explainable AI

AI solutions are enormously complex, ingesting and analyzing massive amounts of structured and unstructured data. The conclusions they produce can be surprising – sometimes because they do what they’re meant to do exceedingly well, and sometimes because they predict outcomes that seem inexplicable.

Never does a day go by when I don’t have to ask my 7-year-old why he decided to do something completely out of line with the norm. Just the other day, he decided to rush into a toy shop in the mall, leaving his mother and brother frantically searching for him for an hour or so. AI technologies aren’t id-driven like 7-year-olds, but without the proper checks, their conclusions can seem just as capricious.

When evaluating AI technologies for your healthcare organization, you must ensure that the solutions you deploy have explainable AI built in. It must be possible not only to spot outliers but to trace them back to the data inputs. Without this, there will be little or no trust in the predictions the AI model produces.
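“Explainable” can mean anything from a vendor dashboard to model-agnostic attribution methods such as SHAP or LIME. As a rough illustration of what tracing predictions back to their inputs looks like in practice, the sketch below uses scikit-learn’s permutation importance for a global view; the fitted model, validation data, and feature names are assumed to exist and are purely illustrative.

```python
# Minimal sketch: rank which inputs a trained model actually relies on.
# `model`, `X_val`, `y_val`, and FEATURES are assumed to exist; names are illustrative.
from sklearn.inspection import permutation_importance

def explain_global(model, X_val, y_val, feature_names, n_repeats=10, seed=0):
    """Return features sorted by how much shuffling each one hurts performance."""
    result = permutation_importance(model, X_val, y_val,
                                    n_repeats=n_repeats, random_state=seed)
    ranked = sorted(zip(feature_names, result.importances_mean),
                    key=lambda pair: pair[1], reverse=True)
    return ranked

# for name, importance in explain_global(model, X_val, y_val, FEATURES):
#     print(f"{name:30s} {importance:+.4f}")
# If the top-ranked inputs make no clinical sense, that is the outlier worth
# tracing back to the data before anyone trusts the model's predictions.
```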

Risk vs. Reward

While these considerations highlight risks in AI deployment, the potential benefits to patients, providers, and healthcare organizations remain enormous. AI is gaining importance across a wide array of industries and use cases. It matters even more in healthcare because of the critical nature of the services the industry provides.

If you need expertise in developing or implementing an AI solution, resources such as the World Economic Forum’s Empowering AI Leadership: AI C-Suite Toolkit provide an excellent blueprint for executive leadership.

Kamran Ali, AI and Analytics Leader – EMEA at GE Healthcare

Kamran Ali is a guest writer with MDisrupt. With over 15 years of experience deploying enterprise-level IT solutions, he currently leads the Commercial Analytics Team for GE Healthcare’s EMEA region. Certified in both Project Management and Azure AI technologies, Kamran’s passion is helping stakeholders and key opinion leaders understand the nuances of AI deployments, especially in the healthtech space.

At MDisrupt we believe that the most impactful health products should make it to market quickly. We do this by uniting digital health companies with experts from the healthcare industry to help them accelerate their time to market responsibly.

Our expert consultants span the healthcare continuum and can assist with all stages of health product development, including regulatory, clinical studies and evidence generation, payor strategies, commercialization, and channel strategies. If you are building a health product, talk to us.
