Transparent Decision-Making: A Key to A.I. Adoption in Healthcare

A.I. & Digital Tech in Healthcare: A Company Spotlight Series

by Dylan Reid & Robert J. Horne / Published January 18th, 2024

Key Takeaways

  • A.I. healthcare companies with transparent decision-making capabilities are more attractive to lawmakers and the health system than those that operate in a “black box.”

  • Patient safety and medical liability concerns – current roadblocks to A.I. use in the sector – can be addressed by “transparent” products that share the decision-making process with end-users.

  • This article places a “spotlight” on Calmwave, an A.I. healthcare company, to help tell the story.


We have written previously on the benefits of A.I. as a workforce alternative in the healthcare sector. In our opinion, several functional (and policy) issues will determine how much of this vision can be realized.

One issue is the extent to which the A.I. decision-making process is transparent to end-users. Patient safety and medical liability concerns arise when the information or recommendations an A.I. produces are wrong and then acted upon by a patient or by the medical provider caring for them. According to Ophir Ronen, CEO of Calmwave:

“AI, while undoubtedly innovative, operates as a ‘black box,’ lacking transparency in its decision-making processes. This issue is particularly concerning in critical healthcare situations where understanding AI decisions is essential for patient safety and clinician trust. Generative AI can make profound recommendations based on large data sets, but these can sometimes diverge from the training data, leading to trust issues and safety concerns.”

For A.I. to become a medical labor alternative in the healthcare market, it should be able to replicate the behaviors of medical professionals. In this instance, consider A.I. to be the source of a medical ‘second opinion.’

It is common practice for medical professionals and patients alike to seek the reasoning behind a medical recommendation. In fact, wanting to understand the reasoning behind a recommendation is a general human behavior no matter the market. It therefore seems logical that A.I. healthcare companies would want decision-making transparency as part of their products, making those offerings more attractive to consumers.

In a field where mistakes can have significant consequences, transparent A.I. allows healthcare providers to quickly identify errors and incorrect recommendations, ensuring accountability and preventing potential patient harm. Decision-making transparency also matters for the business of healthcare. Consider the legal issues that arise when A.I. fails to work properly. For starters, who is liable? Ronen continues:

“The incorporation of AI in healthcare is not without its challenges. One critical aspect is the importance of maintaining a ‘human in the loop,’ especially in critical care settings. Decisions in such environments have significant and immediate impacts on patient health. While AI offers valuable insights, the ultimate responsibility for decisions lies with the clinician, who must balance AI recommendations with their professional judgment and expertise.

This delineation of responsibility is crucial because the clinician, not the AI system, bears the legal and ethical responsibility for patient outcomes. A mistake could not only harm the patient but also jeopardize the clinician's license and career. Thus, AI in healthcare shines as a tool to augment, not replace, human decision-making, ensuring that patient care remains at the forefront of medical practice.”

Medical mistakes, and the liability that comes with them, can have chilling effects on the use of innovative technologies. Transparent technologies that let medical professionals and patients alike understand why a conclusion was reached can help clarify important liability questions that might otherwise discourage A.I. use.

Further, medical workforce shortages are widespread in healthcare today. Training more doctors and nurses is one way to address the problem, but it is a lengthy (and costly) process. Reducing the administrative and care burden on the existing medical workforce so that it can see more patients is a more feasible approach. Consider the benefits of transparent A.I. from a labor perspective. Alarm fatigue occurs when doctors and nurses become desensitized to the constant medical device alarms found throughout hospitals. This fatigue is becoming more prevalent, and it is a patient safety issue.

Awareness of alarm fatigue has grown alongside declining nurse retention rates driven by burnout. As alarm fatigue spreads across hospital systems, patient treatments are delayed. In short, these scenarios exacerbate the cost of healthcare in the United States because staff cannot readily determine whether a patient is truly experiencing an emergency.

In many instances, A.I. in healthcare is not a replacement for medical professionals but rather an opportunity to do more with less. A.I. healthcare companies that prioritize transparent decision-making can instill confidence among clinicians, and doing so should create more business opportunities for transparent products than for those operating in a ‘black box.’

We believe A.I. with transparent decision-making capabilities will be a key factor in the digital transformation of healthcare. As the healthcare industry continues to evolve, embracing transparent A.I. could be critical for market gains as well as for patient safety, accountability, and the overall effectiveness of A.I. in healthcare.

For more information on this topic, you can read Ophir Ronen’s article in Forbes here.
