Internal Controls: Regulating The ‘Risks’ and Rewards of A.I. in Healthcare

Part 3: Reimagine FDA Regulation for the Digital Age

by Robert Horne, Daniel Maley, and Dylan Reid

“Building Ethics into A.I.” FHL, Copyright 2024

Key Takeaways

  • A.I. developers and regulators should focus on implementing internal controls for A.I., analogous to the ways society trains humans to become responsible members of society.

  • A shared set of internal controls would allow developers and regulators to address many of the risks associated with A.I. proactively and collaboratively.

  • Ethical frameworks provide the means of establishing a set of A.I. internal controls capable of ensuring safe use in healthcare.

Introduction

There is significant public interest in Artificial Intelligence (A.I.) today, prompted by its capacity to replicate and enhance human capabilities. From self-driving cars to personal aides, and notably the transformation of medical service delivery, A.I. stands at the forefront of technological innovation.

Discussions about Artificial Intelligence (A.I.) often highlight its potential risks to humanity, including scenarios of rogue killer robots and existential threats. There's speculation that A.I. could surpass human intelligence, posing significant challenges. On the other hand, the ability of A.I. to enhance patient care and empower consumers underscores its positive potential.

This article examines the real threats A.I. poses in healthcare settings alongside its benefits. It also outlines actionable recommendations for policymakers to mitigate these risks, aiming to balance A.I. integration into healthcare with safety and efficacy.

A.I. Use in Health & Medical Care Settings

There are a few aspects of A.I. use in healthcare that are important to consider.

A.I. can act as a navigation tool for people logging into digital portals from their own homes for a range of digital health products and services, including therapeutics, virtual care, and telehealth. A.I. can connect people to medical providers when their specific needs require the attention of a doctor or nurse.

A.I. can function as a personal assistant for consumers seeking to improve their health and use their health data outside of medical encounters with a physician. These technologies can also act as medical assistants, offering information, advice, and care recommendations to medical professionals. Some A.I. products are even capable of delivering digital versions of medical services to people without a doctor present. Cognitive Behavioral Therapy, the gold-standard treatment for people suffering from PTSD or other mental health conditions, is a good example.

There are many more medical and consumer-centric applications for A.I. in healthcare, too numerous to list. The point is that the technology can deliver an almost limitless range of services.

Understanding the Risks Associated with A.I. in Healthcare

The Food and Drug Administration considers risk for stand-alone software products, like analytics and A.I., differently from risk for software embedded into other medical products, such as medical devices. From a regulatory and safety perspective, the majority of risk associated with stand-alone A.I. concerns the accuracy, relevancy, and quality of the information or recommendations it produces. If the medical information is meant for a “learned intermediary” like a medical doctor or nurse, the risk of bad information or recommendations harming patients is generally lower because the final decision rests with a medical professional. If the medical information is meant for a consumer, and a doctor is not present to make the final judgment, the risk from use is considered much higher.
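To make this distinction concrete, below is a minimal sketch in Python of how a developer might encode the learned-intermediary rule as an internal control. The function, risk tiers, and field names are our own illustrative assumptions, not FDA terminology or policy.

```python
from dataclasses import dataclass
from enum import Enum

class IntendedUser(Enum):
    LEARNED_INTERMEDIARY = "learned_intermediary"  # e.g., a physician or nurse
    CONSUMER = "consumer"                          # no clinician reviews the output

class RiskTier(Enum):
    LOWER = "lower"
    HIGHER = "higher"

@dataclass
class AIOutputContext:
    """Hypothetical description of where an A.I. recommendation lands."""
    intended_user: IntendedUser
    clinician_reviews_output: bool

def assess_risk(ctx: AIOutputContext) -> RiskTier:
    """Illustrative rule only: risk is generally lower when a learned
    intermediary makes the final judgment, and higher when output goes
    straight to a consumer. A sketch, not FDA policy."""
    if (ctx.intended_user is IntendedUser.LEARNED_INTERMEDIARY
            and ctx.clinician_reviews_output):
        return RiskTier.LOWER
    return RiskTier.HIGHER

# Example: a consumer-facing recommendation with no clinician in the loop
ctx = AIOutputContext(IntendedUser.CONSUMER, clinician_reviews_output=False)
print(assess_risk(ctx))  # RiskTier.HIGHER
```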

A.I.'s application in healthcare epitomizes a double-edged sword. It offers groundbreaking possibilities in predictive analytics and personalized medicine, raising hopes for a future where healthcare is both more effective and more personalized³. However, these innovations also pose a number of challenges, such as data privacy and algorithmic bias, necessitating a careful balance between innovation and ethical oversight⁴.

Humans go through a process of training early in life that enables them to self-regulate as adults. People achieve this by developing internal controls that help regulate their behavior, emotions, and thoughts. Many people call these internal controls skills.

We believe that developers and regulators should develop a shared set of internal controls for A.I., analogous to the ways society trains humans to become responsible members of society. Such controls would allow developers and regulators to address many of the risks associated with A.I. proactively and collaboratively, and would provide a foundation upon which both can ensure A.I. behaves responsibly.

Below is a review of some internal controls that can help regulate A.I. performance and mitigate risk to end-users. We plan to update this list regularly as new ideas and ways of thinking become available. We encourage others to do the same.

Ethical Frameworks and Internal Controls

Ethics are the moral principles that govern a person’s behavior or the conduct of an activity. We therefore consider ethics to be the framework for establishing a set of A.I. internal controls. Many different health and ethics bodies underscore the importance of embedding ethical considerations into the core of A.I. in healthcare⁷. We believe that developing A.I. internal controls within a framework of ethical considerations allows ethics to serve as navigational charts, guiding the healthcare sector through the ethical intricacies of A.I. application and ensuring patient welfare and equity are upheld⁸.

For instance, ethical frameworks can ensure issues of bias are dealt with correctly, for everyone’s benefit. They can act as issue-neutral guides for developers and regulators, protecting against personal views or collective illusions that are out of step with the majority of society. This brings all of society’s views to bear on A.I., rather than those of particular constituencies.

The authors are concerned that without ethics to guide the way, anti-bias efforts might result in one form of bias simply replacing another, a move that would severely limit the value of the technology for society. We believe that a proper approach to ethics can solve this problem.

Regulating Bias

Developers, lawmakers, and regulators alike should carefully consider the issue of bias. The authors understand that every person alive is biased. We all form preferences based upon what we learn and experience in life. Preferences for food, friends, and hobbies are all the result of bias. Yet discussions of bias in A.I. often fail to capture these realities. As a consequence, we have witnessed some people suggesting that bias should be built into A.I. products to protect against bias. We find this argument illogical and in need of refinement.

Chapman University provides a resource defining the different types of bias in A.I. It offers a useful exercise in identifying bias that underscores how dynamic the concept of bias is. Some key questions we find compelling for regulating bias: What is the bias? Where is the bias taking place: in the data collection process, training, or use? Does the bias arise from preconceived notions? Does the bias explicitly target someone or something?
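To illustrate how these questions could operate as an internal control, here is a minimal sketch of a bias-audit record that forces each question to be answered and documented. The class, field names, and lifecycle stages are our own illustrative assumptions, not Chapman University’s taxonomy.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class LifecycleStage(Enum):
    """Where the bias is taking place, per the questions above."""
    DATA_COLLECTION = "data_collection"
    TRAINING = "training"
    USE = "use"

@dataclass
class BiasAuditEntry:
    """One documented answer set for the key bias questions."""
    description: str                       # What is the bias?
    stage: LifecycleStage                  # Where is it taking place?
    from_preconceived_notions: bool        # Does it arise from preconceived notions?
    explicit_target: Optional[str] = None  # Does it explicitly target someone or something?

# Example: documenting an underrepresentation issue found during data collection
entry = BiasAuditEntry(
    description="Training data underrepresents patients over 65",
    stage=LifecycleStage.DATA_COLLECTION,
    from_preconceived_notions=False,
)
print(entry)
```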

More to come on this topic in a future article.

Additional Internal Controls Capable of Regulating A.I. Behavior

When people pay others to provide services like home cleaning or tax filing, there are strings attached. Most importantly, the provider must be capable of performing the service. This means competency, but professionalism as well. Humans are raised to self-regulate their own behavior within social and work settings. A.I., as a source of service delivery, should be considered the same way.

1. Emotional Intelligence: Checks on Bad A.I. Behavior. This topic may seem illogical when thinking about medical products, but not when discussing human service delivery. Humans benefit from emotional intelligence as a tool to self-regulate their own behavior. It is a source of checks on bad behavior or on activities inappropriate for some social and work settings. Emotional intelligence is also a source of compassion and empathy for others. As such, it has the potential to act as a deterrent to any killer-robot scenarios one might encounter in healthcare settings. You can find out more about this concept in our recent article on the subject.

2. Simplifying the Complex: Transparency and Understanding A.I. Decision-Making. The quest for transparency in A.I. decision-making parallels the need for clarity in a physician's explanation of a diagnosis to a patient⁵. Techniques that make A.I.'s inner workings transparent are vital for building trust and ensuring accountability, transforming the “black box” of A.I. into a “glass box” that is accessible and understandable to all stakeholders⁶. You can find out more about this concept in our recent article on the subject.

One company demonstrating this capability is CalmWave. The company aims to solve the problem of alarm fatigue in hospitals by shifting the responsibility of monitoring to an A.I. labor force. The technology has the potential to reduce adverse events and save money, but for it to work for its intended use, the doctors and nurses on staff need to understand the decision-making process behind its conclusions (a minimal sketch of this idea appears after this list).

3. Future Directions: Navigating Uncharted Waters. The future of A.I. in healthcare holds immense promise for advances like sophisticated diagnostic tools and robotic surgery. To manage these developments effectively, a flexible regulatory approach is necessary. Continuous dialogue, ethical considerations, and interdisciplinary collaboration are essential to ensure A.I.'s integration into healthcare aligns with ethical and responsible principles. Internal controls play a vital role in maintaining this approach. They facilitate ongoing communication and reporting with regulatory bodies, which is particularly important as A.I.'s applications evolve. These controls can help narrow the gap between A.I.'s advancements and the agencies that regulate its use.
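To picture the “glass box” idea from item 2, here is a minimal sketch of an A.I. recommendation that carries its own human-readable rationale, in the spirit of the alarm-triage example above. The class, thresholds, and triage rule are illustrative assumptions on our part, not CalmWave’s actual design or clinical guidance.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ExplainedRecommendation:
    """An A.I. output packaged with the evidence behind it, so a
    clinician can audit the reasoning rather than trust a black box."""
    recommendation: str
    confidence: float            # model's self-reported confidence, 0.0-1.0
    supporting_factors: List[str]

def triage_alarm(heart_rate: int, spo2: int) -> ExplainedRecommendation:
    """Illustrative alarm-triage rule with a transparent rationale.
    Thresholds are placeholders, not clinical guidance."""
    factors = []
    if heart_rate > 120:
        factors.append(f"heart rate {heart_rate} bpm exceeds 120 bpm threshold")
    if spo2 < 92:
        factors.append(f"SpO2 {spo2}% below 92% threshold")
    if factors:
        return ExplainedRecommendation("escalate alarm to nurse", 0.9, factors)
    return ExplainedRecommendation(
        "suppress non-actionable alarm", 0.8,
        ["all monitored vitals within placeholder thresholds"],
    )

rec = triage_alarm(heart_rate=130, spo2=95)
print(rec.recommendation)            # escalate alarm to nurse
for factor in rec.supporting_factors:  # the "glass box": reasons a clinician can check
    print(" -", factor)
```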

This is just a sampling of the internal controls we think necessary for A.I. A number of very smart people are putting forward other categories that also bear consideration. Our hope is that this robust conversation makes its way to Congress and the federal agencies currently working on pre-market and post-market reforms.

Quantifying the Benefits of Ethical Frameworks & Internal Controls for A.I. in Healthcare

As a means of quantifying the value of internal controls in A.I., consider these two sets of data: the green bars represent Patient Care Enhancement scores, and the red bars denote Data Privacy Measures scores over different quarters, from Q1 2021 to Q3 2022. The annotations directly above the bars show the exact scores for each quarter.

We believe this graph, though based upon a hypothetical, articulates the benefits of ethical frameworks for A.I. and offers a clear and immediate understanding of the improvements in performance over time. Additionally, a key is provided on the bottom right of the graph, explaining the score range and its meaning, making it accessible for anyone to understand the performance index at a glance.
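For readers who want to reproduce a chart like the one described, below is a minimal matplotlib sketch. The quarterly scores are invented placeholders, in keeping with the authors’ note that the graph is hypothetical.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical placeholder scores; the original graph is explicitly illustrative
quarters = ["Q1 2021", "Q2 2021", "Q3 2021", "Q4 2021",
            "Q1 2022", "Q2 2022", "Q3 2022"]
patient_care = [62, 66, 70, 74, 79, 83, 88]  # green bars: Patient Care Enhancement
data_privacy = [58, 63, 68, 72, 76, 81, 86]  # red bars: Data Privacy Measures

x = np.arange(len(quarters))
width = 0.38

fig, ax = plt.subplots(figsize=(10, 5))
care_bars = ax.bar(x - width / 2, patient_care, width,
                   color="green", label="Patient Care Enhancement")
privacy_bars = ax.bar(x + width / 2, data_privacy, width,
                      color="red", label="Data Privacy Measures")

# Annotate the exact score above each bar, as in the described figure
for bars in (care_bars, privacy_bars):
    ax.bar_label(bars, padding=2)

ax.set_xticks(x)
ax.set_xticklabels(quarters)
ax.set_ylabel("Performance index (0-100, higher is better)")
ax.set_title("Hypothetical benefits of ethical frameworks and internal controls")
ax.legend(loc="lower right")  # doubles as the key described in the text
plt.tight_layout()
plt.show()
```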

Conclusion

We have previously written on a performance-based approach the FDA could use to confirm safety and efficacy, in place of the medical device pathway the agency currently uses. Internal controls within the A.I. are key to this regulatory approach. Internal controls are also a way to discuss the risks associated with A.I. use, with the purpose of translating those concerns into functional and regulatory solutions.

We believe that realizing the gains A.I. offers healthcare requires modern regulatory approaches. The Administration and Congress understand this and are moving ahead with various reform efforts, which we will highlight in future articles.

Resources
1. Ethical Dimensions of Using Artificial Intelligence in Health Care: https://journalofethics.ama-assn.org/article/ethical-dimensions-using-artificial-intelligence-health-care/2019-02
2. Artificial Intelligence in Health: Ethical Considerations for Research and Practice: https://www.himss.org/resources/artificial-intelligence-health-ethical-considerations-research-and-practice
3. WHO Guidelines on Ethical Use of AI in Healthcare: https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health
4. Ethics Guidelines for Trustworthy AI: https://ec.europa.eu/futurium/en/ai-alliance-consultation
5. AMA's Policy on AI in Healthcare: https://www.ama-assn.org
6. Legal and Ethical Consideration in Artificial Intelligence in Healthcare: https://www.frontiersin.org/articles/10.3389/fsurg.2022.862322/full
7. The Human Role to Guarantee an Ethical AI in Healthcare: https://link.springer.com/article/10.1007/s43681-023-00353-x
8. Ethical Considerations of AI in Healthcare | BSI America: https://www.bsigroup.com/en-US/blog/Digital-trust-blog/2023/ethical-considerations-of-ai-in-healthcare/
9. Emerging Technologies in Mental Health and Public Health: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8521996/
10. Global Evolution of Research on AI in Health and Medicine: https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-021-00687-3
11. Medical Product Innovation and Regulation: Benefits and Risks: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10492632/
