Reimagining FDA Regulation of A.I. & Service-Delivery Capable Medical Products (FDA Reform for A.I./Digital Part 2)


Introduction

The first article explored the problems with using the FDA’s current safety and effectiveness approach for digital medical products like A.I.

Artificial Intelligence (A.I.) and other digital health products are so functionally unique that applying key aspects of the FDA’s medical device approval process, such as human clinical trials, makes little common or regulatory sense.

This article introduces a new concept for lawmakers and the FDA to consider: a new product category and approval process more in line with the unique functional nature of these products. We understand that some of this thinking may be outside the norm in Washington, D.C., but we hope it aids efforts to update the statutory and regulatory environment for digital medical product approvals.

Please note: This article uses A.I. products as examples of a broader category of digital medical products and services, including digital medicines and therapies, while noting that differences will exist among them.

Medical Service-Delivery Products

Previous Forest Hill Labs articles have explored the unique nature of A.I. as an alternative workforce in the healthcare sector. We think this line of thinking is useful when considering how the FDA should regulate market access for these products, especially A.I. medical products that are intended to change their own functional nature after FDA approval.

A.I. that helps operate FDA-regulated medical devices is in essence part of the medical device, and therefore should be tested and regulated as part of the medical device. The gray areas emerge when the A.I./software/analytics are the products people are purchasing without being embedded in an FDA-regulated medical device.

The Software Act, passed as part of the 21st Century Cures Act in 2016, established stand-alone software as a unique category of medical device for purposes of FDA regulation. The FDA, to its credit, has done a great job of developing and implementing regulatory approaches to digital medical products. The issue, as we see it, is that the form and function of the medical device statutes are not well-suited for some digital medical product categories like A.I.

For instance, current FDA statutes require medical devices that pose a high risk to patient safety if they do not perform as intended to demonstrate safety and effectiveness in human clinical trials. This is because the statutory definition the FDA uses to regulate medical devices assumes these products interact directly with the human body. Stand-alone A.I. products, however, do not. The implications of this difference can help inform the way forward.

Regulating Medical Service Delivery Products Differently

For stand-alone A.I. products, there is an entirely different risk profile, one that centers on the accuracy, appropriateness, and validity of the information and recommendations the A.I. produces. Therefore, testing and approval through human clinical trials are not well-suited to this product category.

Rather, we encourage Congress and the FDA to take a different approach to regulating A.I. and digital health: as sources of healthcare and medical service delivery, similar to human workforces. If the FDA intends to regulate digital health based upon its functional nature, we think the agency should create a new category for A.I. and other digital service-delivery products.

The key regulatory decision the FDA is facing: how to evaluate a product as ‘safe and effective’ when it can change its own functions. The FDA currently regulates only those digital product functions it considers a risk to patient safety. So how does that work for products the FDA knows can change themselves?

These are the same questions regulators face when it comes to medical workforces. Functionally speaking, a product that changes its own behavior based upon its own experiences sounds more like how a medical professional operates than how a medical product does. Pursuing this line of reasoning suggests the way forward for lawmakers is to treat these products like medical professionals for the purposes of reviewing them as medical products.

Medical professionals are currently ‘regulated’ through a process of education and training that precedes licensure. Medical boards test new service-provider candidates before a medical license is granted, and require regular continuing medical education to maintain it. Finally, consumer protection laws, among other post-licensure safety nets, keep watch to ensure quality issues are addressed.

The FDA could take similar steps: a new category of digital medical service-delivery devices with a safety and efficacy approach geared toward testing product performance and reliability, as well as the quality and appropriateness of the information a product produces. The new category could apply both to stand-alone software products and to those embedded in traditional medical devices, depending upon the functional nature of the combined products, in ways similar to how the agency regulates combination products.

Agency approval in this instance would require a set of secondary steps, using existing and, where necessary, new authorities, to set up a post-approval licensure process with continuing data sharing and testing to verify that product performance remains within the guidelines used to approve the product. When the functional nature of a product or product category operates outside these limits, a secondary approval process built off the first could be used to measure the impact on risk to a patient or medical provider.

Conclusion

The main issue facing the FDA, predicting the future safety and effectiveness of medical products that can change their own functional capabilities, isn’t winnable at the current time. Rather, we feel the regulatory question should focus on whether a system is in place that can be counted on to provide appropriate, quality information for a specific medical need.

In such an instance, safe and effective would represent the ability of the product to begin performing service delivery, accompanied by a plan to help nurture and maintain quality as the product matures over time. As the lifetime of an A.I. could extend well beyond the norm for medical devices, an initial approval combined with a post-market program of ongoing meetings and testing could provide the means of “continuous” approval for A.I. products. More on this later.

This approach recognizes the fact that a product can begin service delivery while frequent interactions between the product and its educators and regulators ensure it is performing at the top of its capabilities. Some of this can be accomplished through data sharing; other parts will require more direct interaction. However, shifting the FDA’s regulatory processes from a standard that today is not achievable to one that is would be a positive step in the right direction. At the end of the day, it is still up to the FDA to decide if and when these products should be practicing for an intended use, and when they should not.

Last, we envision licensure bodies outside the government playing a role. These bodies and the agency would be aided by what we imagine will be a system of continued learning and regulatory improvement that develops over time, as the data and experiences from these frequent interactions fuel new insights and best practices that can be used to continually update and otherwise improve the regulatory environment for all parties.
