New FDA Approach for Determining A.I. Safety & Effectiveness
Part 3: Reimagining FDA Regulation for the Digital Age
by Robert James Horne / Published February 6th 2024
Key Takeaways
The functional nature of Artificial Intelligence (A.I.) is more closely aligned with medical service providers than traditional medical devices; the FDA should regulate it accordingly.
Congress should update FDA’s statutes to establish a new regulatory pathway for medical products like A.I. capable of delivering medical services like human workforces.
We predict that Congress will grant the Food and Drug Administration (FDA) new statutory authorities for regulating digital medical products like A.I. in the next agency user-fee reauthorization, expected in 2027.
Introduction
Forest Hill Labs (FHL) considers the functional nature of Artificial Intelligence (A.I.) to be more closely aligned with medical service providers than traditional medical products. We think Congress should allow the FDA to regulate these products similarly.
Congress is required to update the FDA's user-fee programs for product manufacturers in 2027. The main committees with jurisdiction over the FDA, the House Energy & Commerce and Senate HELP Committees, should use the opportunity this mandatory legislation affords to create a new regulatory pathway and modern ways of determining whether A.I. products are safe and effective for use in health and medical care markets. Some of this thinking may be outside convention in D.C., but we hope it encourages a robust conversation about the unique opportunity to update the FDA's approach.
Please note: This article focuses on A.I. products, but the points made also apply, in part or in total, to other categories of digital medical products.
A.I. & Digital Have Unique Medical Product Functions
A.I. and other stand-alone digital medical products, like therapeutics, are so functionally unique that applying key aspects of the FDA's current medical device approval process to them makes little common or regulatory sense. The FDA has commendably adapted current medical device standards for stand-alone medical software products like A.I., but the agency acknowledges that additional statutory authorities are needed to further regulate digital medical products.
The Software Act, passed as part of the 21st Century Cures Act in 2016, established stand-alone software as a unique category of medical device for purposes of FDA regulation. The FDA, to its credit, has done a great job of developing and implementing regulatory approaches to digital medical products. The issue, as we see it, is that the form and function of the medical device statutes are not well-suited for some digital medical product categories like A.I.
Current FDA statutes require some medical devices that pose a high risk to patient safety to demonstrate safety and effectiveness in human clinical trials. The statutory definition the FDA uses to inform this approach assumes that medical devices interact directly with the human body. Stand-alone A.I. products, however, do not connect to the human body. This and other functional differences create risk profiles entirely different from those of traditional medical devices, centering instead on data accuracy, appropriateness, validity, and transparent decision-making processes. Testing and approving these products through human clinical trials is not ideal when performance is the key regulatory question.
New FDA Category for ‘Medical Service Delivery’ Products
If the FDA intends to continue regulating digital health based upon a product’s functional nature, which is advisable, we think the agency should create a new regulatory approval process for service-delivery product functions.
As noted earlier, the key regulatory decision the FDA faces with A.I. is how to evaluate a product as 'safe and effective' when it can change its own functions after approval. These questions are similar to those medical licensure boards face when regulating doctors and nurses. Functionally speaking, a product that changes its own behavior based upon its own experiences sounds more like a medical professional than a medical product. Pursuing this line of reasoning suggests that lawmakers should treat these products like medical professionals for the purposes of reviewing them as medical products.
Medical professionals are currently ‘regulated’ using a process of education and training that precedes licensure. Medical boards test new service provider candidates before a medical license is granted and require continuing medical education regularly to maintain the license. Consumer protection laws, among other post-licensure steps, can ensure that medical workforces continue to increase their proficiency along with the pace of innovation in service delivery.
The FDA could take a similar approach to regulating A.I. products. One option would be to establish a regulatory approval pathway based on whether an A.I. medical product can safely and effectively provide medical services, and for how long. As the lifetime of an A.I. product could extend well beyond the norm for traditional medical devices, reconsidering the concept of product lifetimes may be necessary as well.
Such an approach could be paired with a set of post-market requirements in line with how medical licensure boards and states regulate the practice of medicine. These include continuing medical education, performance retesting, and consumer protections for when the quality of service is deficient. In instances when a product or product category begins to operate well outside the limits used to approve it, a secondary approval process built off the first could be applied. In total, FHL envisions that a continuous regulatory loop of learning, and ongoing improvements in regulatory performance, would result.
Conclusion
Asking the FDA to predict the future safety and effectiveness of medical products that can change their own functional capabilities isn't achievable at the current time. Treating A.I. like traditional medical devices for the purposes of regulation isn't advisable, either.
Reimagining the FDA's current regulatory processes into something achievable and capable of protecting patient safety is the goal. At the end of the day, it is still up to the FDA to determine if, how, and when it is appropriate to use these products for medical care. Congress has the ability to redefine how the FDA satisfies 'safety and effectiveness' for A.I. and other related digital medical products, and it appears the agency would welcome the help.