FDA publishes an Action Plan serving as a roadmap to a regulatory framework governing AI/ML-based products
The Action Plan builds on concepts covered in a discussion paper released in April 2019
On 12 January 2021, the US Food and Drug Administration (FDA) published a five-part action plan setting out short-term actions to regulate products that incorporate artificial intelligence and/or machine learning (AI/ML). This 'Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device Action Plan' (the Action Plan) was released by the Digital Health Centre of Excellence (DCE), which launched on 22 September 2020 and sits within the FDA's Centre for Devices and Radiological Health. The DCE's aim is to further the FDA's overarching commitment to the advancement of digital health technology.
In April 2019, the FDA released a discussion paper on AI/ML-based software devices titled 'Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device (SaMD).' The FDA took stakeholder feedback on the April 2019 paper into consideration in developing the Action Plan. The April 2019 paper remains open for comments, so manufacturers of relevant devices can still provide feedback to the FDA on its AI/ML policy proposals before the FDA finalises its regulatory framework.
The Action Plan is divided into five main sections, which are briefly discussed in order below:
1) Tailored Regulatory Framework for AI/ML-based SaMDs:
The April 2019 paper proposed a framework for modifications to AI/ML-based SaMDs built on the principle of a 'Predetermined Change Control Plan.' The plan would set out the types of anticipated modifications and the associated methodology used to implement those changes in a controlled manner. This would enable the FDA and manufacturers to monitor a product from premarket development through to post-market performance. The Action Plan states that the FDA will develop this framework further and publish a draft guidance document on predetermined change control plans later this year.
2) Good Machine Learning Practice (GMLP):
There was consensus among the comments on the April 2019 paper on the importance of Good Machine Learning Practice (GMLP). GMLP refers to AI/ML best practices analogous to good quality system practices or good software engineering practices. Such a protocol would cover data management, validation, documentation, and algorithm training, among other areas. As per the Action Plan, the FDA, in collaboration with its Medical Device Cybersecurity Program, will support the development of GMLP to evaluate and improve AI/ML algorithms. The FDA will also continue to participate in global working groups focused on furthering and harmonising GMLP principles.
3) Patient-Centred Approach to Incorporating Transparency to Users:
In the Action Plan, the FDA committed to holding a public workshop on how device labelling supports transparency for users and fosters trust in AI/ML-based SaMDs. The FDA intends to gather input from stakeholders to enhance end-user transparency and, notably, will seek input on the sort of information manufacturers should include in the labelling of AI/ML-based SaMDs. The aim is to help end users better understand the benefits and risks of an AI/ML-based SaMD.
4) Regulatory Science Methods Related to Algorithm Bias & Robustness:
A significant portion of the comments on the April 2019 paper addressed the need for improved methods to evaluate and address algorithmic bias. Concerns about bias in AI/ML systems stem from the historical data sets used to train the AI/ML algorithms. As per the Action Plan, the FDA will continue working with its research partners to develop methods and a scientific methodology to identify, evaluate, and address bias in AI/ML algorithms. These efforts include collaborating with researchers at Stanford University, the University of California San Francisco, and Johns Hopkins University.
5) Real-World Performance:
The April 2019 paper expounded on the role of real-world data in monitoring the safety and effectiveness of AI/ML-based SaMDs. It stated that, to fully adopt a total product lifecycle approach to the oversight of AI/ML-based SaMDs, modifications could be supported by collecting and monitoring real-world data. The FDA can then leverage this data to stay informed of product changes and to evaluate algorithm behaviour. The Action Plan stated that the FDA, working with stakeholders on a voluntary basis, will pilot a programme aimed at developing a framework for gathering and validating real-world metrics and parameters for AI/ML devices. No timeline has yet been provided for this pilot programme.
Developers might feel that the Action Plan is only moderate in its approach, the only concrete commitment being to publish a draft guidance on predetermined change control plans. More importantly, however, the FDA has opened itself up to continuous engagement and discussion. Developers can treat this as an opportunity to influence the FDA's thinking and to help steer key concepts into an eventual comprehensive framework.
AI/ML-based SaMD is a rapidly progressing and growing field, so it is only reasonable to expect the Action Plan to evolve over time. The Action Plan proposes a route towards a regulatory framework; an operational framework, however, still appears to be some way down the road.