
FDA releases ‘guiding principles’ for AI/ML device development

The U.S. Food and Drug Administration released a list of “guiding principles” this week aimed at helping promote the safe and effective development of medical devices that use artificial intelligence and machine learning.  

The FDA, along with its U.K. and Canadian counterparts, said the principles are intended to lay the foundation for Good Machine Learning Practice.  

“As the AI/ML medical device field evolves, so too must GMLP best practice and consensus standards,” said the agency regarding the principles.  

WHY IT MATTERS  

As the FDA notes, AI and ML technologies have the potential to radically expand the healthcare industry – but their complexity also presents unique considerations.  

The 10 guiding principles identify points at which international standards organizations and other collaborative bodies, including the International Medical Device Regulators Forum, could work to advance GMLP.  

The agency says stakeholders can use the principles to adopt and tailor good practices from other sectors for use in health technology, as well as to create new practices specific to the field.  

The principles are:  

  1. The total product life cycle uses multidisciplinary expertise.
  2. The model design is implemented with good software engineering and security practices.
  3. Participants and data sets represent the intended patient population.
  4. Training data sets are independent of test sets.
  5. Selected reference data sets are based upon best available methods.
  6. Model design is tailored to the available data and reflects intended device use.
  7. Focus is placed on the performance of the human-AI team.
  8. Testing demonstrates device performance during clinically relevant conditions.
  9. Users are provided clear, essential information.
  10. Deployed models are monitored for performance, and retraining risks are managed. 

“Areas of [international] collaboration include research, creating educational tools and resources, international harmonization, and consensus standards, which may help inform regulatory policies and regulatory guidelines,” said FDA officials.

THE LARGER TREND  

The FDA has weighed in on the development and oversight of AI- and ML-driven health tools several times over the past few years, especially where bias is concerned.  

At a virtual meeting of the agency’s Center for Devices and Radiological Health and Patient Engagement Advisory Committee last October, Bakul Patel, director of the Digital Health Center of Excellence, emphasized the importance of balancing innovation with patient protection.  

“We all know there are some constraints, because of location or the amount of information available, about the cleanliness of the data,” said Patel at the time. “That might drive inherent bias. We don’t want to set up a system where we figure out, after the product is out in the market, that it is missing a certain type of population, or demographic, or other aspect that we have accidentally not realized.”  

ON THE RECORD  

“Strong partnerships with our international public health partners will be crucial if we are to empower stakeholders to advance responsible innovations in this area,” said FDA officials as they unveiled the new principles. “Thus, we expect this initial collaborative work can inform our broader international engagements, including with the IMDRF.”


Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: [email protected]
Healthcare IT News is a HIMSS Media publication.


