The Federal Health Technology Regulatory Authority finalized new rules on Wednesday forcing software vendors to disclose how their artificial intelligence tools are trained, developed and tested, a move intended to protect patients from biased and harmful decisions about their care.
The rules are intended to put guardrails around a new generation of AI models that are being rapidly deployed in hospitals and clinics across the country. These tools are designed to help predict health risks and emerging medical problems, but little is known about their validity, reliability and fairness.
Starting in 2025, electronic health record vendors that develop or supply these tools, which increasingly rely on a form of AI known as machine learning, will be required to make more disclosures, providing clinical users with technical information about performance and testing as well as the steps taken to manage potential risks.