Interest in artificial intelligence (“AI”)-based technology solutions has increased in recent months. AI has its roots in neural networks dating back to the 1940s, but new technologies such as generative AI models like the Generative Pre-Trained Transformer (“GPT”)-3 have recently captured the attention of industry and regulators.
There is also growing interest in AI-based solutions in the healthcare sector, driven by their potential to dramatically transform healthcare reimbursement and delivery systems and accelerate innovation in healthcare. In today’s healthcare industry, predictive models are an increasingly used and trusted tool, informing and assisting decision-making through clinical decision support (“CDS”) for a variety of decision-makers, including clinicians, payers, researchers, and individuals. Certified health IT is often a key component and data source for these predictive models, providing the data used to build and train the algorithms and acting as a vehicle to influence day-to-day decision making.
The growing interest in and use of AI has also raised new concerns. In response, there have been bipartisan efforts within the federal government to maximize the benefits of AI while addressing the risks posed by the development and use of predictive models and AI, including efforts to promote transparency and notice, ensure fairness and non-discrimination, and protect the privacy and security of health information.
A flurry of early-stage regulatory activity in late 2022 and early 2023 suggests that the development of an AI-specific regulatory framework is in its infancy. Understanding this evolving regulatory model is important for organizations considering the use of AI technology, or those that may be impacted by AI technology. These efforts include a “blueprint” for AI regulatory policy by the White House, a request for comment by the Department of Commerce, and proposed regulations by the Department of Health and Human Services (“HHS”). As government agencies actively seek comment on these policies, potentially affected groups, including healthcare providers, innovators, payers, and advisors, have an important opportunity to shape the future of AI policy in healthcare.
White House Blueprint for AI Bill of Rights
In October 2022, the White House Office of Science and Technology Policy (“OSTP”) released a “blueprint” document containing a proposed framework for a so-called “Bill of Rights” for the use and regulation of AI technology (available here). While the blueprint document has limited legal status, it sets out key principles that we believe will reflect the White House’s approach to future binding regulation on AI.
The blueprint illustrates the tension between promoting useful applications of AI and limiting foreseeable harms. On the one hand, OSTP is critical of certain AI tools in the healthcare sector, warning that AI can “limit our opportunities and hinder our access to critical resources and services” and that systems intended to help with patient care have proven unsafe, ineffective, or biased. On the other hand, OSTP notes that, so long as such advances do not undermine the fundamental American principles of civil rights and democratic values, these tools have the potential to “redefine every part of society” and improve the lives of all.
To achieve this balance, OSTP identifies five guiding principles for AI development: 1) safe and effective systems, including requirements for external consultation, appropriate testing, risk identification and mitigation, and ongoing monitoring and oversight; 2) algorithmic discrimination protections, including non-discrimination based on protected classes and use of appropriately robust data; 3) data privacy requirements, such as disclosure, appropriate consent, security, and oversight standards; 4) notice and explanation standards, such as documentation, explanations of automated systems, and reporting requirements; and 5) human alternatives, consideration, and fallback, including rules defining the human role in AI, process alternatives, consideration of issues and complaints, and governance standards such as rules for overriding AI.
The blueprint features health use cases prominently. For example, health data is treated as a “sensitive domain” subject to heightened regulatory concern. Many of the blueprint’s problematic examples involve situations in which AI technology is used to deny coverage, limit care, or deliver care in a suboptimal (and often discriminatory) manner. The blueprint will likely serve as a reference point when regulations are formulated for the healthcare sector.
Commerce Request for Comment on AI Accountability
On April 13, 2023, the Department of Commerce’s National Telecommunications and Information Administration (“NTIA”) published a formal Notice and Request for Comment (“RFC”) in the Federal Register regarding potential directions for AI regulation (RFC available here). Specifically, the NTIA requested information on “self-regulatory, regulatory, and other measures and policies” aimed at providing assurance that AI systems are legal, effective, ethical, safe, and otherwise trustworthy. The RFC will inform the NTIA’s preparation of a formal report on AI accountability policy, which in turn may influence regulatory development.
The RFC cites the Blueprint and specifically solicits comments on voluntary and mandatory policy tools to mitigate the various harms identified in the Blueprint. The accountability measures contemplated include internal and external audits or assessments, governance policies, documentation standards, reporting requirements, and testing and evaluation standards, and the NTIA raises questions regarding review, the use of sensitive data, and timing requirements within the AI lifecycle. In all, the NTIA asks over 30 specific questions regarding the current regulatory landscape, types of AI technologies, strengths and weaknesses of existing AI oversight mechanisms, and potential impacts of specific regulatory approaches. Of particular interest to healthcare providers, the NTIA specifically seeks information on whether AI accountability mechanisms can effectively address systemic and/or collective risks of harm, including harms related to worker health and health disparities.
The RFC represents an early stage of policy development, but it is significant because it reflects the ongoing influence of the Blueprint. The RFC also provides AI developers and users with an important opportunity to ensure that their perspectives are reflected in the early stages of AI policy development.
Office of the National Coordinator Proposed HTI-1 Rule
On April 11, 2023, the HHS Office of the National Coordinator for Health Information Technology (“ONC”) released a proposed rule titled “Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing” (the “HTI-1 Proposed Rule”), available here. The HTI-1 Proposed Rule implements provisions of the 21st Century Cures Act and incorporates guidance from the White House Blueprint, the Biden-Harris Administration’s Executive Order “Ensuring a Data-Driven Response to COVID-19 and Future High-Consequence Public Health Threats,” and guidance promoting racial and other forms of health equity. The HTI-1 Proposed Rule is intended to advance interoperability and improve transparency and trust in the use of predictive decision support interventions and electronic health information.
One of the key provisions of the proposed rule addresses the use of AI for clinical decision support (“CDS”). Developers are currently required to comply with CDS certification criteria in order to offer certified health IT, including Certified Electronic Health Record Technology (“CEHRT”). Providers using CEHRT are eligible for additional funding and/or avoidance of penalties under government payment programs. Under the HTI-1 Proposed Rule, developers would have to meet new “Decision Support Intervention” (“DSI”) certification criteria in order to obtain certification, including CEHRT status. To promote transparency, the criteria would require that a certified health IT module enabling or interfacing with a predictive DSI allow a user to view information about the source attributes used in the DSI. By ensuring that users are aware when health equity-related data, such as race, ethnicity, and social determinants of health, are used in a DSI, the criteria would also promote recognition of how such data influence the DSI.
The certification criteria would also require developers to employ intervention risk management practices for predictive DSIs with which their modules interface. These risk management practices include risk analysis, risk mitigation, and governance. Developers must maintain detailed documentation of their risk management practices and provide such documentation to ONC upon request. Developers would also be expected to make their risk management practices publicly available via easily accessible hyperlinks. Under the proposed rule, developers would have until December 31, 2024 to comply with these criteria.
ONC observes that predictive DSIs can promote positive outcomes and avoid harm when they are “FAVES,” that is, fair, appropriate, valid, effective, and safe. ONC does not propose to establish or define regulatory baselines, measurements, or thresholds for FAVES for predictive DSIs; rather, it proposes requirements for information that would allow users to judge for themselves whether the predictive DSIs with which their health IT modules interface are fair, appropriate, valid, effective, and safe.
Because the proposed rule would amend CEHRT requirements, it would affect not only developers of health IT modules seeking to obtain or maintain certification, but also healthcare providers who use and rely on such technology to deliver healthcare services and receive reimbursement for those services. ONC will accept public comments on the proposed rule through June 20.