Biden’s October 30, 2023, Executive Order on AI: Key Takeaways for Health Care Stakeholders | Polsinelli



Interest in artificial intelligence (“AI”) has skyrocketed over the past year with the advent of generative machine learning models such as ChatGPT. This growing interest extends to the healthcare industry, where AI has the potential to dramatically transform healthcare reimbursement and delivery systems and accelerate healthcare innovation. Nevertheless, the use of AI is not without concerns, and its risks must be carefully considered and balanced. Amid growing questions about the safety of the technology, there is a bipartisan effort to help federal agencies optimize the development and use of AI while addressing its inherent risks. These efforts include promoting transparency and notification, ensuring fair and nondiscriminatory practices, and protecting the privacy and security of health information. We have discussed these efforts previously here.

To advance these efforts, on October 30, 2023, the Biden administration issued its long-awaited Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “EO”). The EO aims to foster AI innovation while protecting against potentially harmful consequences, and it specifically addresses risks associated with the development and use of AI in healthcare. The EO seeks to enable the secure deployment of AI-enabled technologies in healthcare delivery, including quality measurement, performance improvement, program integrity, benefits administration, and patient experience, while noting that all such uses must incorporate appropriate levels of human oversight.

The use of AI in healthcare is already well underway, and stakeholders are eager to understand how to use it effectively while adhering to guardrails that may be mandated by state and federal legislatures in the near future. Below are key takeaways and key timelines from the EO that healthcare professionals should know. These present opportunities for stakeholder engagement on AI issues that may impact the healthcare sector.

Key Takeaways

  • Section 8 of the EO focuses on risks and developments related to AI in the healthcare industry. Specifically, it covers key areas such as the use of AI in drug development, predictive/diagnostic AI use cases, safety, healthcare delivery and financing, and documentation and reporting requirements.
  • The EO directs HHS to create an AI task force, an AI assurance policy, and an AI safety program. As a result of these directives, HHS may end up collaborating with several other key agencies, including the Office of the National Coordinator for Health Information Technology (ONC), the Centers for Medicare & Medicaid Services (CMS), and the Office for Civil Rights (OCR).[1]
  • The EO does not currently impact regulations, rules, or reporting requirements. The EO itself does not set out any legal requirements related to the use of AI in healthcare or other fields. However, it sets the stage for new rules and regulations to be created in the near future, particularly related to healthcare.
  • Long-standing privacy and security rules continue to apply to the use of AI in healthcare. Companies must continue to comply with existing privacy and security laws, including HIPAA, the HITECH Act, and the Federal Trade Commission (FTC) Act, whenever their AI systems process, create, receive, transmit, or maintain protected health information (PHI) or personally identifiable information (PII).

Key Timelines

Health-related frameworks and safety mechanisms must be established according to the following timelines, running from October 30, 2023, the date the EO was issued:

  • Within 60 days – HHS must appoint a Chief AI Officer who will be responsible for driving AI innovation within HHS while managing the risks associated with the use of AI.
  • Within 90 days – HHS must establish an AI Task Force to establish a framework for the “responsible use” of AI in healthcare.
  • Within 180 days – HHS shall direct its constituent agencies to develop strategies to determine whether AI-enabled technologies comply with the strategic plan set forth in Section 8(b)(i) of the EO (summarized in Appendix A of this alert).
  • Within 180 days – HHS must establish an AI assurance policy that incorporates premarket evaluation and postmarket oversight of AI-enabled healthcare technologies.
  • Within 180 days – HHS must “consider appropriate measures” to ensure that health care providers receiving federal financial assistance comply with federal antidiscrimination laws.
  • Within 365 days – HHS must establish a strategy to regulate the use of AI in drug development.
  • Within 365 days – HHS, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, must establish an AI safety program in partnership with federally listed patient safety organizations.
  • The EO also requires HHS to create incentives under its grant-making authority to promote/encourage responsible AI development and use. This may include collaboration with the private sector. The EO does not specify a deadline for this activity.

Although the EO does not create new legal requirements, it does direct HHS and its constituent agencies to take certain regulatory actions. Stakeholders should therefore expect more specific rules and guidance from government agencies in the near future. At this stage, the EO asks HHS to focus on procedural requirements for the use of AI (such as developing security systems, safety frameworks, and documentation methods) rather than substantive rules (such as prohibiting AI implementation in certain use cases).

Implementation guidance from the Office of Management and Budget (OMB): Shortly after the EO was announced, OMB released a proposed memorandum for heads of executive departments and agencies (the “OMB Memo”). If finalized, the OMB Memo would require most federal agencies to appoint a chief AI officer and develop an AI strategy that meets specific requirements. Agencies would also be required to submit compliance plans ensuring their operations are consistent with the OMB Memo and to publish an annual AI use case inventory. The OMB Memo emphasizes that agencies must ensure their IT infrastructure, data management systems, workforce, and cybersecurity platforms are sufficiently robust to support AI applications. Finally, agencies must end the use of noncompliant AI applications by August 1, 2024. OMB is accepting comments on the draft here until December 5, 2023.

The OMB Memo covers use cases involving medical transportation, the delivery of biological or chemical agents, prescription drug-related activities, decisions regarding the use of medical devices, clinical diagnostic tools, health risk assessments, and interventions related to mental health care. For these use cases, agencies must implement safeguards such as formal AI impact assessments, real-world testing, independent evaluation of AI tools, and processes for continuous monitoring and mitigation of risks. For use cases that impact rights, the OMB Memo also provides standards for detecting algorithmic bias, addressing disparate impacts, creating representative datasets, and obtaining feedback from affected groups. Stakeholders should review these lists carefully, while noting that they are not exhaustive and do not account for future or novel AI use cases.

In line with the Biden administration's EO and the corresponding OMB Memo, the FDA has also been particularly active in paving the way for the regulation and development of AI. For example, in May 2023 the FDA released a discussion paper on the use of AI in drug development. The FDA has also published information about regulating AI products as medical devices.

Industry participants should closely monitor regulatory and policy frameworks, updates, and guidance in the coming months, particularly from HHS and the federal agencies operating under it.

Appendix A

Section 8(b)(i) of the EO provides that the HHS AI Task Force shall develop a strategic plan for the responsible deployment of AI that covers the following areas:

  • Developing, maintaining, and using predictive and generative AI-enabled technologies in healthcare delivery and financing
  • Long-term safety monitoring of AI technologies in the medical field
  • Incorporating principles of equity to combat bias and unwanted discrimination
  • Incorporating safety, privacy, and security standards to protect PII
  • Documentation to help users determine whether they can safely use AI in local settings
  • Plans to advance forward-looking use cases and promote best practices in state, local, Tribal, and territorial settings
  • Identifying ways to use AI to drive workplace efficiency and satisfaction

[1] See Appendix A for a list of areas that the EO requires the HHS AI Task Force to cover in its AI Strategic Plan.
