On December 17, 2024, the House Task Force on Artificial Intelligence (Task Force) released a highly anticipated report titled “Bipartisan House Task Force Report on Artificial Intelligence” (the Report), which establishes guiding principles and issues recommendations to guide U.S. innovation in artificial intelligence (AI), including in the healthcare sector. The Report is intended to serve as a blueprint for Members of Congress as they conduct oversight and introduce legislation to address advances in AI technologies, including the regulation of health-specific AI applications.

I.  Overview

In February 2024, Speaker Mike Johnson (R-LA) and Democratic Leader Hakeem Jeffries (D-NY) created the bipartisan Task Force to explore Congress’s role in encouraging U.S. leadership in AI innovation and providing guardrails against current and emerging threats. The Task Force was led by co-chairs Jay Obernolte (R-CA) and Ted Lieu (D-CA) and had twenty-four members, equally divided between Democrats and Republicans.

Over the course of several months, the Task Force interviewed hundreds of AI experts to compile its findings. The Report comprises 15 chapters and includes guiding principles, 66 key findings, and 89 recommendations. It states that when considering new policies, Congress should adopt an approach that allows it to respond appropriately and in a targeted manner that considers all available evidence. In the Report, the Task Force adopts the following high-level principles to frame policymaking.

  • Identify AI Issue Novelty
  • Promote AI Innovation
  • Protect Against AI Risks and Harms
  • Empower Government with AI
  • Affirm the use of a Sectoral Regulatory Structure
  • Take an Incremental Approach
  • Keep Humans at the Center of AI Policy

II.  Healthcare Key Findings and Recommendations

The Report’s section on healthcare focuses on the potential of AI technologies to improve healthcare research, diagnosis, and care delivery. It presents evidence-based findings and discusses previous agency work and regulations to address certain AI issues, including AI adoption in the healthcare system; health insurance decisions; and policy challenges confronting AI adoption in healthcare (i.e., data issues, transparency, bias, privacy and cybersecurity, interoperability, and liability). It concludes by identifying key findings and issuing recommendations.

A. AI Adoption in the Healthcare System

The Report addresses the use of AI, including machine learning (ML) and generative AI, in drug development and discusses previous U.S. Food and Drug Administration (FDA) policy papers on the use of AI in drug development and manufacturing. It discusses the use of AI in biomedical research, diagnostics, population health management, and the development of medical devices and software. It also highlights AI-enabled tools included in electronic health records (EHRs), namely clinical decision support tools and administrative tools, which may be used to alleviate workforce burden and burnout.

B. Health Insurance Decisions

The Report discusses issues related to coverage and reimbursement of AI-provided services and devices, in addition to the use of AI tools in the health insurance industry. Certain AI-provided services and devices are covered by the Centers for Medicare & Medicaid Services (CMS) under the Medicare program. The Report states that as more evidence is developed regarding applying certain tools in healthcare settings, particularly among Medicare populations, further evaluation of current CMS payment systems will be necessary. In addition, the Report raises concerns about a lack of transparency in coverage decisions when insurers implement AI tools in insurance decision making. Specifically, there are concerns that AI tools producing inaccurate or biased results could lead to unnecessary denials and limit access to necessary treatments.

C. Policy Challenges Confronting AI Adoption in Healthcare

Data Availability, Utility, and Quality: Because AI models are developed from large data sets, often using non-standardized data from EHRs, integration issues may prevent data sets from being representative of population groups. The Report identifies several issues related to using data to develop AI models, including the use of de-identified data and the need to ensure that data sets are not biased.

Transparency: The Report expresses concerns surrounding a potential lack of transparency in AI decision making, which can have far-reaching implications if AI tools are used in patient care. It states that medical professionals may also lack the training to understand whether an error occurred in the AI decision-making process.

Bias: The Report states that there is a potential for bias when using large data sets to develop and train AI models. It states that AI in health care could benefit from standards and evaluations to detect and mitigate biased and erroneous outputs by these systems.

Privacy and Cybersecurity: The Report states that because AI tools require large amounts of data, there are concerns among providers, health systems, and patients about risks to patients’ data privacy. It also cites concern about cybersecurity issues and mentions recent cyberattacks against the health sector, including the Change Healthcare attack. The Report states that the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and its implementing regulations, in addition to state laws, serve to protect health data. It further states that the HIPAA regulations may need to be updated to meet challenges created by AI systems deployed in health contexts.

Interoperability: The Report acknowledges that AI-enabled tools must be able to integrate with existing healthcare systems, including EHR systems. Integrating AI tools may cause additional challenges as health systems and vendors work to facilitate information exchange. 

Liability: The Report states that there is limited legal and ethical guidance regarding accountability when AI produces incorrect diagnoses or harmful recommendations. Determining liability becomes complex in part because multiple parties are involved in developing and deploying an AI system. The Report mentions the Department of Health and Human Services (HHS) Office for Civil Rights (OCR) Section 1557 Nondiscrimination Rule, which aims to prevent discrimination in the use of patient care decision support tools, including AI tools. It states that the regulation’s requirements for providers “places the responsibility for some AI-related actions on healthcare providers rather than AI developers.”

D. Key Findings

The Report includes the following healthcare key findings.

  • AI’s use in healthcare can potentially reduce administrative burdens and speed up drug development and clinical diagnosis.
  • The lack of ubiquitous, uniform standards for medical data and algorithms impedes system interoperability and data sharing.

E. Recommendations

The Report issues the following healthcare recommendations.

  • Encourage the practices needed to ensure AI in healthcare is safe, transparent, and effective: This recommendation states that policymakers should promote collaboration among stakeholders to develop and adopt AI tools in health care. Policymakers could develop or expand high-quality data access mechanisms that protect patients’ health data. This could include voluntary standards for collecting and sharing data, creating data commons, and using incentives to encourage data sharing of high-quality data held by public or private actors. The Report also recommends that Congress should continue to monitor the use of predictive technologies to approve or deny care and coverage and conduct oversight accordingly.
  • Maintain robust support for healthcare research related to AI: The Report recommends support for funding research through the National Institutes of Health (NIH).
  • Create incentives and guidance to encourage risk management of AI technologies in healthcare across various deployment conditions to support AI adoption and improve privacy, enhance security, and prevent disparate health outcomes: This recommendation states that stakeholders would benefit from standardized testing and voluntary guidelines that support the evaluation of AI technologies, promote interoperability and data quality, and help covered entities meet their legal requirements under HIPAA. Specifically, it recommends using de-identification techniques and privacy-enhancing technologies to protect patient privacy. Furthermore, it states that Congress should explore whether current laws and regulations need to be enhanced to ensure that the FDA’s post-market evaluation process ensures that AI technologies in healthcare are continually monitored for safety, efficacy, and reliability.
  • Support the development of standards for liability related to AI issues: This recommendation states that Congress should examine liability laws to ensure patients are protected in the event that AI models produce incorrect diagnoses or make erroneous and harmful recommendations.
  • Support appropriate payment mechanisms without stifling innovation: This recommendation states that Congress should continue to evaluate emerging technologies to ensure Medicare benefits adequately recognize appropriate AI-related medical technologies.

III. Takeaways

The Report represents consensus among at least some bipartisan members of Congress and identifies areas of congressional interest for potential legislation. Stakeholders should review the Report and evaluate how their organizations’ products or services are addressed by the Task Force. Because future legislation may change regulatory compliance requirements, stakeholders should also continue to monitor policy developments and consider developing a federal engagement strategy with Members of Congress and other policymakers.

For more information, please contact the professionals listed below, or your regular Crowell contact.

Jodi G. Daniel

Jodi Daniel is a partner in Crowell & Moring’s Health Care Group and a member of the group’s Steering Committee. She is also a director at C&M International (CMI), an international policy and regulatory affairs consulting firm affiliated with Crowell & Moring. She leads the firm’s Digital Health Practice and provides strategic, legal, and policy advice to all types of health care and technology clients navigating the dynamic regulatory environment related to technology in the health care sector to help them achieve their business goals. Jodi is a contributor to the Uniform Law Commission Telehealth Committee, which drafts and proposes uniform state laws related to telehealth services, including the definition of telehealth, formation of the doctor-patient relationship via telehealth, creation of a registry for out-of-state physicians, insurance coverage and payment parity, and administrative barriers to entity formation.

Stephen Holland

Stephen Holland is a senior counsel in Crowell & Moring’s Government Affairs Group, where he leverages his extensive experience advising members of Congress and their staff as a policy advisor and attorney active in health care legislation. Stephen has been responsible for crafting dozens of provisions in law to improve food, drug, and medical device innovation and regulation at the Food and Drug Administration (FDA), health coverage and access, public health communication and coordination, prescription drug affordability, and emergency preparedness and response.

Prior to joining Crowell, Stephen served in senior policy roles in the U.S. House of Representatives for over 10 years. Most recently, Stephen spent five years on the Energy and Commerce Committee staff under the leadership of Ranking Member and former Chairman Frank Pallone of New Jersey.  On the Committee staff, he was responsible for legislative action related to numerous agencies and programs, including the FDA, the Biomedical Advanced Research and Development Authority (BARDA), and the 340B drug program. Notably, his work on the Committee included leading negotiations and drafting of the Food and Drug Omnibus Reform Act of 2022 (FDORA), a package of more than 50 policies to expand research, development, and innovation for drugs, medical devices, and personal care products. During the COVID-19 response, Stephen worked to secure billions of dollars for research, development, distribution, and promotion of vaccines, treatment, and diagnostic tests in the CARES Act, the Fiscal Year 2021 Omnibus, and the American Rescue Plan Act.

Allison Kwon

Allison Kwon supports Crowell Health Solutions, a strategic consulting firm affiliated with Crowell & Moring, to help clients pursue and deliver innovative alternatives to the traditional approaches of providing and paying for health care, including through digital health, health equity, and value-based health care. She is a health care policy consultant in the Washington, D.C. office.