Rapid developments and competition in artificial intelligence (AI) will drive proliferation of new AI technologies in health care in the coming years, along with a number of legal and ethical issues.
ChatGPT 3.5 created a huge splash, rife with controversy, when it was released in November 2022. Launched by the San Francisco-based startup OpenAI, ChatGPT is a natural language processing (NLP) model, a type of machine learning (ML) that automatically learns and recognizes patterns. ChatGPT uses a neural network architecture to generate human-sounding responses to questions, providing users with large amounts of potentially useful information in seconds. According to a recent review, ChatGPT demonstrated that it was capable of passing all three parts of the U.S. Medical Licensing Exam (USMLE), which tests medical students on topics including the basic sciences, clinical knowledge, and patient treatment and diagnosis, without any specialized training. ChatGPT also showed proficiency in medical charting, diagnosing, and performing nonclinical tasks. OpenAI recently launched GPT-4, which offers expanded capabilities and improved performance on various professional and academic assessments.
Companies are leveraging this technology in health care for myriad tasks, including entering information from conversations between clinicians and patients directly into electronic health record systems and creating chatbot tools that help clinicians streamline a variety of time-consuming administrative tasks, such as medical charting, drafting letters to colleagues and patients, and faxing preauthorization and appeal letters to insurers. Recently, we have seen efforts to incorporate ChatGPT into electronic health records. Chatbots are also being used in an effort to support mental health needs.
Chatbots are the most recent iteration of AI/ML technology in health care; however, AI/ML technology has been used in the health care space for a number of years to support diagnosis, treatment, and patient monitoring. Medical device manufacturers continue to incorporate AI/ML functionalities in medical devices, and pharmaceutical companies use AI technologies to advance pharmaceutical development and improve design techniques.
The promise of AI/ML is tremendous. However, the application of these technologies in medicine and health care raises significant concerns in law, policy, and ethics. For example:
- When is it appropriate for health care providers to rely on AI/ML, if at all?
- When does a technology cross the line into the practice of medicine under state law, or trigger regulation by the Food and Drug Administration (FDA)?
- Who is liable if the technology “makes an error” or produces output based on biased information?
- Should there be standards or validation for AI/ML tools that are not subject to FDA regulation?
- What kind of consent and personal data privacy protections should be in place to ensure adequate protection of consumers?
- What personal data can or should the AI/ML be able to use in its algorithms?
- Who should have access to AI/ML technology, especially if it can be life-saving, and how can these technologies be used and accessed equitably?
The U.S. federal government has taken initial steps to anticipate consumer needs and ethical concerns related to AI and health care. In 2020, Congress established the National Artificial Intelligence Initiative (NAII), which was created to ensure continued U.S. leadership in AI research and development and provide for a coordinated program across the entire federal government to accelerate AI research and application for U.S. economic prosperity and national security. In December 2020, the White House issued Executive Order 13960, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” to outline nine principles that agencies must follow when designing and using AI in the federal government. In response, the U.S. Department of Health and Human Services (HHS) Office of the Chief Artificial Intelligence Officer published the Trustworthy AI Playbook in September 2021 to help agencies meet requirements outlined in the Executive Order. The HHS Trustworthy AI Playbook focuses on the design, development, acquisition, and use of AI in a manner that fosters public trust and confidence. The HHS playbook describes the building blocks of AI, principles for the use of trustworthy AI in government, internal AI deployment considerations, and external considerations. These playbooks and guidance documents focused on AI’s use in the federal government. More recently, the White House published guidance on AI that is directed both at the government and externally to businesses across industries.
The White House Office of Science and Technology Policy (OSTP) published the October 2022 “Blueprint for an AI Bill of Rights,” which establishes five principles and associated practices to protect civil rights and promote democratic values when building, deploying, and governing automated systems. While it is intended to serve as a non-binding guidance document, the AI Bill of Rights provides clear insight into the Administration’s AI regulatory policy goals and may influence future guidance that would affect health care providers, payers, technology companies, health care innovators, programmers, and other entities that employ AI technologies. In fact, the blueprint states that future sector-specific guidance will likely be necessary and important for guiding the use of automated systems in certain settings, and it specifically identifies AI in automated health diagnostic systems as an example.
The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People
The blueprint provides a national values statement and toolkit that is sector-agnostic to build protections into technological design processes and to inform policy decisions.
The blueprint outlines the following five principles to govern automated systems and provides numerous examples related to health care access and delivery.
- Safe and Effective Systems
Individuals and communities should be protected from unsafe or ineffective systems. Systems should be developed with consultation from diverse stakeholders and experts and should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring. Individuals should be protected from inappropriate or irrelevant data use in the design, development, and deployment of automated systems as well as from the compounded harm of its reuse.
- Algorithmic Discrimination Protections
Algorithms and systems should be designed in an equitable manner and should not disfavor individuals based on classifications, such as race, color, ethnicity, sex (including pregnancy, childbirth, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. AI system developers should use proactive and continuous measures to guard against algorithmic discrimination, including equity assessments and algorithmic impact assessments featuring both independent evaluation and plain language reporting. The Blueprint specifically flags concern about clinical algorithms (used by physicians to guide clinical decisions) that may include sociodemographic variables that adjust the algorithm’s output on the basis of a patient’s race or ethnicity, which can lead to race-based health inequities.
- Data Privacy
Individuals should be protected from abusive data practices via built-in protections, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected. Automated systems developers are encouraged to seek consent before using personal data. Consent should only be used to justify data collection in cases where it can be “appropriately and meaningfully given.” If it is not possible to obtain consent in advance, developers are encouraged to implement privacy by design safeguards. The principle also states that data in sensitive domains, including health care-related data, should be subject to enhanced protections and restrictions. For example, the principle warns that location data acquired from data brokers can be used to identify people who visit abortion clinics and that such data relates to a sensitive domain and requires enhanced protections.
- Notice and Explanation
AI system developers should provide timely, accessible, plain-language descriptions of overall system functioning and the role automation plays, notice that automated systems are in use, the individual or organization responsible for the AI system, and explanations of outcomes. The principle encourages automated systems to provide explanations that are technically valid, meaningful, and useful to operators of the system.
- Human Alternatives, Consideration, and Fallback
Individuals should be able to opt out of automated systems in favor of a human alternative, where appropriate or required by law. Appropriateness should be determined based on reasonable expectations in a given context in addition to ensuring broad accessibility and protecting the public from especially harmful impacts. The principle states that automated systems with an intended use within sensitive domains should be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions. For example, technologies that help doctors diagnose disease can lead to inaccurate or dangerous outcomes and should include extra protections.
Recent Agency Actions on AI
With the release of the Blueprint for an AI Bill of Rights, the Administration also announced further actions across the federal government to advance the principles outlined in the blueprint, including the following actions taken by the Department of Health and Human Services (HHS) and the Department of Veterans Affairs (VA) to protect patients and support health care providers. Many of these actions represent efforts by federal agencies, including HHS, to begin imposing AI-related regulations.
- Mitigating Bias in Algorithms: In July 2022, HHS issued the calendar year 2023 Medicare Hospital Outpatient Prospective Payment System and Ambulatory Surgical Center Payment System proposed rule and solicited comments on how to encourage software developers and other vendors to prevent and mitigate bias in their algorithms and predictive modeling. In the final rule, HHS outlined the comments it received in response to the request, including recommendations to improve collaboration between federal agencies, to use bias-control strategies when developing products, and to test algorithms in various populations.
- Protecting Veterans: In July 2022, the VA instituted a Principle-Based Ethics Framework for Access to and Use of Veteran Data, which ensures uniform ethics standards for data practices and addresses concerns that are beyond traditional privacy and confidentiality practices. The VA also launched the AI@VA Community and Network to pilot programs that will provide veterans with information about any AI system used in their health care and ensure AI risks are managed during human subjects research.
- Health Care Discrimination: In August 2022, HHS issued a proposed rule that includes a provision that would prohibit discrimination, under Section 1557 of the Affordable Care Act, by covered entities through the use of algorithms in clinical decision-making. The proposed rule acknowledges the increasing reliance on clinical algorithms to inform decision-making in health care and establishes a complaint process in which the HHS Office for Civil Rights would investigate complaints against covered entities alleging discrimination arising from the use of a clinical algorithm in decision-making. This proposal has created controversy, particularly among hospitals and providers, as it attempts to place liability on covered entities, which may not have insight into the algorithms to the same extent as developers and program designers. Additionally, HHS will release an evidence-based examination of health care algorithms and racial and ethnic disparities for public comment.
- Health Equity: On February 16, 2023, President Joe Biden issued an executive order directing federal agencies to advance equity when designing, developing, acquiring, and using AI and automated systems, and to ensure that agencies’ respective civil rights offices are consulted on decisions regarding the design, development, acquisition, and use of AI and automated systems in order to prevent and address discrimination and advance equity. Building on an executive order issued in 2021, the recent order seeks to promote equity in science and root out bias in the design and use of new technologies, including AI.
- Transparency for Health Technology: On April 11, 2023, HHS published a proposed rule that includes criteria to support transparency for decision support interventions (DSIs) and predictive models, defined as “technology intended to support decision-making based on algorithms or models that derive relationships from training or example data and then are used to produce an output or outputs related to prediction, classification, recommendation, evaluation, or analysis.” These proposed requirements for developers of certified health IT are intended to support transparency of predictive DSIs and to establish decision support configuration requirements and intervention risk management practices.
The Blueprint for an AI Bill of Rights, along with the various federal actions described above, demonstrates that the White House and federal agencies are closely watching advances in AI/ML and their impact across industries, including health care. The blueprint raises ethical considerations and proposes guardrails to protect consumers of AI/ML and the general public. Technologies like ChatGPT have not necessarily incorporated these guardrails yet but would benefit from doing so for the ethical and consumer protection reasons outlined in the blueprint.
Several Congressional bills related to AI have been proposed, but none has advanced yet. Nonetheless, we expect federal and state legislatures and agencies to enact laws and promulgate regulations on AI in health care in the near future. In the meantime, the Administration is taking action through proposed regulation and guidance to set parameters for the use of AI/ML in health care. Businesses creating and using AI would be wise to monitor this guidance now to anticipate where laws and regulations are likely to land. Doing so may allow businesses to get ahead of future government regulation, which, once imposed, may create compliance challenges that could have been avoided if considered during the early stages of design and deployment.
Specifically, health care organizations should evaluate whether their AI technologies include the protections identified in the Blueprint, including protections related to privacy, consent, freedom of choice, and access, among others. Organizations should also use the resources offered in the Blueprint to create or update policies in their compliance programs related to AI. Given that the Blueprint provides high-level principles, it will be important for innovative health care organizations and technology companies to consider ethical and legal issues in developing and operationalizing this technology to limit risk, build trust, and ensure that innovation meets the goals of improving health and health outcomes equitably.
For more information, please contact the professionals listed below, or your regular Crowell Health Solutions contact.