On April 4, the Coalition for Health AI (“CHAI”) released the “Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare” (“CHAI Blueprint”), which addresses the lack of industry-accepted standards governing the development and implementation of artificial intelligence (“AI”) tools in health care. The CHAI Blueprint outlines key elements for establishing standards on trustworthy AI, issues recommendations for health systems deploying AI tools in clinical settings, and proposes specifications to be included in a potential assurance standards guide.

In sum, the CHAI Blueprint outlines principles and recommendations for developing guidelines to facilitate trusted use of AI in health care, organized around seven attributes of trustworthy AI: (i) useful (valid and reliable, testable, usable, and beneficial), (ii) safe, (iii) accountable and transparent, (iv) explainable and interpretable, (v) fair – with harmful bias managed, (vi) secure and resilient, and (vii) privacy-enhanced. Moreover, the CHAI Blueprint proposes to establish an assurance lab and related consulting services to help stakeholders evaluate their processes for readiness to implement AI tools.

By publishing the CHAI Blueprint, the Coalition’s stated goal is to avoid disparate, conflicting approaches to AI adoption and implementation in the clinical setting and to agree on a canonical structure for health AI assurance standards throughout the application’s lifecycle. The CHAI Blueprint specifically raises concerns that AI/ML technologies may introduce or worsen bias, thereby increasing the risks of negative outcomes for patients. It states that there is an urgent need to ensure that AI in health care benefits all populations, including groups from underserved and underrepresented communities.

The CHAI Blueprint was developed by representatives from the health care, technology, and other industry sectors, who collaborated under the observation of several federal agencies over the past year. Launched in 2022, CHAI was created to identify health care AI standards and best practices and to provide guidance where needed. Founding members include Change Healthcare, Duke AI Health, Google, Johns Hopkins University, Mayo Clinic, Microsoft, MITRE, SAS, Stanford Medicine, University of California, Berkeley, and University of California, San Francisco. The White House Office of Science and Technology Policy (“OSTP”) serves as a federal observer to CHAI, along with the Agency for Healthcare Research and Quality, the Centers for Medicare & Medicaid Services, the U.S. Food and Drug Administration (“FDA”), the Office of the National Coordinator for Health Information Technology, and the National Institutes of Health.

The CHAI Blueprint builds upon the OSTP Blueprint for an AI Bill of Rights and the U.S. Department of Commerce’s National Institute of Standards and Technology (“NIST”) AI Risk Management Framework. While the OSTP and NIST publications provide general guidelines intended to apply across all sectors, the CHAI Blueprint focuses on the health care industry and specifically addresses AI issues facing health systems.

High-Level Takeaways:

  • Health systems and others developing health technology that leverages AI should become familiar with the CHAI Blueprint’s principles, particularly given that they were developed in coordination with federal health agencies and may thus influence future AI legislation or regulation, and that continuing work will be led by the National Academy of Medicine (“NAM”).
  • As discussed previously (read the Crowell Health Solutions blog), health systems may consider creating new, or updating existing, governance processes and policies on AI to incorporate principles from the OSTP Blueprint for an AI Bill of Rights; they may also want to consider the CHAI Blueprint’s principles in these efforts.
  • Health systems and other health care organizations should monitor for additional developments, including a draft assurance standards guide governing AI tools from CHAI (which may provide an opportunity for public comment), NAM activities, and federal policies in this area.

The CHAI Blueprint’s Key Elements of Trustworthy AI in Health Care

The CHAI Blueprint outlines key elements of trustworthy AI in health care. It uses definitions included in the NIST AI Risk Management Framework and provides additional details and examples on how the definitions apply to health systems and in clinical care settings. Throughout its discussion of principles, the CHAI Blueprint proposes to develop an assurance standards guide that would address current gaps in oversight and help health systems to define governance processes.

  1. Useful: The CHAI Blueprint states that for an algorithm to be useful, it must provide a specific benefit to patients and/or health care delivery and be usable, beyond being valid and reliable. Useful algorithms have the following qualities: valid with respect to accuracy, operability, and meeting intended purpose and benefit (i.e., clinical validation); reliable; testable; usable; and beneficial. The CHAI Blueprint adds more specificity regarding these qualities.
  2. Safe: The CHAI Blueprint states that safe AI systems in health care should prevent worse outcomes for the patient, provider, or health system from occurring as a result of the use of an ML algorithm. The CHAI Blueprint states that an assurance guide can define metrics and provenance information, including how safety is measured and by whom this information is captured; define how safety events caused by AI could be identified and reported; define the roles and responsibilities of the parties that provide data (e.g., hospital electronic health records, patient-generated health data) for maintaining safe AI; and offer opportunities to reevaluate the status quo.
  3. Accountable and Transparent: The CHAI Blueprint defines accountability as the responsibility of individuals involved in the development, deployment, and maintenance of AI systems to maintain auditability, minimize harm, report negative impact, and communicate design tradeoffs and opportunities for redress. It defines transparency as the extent to which individuals interacting with an AI system or whose data are input into an AI system have access to information about that system and its outputs (regardless of whether they are aware that they are interacting with AI). It further explains that transparency is enabled when criteria involving the selection and curation of underlying datasets, the validation and reliability of the models, and the engagement of stakeholders, patients, and end-users are considered. The CHAI Blueprint states that an assurance guide can help address transparency when multiple datasets and/or models are combined.
  4. Explainable and Interpretable: The CHAI Blueprint states that explainability refers to a representation of the mechanisms underlying AI systems’ operation, whereas interpretability refers to the meaning of AI systems’ output in the context of their designed functions.
  5. Fair – with harmful bias managed: The CHAI Blueprint states that this principle means ensuring that health AI, through action or inaction, does not increase a specific group’s risk for bias or adverse fairness outcomes. To help evaluate AI for potential bias, equity, and fairness, the CHAI Blueprint states that there should be frameworks and checklists to help guide decisions. Moreover, it explains that there should be multiple checkpoints for every stage in the AI design, development, and implementation lifecycle and at different points during the stages of evaluation and continual monitoring.
  6. Secure and resilient: The CHAI Blueprint states that AI systems in health care may be said to be secure when they maintain confidentiality, integrity, and availability through protection mechanisms that prevent unauthorized access and use. Furthermore, AI systems are resilient if they can withstand unexpected adverse events or unexpected changes in their environment or use, or if they can maintain their functions and structure in the face of internal and external change, degrading safely and gracefully when necessary.
  7. Privacy-enhanced: The CHAI Blueprint restates NIST’s definition of privacy, which refers generally to the norms and practices that help to safeguard human autonomy, identity, and dignity. It further states that health care requires unique standards for privacy, including those established by the Health Insurance Portability and Accountability Act of 1996 (HIPAA).

Operationalizing Trustworthy AI for Health Systems

The CHAI Blueprint states that health care institutions need a common, agreed-upon set of principles to build and facilitate use of AI tools. Building on its discussion of the principles, it proposes to establish an assurance lab that would enable health systems, tool developers, and vendors to submit processes and tools for evaluation to ensure readiness to employ AI tools. Given the disparate resources of large medical centers and small, resource-constrained health systems (including those located in rural areas), the CHAI Blueprint states that an advisory body may be needed to advance the industry’s use of AI technologies and to ensure equity, so that patients’ access to trustworthy health AI does not depend on their geographic location or the health system with which they interact.

  • Setting up Assurance Lab and Advisory Service Infrastructure: The CHAI Blueprint proposes that interdependent assurance labs and associated consulting services will help create an ecosystem with at least four infrastructure components: a shared definition of value, registries listing AI tools, templates of legal agreements for the participation of data providers and algorithm developers in validation, and “sandbox” environments for testing AI tools. The CHAI Blueprint states that it is important to ensure a clear value proposition for both the patient and the organization when deploying AI solutions, beginning with a value proposition evaluation.
  • Institutionalizing Trustworthy AI Systems: The CHAI Blueprint summarizes several prerequisite components for institutionalizing trustworthy AI systems that have been included in a number of frameworks, such as the Trustworthy AI Executive Order, the OSTP Blueprint for an AI Bill of Rights, and others. These frameworks include the following relevant prerequisites: 1) create an inventory or registry of the various models/tools in the system; 2) define which types of models from the inventory are subject to which guidelines; and 3) define organizational structures, such as who is responsible for overseeing trustworthy AI systems and for responding to requests in governance processes. According to the CHAI Blueprint, once organizational structures and oversight processes are established, there is a basis for creating a set of maturity levels against which health systems can be evaluated. The CHAI Blueprint also states that establishing assurance standards and ensuring ongoing monitoring of AI tools should be conducted by adjudicating/assurance bodies.
  • Energizing a Coalition of the Willing: CHAI states that there is an opportunity for CHAI, NAM, and other health care stakeholders to collaborate. The CHAI Blueprint states that there must be a business case for putting in the effort to build and coalesce around a national standard. It also notes the need to codify best practices and a corresponding “code of conduct” for AI, along with a potential consensus publication, which would be informed by a public comment period.

Conclusion

As stated earlier, health systems and organizations should review the CHAI Blueprint since it is one of the few AI/ML frameworks that focuses specifically on the use of AI tools in health care and clinical settings. While the government and industry have released frameworks and guidance with applications across all sectors, we expect additional publications and guidance specific to the health care industry as stakeholders seek consensus on the ethical use of health AI tools and the establishment of protections for patients and consumers. These policy frameworks have become increasingly important as AI/ML technology develops rapidly, with industry experts wary that the pace of technological development will outpace guardrails and regulations.

Technological and regulatory developments in AI/ML will continue to move rapidly. Follow Crowell Health Solutions’ Trends in Transformation blog for the latest updates and analysis. For more information, please contact the professionals listed below, or your regular Crowell Health Solutions contact.

Jodi G. Daniel

Jodi Daniel is a partner in Crowell & Moring’s Health Care Group and a member of the group’s Steering Committee. She is also a director at C&M International (CMI), an international policy and regulatory affairs consulting firm affiliated with Crowell & Moring. She leads the firm’s Digital Health Practice and provides strategic, legal, and policy advice to all types of health care and technology clients navigating the dynamic regulatory environment related to technology in the health care sector to help them achieve their business goals. Jodi is a contributor to the Uniform Law Commission Telehealth Committee, which drafts and proposes uniform state laws related to telehealth services, including the definition of telehealth, formation of the doctor-patient relationship via telehealth, creation of a registry for out-of-state physicians, insurance coverage and payment parity, and administrative barriers to entity formation.

Lidia Niecko-Najjum

Lidia Niecko-Najjum is a counsel in Crowell & Moring’s Health Care Group and is part of the firm’s Digital Health Practice. With over 15 years of clinical, policy, and legal experience, Lidia provides strategic advice on health care regulatory and policy matters, with particular focus on artificial intelligence, machine learning, digital therapeutics, telehealth, interoperability, and privacy and security. Representative clients include health plans, health systems, academic medical centers, digital health companies, and long-term care facilities.

Lidia’s experience includes serving as a senior research and policy analyst at the Association of American Medical Colleges on the Policy, Strategy & Outreach team. Lidia also practiced as a nurse at Georgetown University Hospital in the general medicine with telemetry unit and the GI endoscopy suite, where she assisted with endoscopic procedures and administered conscious sedation.

Roma Sharma

Roma Sharma is an associate in Crowell & Moring’s Washington, D.C. office and a member of the firm’s Health Care Group. Roma primarily works with health care clients seeking to comply with regulations for state and federal health care programs, health care anti-fraud and abuse laws, and licensing laws.

Roma’s work incorporates her Master of Public Health degree in Health Policy as well as her past experiences as an extern at the Office of the General Counsel at the American Medical Association and as an intern at the Illinois Office of the Attorney General, Health Care Bureau.

Allison Kwon

Allison Kwon supports Crowell Health Solutions, a strategic consulting firm affiliated with Crowell & Moring, to help clients pursue and deliver innovative alternatives to the traditional approaches of providing and paying for health care, including through digital health, health equity, and value-based health care. She is a health care policy consultant in the Washington, D.C. office.