On April 4, the Coalition for Health AI (“CHAI”) released the “Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare” (the “CHAI Blueprint”). The CHAI Blueprint addresses the lack of industry-accepted standards governing the development and implementation of artificial intelligence (“AI”) tools in health care, outlines key elements for establishing standards on trustworthy AI, issues recommendations for health systems deploying AI tools in clinical settings, and proposes specifications to be included in a potential assurance standards guide.
In sum, the CHAI Blueprint outlines principles and recommendations for developing guidelines to facilitate trusted use of AI in health care. Under these principles, AI should be: (i) useful (valid and reliable, testable, usable, and beneficial); (ii) safe; (iii) accountable and transparent; (iv) explainable and interpretable; (v) fair, with harmful bias managed; (vi) secure and resilient; and (vii) privacy-enhanced. Moreover, the CHAI Blueprint proposes to establish an assurance lab and related consulting services to help stakeholders evaluate their processes for readiness to implement AI tools.
By publishing the CHAI Blueprint, the Coalition’s stated goal is to avoid disparate, conflicting approaches on AI adoption and implementation in the clinical setting and to agree on a canonical structure for health AI assurance standards throughout the application’s lifecycle. The CHAI Blueprint specifically raises concerns that AI/ML technologies may introduce or worsen bias, thereby increasing the risks of negative outcomes for patients. It states that there is an urgent need to ensure that AI in healthcare benefits all populations, including groups from underserved and underrepresented communities.
The CHAI Blueprint was developed by representatives from the health care, technology, and other industry sectors, who collaborated under the observation of several federal agencies over the past year. Launched in 2022, CHAI was created to identify health care AI standards and best practices and to provide guidance where needed. Founding members include Change Healthcare, Duke AI Health, Google, Johns Hopkins University, Mayo Clinic, Microsoft, MITRE, SAS, Stanford Medicine, the University of California, Berkeley, and the University of California, San Francisco. The White House Office of Science and Technology Policy (“OSTP”) acts as a federal observer to CHAI, along with the Agency for Healthcare Research and Quality, the Centers for Medicare & Medicaid Services, the U.S. Food and Drug Administration (“FDA”), the Office of the National Coordinator for Health Information Technology, and the National Institutes of Health.
The CHAI Blueprint builds upon the OSTP Blueprint for an AI Bill of Rights and the U.S. Department of Commerce’s National Institute of Standards and Technology (“NIST”) AI Risk Management Framework. While publications from the OSTP and NIST provide general guidelines intended to apply to all sectors, the CHAI Blueprint focuses on the health care industry and specifically addresses health systems’ AI matters.
- Health systems and others developing health technology that leverages AI should become familiar with the CHAI Blueprint’s principles, particularly given that they were developed in coordination with federal health agencies and may thus influence future AI legislation or regulation, and that the National Academy of Medicine (“NAM”) will lead continuing work in this area.
- As discussed previously (read the Crowell Health Solutions blog), health systems may consider creating new or updating existing governance processes and policies on AI to incorporate principles from the OSTP Blueprint for an AI Bill of Rights, and may also want to consider the CHAI Blueprint’s principles in these efforts.
- Health systems and other health care organizations should monitor for additional developments in this area, including a draft assurance standards guide governing AI tools from CHAI (which may provide an opportunity for public comment), NAM activities, and federal policies.
The CHAI Blueprint’s Key Elements of Trustworthy AI in Health Care
The CHAI Blueprint outlines key elements of trustworthy AI in health care. It uses definitions included in the NIST AI Risk Management Framework and provides additional details and examples on how the definitions apply to health systems and in clinical care settings. Throughout its discussion of principles, the CHAI Blueprint proposes to develop an assurance standards guide that would address current gaps in oversight and help health systems to define governance processes.
- Useful: The CHAI Blueprint states that for an algorithm to be useful, it must provide a specific benefit to patients and/or health care delivery and be usable, beyond being valid and reliable. Useful algorithms have the following qualities: valid with respect to accuracy, operability, and meeting intended purpose and benefit (i.e., clinical validation); reliable; testable; usable; and beneficial. The CHAI Blueprint adds more specificity regarding these qualities.
- Safe: The CHAI Blueprint states that safe AI systems in health care should prevent worse outcomes for the patient, provider, or health system from occurring as a result of the use of an ML algorithm. It states that an assurance guide can: define metrics and provenance information, including how safety is measured and by whom this information is captured; define how safety events caused by AI could be identified and reported; define the roles and responsibilities of parties that provide data (e.g., hospital electronic health records, patient-generated health data) for maintaining safe AI; and offer opportunities to reevaluate the status quo.
- Accountable and Transparent: The CHAI Blueprint defines accountability as the responsibility of individuals involved in the development, deployment, and maintenance of AI systems to maintain auditability, minimize harm, report negative impact, and communicate design tradeoffs and opportunities for redress. It defines transparency as the extent to which individuals interacting with an AI system or whose data are input into an AI system have access to information about that system and its outputs (regardless of whether they are aware that they are interacting with AI). It further explains that transparency is enabled when criteria involving the selection and curation of underlying datasets, the validation and reliability of the models, and the engagement of stakeholders, patients, and end-users are considered. The CHAI Blueprint states that an assurance guide can help address transparency when multiple datasets and/or models are combined.
- Explainable and Interpretable: The CHAI Blueprint states that explainability refers to a representation of the mechanisms underlying AI systems’ operation, whereas interpretability refers to the meaning of AI systems’ output in the context of their designed functions.
- Fair – with harmful bias managed: The CHAI Blueprint states that this principle means ensuring that health AI, through action or inaction, does not increase a specific group’s risk for bias or adverse fairness outcomes. To help evaluate AI for potential bias, equity, and fairness, the CHAI Blueprint states that there should be frameworks and checklists to help guide decisions. Moreover, it explains that there should be multiple checkpoints for every stage in the AI design, development, and implementation lifecycle and at different points during the stages of evaluation and continual monitoring.
- Secure and resilient: The CHAI Blueprint states that AI systems, including those in health care, that can maintain confidentiality, integrity, and availability through protection mechanisms that prevent unauthorized access and use may be said to be secure. Furthermore, AI systems are resilient if they are able to withstand unexpected adverse events or unexpected changes in their environment or use, or if they can maintain their functions and structure in the face of internal and external change, degrading safely and gracefully when necessary.
- Privacy-enhanced: The CHAI Blueprint restates NIST’s definition of privacy, which refers generally to the norms and practices that help to safeguard human autonomy, identity, and dignity. It further states that health care requires unique standards for privacy, including those established by the Health Insurance Portability and Accountability Act of 1996 (HIPAA).
Operationalizing Trustworthy AI for Health Systems
The CHAI Blueprint states that health care institutions need a common, agreed-upon set of principles to build and facilitate use of AI tools. Building on its discussion of the principles, it proposes to establish an assurance lab that would enable health systems, tool developers, and vendors to submit processes and tools for evaluation to ensure readiness to employ AI tools. Given the disparate resources of large medical centers and small, resource-constrained health systems (including those located in rural areas), the CHAI Blueprint states that an advisory body may be needed to advance the industry’s use of AI technologies and to ensure equity, so that patient access to trustworthy health AI does not depend on geographic location or on which health system a patient is interacting with.
- Setting up Assurance Lab and Advisory Service Infrastructure: The CHAI Blueprint proposes that interdependent assurance labs and associated consulting services will help create an ecosystem with at minimum four infrastructure components: a shared definition of value; registries listing AI tools; templates of legal agreements for the participation of data providers and algorithm developers in validation; and “sandbox” environments for testing AI tools. The CHAI Blueprint states that it is important to ensure a clear value proposition for the patient and the organization for deploying AI solutions, beginning with a value proposition evaluation.
- Institutionalizing Trustworthy AI Systems: The CHAI Blueprint summarizes several prerequisite components for institutionalizing trustworthy AI systems that have been included in a number of frameworks, such as the Trustworthy AI Executive Order, the OSTP Blueprint for an AI Bill of Rights, and others. These frameworks include the following relevant prerequisites: 1) create an inventory or registry of the various models/tools in the system; 2) define which types of models from the inventory are subject to which guidelines; and 3) define organizational structures, such as who is responsible for overseeing trustworthy AI systems and for responding to requests in governance processes. According to the CHAI Blueprint, once organizational structures and oversight processes are established, there is a basis for creating an established set of maturity levels against which health systems can be evaluated. The CHAI Blueprint also states that establishing assurance standards and ensuring ongoing monitoring of AI tools should be conducted by adjudicating/assurance bodies.
- Energizing a Coalition of the Willing: CHAI states that there is an opportunity for CHAI, NAM, and other health care stakeholders to collaborate. The CHAI Blueprint states that there must be a business case for putting in the effort to build and coalesce around a national standard. It also mentions a need to codify best practices and a corresponding “code of conduct” for AI, in addition to a potential consensus publication, which would be driven by a public comment period.
As stated earlier, health systems and organizations should review the CHAI Blueprint, since it is one of the few AI/ML frameworks that focuses specifically on the use of AI tools in health care and clinical settings. While the government and industry have released frameworks and guidance with applications across all sectors, we expect additional publications and guidance specific to the health care industry as stakeholders seek consensus on the ethical use of health AI tools and seek to establish protections for patients and consumers. These policy frameworks have become increasingly important as AI/ML technology develops rapidly, with industry experts wary that technological development may outpace guardrails and regulations.
Technological developments and regulatory developments in AI/ML will continue to move rapidly. Follow Crowell Health Solutions’ Trends in Transformation blog for the latest updates and analysis. For more information, please contact the professionals listed below, or your regular Crowell Health Solutions contact.