AI Transparency on FHIR, published by HL7 International / Electronic Health Records. This guide is not an authorized publication; it is the continuous build for version 0.1.0, built by the FHIR (HL7® FHIR® Standard) CI Build. This version is based on the current content of https://github.com/HL7/aitransparency-ig/ and changes regularly. See the Directory of published versions.
Official URL: http://hl7.org/fhir/uv/aitransparency/ImplementationGuide/hl7.fhir.uv.aitransparency | Version: 0.1.0 | Draft as of 2025-04-04 | Computable Name: aitransparency
Transparency requires standards for documenting and tracking the use of outputs from AI systems and inference algorithms, including generative Artificial Intelligence (AI) and Large Language Models (LLMs), within FHIR resources and operations.
AI holds tremendous potential to improve outcomes in healthcare. However, it comes with a number of challenges: it is generally probabilistic in nature, can be influenced by bias, and can suffer from hallucinations. It is therefore critical to provide guidance on how to tag data coming from AI so that it can be used responsibly by downstream systems and users.
This FHIR Implementation Guide (IG) aims to define standard methods for representing the use of generative AI and LLMs in FHIR resources and operations. The IG will address two main areas:
1. Define guidance on representing inferences from AI within FHIR resources, including, but not limited to, the use of existing fields, extensions, and recommended codes, thereby ensuring consistent representations that downstream systems can rely on to use the data appropriately (see the sketch following this list).
2. Define standard patterns for representing FHIR operations that use AI/LLMs.
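To make the first area concrete: one mechanism already available in core FHIR is Resource.meta.tag, which downstream systems can inspect or filter on. The sketch below is illustrative only and assumes a hypothetical tag code system; it is not an artifact or code defined by this IG:

```json
{
  "resourceType": "Observation",
  "meta": {
    "tag": [
      {
        "system": "http://example.org/CodeSystem/ai-involvement",
        "code": "ai-generated",
        "display": "Generated by an AI algorithm"
      }
    ]
  },
  "status": "preliminary",
  "code": {
    "text": "Summary of recent clinical notes"
  },
  "valueString": "Patient reports improved symptoms since the last visit."
}
```

Whether tagging, extensions, Provenance, or some combination is recommended is precisely the kind of question the first area is intended to settle.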
The purpose of this project is to enable observability of the use of AI algorithms in the production or manipulation of health data, thus providing the transparency that users of the data need to determine its relevance, validity, applicability, and suitability.
The purpose of the implementation guide is to provide a method for sharing data about the use of AI algorithms in the production or manipulation of health data. It is not the intent of this project to endorse, validate, or invalidate the use of these AI algorithms or the resulting data. Likewise, although the project intends to create infrastructure for observability reporting, it does not intend to define the governance of transparency reporting expectations.
In this project, an AI algorithm is defined broadly to include any computer-based logic that touches health data in a way that might change the understanding of the data downstream. Some examples include: an algorithm that attempts to summarize clinical notes, an algorithm that attempts to interpret medical images, an algorithm that attempts to identify medical concepts within a clinical note, an algorithm used to generate synthetic health data, and so on. Some computer-based logic that touches health data, such as simple calculations and data transformations, may not be considered an AI algorithm, but observability of such events should also be supported by this implementation guide.
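As an illustration of this kind of observability, a standard R4 Provenance resource can already record that a device running an AI algorithm produced or transformed a resource. The sketch below uses only core FHIR elements; the target and Device references are hypothetical, and the IG may define more specific profiles or codes:

```json
{
  "resourceType": "Provenance",
  "target": [
    { "reference": "DocumentReference/note-summary-1" }
  ],
  "recorded": "2025-04-04T12:00:00Z",
  "agent": [
    {
      "type": {
        "coding": [
          {
            "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
            "code": "assembler"
          }
        ]
      },
      "who": {
        "reference": "Device/llm-summarizer",
        "display": "LLM note summarizer"
      }
    }
  ]
}
```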
This is an R4 IG. None of the features it uses are changed in R4B, so it can be used as is with R4B systems. Packages for both R4 (hl7.fhir.uv.aitransparency.r4) and R4B (hl7.fhir.uv.aitransparency.r4b) are available.
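As a usage sketch, an application or IG built on R4 could declare a dependency on the corresponding package through the standard FHIR NPM machinery, e.g. in its package.json (the consuming package name and version here are hypothetical):

```json
{
  "name": "example.local.consumer",
  "version": "0.0.1",
  "dependencies": {
    "hl7.fhir.r4.core": "4.0.1",
    "hl7.fhir.uv.aitransparency.r4": "0.1.0"
  }
}
```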
No use of external IP
There are no Global profiles defined
Package hl7.fhir.uv.extensions#5.2.0: This IG defines the global extensions - the ones defined for everyone. These extensions are always in scope wherever FHIR is being used. (Built Mon, Feb 10, 2025, 21:45 +11:00)
Package hl7.fhir.uv.extensions.r5#5.1.0: This IG defines the global extensions - the ones defined for everyone. These extensions are always in scope wherever FHIR is being used. (Built Sat, Apr 27, 2024, 18:39 +10:00)