AI Transparency on FHIR, published by HL7 International / Electronic Health Records. This guide is not an authorized publication; it is the continuous build for version 0.1.0 built by the FHIR (HL7® FHIR® Standard) CI Build. This version is based on the current content of https://github.com/HL7/aitransparency-ig/ and changes regularly. See the Directory of published versions
Official URL: http://hl7.org/fhir/uv/aitransparency/ImplementationGuide/hl7.fhir.uv.aitransparency | Version: 0.1.0
Draft as of 2025-08-15 | Maturity Level: 0 | Computable Name: AITransparency
Transparency requires standards for documenting and tracking the use of outputs from AI systems and inference algorithms, including generative Artificial Intelligence (AI) and Large Language Models (LLMs), within FHIR resources and operations.
AI has enormous potential to improve outcomes in healthcare. However, it also presents a number of challenges: it is generally probabilistic in nature, can be influenced by bias, and can suffer from hallucinations. It is therefore critical to provide guidance for tagging data produced by AI so that downstream systems and users can use it responsibly.
This FHIR Implementation Guide (IG) defines standard methods for representing the use of generative AI and LLMs in FHIR resources.
It provides guidance on representing inferences from AI within FHIR resources, including, but not limited to, the use of existing fields, extensions, and recommended codes, thereby ensuring consistent representations that downstream systems can rely on to use the data appropriately.
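For illustration only, the sketch below shows one way such a representation might look, using the standard `meta.tag` element on a DocumentReference carrying an AI-generated summary. The tag system and code are placeholders invented for this example; the codes this guide actually recommends may differ.

```json
{
  "resourceType": "DocumentReference",
  "id": "example-ai-summary",
  "meta": {
    "tag": [
      {
        "system": "http://example.org/CodeSystem/placeholder-ai-tags",
        "code": "ai-generated",
        "display": "Content generated by an AI algorithm (placeholder code)"
      }
    ]
  },
  "status": "current",
  "content": [
    {
      "attachment": {
        "contentType": "text/plain",
        "title": "AI-generated summary of clinical notes"
      }
    }
  ]
}
```

Because tags on `meta` travel with the resource, downstream systems can detect AI involvement without retrieving any other resource.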
The purpose of this project is to enable observability of the use of AI algorithms in the production or manipulation of health data, giving users of the data the transparency needed to determine its relevance, validity, applicability, and suitability.
The purpose of the implementation guide is to provide a method for sharing data about the use of AI algorithms in the production or manipulation of health data. It is not the intent of this project to endorse, validate, or invalidate the use of these AI algorithms or the resulting data. Although the project intends to create infrastructure for reporting observability, it does not define the governance for transparency reporting expectations.
In this project, AI algorithm is defined broadly to include any computer-based logic that touches health data in a way that might change the understanding of the data downstream. Some examples include: an algorithm that attempts to summarize clinical notes, an algorithm that attempts to interpret medical images, an algorithm that attempts to identify medical concepts within a clinical note, an algorithm used to generate synthetic health data, and so on. Some computer-based logic that touches health data, such as simple calculations and data transformations, may not be considered an AI algorithm, but observability of such events should also be supported by this implementation guide.
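As a hedged sketch of how such an event might be recorded with core FHIR alone, the Provenance resource below links an AI summarization service (represented as a Device) to the resource it produced. The Device reference and target are invented for this example, and the standard `assembler` participant type is only one plausible choice:

```json
{
  "resourceType": "Provenance",
  "target": [
    { "reference": "DocumentReference/example-ai-summary" }
  ],
  "recorded": "2025-08-15T10:00:00Z",
  "agent": [
    {
      "type": {
        "coding": [
          {
            "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
            "code": "assembler"
          }
        ]
      },
      "who": { "reference": "Device/example-llm-service" }
    }
  ]
}
```

Recording the event in Provenance keeps the audit trail separate from the clinical resource itself, which suits events (such as simple transformations) that may not warrant tagging the resource directly.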
This is an R4 IG. None of the features it uses are changed in R4B, so it can be used as-is with R4B systems. Packages for both R4 (hl7.fhir.uv.aitransparency.r4) and R4B (hl7.fhir.uv.aitransparency.r4b) are available.
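For example, a downstream package building on the R4 publication could declare it in the `dependencies` section of its NPM-style FHIR package manifest. The consuming package name and version below are invented; only the dependency id and version come from this guide:

```json
{
  "name": "org.example.myconsumer",
  "version": "0.0.1",
  "fhirVersions": ["4.0.1"],
  "dependencies": {
    "hl7.fhir.uv.aitransparency.r4": "0.1.0"
  }
}
```

Substituting hl7.fhir.uv.aitransparency.r4b targets R4B systems instead.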
There are no Global profiles defined.

| Package | FHIR | Comment |
| --- | --- | --- |
| hl7.fhir.uv.aitransparency#0.1.0 | R4 | |
| hl7.terminology.r4#6.5.0 | R4 | Automatically added as a dependency - all IGs depend on HL7 Terminology |
| hl7.fhir.uv.extensions.r4#5.2.0 | R4 | |
| hl7.fhir.uv.security-label-ds4p#1.0.0 | R4 | |
| hl7.terminology#5.1.0 | R4 | |
Package hl7.fhir.uv.extensions.r4#5.2.0: This IG defines the global extensions - the ones defined for everyone. These extensions are always in scope wherever FHIR is being used (built Mon, Feb 10, 2025 21:45+11:00).
Package hl7.fhir.uv.security-label-ds4p#1.0.0: FHIR data segmentation for privacy security label implementation guide (built Mon, Apr 17, 2023 19:19+00:00).
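Because this guide depends on the DS4P security label IG, labels in `meta.security` are a natural carrier for transparency metadata. The minimal sketch below shows how a security label attaches to a resource; the confidentiality code used here is a standard v3 code chosen purely for illustration, not a label defined by this guide:

```json
{
  "resourceType": "Observation",
  "meta": {
    "security": [
      {
        "system": "http://terminology.hl7.org/CodeSystem/v3-Confidentiality",
        "code": "R",
        "display": "restricted"
      }
    ]
  },
  "status": "final",
  "code": {
    "text": "AI-derived finding (illustrative)"
  }
}
```

The DS4P guide layers additional label metadata on this same `meta.security` mechanism.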