AI Transparency on FHIR
1.0.0-comment - ci-build

AI Transparency on FHIR, published by HL7 International / Electronic Health Records. This guide is not an authorized publication; it is the continuous build for version 1.0.0-comment built by the FHIR (HL7® FHIR® Standard) CI Build. This version is based on the current content of https://github.com/HL7/aitransparency-ig/ and changes regularly. See the Directory of published versions.


Overview

Official URL: http://hl7.org/fhir/uv/aitransparency/ImplementationGuide/hl7.fhir.uv.aitransparency Version: 1.0.0-comment
IG Standards status: Trial-use Maturity Level: 0 Computable Name: AITransparency

Background

Artificial Intelligence (AI) has enormous potential to improve outcomes in healthcare. However, it comes with a number of challenges, such as bias, hallucinations, and non-determinism. To support responsible use of AI, it is necessary to establish standards for documenting and tracking when health data has been created, updated, or otherwise influenced by AI. In particular, it is useful to know when a Fast Healthcare Interoperability Resources (FHIR) resource has been inferred, in whole or in part, by an AI, such as a generative AI / Large Language Model (LLM).

This FHIR Implementation Guide (IG) provides guidance for representing the use of AI in influencing FHIR resources. Starting with how to tag FHIR resources, and expanding into how to use Provenance, Device, and other data elements, this IG provides standards that enable downstream use cases to identify such resources. This enables the informed use of AI-inferred health data.
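As an illustrative sketch of the approach described above, the fragment below shows a FHIR R4 resource carrying a `meta.tag` that marks AI involvement, together with a Provenance resource linking it to the AI agent represented as a Device. Note that the tag system and code (`http://example.org/...`), the resource ids, and the Device reference are hypothetical placeholders, not the normative values defined by this IG; consult the IG's artifacts for the actual codes.

```json
{
  "resourceType": "Bundle",
  "type": "collection",
  "entry": [
    {
      "resource": {
        "resourceType": "Observation",
        "id": "obs-ai-example",
        "meta": {
          "tag": [
            {
              "system": "http://example.org/CodeSystem/ai-involvement",
              "code": "ai-inferred",
              "display": "Content inferred by an AI algorithm"
            }
          ]
        },
        "status": "preliminary",
        "code": { "text": "Example AI-inferred finding" }
      }
    },
    {
      "resource": {
        "resourceType": "Provenance",
        "id": "prov-ai-example",
        "target": [ { "reference": "Observation/obs-ai-example" } ],
        "recorded": "2025-01-01T00:00:00Z",
        "agent": [
          {
            "type": {
              "coding": [
                {
                  "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
                  "code": "assembler"
                }
              ]
            },
            "who": {
              "reference": "Device/example-llm",
              "display": "Example LLM summarization service (hypothetical)"
            }
          }
        ]
      }
    }
  ]
}
```

The tag allows a consuming system to filter or flag AI-influenced resources cheaply, while the Provenance resource carries the richer record of which algorithm acted, when, and in what role.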

Scope

The purpose of this project is to enable observability for the use of AI algorithms in the production or manipulation of health data, thus enabling transparency for users of the data to determine the relevance, validity, applicability, and suitability of the data.

The purpose of the implementation guide is to provide a method for sharing data about the use of AI algorithms in the production or manipulation of health data. It is not the intent of this project to endorse, validate, or invalidate the use of these AI algorithms or the resulting data. Although the project intends to create infrastructure for reporting observability, it is not the intent of this project to provide the governance for transparency reporting expectations.

In this project, AI algorithm is defined broadly to include any computer-based logic that touches health data in a way that might change the understanding of the data downstream. Some examples include: an algorithm that attempts to summarize clinical notes, an algorithm that attempts to interpret medical images, an algorithm that attempts to identify medical concepts within a clinical note, an algorithm used to generate synthetic health data, and so on. Some computer-based logic that touches health data, such as simple calculations and data transformations, may not be considered AI algorithms, but observability of such events should also be supported by this implementation guide.

Assumptions and Caveats

This IG assumes that health data are being represented in FHIR. While it is recognized that other standards, such as HL7 CDA and HL7 v2, may be used, this IG does not yet support them. Future work may seek to apply the Use-Cases and Observability Factors to these other standards.

Credits

  • Sam Schifman (Vantiq)
  • John Moehrke (Moehrke Research LLC)
  • May Terry (MITRE)
  • Brian Alper (Computable Publishing)
  • Michael Faughn (NIST)
  • Gregory Shemancik (CHAI)
  • Reynalda Davis (CMS)
  • Gail Winters
  • Mark Kramer (MITRE)

Cross Version Analysis

This is an R4 IG. None of the features it uses are changed in R4B, so it can be used as is with R4B systems. Packages for both R4 (hl7.fhir.uv.aitransparency.r4) and R4B (hl7.fhir.uv.aitransparency.r4b) are available.

Intellectual Property Considerations

This publication includes IP covered under the following statements.

Global Profiles

There are no global profiles defined.

Dependencies

Package hl7.fhir.uv.extensions.r4#5.2.0

This IG defines the global extensions - the ones defined for everyone. These extensions are always in scope wherever FHIR is being used (built Mon, Feb 10, 2025 21:45+11:00)

Package hl7.fhir.uv.security-label-ds4p#1.0.0

FHIR data segmentation for privacy security label implementation guide (built Mon, Apr 17, 2023 19:19+00:00)

Package hl7.fhir.uv.tools.r4#0.8.0

This IG defines the extensions that the tools use internally. Some of these extensions are content that are being evaluated for elevation into the main spec, and others are tooling concerns (built Tue, Aug 5, 2025 20:09+10:00)