AI Transparency on FHIR
0.1.0 - ci-build

AI Transparency on FHIR, published by HL7 International / Electronic Health Records. This guide is not an authorized publication; it is the continuous build for version 0.1.0, built by the FHIR (HL7® FHIR® Standard) CI Build. This version is based on the current content of https://github.com/HL7/aitransparency-ig/ and changes regularly. See the Directory of published versions.

Use Cases

Page standards status: Informative

Use Case 1: AI Attribution in Clinical or Administrative Documentation Review

Clinicians reviewing previous treatment plans must be able to distinguish content inferred by AI, decisions assisted by AI, and decisions made without AI involvement. This parallels the standard practice of identifying the human clinician responsible for a treatment plan, their role in its development, and their clinical background.
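In FHIR, this kind of attribution is typically carried by a Provenance resource, where each `agent` entry identifies a participant and its role. The sketch below is illustrative only, not a profile defined by this guide: the resource references (`CarePlan/example-plan`, `Practitioner/example-clinician`, `Device/example-ai-model`) are hypothetical, and the choice of participant-type codes to mark an AI contributor is an assumption, not a normative binding.

```json
{
  "resourceType": "Provenance",
  "target": [
    { "reference": "CarePlan/example-plan" }
  ],
  "recorded": "2024-05-01T12:00:00Z",
  "agent": [
    {
      "type": {
        "coding": [{
          "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
          "code": "author",
          "display": "Author"
        }]
      },
      "who": { "reference": "Practitioner/example-clinician" }
    },
    {
      "type": {
        "coding": [{
          "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
          "code": "assembler",
          "display": "Assembler"
        }]
      },
      "who": { "reference": "Device/example-ai-model" }
    }
  ]
}
```

A reviewing clinician (or their system) can then resolve each `agent.who` reference: a `Device` agent signals AI involvement in assembling the content, while the `Practitioner` agent identifies the human author who remains responsible for the plan.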


This use case can also be extended to other roles and contexts, as shown in the table below:

Data Viewing Questions by Actor

| When this Actor is Viewing Data | The key questions may be… |
| --- | --- |
| Clinician | What is happening (to modify)? Why is it happening? |
| Payor | What matches prior authorization criteria? |
| Quality Improvement (QI) | What matches the desired outcomes, or the desired approach to care? |
| Safety Board | Multiple questions for root cause analysis |
| Legal | Who is responsible? |