Clinicians reviewing previous treatment plans must be able to distinguish content inferred by AI, decisions assisted by AI, and decisions made without AI involvement. This parallels the standard practice of identifying the human clinician responsible for a treatment plan, their role in its development, and their clinical background.
Moreover, the use case can be expanded to different roles and contexts, as shown in the table below:
Data Viewing Questions by Actor

| When this Actor is Viewing Data | The key questions may be… |
|---|---|
| Clinician | What is happening (to modify)? Why is it happening? |
| Payor | What matches prior authorization criteria? |
| QI | What matches the desired outcomes, or desired approach to care? |
| Safety Board | Multiple questions for root cause analysis |
| Legal | Who is responsible? |
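As a minimal sketch of how the clinician scenario above might be recorded, assuming a FHIR Provenance resource is used to attribute a treatment plan, the example below attaches a Provenance to a CarePlan with the responsible clinician as the author and an AI decision-support system as an additional agent. The `http://example.org/CodeSystem/ai-participation` code system, the `ai-assisted` code, and all resource ids are illustrative placeholders only; they are not codes or profiles defined by this guide.

```json
{
  "resourceType": "Provenance",
  "id": "example-ai-assisted-careplan",
  "target": [
    { "reference": "CarePlan/example" }
  ],
  "recorded": "2024-05-01T14:30:00Z",
  "agent": [
    {
      "type": {
        "coding": [
          {
            "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
            "code": "author",
            "display": "Author"
          }
        ]
      },
      "who": {
        "reference": "Practitioner/example-clinician",
        "display": "Responsible clinician"
      }
    },
    {
      "type": {
        "coding": [
          {
            "system": "http://example.org/CodeSystem/ai-participation",
            "code": "ai-assisted",
            "display": "AI assisted (illustrative placeholder code)"
          }
        ]
      },
      "who": {
        "reference": "Device/example-cds-service",
        "display": "Hypothetical AI decision-support service"
      }
    }
  ]
}
```

A viewing system could use agent entries like these to label a plan as AI-assisted while still directing a reviewer, whether a clinician, a safety board, or a legal team per the table above, to the clinician who remains responsible for it.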