On the justified use of AI decision support in evidence-based medicine: Validity, explainability, and responsibility

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

On the justified use of AI decision support in evidence-based medicine: Validity, explainability, and responsibility. / Holm, Sune.

In: Cambridge Quarterly of Healthcare Ethics, 09.06.2023.


Harvard

Holm, S 2023, 'On the justified use of AI decision support in evidence-based medicine: Validity, explainability, and responsibility', Cambridge Quarterly of Healthcare Ethics. https://doi.org/10.1017/S0963180123000294

APA

Holm, S. (2023). On the justified use of AI decision support in evidence-based medicine: Validity, explainability, and responsibility. Cambridge Quarterly of Healthcare Ethics. https://doi.org/10.1017/S0963180123000294

Vancouver

Holm S. On the justified use of AI decision support in evidence-based medicine: Validity, explainability, and responsibility. Cambridge Quarterly of Healthcare Ethics. 2023 Jun 9. https://doi.org/10.1017/S0963180123000294

Author

Holm, Sune. / On the justified use of AI decision support in evidence-based medicine: Validity, explainability, and responsibility. In: Cambridge Quarterly of Healthcare Ethics. 2023.

Bibtex

@article{f105338be8dd4606b069c77664ebce04,
title = "On the justified use of AI decision support in evidence-based medicine: Validity, explainability, and responsibility",
abstract = "When is it justified to use opaque artificial intelligence (AI) output in medical decision-making? Consideration of this question is of central importance for the responsible use of opaque machine learning (ML) models, which have been shown to produce accurate and reliable diagnoses, prognoses, and treatment suggestions in medicine. In this article, I discuss the merits of two answers to the question. According to the Explanation View, clinicians must have access to an explanation of why an output was produced. According to the Validation View, it is sufficient that the AI system has been validated using established standards for safety and reliability. I defend the Explanation View against two lines of criticism, and I argue that within the framework of evidence-based medicine mere validation seems insufficient for the use of AI output. I end by characterizing the epistemic responsibility of clinicians and point out how a mere AI output cannot in itself ground a practical conclusion about what to do.",
author = "Sune Holm",
year = "2023",
month = jun,
day = "9",
doi = "10.1017/S0963180123000294",
language = "English",
journal = "Cambridge Quarterly of Healthcare Ethics",
issn = "0963-1801",
publisher = "Cambridge University Press",

}

RIS

TY - JOUR

T1 - On the justified use of AI decision support in evidence-based medicine

T2 - Validity, explainability, and responsibility

AU - Holm, Sune

PY - 2023/6/9

Y1 - 2023/6/9

N2 - When is it justified to use opaque artificial intelligence (AI) output in medical decision-making? Consideration of this question is of central importance for the responsible use of opaque machine learning (ML) models, which have been shown to produce accurate and reliable diagnoses, prognoses, and treatment suggestions in medicine. In this article, I discuss the merits of two answers to the question. According to the Explanation View, clinicians must have access to an explanation of why an output was produced. According to the Validation View, it is sufficient that the AI system has been validated using established standards for safety and reliability. I defend the Explanation View against two lines of criticism, and I argue that within the framework of evidence-based medicine mere validation seems insufficient for the use of AI output. I end by characterizing the epistemic responsibility of clinicians and point out how a mere AI output cannot in itself ground a practical conclusion about what to do.

AB - When is it justified to use opaque artificial intelligence (AI) output in medical decision-making? Consideration of this question is of central importance for the responsible use of opaque machine learning (ML) models, which have been shown to produce accurate and reliable diagnoses, prognoses, and treatment suggestions in medicine. In this article, I discuss the merits of two answers to the question. According to the Explanation View, clinicians must have access to an explanation of why an output was produced. According to the Validation View, it is sufficient that the AI system has been validated using established standards for safety and reliability. I defend the Explanation View against two lines of criticism, and I argue that within the framework of evidence-based medicine mere validation seems insufficient for the use of AI output. I end by characterizing the epistemic responsibility of clinicians and point out how a mere AI output cannot in itself ground a practical conclusion about what to do.

U2 - 10.1017/S0963180123000294

DO - 10.1017/S0963180123000294

M3 - Journal article

C2 - 37293823

JO - Cambridge Quarterly of Healthcare Ethics

JF - Cambridge Quarterly of Healthcare Ethics

SN - 0963-1801

ER -
