Are demographically invariant models and representations in medical imaging fair?

Publication: Working paper › Preprint › Research

Standard

Are demographically invariant models and representations in medical imaging fair? / Petersen, Eike; Ferrante, Enzo; Ganz, Melanie; Feragen, Aasa.

2023.


Harvard

Petersen, E, Ferrante, E, Ganz, M & Feragen, A 2023 'Are demographically invariant models and representations in medical imaging fair?'.

APA

Petersen, E., Ferrante, E., Ganz, M., & Feragen, A. (2023). Are demographically invariant models and representations in medical imaging fair?

Vancouver

Petersen E, Ferrante E, Ganz M, Feragen A. Are demographically invariant models and representations in medical imaging fair? 2023 May 2.

Author

Petersen, Eike ; Ferrante, Enzo ; Ganz, Melanie ; Feragen, Aasa. / Are demographically invariant models and representations in medical imaging fair? 2023.

Bibtex

@techreport{c5730526c6e3462f8725518b3daa7072,
title = "Are demographically invariant models and representations in medical imaging fair?",
abstract = "Medical imaging models have been shown to encode information about patient demographics (age, race, sex) in their latent representation, raising concerns about their potential for discrimination. Here, we ask whether it is feasible and desirable to train models that do not encode demographic attributes. We consider different types of invariance with respect to demographic attributes - marginal, class-conditional, and counterfactual model invariance - and lay out their equivalence to standard notions of algorithmic fairness. Drawing on existing theory, we find that marginal and class-conditional invariance can be considered overly restrictive approaches for achieving certain fairness notions, resulting in significant predictive performance losses. Concerning counterfactual model invariance, we note that defining medical image counterfactuals with respect to demographic attributes is fraught with complexities. Finally, we posit that demographic encoding may even be considered advantageous if it enables learning a task-specific encoding of demographic features that does not rely on human-constructed categories such as 'race' and 'gender'. We conclude that medical imaging models may need to encode demographic attributes, lending further urgency to calls for comprehensive model fairness assessments in terms of predictive performance.",
keywords = "cs.LG, cs.CY, eess.IV, stat.ML",
author = "Eike Petersen and Enzo Ferrante and Melanie Ganz and Aasa Feragen",
year = "2023",
month = may,
day = "2",
language = "English",
type = "WorkingPaper",
}

RIS

TY - UNPB

T1 - Are demographically invariant models and representations in medical imaging fair?

AU - Petersen, Eike

AU - Ferrante, Enzo

AU - Ganz, Melanie

AU - Feragen, Aasa

PY - 2023/5/2

Y1 - 2023/5/2

N2 - Medical imaging models have been shown to encode information about patient demographics (age, race, sex) in their latent representation, raising concerns about their potential for discrimination. Here, we ask whether it is feasible and desirable to train models that do not encode demographic attributes. We consider different types of invariance with respect to demographic attributes - marginal, class-conditional, and counterfactual model invariance - and lay out their equivalence to standard notions of algorithmic fairness. Drawing on existing theory, we find that marginal and class-conditional invariance can be considered overly restrictive approaches for achieving certain fairness notions, resulting in significant predictive performance losses. Concerning counterfactual model invariance, we note that defining medical image counterfactuals with respect to demographic attributes is fraught with complexities. Finally, we posit that demographic encoding may even be considered advantageous if it enables learning a task-specific encoding of demographic features that does not rely on human-constructed categories such as 'race' and 'gender'. We conclude that medical imaging models may need to encode demographic attributes, lending further urgency to calls for comprehensive model fairness assessments in terms of predictive performance.

AB - Medical imaging models have been shown to encode information about patient demographics (age, race, sex) in their latent representation, raising concerns about their potential for discrimination. Here, we ask whether it is feasible and desirable to train models that do not encode demographic attributes. We consider different types of invariance with respect to demographic attributes - marginal, class-conditional, and counterfactual model invariance - and lay out their equivalence to standard notions of algorithmic fairness. Drawing on existing theory, we find that marginal and class-conditional invariance can be considered overly restrictive approaches for achieving certain fairness notions, resulting in significant predictive performance losses. Concerning counterfactual model invariance, we note that defining medical image counterfactuals with respect to demographic attributes is fraught with complexities. Finally, we posit that demographic encoding may even be considered advantageous if it enables learning a task-specific encoding of demographic features that does not rely on human-constructed categories such as 'race' and 'gender'. We conclude that medical imaging models may need to encode demographic attributes, lending further urgency to calls for comprehensive model fairness assessments in terms of predictive performance.

KW - cs.LG

KW - cs.CY

KW - eess.IV

KW - stat.ML

M3 - Preprint

BT - Are demographically invariant models and representations in medical imaging fair?

ER -
