Bias in context: What to do when complete bias removal is not an option

Publication: Contribution to journal › Letter › Research › peer-reviewed

Documents

  • Full text

    Publisher's published version, 113 KB, PDF document

It is widely recognized that machine learning algorithms may be biased in the sense that they perform worse on some demographic groups than on others. This motivates algorithmic development to remove such bias, which in turn might lead to a hope, even an expectation, that algorithmic bias can be mitigated or removed (1). In this short comment, we make three points to qualify Wang et al.'s suggestion: 1) it may not be possible for algorithms to perform equally well across groups on all measures, 2) which inequalities count as morally unacceptable bias is an ethical question, and 3) the answer to that ethical question will vary across decision contexts.
Original language: English
Article number: e2304710120
Journal: Proceedings of the National Academy of Sciences of the United States of America
Volume: 120
Issue number: 23
Number of pages: 1
ISSN: 0027-8424
DOI
Status: Published - 2023


ID: 348163796