Should people have a right not to be subjected to AI profiling based on publicly available data? A comment on Ploug

Publication: Contribution to journal › Comment/debate › Research › peer-reviewed

Documents

  • Full text

    Publisher's published version, 551 KB, PDF document

Several studies have documented that, when presented with data from social media platforms, machine learning (ML) models can make accurate predictions about users, e.g., about whether they are likely to suffer from health-related conditions such as depression, mental disorders, and risk of suicide. In a recent article, Ploug (Philos Technol 36:14, 2023) defends a right not to be subjected to AI profiling based on publicly available data. In this comment, I raise some questions in relation to Ploug’s argument that I think deserve further discussion.

Original language: English
Article number: 38
Journal: Philosophy and Technology
Volume: 36
Number of pages: 5
ISSN: 2210-5433
DOI
Status: Published - 2023

Bibliographic note

Funding Information:
N/A.

Publisher Copyright:
© 2023, The Author(s).


ID: 352969905