Trustworthy AI: a cooperative approach

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

Trustworthy AI : a cooperative approach. / Slosser, Jacob Livingston; Aasa, Birgit; Olsen, Henrik Palmer.

In: Technology and Regulation, Vol. 2023, 2023, pp. 58-68.


Harvard

Slosser, JL, Aasa, B & Olsen, HP 2023, 'Trustworthy AI: a cooperative approach', Technology and Regulation, vol. 2023, pp. 58-68. https://doi.org/10.26116/techreg.2023.006

APA

Slosser, J. L., Aasa, B., & Olsen, H. P. (2023). Trustworthy AI: a cooperative approach. Technology and Regulation, 2023, 58-68. https://doi.org/10.26116/techreg.2023.006

Vancouver

Slosser JL, Aasa B, Olsen HP. Trustworthy AI: a cooperative approach. Technology and Regulation. 2023;2023:58-68. https://doi.org/10.26116/techreg.2023.006

Author

Slosser, Jacob Livingston ; Aasa, Birgit ; Olsen, Henrik Palmer. / Trustworthy AI : a cooperative approach. In: Technology and Regulation. 2023 ; Vol. 2023. pp. 58-68.

BibTeX

@article{2e0468036dd646acad59ce6e80ce2e68,
title = "Trustworthy AI: a cooperative approach",
abstract = "The EU has proposed harmonized rules on artificial intelligence (AI Act) and a directive on adapting non-contractual civil liability rules to AI (AI liability directive) due to increased demand for trustworthy AI. However, the concept of trustworthy AI is unspecific, covering various desired characteristics such as safety, transparency, and accountability. Trustworthiness requires a specific contextual setting that involves human interaction with AI technology, and simply involving humans in decision processes does not guarantee trustworthy outcomes. In this paper, the authors argue for an informed notion of what it means for a system to be trustworthy and examine the concept of trust, highlighting its reliance on a specific relationship between humans that cannot be strictly transmuted into a relationship between humans and machines. They outline a trust-based model for a cooperative approach to AI and provide an example of what that might look like.",
author = "Slosser, {Jacob Livingston} and Birgit Aasa and Olsen, {Henrik Palmer}",
year = "2023",
doi = "10.26116/techreg.2023.006",
language = "English",
volume = "2023",
pages = "58--68",
journal = "Technology and Regulation",
}

RIS

TY  - JOUR
T1  - Trustworthy AI
T2  - a cooperative approach
AU  - Slosser, Jacob Livingston
AU  - Aasa, Birgit
AU  - Olsen, Henrik Palmer
PY  - 2023
Y1  - 2023
N2  - The EU has proposed harmonized rules on artificial intelligence (AI Act) and a directive on adapting non-contractual civil liability rules to AI (AI liability directive) due to increased demand for trustworthy AI. However, the concept of trustworthy AI is unspecific, covering various desired characteristics such as safety, transparency, and accountability. Trustworthiness requires a specific contextual setting that involves human interaction with AI technology, and simply involving humans in decision processes does not guarantee trustworthy outcomes. In this paper, the authors argue for an informed notion of what it means for a system to be trustworthy and examine the concept of trust, highlighting its reliance on a specific relationship between humans that cannot be strictly transmuted into a relationship between humans and machines. They outline a trust-based model for a cooperative approach to AI and provide an example of what that might look like.
AB  - The EU has proposed harmonized rules on artificial intelligence (AI Act) and a directive on adapting non-contractual civil liability rules to AI (AI liability directive) due to increased demand for trustworthy AI. However, the concept of trustworthy AI is unspecific, covering various desired characteristics such as safety, transparency, and accountability. Trustworthiness requires a specific contextual setting that involves human interaction with AI technology, and simply involving humans in decision processes does not guarantee trustworthy outcomes. In this paper, the authors argue for an informed notion of what it means for a system to be trustworthy and examine the concept of trust, highlighting its reliance on a specific relationship between humans that cannot be strictly transmuted into a relationship between humans and machines. They outline a trust-based model for a cooperative approach to AI and provide an example of what that might look like.
U2  - 10.26116/techreg.2023.006
DO  - 10.26116/techreg.2023.006
M3  - Journal article
VL  - 2023
SP  - 58
EP  - 68
JO  - Technology and Regulation
JF  - Technology and Regulation
ER  - 
