Influence, Immersion, Intensity, Integration, Interaction: Five Frames for the Future of AI Law and Policy
Publication: Contribution to book/anthology/report › Book chapter › Research › peer-reviewed
Standard
Influence, Immersion, Intensity, Integration, Interaction: Five Frames for the Future of AI Law and Policy. / Liu, Hin-Yan; Sobocki, Victoria.
Law and Artificial Intelligence: Regulating AI and Applying AI in Legal Practice. ed. / Bart Custers; Eduard Fosch Villaronga. TMC Asser Press, 2022. pp. 541-560, Chapter 27 (Information Technology and Law Series, Vol. 35).
RIS
TY - CHAP
T1 - Influence, Immersion, Intensity, Integration, Interaction: Five Frames for the Future of AI Law and Policy
AU - Liu, Hin-Yan
AU - Sobocki, Victoria
PY - 2022
Y1 - 2022
N2 - Law and policy discussions concerning the impact of artificial intelligence (AI) upon society are stagnating. By this, we mean that contemporary discussions adopt implicit assumptions in their approaches to AI, which presuppose the characteristics of entity, externality, and exclusivity. In other words, for law and policy purposes: AI is often treated as something (encapsulated by AI personhood proposals); as the other (discernible from concerns that human beings are the decision subjects of AI applications); and as artificial (thereby concentrating on the artefactual characteristics of AI). Taken together, these form an overly narrow model of AI and unnecessarily constrain the palette of law and policy responses to both the challenges and opportunities presented by the technology. As a step towards rounding out law and policy responses to AI, with a view to providing greater societal resilience to, and preparedness for, technologically-induced disruption, we suggest a more integrated and open-minded approach in how we model AI: influence, where human behaviour is directed and manipulated; immersion, where the distinctions between physical and virtual realities dissolve; intensity, where realities and experiences can be sharpened, lengthened, or otherwise altered; integration, where the boundaries between AI and human are being blurred; and interaction, where feedback loops undermine notions of linearity and causality. These pivots suggest different types of human relationships with AI, drawing attention to the legal and policy implications of engaging in AI-influenced worlds. We will ground these conceptually driven policy framing pivots in examples involving harm. These will demonstrate how contemporary law and policy framings are overly narrow and too dependent on previous comforting pathways. We will suggest that further problem-finding endeavours will be necessary to ensure more robust and resilient law and policy responses to the challenges posed by AI.
AB - Law and policy discussions concerning the impact of artificial intelligence (AI) upon society are stagnating. By this, we mean that contemporary discussions adopt implicit assumptions in their approaches to AI, which presuppose the characteristics of entity, externality, and exclusivity. In other words, for law and policy purposes: AI is often treated as something (encapsulated by AI personhood proposals); as the other (discernible from concerns that human beings are the decision subjects of AI applications); and as artificial (thereby concentrating on the artefactual characteristics of AI). Taken together, these form an overly narrow model of AI and unnecessarily constrain the palette of law and policy responses to both the challenges and opportunities presented by the technology. As a step towards rounding out law and policy responses to AI, with a view to providing greater societal resilience to, and preparedness for, technologically-induced disruption, we suggest a more integrated and open-minded approach in how we model AI: influence, where human behaviour is directed and manipulated; immersion, where the distinctions between physical and virtual realities dissolve; intensity, where realities and experiences can be sharpened, lengthened, or otherwise altered; integration, where the boundaries between AI and human are being blurred; and interaction, where feedback loops undermine notions of linearity and causality. These pivots suggest different types of human relationships with AI, drawing attention to the legal and policy implications of engaging in AI-influenced worlds. We will ground these conceptually driven policy framing pivots in examples involving harm. These will demonstrate how contemporary law and policy framings are overly narrow and too dependent on previous comforting pathways. We will suggest that further problem-finding endeavours will be necessary to ensure more robust and resilient law and policy responses to the challenges posed by AI.
U2 - 10.1007/978-94-6265-523-2_27
DO - 10.1007/978-94-6265-523-2_27
M3 - Book chapter
SN - 9789462655225
T3 - Information Technology and Law Series
SP - 541
EP - 560
BT - Law and Artificial Intelligence
A2 - Custers, Bart
A2 - Fosch Villaronga, Eduard
PB - TMC Asser Press
ER -
ID: 286565873