Investigating Representation of Text and Audio in Educational VR using Learning Outcomes and EEG

Publication: Conference contribution › Paper › Research › peer-reviewed

Standard

Investigating Representation of Text and Audio in Educational VR using Learning Outcomes and EEG. / Baceviciute, Sarune; Mottelson, Aske; Terkildsen, Thomas Schjødt; Makransky, Guido.

2020. Paper presented at CHI 2020, Honolulu, Hawaii, USA.

Publication: Conference contribution › Paper › Research › peer-reviewed

Harvard

Baceviciute, S, Mottelson, A, Terkildsen, TS & Makransky, G 2020, 'Investigating Representation of Text and Audio in Educational VR using Learning Outcomes and EEG', Paper presented at CHI 2020, Honolulu, USA, 25/04/2020 - 30/04/2020. https://doi.org/10.1145/3313831.3376872

APA

Baceviciute, S., Mottelson, A., Terkildsen, T. S., & Makransky, G. (2020). Investigating Representation of Text and Audio in Educational VR using Learning Outcomes and EEG. Paper presented at CHI 2020, Honolulu, Hawaii, USA. https://doi.org/10.1145/3313831.3376872

Vancouver

Baceviciute S, Mottelson A, Terkildsen TS, Makransky G. Investigating Representation of Text and Audio in Educational VR using Learning Outcomes and EEG. 2020. Paper presented at CHI 2020, Honolulu, Hawaii, USA. https://doi.org/10.1145/3313831.3376872

Author

Baceviciute, Sarune ; Mottelson, Aske ; Terkildsen, Thomas Schjødt ; Makransky, Guido. / Investigating Representation of Text and Audio in Educational VR using Learning Outcomes and EEG. Paper presented at CHI 2020, Honolulu, Hawaii, USA. 13 p.

Bibtex

@conference{15f3e2ec7c314fb29cc0289315e4be10,
title = "Investigating Representation of Text and Audio in Educational VR using Learning Outcomes and EEG",
abstract = "This paper reports findings from a between-subjects experiment that investigates how different learning content representations in virtual environments (VE) affect the process and outcomes of learning. Seventy-eight participants were subjected to an immersive virtual reality (VR) application, where they received identical instructional information, rendered in three different formats: as text in an overlay interface, as text embedded semantically in a virtual book, or as audio. Learning outcome measures, self-reports, and an electroencephalogram (EEG) were used to compare conditions. Results show that reading was superior to listening for the learning outcomes of retention, self-efficacy, and extraneous attention. Reading text from a virtual book was reported to be less cognitively demanding, compared to reading from an overlay interface. EEG analyses show significantly lower theta and higher alpha activation in the audio condition. The findings provide important considerations for the design of educational VR environments.",
keywords = "Faculty of Social Sciences, Virtual reality, Educational Technology, Learning, Cognitive Load, EEG",
author = "Sarune Baceviciute and Aske Mottelson and Terkildsen, {Thomas Schj{\o}dt} and Guido Makransky",
year = "2020",
doi = "10.1145/3313831.3376872",
language = "English",
note = "CHI 2020 ; Conference date: 25-04-2020 Through 30-04-2020",
url = "https://chi2020.acm.org/",

}

RIS

TY - CONF

T1 - Investigating Representation of Text and Audio in Educational VR using Learning Outcomes and EEG

AU - Baceviciute, Sarune

AU - Mottelson, Aske

AU - Terkildsen, Thomas Schjødt

AU - Makransky, Guido

PY - 2020

Y1 - 2020

N2 - This paper reports findings from a between-subjects experiment that investigates how different learning content representations in virtual environments (VE) affect the process and outcomes of learning. Seventy-eight participants were subjected to an immersive virtual reality (VR) application, where they received identical instructional information, rendered in three different formats: as text in an overlay interface, as text embedded semantically in a virtual book, or as audio. Learning outcome measures, self-reports, and an electroencephalogram (EEG) were used to compare conditions. Results show that reading was superior to listening for the learning outcomes of retention, self-efficacy, and extraneous attention. Reading text from a virtual book was reported to be less cognitively demanding, compared to reading from an overlay interface. EEG analyses show significantly lower theta and higher alpha activation in the audio condition. The findings provide important considerations for the design of educational VR environments.

AB - This paper reports findings from a between-subjects experiment that investigates how different learning content representations in virtual environments (VE) affect the process and outcomes of learning. Seventy-eight participants were subjected to an immersive virtual reality (VR) application, where they received identical instructional information, rendered in three different formats: as text in an overlay interface, as text embedded semantically in a virtual book, or as audio. Learning outcome measures, self-reports, and an electroencephalogram (EEG) were used to compare conditions. Results show that reading was superior to listening for the learning outcomes of retention, self-efficacy, and extraneous attention. Reading text from a virtual book was reported to be less cognitively demanding, compared to reading from an overlay interface. EEG analyses show significantly lower theta and higher alpha activation in the audio condition. The findings provide important considerations for the design of educational VR environments.

KW - Faculty of Social Sciences

KW - Virtual reality

KW - Educational Technology

KW - Learning

KW - Cognitive Load

KW - EEG

U2 - 10.1145/3313831.3376872

DO - 10.1145/3313831.3376872

M3 - Paper

T2 - CHI 2020

Y2 - 25 April 2020 through 30 April 2020

ER -

ID: 237997762