Laypersons versus experienced surgeons in assessing simulated robot-assisted radical prostatectomy

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Laypersons versus experienced surgeons in assessing simulated robot-assisted radical prostatectomy. / Olsen, Rikke Groth; Konge, Lars; Hayatzaki, Khalilullah; Mortensen, Mike Allan; Bube, Sarah Hjartbro; Røder, Andreas; Azawi, Nessn; Bjerrum, Flemming.

In: World Journal of Urology, Vol. 41, No. 12, 2023, p. 3745-3751.


Harvard

Olsen, RG, Konge, L, Hayatzaki, K, Mortensen, MA, Bube, SH, Røder, A, Azawi, N & Bjerrum, F 2023, 'Laypersons versus experienced surgeons in assessing simulated robot-assisted radical prostatectomy', World Journal of Urology, vol. 41, no. 12, pp. 3745-3751. https://doi.org/10.1007/s00345-023-04664-w

APA

Olsen, R. G., Konge, L., Hayatzaki, K., Mortensen, M. A., Bube, S. H., Røder, A., Azawi, N., & Bjerrum, F. (2023). Laypersons versus experienced surgeons in assessing simulated robot-assisted radical prostatectomy. World Journal of Urology, 41(12), 3745-3751. https://doi.org/10.1007/s00345-023-04664-w

Vancouver

Olsen RG, Konge L, Hayatzaki K, Mortensen MA, Bube SH, Røder A et al. Laypersons versus experienced surgeons in assessing simulated robot-assisted radical prostatectomy. World Journal of Urology. 2023;41(12):3745-3751. https://doi.org/10.1007/s00345-023-04664-w

Author

Olsen, Rikke Groth ; Konge, Lars ; Hayatzaki, Khalilullah ; Mortensen, Mike Allan ; Bube, Sarah Hjartbro ; Røder, Andreas ; Azawi, Nessn ; Bjerrum, Flemming. / Laypersons versus experienced surgeons in assessing simulated robot-assisted radical prostatectomy. In: World Journal of Urology. 2023 ; Vol. 41, No. 12. pp. 3745-3751.

Bibtex

@article{4d10569ed16c496c9f67aed4cc26e950,
title = "Laypersons versus experienced surgeons in assessing simulated robot-assisted radical prostatectomy",
abstract = "Background: Feedback is important for surgical trainees but it can be biased and time-consuming. We examined crowd-sourced assessment as an alternative to experienced surgeons{\textquoteright} assessment of robot-assisted radical prostatectomy (RARP). Methods: We used video recordings (n = 45) of three RARP modules on the RobotiX, Simbionix simulator from a previous study in a blinded comparative assessment study. A group of crowd workers (CWs) and two experienced RARP surgeons (ESs) evaluated all videos with the modified Global Evaluative Assessment of Robotic Surgery (mGEARS). Results: One hundred forty-nine CWs performed 1490 video ratings. Internal consistency reliability was high (0.94). Inter-rater reliability and test–retest reliability were low for CWs (0.29 and 0.39) and moderate for ESs (0.61 and 0.68). In an Analysis of Variance (ANOVA) test, CWs could not discriminate between the skill level of the surgeons (p = 0.03–0.89), whereas ES could (p = 0.034). Conclusion: We found very low agreement between the assessments of CWs and ESs when they assessed robot-assisted radical prostatectomies. As opposed to ESs, CWs could not discriminate between surgical experience using the mGEARS ratings or when asked if they wanted the surgeons to perform their robotic surgery.",
keywords = "Assessment, Crowdsourcing, Prostatectomy, Robotic surgical procedures, Surgical education, Urology",
author = "Olsen, {Rikke Groth} and Lars Konge and Khalilullah Hayatzaki and Mortensen, {Mike Allan} and Bube, {Sarah Hjartbro} and Andreas R{\o}der and Nessn Azawi and Flemming Bjerrum",
note = "Publisher Copyright: {\textcopyright} 2023, The Author(s).",
year = "2023",
doi = "10.1007/s00345-023-04664-w",
language = "English",
volume = "41",
pages = "3745--3751",
journal = "World Journal of Urology",
issn = "0724-4983",
publisher = "Springer",
number = "12",
}

RIS

TY - JOUR

T1 - Laypersons versus experienced surgeons in assessing simulated robot-assisted radical prostatectomy

AU - Olsen, Rikke Groth

AU - Konge, Lars

AU - Hayatzaki, Khalilullah

AU - Mortensen, Mike Allan

AU - Bube, Sarah Hjartbro

AU - Røder, Andreas

AU - Azawi, Nessn

AU - Bjerrum, Flemming

N1 - Publisher Copyright: © 2023, The Author(s).

PY - 2023

Y1 - 2023

N2 - Background: Feedback is important for surgical trainees but it can be biased and time-consuming. We examined crowd-sourced assessment as an alternative to experienced surgeons’ assessment of robot-assisted radical prostatectomy (RARP). Methods: We used video recordings (n = 45) of three RARP modules on the RobotiX, Simbionix simulator from a previous study in a blinded comparative assessment study. A group of crowd workers (CWs) and two experienced RARP surgeons (ESs) evaluated all videos with the modified Global Evaluative Assessment of Robotic Surgery (mGEARS). Results: One hundred forty-nine CWs performed 1490 video ratings. Internal consistency reliability was high (0.94). Inter-rater reliability and test–retest reliability were low for CWs (0.29 and 0.39) and moderate for ESs (0.61 and 0.68). In an Analysis of Variance (ANOVA) test, CWs could not discriminate between the skill level of the surgeons (p = 0.03–0.89), whereas ES could (p = 0.034). Conclusion: We found very low agreement between the assessments of CWs and ESs when they assessed robot-assisted radical prostatectomies. As opposed to ESs, CWs could not discriminate between surgical experience using the mGEARS ratings or when asked if they wanted the surgeons to perform their robotic surgery.

AB - Background: Feedback is important for surgical trainees but it can be biased and time-consuming. We examined crowd-sourced assessment as an alternative to experienced surgeons’ assessment of robot-assisted radical prostatectomy (RARP). Methods: We used video recordings (n = 45) of three RARP modules on the RobotiX, Simbionix simulator from a previous study in a blinded comparative assessment study. A group of crowd workers (CWs) and two experienced RARP surgeons (ESs) evaluated all videos with the modified Global Evaluative Assessment of Robotic Surgery (mGEARS). Results: One hundred forty-nine CWs performed 1490 video ratings. Internal consistency reliability was high (0.94). Inter-rater reliability and test–retest reliability were low for CWs (0.29 and 0.39) and moderate for ESs (0.61 and 0.68). In an Analysis of Variance (ANOVA) test, CWs could not discriminate between the skill level of the surgeons (p = 0.03–0.89), whereas ES could (p = 0.034). Conclusion: We found very low agreement between the assessments of CWs and ESs when they assessed robot-assisted radical prostatectomies. As opposed to ESs, CWs could not discriminate between surgical experience using the mGEARS ratings or when asked if they wanted the surgeons to perform their robotic surgery.

KW - Assessment

KW - Crowdsourcing

KW - Prostatectomy

KW - Robotic surgical procedures

KW - Surgical education

KW - Urology

U2 - 10.1007/s00345-023-04664-w

DO - 10.1007/s00345-023-04664-w

M3 - Journal article

C2 - 37882808

AN - SCOPUS:85174955604

VL - 41

SP - 3745

EP - 3751

JO - World Journal of Urology

JF - World Journal of Urology

SN - 0724-4983

IS - 12

ER -