Laypersons versus experienced surgeons in assessing simulated robot-assisted radical prostatectomy
Research output: Contribution to journal › Journal article › Research › peer-review
Standard
Laypersons versus experienced surgeons in assessing simulated robot-assisted radical prostatectomy. / Olsen, Rikke Groth; Konge, Lars; Hayatzaki, Khalilullah; Mortensen, Mike Allan; Bube, Sarah Hjartbro; Røder, Andreas; Azawi, Nessn; Bjerrum, Flemming.
In: World Journal of Urology, Vol. 41, No. 12, 2023, p. 3745-3751.
RIS
TY - JOUR
T1 - Laypersons versus experienced surgeons in assessing simulated robot-assisted radical prostatectomy
AU - Olsen, Rikke Groth
AU - Konge, Lars
AU - Hayatzaki, Khalilullah
AU - Mortensen, Mike Allan
AU - Bube, Sarah Hjartbro
AU - Røder, Andreas
AU - Azawi, Nessn
AU - Bjerrum, Flemming
N1 - Publisher Copyright: © 2023, The Author(s).
PY - 2023
Y1 - 2023
N2 - Background: Feedback is important for surgical trainees but it can be biased and time-consuming. We examined crowd-sourced assessment as an alternative to experienced surgeons’ assessment of robot-assisted radical prostatectomy (RARP). Methods: We used video recordings (n = 45) of three RARP modules on the RobotiX Simbionix simulator from a previous study in a blinded comparative assessment study. A group of crowd workers (CWs) and two experienced RARP surgeons (ESs) evaluated all videos with the modified Global Evaluative Assessment of Robotic Surgery (mGEARS). Results: One hundred forty-nine CWs performed 1490 video ratings. Internal consistency reliability was high (0.94). Inter-rater reliability and test–retest reliability were low for CWs (0.29 and 0.39) and moderate for ESs (0.61 and 0.68). In an Analysis of Variance (ANOVA) test, CWs could not discriminate between the skill level of the surgeons (p = 0.03–0.89), whereas ESs could (p = 0.034). Conclusion: We found very low agreement between the assessments of CWs and ESs when they assessed robot-assisted radical prostatectomies. As opposed to ESs, CWs could not discriminate between levels of surgical experience using the mGEARS ratings or when asked if they wanted the surgeons to perform their robotic surgery.
KW - Assessment
KW - Crowdsourcing
KW - Prostatectomy
KW - Robotic surgical procedures
KW - Surgical education
KW - Urology
U2 - 10.1007/s00345-023-04664-w
DO - 10.1007/s00345-023-04664-w
M3 - Journal article
C2 - 37882808
AN - SCOPUS:85174955604
VL - 41
SP - 3745
EP - 3751
JO - World Journal of Urology
JF - World Journal of Urology
SN - 0724-4983
IS - 12
ER -