Laypersons versus experienced surgeons in assessing simulated robot-assisted radical prostatectomy

Publication: Contribution to journal › Journal article › Research › Peer-reviewed

Documents

  • Full text

    Publisher's published version, 694 KB, PDF document

Background: Feedback is important for surgical trainees, but it can be biased and time-consuming. We examined crowd-sourced assessment as an alternative to experienced surgeons' assessment of robot-assisted radical prostatectomy (RARP).

Methods: In a blinded comparative assessment study, we used video recordings (n = 45) of three RARP modules on the RobotiX (Simbionix) simulator from a previous study. A group of crowd workers (CWs) and two experienced RARP surgeons (ESs) evaluated all videos with the modified Global Evaluative Assessment of Robotic Surgery (mGEARS).

Results: One hundred forty-nine CWs performed 1490 video ratings. Internal consistency reliability was high (0.94). Inter-rater reliability and test–retest reliability were low for CWs (0.29 and 0.39) and moderate for ESs (0.61 and 0.68). In an Analysis of Variance (ANOVA) test, CWs could not discriminate between the surgeons' skill levels (p = 0.03–0.89), whereas ESs could (p = 0.034).

Conclusion: We found very low agreement between the assessments of CWs and ESs when they assessed robot-assisted radical prostatectomies. In contrast to ESs, CWs could not discriminate between levels of surgical experience, either through mGEARS ratings or when asked whether they would want the surgeon to perform their own robotic surgery.
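The internal consistency figure reported above (0.94) is conventionally computed as Cronbach's alpha over an items-by-observations score matrix. The paper's exact computation pipeline is not given here, so the following is only a minimal illustrative sketch, assuming ratings are arranged as a videos-by-items matrix (rows = rated videos, columns = mGEARS items); the function name `cronbach_alpha` is our own.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a videos-by-items score matrix.

    alpha = k/(k-1) * (1 - sum of per-item variances / variance of totals),
    where k is the number of items (columns).
    """
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                        # number of items
    item_var = ratings.var(axis=0, ddof=1)      # per-item sample variance
    total_var = ratings.sum(axis=1).var(ddof=1) # variance of row totals
    return k / (k - 1) * (1 - item_var.sum() / total_var)

# Perfectly consistent items yield alpha = 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

Values near 1 (such as the 0.94 reported) indicate that the individual mGEARS items measure a common underlying construct.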

Original language: English
Journal: World Journal of Urology
Volume: 41
Issue number: 12
Pages (from-to): 3745-3751
Number of pages: 7
ISSN: 0724-4983
DOI
Status: Published - 2023

Bibliographic note

Funding Information:
We would like to thank the team members of the patient organization James Lind Care, the Danish patient community Forskningspanelet, and Benjamin Markersen and Rasmus Hjorth, who helped with the recruitment of CWs.

Publisher Copyright:
© 2023, The Author(s).
