Background: Videos of clinical interventions (VoCIs) demonstrating surgical and interventional procedures have become a mainstay in clinical practice and the peer-reviewed academic literature. Despite the widespread availability of VoCIs in the literature, there remain no established guidelines regarding the reporting of VoCIs. We undertook a scoping review to investigate the current utilisation, application, and quality of VoCI reporting. Summary: A comprehensive literature search of the Medline, Embase, Emcare, and CINAHL databases was performed to retrieve articles presenting VoCIs from January 2020 to December 2023. A customised data extraction tool assessed video characteristics (e.g., case presentation, outcomes), utility (e.g., target audience, reproducibility of the procedure), and quality (subjective and objective). A total of 624 VoCIs were included (mean length 06:06), with over 62 h of VoCI reviewed. The most common VoCI perspectives were endoscopic (n = 153; 25%) and laparoscopic (n = 140; 22%). The clinical background and outcomes were described in 480 (76.9%) and 403 cases (64.6%), respectively, with disclosures (n = 23; 3.8%) rarely presented. VoCIs primarily targeted trainees (n = 547; 87.7%), with most videos providing technical guidance (n = 394; 63.1%). In total, 248 videos (40%) were rated as medium or low quality on subjective assessment. Key Messages: There is significant heterogeneity and notably poor quality control in VoCI reporting in the peer-reviewed literature, resulting in the omission of critical procedural steps and suboptimal visual quality. VoCI reporting guidelines are therefore urgently required to provide a set of minimum items that should be reported by clinicians when uploading VoCIs.

Video technology has become an integral part of modern society and medical practice [1, 2]. Following the SARS-CoV-2 pandemic, patients, clinicians, and students have become comfortable embracing the multifaceted applications of video technology [3‒5]. For surgeons and proceduralists (including interventional radiologists and endoscopists), the concurrent advancement of minimally invasive techniques and video technology has led to a rapid rise in the publication of videos of clinical interventions (VoCIs) [6, 7]. However, despite their prominence, the utilisation of VoCIs remains controversial [8].

The benefits are clear. For patients, recordings offer a new avenue for engagement and improved collaborative decision-making [9, 10]. However, numerous studies have raised concerns about the quality and accuracy of surgical videos available to patients on popular online platforms such as “YouTube” [11‒13]. For practitioners, VoCIs have varied applications. As “black box” recordings, VoCIs ensure quality control, permit peer-review, and act as medico-legal documentation [14]. Live-streamed VoCIs provide the opportunity for telemedicine and intraprocedural support from colleagues and experts, as well as interdisciplinary collaboration [13]. These innovations could enhance technical performance, reduce adverse events, and improve patient outcomes [15‒17]. In the era of big data, VoCIs could help facilitate the implementation of artificial intelligence (AI) in theatres through automated procedural analysis or enhanced diagnostics [7]. Understandably, some practitioners fear that VoCIs could be used to punish or coerce surgeons, while others raise ethical and patient privacy concerns [14, 18].

VoCIs are a powerful adjunct to surgical education and training [19]. Visualising an operative procedure from the surgeon’s perspective provides essential information on anatomy and technique that facilitates accurate replication [20]. Recent innovative VoCI platforms, such as Touch Surgery and Proximie, have been shown to improve surgical performance [21, 22]. Interestingly, despite the availability of these platforms, many surgeons and trainees still use free online video-sharing sites such as “YouTube” or “X” [23].

Despite the benefits of VoCIs, the quality of procedural videos in peer-reviewed journals and public forums varies [24]. Omission of critical procedural steps, poor visual and audio quality, and a lack of outcome data restrict the potential scholastic benefits. VoCIs may demonstrate unsafe or misleading practices that may not be immediately recognised, especially by novice trainees [25]. It is therefore surprising that there are no established evidence-based recommendations for the application of televisual or presentation tools for craft specialties in clinical practice. Additionally, there is no established consensus or guideline among international societies regarding the reporting of VoCIs.

We therefore sought to investigate the utilisation, application, and quality of VoCIs in the peer-reviewed literature through a scoping review. This scoping review aimed to understand how VoCIs are currently used and how they are intended to help their target audience, and to determine overall video and audio quality. This in turn could provide the basis for future reporting guidelines on the peer-reviewed publication of VoCIs, to achieve high-quality educational videos that will enhance surgical training and communication.

Protocol

An a priori scoping review protocol, based on internationally accepted guidelines, was developed by the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) registered collaborative group “Standards for Presenting and Reporting clinical InterveNtions Televisually” (SPRINT) [26‒28]. In keeping with established practice for scoping reviews, minor amendments to the protocol were made as the study progressed. Each change was discussed and agreed by the SPRINT team before its introduction.

The protocol is available via the Open Science Framework (osf.io/an2jt/). This manuscript has been written following the Preferred Reporting Items for Systematic reviews and Meta-analyses extension for Scoping Review (PRISMA-ScR) Checklist [29].

Search Strategy

With the assistance of a specialist medical librarian, a comprehensive literature search of the Medline, Embase, Emcare, and CINAHL databases was performed (online suppl. Table S1; for all online suppl. material, see https://doi.org/10.1159/000545224). Specific search equations were formulated using relevant Medical Subject Headings terms, including video, video-based surgical learning, MIS, minimal access surgery, laparoscopy, robotic surgery, robot-assisted surgery, endoscopy, training, and education. We retrieved articles published in the English language between 1 January 2000 and 1 December 2023 that presented VoCIs.

Study Selection

A two-stage screening process using “title and abstract screening” and “full-text review” was performed by four independent reviewers (L.O.K.A., A.A., A.A., and O.A.) against the inclusion and exclusion criteria below, arriving at a final list of articles containing VoCIs. Any disagreement was resolved by an independent reviewer (H.D.R.). This process was undertaken using the Covidence systematic review software [30]. Surgical videos from publicly available platforms, such as YouTube, were not utilised because their user-dependent algorithms for content discovery were thought to introduce significant bias, limiting the relevance of the review’s findings. Furthermore, numerous studies have already extensively analysed the relatively poor quality and content of VoCIs on social media platforms [11‒13].

Following the initial study selection process, the research team agreed to exclude studies published before 1 January 2020. As highlighted in existing literature, the SARS-CoV-2 pandemic swiftly changed existing attitudes to the use of VoCI. Therefore, this was deemed an appropriate date to reflect the cultural change in the use of VoCI. A methodological quality analysis for exclusion was not performed, as this may have substantially restricted the scope of the review.

Inclusion Criteria

The following were the inclusion criteria:

  • Video of recorded procedure including surgical, endoscopic, or interventional radiology.

  • Procedure performed on living human.

  • Audio or text in English.

  • Presented in peer-reviewed format with a target audience of clinical professionals or medical students.

Exclusion Criteria

Exclusion criteria were as follows:

  • Entire video not accessible.

  • Simulated procedure or procedure on non-human subject.

  • Non-peer-reviewed publication with target audience not applicable to clinical professionals.

Data Extraction

After study selection, the four independent reviewers (L.O.K.A., A.A., A.A., and O.A.) extracted data using a pre-established data charting tool to enable quantitative and qualitative assessment. The data charting tool was built around three domains:

  1. Video characteristics: first author, year, country, specialty, procedure, video format and duration, case presentation and outcome data (e.g., oncological/complications), presence of audio or text commentary, and availability of subtitles and disclosures.

  2. Video utility: thematic analysis of the purpose of the video and the intended target audience. Reviewers were asked to assess whether they felt the intended audience would be able to reproduce the demonstrated skill or procedure.

  3. Video quality: subjective (low/medium/high) and objective measurements (using extracted audio and visual metadata). Reviewers were shown examples of each subjective measurement prior to data extraction.
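For illustration, the objective quality binning in the third domain can be sketched in Python (the review’s own analysis used RStudio). The thresholds, tier labels, and function names below are our assumptions, as the exact binning rules are not stated; the resolution and sample-rate values would come from extracted metadata (e.g., obtained with a tool such as ffprobe).

```python
# Illustrative thresholds only; the review does not state its exact binning rules.
SD_HEIGHTS = (270, 360, 480, 540)  # standard-definition tiers reported in Table 1

def video_quality_label(height_px):
    """Bin a video's vertical resolution (pixels) into the objective
    video-quality categories reported in Table 1."""
    if height_px is None:
        return "Not available"       # metadata could not be extracted
    if height_px >= 1440:
        return "2K"                  # 2K and above
    if height_px >= 720:
        return "HD"                  # 720p/1080p counted as high definition
    # snap sub-HD heights to the nearest standard-definition tier
    nearest = min(SD_HEIGHTS, key=lambda h: abs(h - height_px))
    return f"SD ({nearest})"

def audio_quality_label(sample_rate_hz):
    """Report the audio sample-rate category, as in Table 1."""
    if sample_rate_hz is None:
        return "Not available"
    return f"{sample_rate_hz:,} Hz"
```

A video with 1080-pixel height would thus be labelled "HD", and a 48 kHz audio track "48,000 Hz", matching the category labels used in the tables below.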

Subgroup Analysis

For additional analysis, craft specialities were divided into the following categories: (1) gastrointestinal (oesophagogastric surgery, bariatric surgery, colorectal surgery, gastroenterology, paediatric and hepatobiliary surgery); (2) head and neck (oral and maxillofacial surgery, ear, nose and throat, ophthalmology, neurosurgery, and endocrine surgery); (3) soft tissue/extremity (orthopaedic and plastic surgery); (4) urogenital (obstetrics and gynaecology and urology); (5) cardiovascular (cardiology, vascular and cardiothoracic surgery). Data were extracted according to video characteristics, utility, and quality for each of the subgroup categories. These characteristics were compared between the five categories using Kruskal-Wallis rank-sum test for continuous variables, and Fisher’s exact test for categorical variables where appropriate (p values of ≤0.05 were considered statistically significant) using RStudio.
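As a minimal sketch of the two tests named above (the original analysis was performed in RStudio), the standard-library Python below computes a two-sided Fisher’s exact p value for a 2 × 2 table and the Kruskal-Wallis H statistic. Function names are illustrative; the H statistic omits the tie correction and would be referred to a chi-squared distribution with k − 1 degrees of freedom to obtain a p value.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p value for the 2 x 2 table
    [[a, b], [c, d]]: sum hypergeometric probabilities of all tables
    (with the same margins) no more likely than the one observed."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):  # P(top-left cell == x) under fixed margins
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs + 1e-12)

def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic across k groups (tied values share
    average ranks; tie correction omitted for brevity)."""
    pooled = sorted((v, g) for g, grp in enumerate(groups) for v in grp)
    rank_sums = [0.0] * len(groups)
    counts = [len(grp) for grp in groups]
    i, n = 0, len(pooled)
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1                    # run of tied values: indices i..j-1
        avg_rank = (i + 1 + j) / 2    # mean of ranks i+1 .. j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    return 12 / (n * (n + 1)) * sum(
        r * r / c for r, c in zip(rank_sums, counts)) - 3 * (n + 1)
```

For two fully separated groups of three observations each, for example, `kruskal_wallis_h([[1, 2, 3], [4, 5, 6]])` yields H = 27/7 ≈ 3.86.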

The initial search identified 5,519 peer-reviewed articles. After screening, these were reduced to 2,987 studies. Following an adjustment to the agreed date of inclusion, 823 articles underwent full-text review. A total of 199 videos were excluded (no access to video, n = 171; not VoCI, n = 16; duplicates, n = 12). A total of 624 peer-reviewed VoCIs (mean video length 06:06, range 00:36–24:44) were included in the final analysis (Fig. 1). In total, over 62 h of VoCI were reviewed.

Fig. 1.

PRISMA flow diagram.


Video Characteristics

Of the 624 included peer-reviewed articles, the majority (n = 596; 95.5%) contained a single video, most commonly in MP4 format (n = 451; 79%) (Table 1). The remaining articles (n = 28; 4.5%) contained between two and six VoCIs. The most frequent specialities identified were neurosurgery (n = 211; 33.8%), colorectal (n = 147; 23.6%), obstetrics and gynaecology (n = 82; 13.1%), and gastroenterology (n = 72; 11.5%). The most common perspectives in VoCIs were endoscopic (n = 153; 24.5%), laparoscopic (n = 140; 22.4%), multiple views (e.g., open and endoscopic) (n = 136; 21.8%), and open/first person (n = 110; 17.6%). The clinical background to the case or intervention was described in 480 cases (76.9%), and clinical outcome data were reported in 403 cases (64.6%). Subtitles paired with audio (n = 91; 14.6%) and disclosures (n = 23; 3.8%) were rarely presented.

Table 1.

Characteristics of included videos

VoCI | Videos, n (%)
Perspective 
 Cranioscopic 1 (0.2) 
 Endoscopic 153 (25) 
 Laparoscopic 140 (22) 
 Multiple 136 (22) 
 Open 110 (18) 
 Robotic 7 (1.1) 
 Other 77 (12) 
Video format 
 AVI 2 (0.4) 
 FLV 1 (0.2) 
 M4V 4 (0.7) 
 MMC 12 (2.1) 
 MOV 13 (2.3) 
 MP4 451 (79) 
 MP5 1 (0.2) 
 WMV 2 (0.4) 
 Other 84 (15) 
Purpose 
 Device demonstration 35 (5.6) 
 Educational technique 394 (63) 
 Educational topic 87 (14) 
 Research discussion 1 (0.2) 
 Unique case 107 (17) 
Audience 
 Medical students 4 (0.6) 
 Senior clinician 73 (12) 
 Training clinician 547 (88) 
Reproducible 
 No 107 (19) 
 Yes 414 (73) 
 Not applicable 47 (8.3) 
Objective video quality 
 2K 4 (0.6) 
 HD 273 (44) 
 SD (270) 14 (2.2) 
 SD (360) 30 (4.8) 
 SD (480) 42 (6.7) 
 SD (540) 63 (10) 
 Not available 198 (32) 
Subjective video quality 
 High 376 (60) 
 Low 40 (6.4) 
 Medium 208 (33) 
Audio quality 
 32,000 Hz 1 (0.2) 
 44,100 Hz 101 (18) 
 48,000 Hz 255 (47) 
 Not available 191 (35) 
Subjective audio quality 
 High 380 (63) 
 Low 54 (9.0) 
 Medium 151 (25) 
 Not applicable 18 (3.0) 

Video Utility

The VoCIs in the peer-reviewed literature primarily targeted trainees (n = 547; 87.7%), with a smaller proportion tailored to experts in the field (n = 73; 11.7%) and medical students (n = 4; 0.6%). On thematic analysis, over half of VoCIs provided technical guidance (n = 394; 63.1%). The remaining videos demonstrated unique cases (n = 104; 16.7%), educational topics (n = 87; 13.9%), or device demonstrations (n = 35; 5.6%). When a VoCI demonstrated a new technique, skill, or device, reviewers deemed it sufficiently detailed to be reproducible in 414 cases (73%).

Video Quality

A total of 376 cases (60%) were rated as high quality on subjective assessment, with 40 (6.4%) found to be of low quality. Metadata for objective video assessment could not be extracted for 32% (n = 198) of VoCIs. When available, most VoCIs were high-definition video at 720p (n = 273; 44%). Subjective audio quality, when present, had a similar distribution: the audio quality was high in 380 cases (63%) and low in 54 cases (9.2%). Objective audio metadata were similarly difficult to extract and were unavailable in 35% of VoCIs with audio. Extracted audio was mostly sampled at 48,000 Hz (n = 255; 47%) or 44,100 Hz (n = 101; 18%).

Subgroup Analysis

In total, there were 262, 230, 5, 105, and 22 VoCIs in the gastrointestinal, head and neck, soft tissue/extremity, urogenital, and cardiovascular categories, respectively (Table 2). Significant differences were found between the categories for all variables (p < 0.001 for all except disclosures, p = 0.009) (Table 3). Video length was greatest in the urogenital (median 7.05 min, interquartile range 5.85–8.00 min) and head and neck (median 6.76 min, interquartile range 4.24–9.25 min) categories (Fig. 2). Audio commentary and on-screen text were most frequently presented in the head and neck (n = 217; 94%, and n = 219; 95%) and urogenital (n = 98; 93%, and n = 100; 95%) categories, respectively (Fig. 3). Similarly, the case background and outcomes were more commonly reported in the head and neck (n = 206; 90%, and n = 196; 85%) and urogenital (n = 84; 80%, and n = 69; 66%) categories, respectively.

Table 2.

Subgroup categorisation

Specialty | Category 1 – gastrointestinal, N = 262 | Category 2 – head and neck, N = 230 | Category 3 – soft tissue/extremity, N = 5 | Category 4 – urogenital, N = 105 | Category 5 – cardiovascular, N = 22
Upper GI surgery 23 (8.8%) 0 (0%) 0 (0%) 0 (0%) 0 (0%) 
Cardiology 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2 (9.1%) 
Cardiothoracic surgery 0 (0%) 0 (0%) 0 (0%) 0 (0%) 14 (64%) 
Colorectal surgery 147 (56%) 0 (0%) 0 (0%) 0 (0%) 0 (0%) 
Endocrine surgery 0 (0%) 3 (1.3%) 0 (0%) 0 (0%) 0 (0%) 
Otolaryngology 0 (0%) 14 (6.1%) 0 (0%) 0 (0%) 0 (0%) 
Gastroenterology 72 (27%) 0 (0%) 0 (0%) 0 (0%) 0 (0%) 
Hepatobiliary surgery 16 (6.1%) 0 (0%) 0 (0%) 0 (0%) 0 (0%) 
Neurosurgery 0 (0%) 211 (92%) 0 (0%) 0 (0%) 0 (0%) 
Obstetrics and gynaecology 0 (0%) 0 (0%) 0 (0%) 82 (78%) 0 (0%) 
Ophthalmology 0 (0%) 1 (0.4%) 0 (0%) 0 (0%) 0 (0%) 
Oral and maxillofacial surgery 0 (0%) 1 (0.4%) 0 (0%) 0 (0%) 0 (0%) 
Orthopaedic surgery 0 (0%) 0 (0%) 1 (20%) 0 (0%) 0 (0%) 
Paediatric surgery 4 (1.5%) 0 (0%) 0 (0%) 0 (0%) 0 (0%) 
Plastic surgery 0 (0%) 0 (0%) 4 (80%) 0 (0%) 0 (0%) 
Urology 0 (0%) 0 (0%) 0 (0%) 23 (22%) 0 (0%) 
Vascular surgery 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6 (27%) 
Table 3.

Subgroup analysis

Variable | Category 1 – gastrointestinal, N = 262¹ | Category 2 – head and neck, N = 230¹ | Category 3 – soft tissue/extremity, N = 5¹ | Category 4 – urogenital, N = 105¹ | Category 5 – cardiovascular, N = 22¹ | p value²
Video length, min 4.88 (3.16, 7.17) 6.76 (4.24, 9.25) 3.95 (2.35, 3.95) 7.05 (5.85, 8.00) 5.98 (2.69, 7.78) <0.001 
Audio commentary 186 (71%) 217 (94%) 3 (60%) 98 (93%) 14 (64%) <0.001 
Subtitles 21 (8.0%) 62 (27%) 0 (0%) 6 (5.7%) 2 (9.1%) <0.001 
Text on screen 209 (80%) 219 (95%) 4 (80%) 100 (95%) 17 (77%) <0.001 
Disclosures 18 (6.9%) 2 (0.9%) 0 (0%) 3 (2.9%) 0 (0%) 0.009 
Background 174 (66%) 206 (90%) 3 (60%) 84 (80%) 13 (59%) <0.001 
Outcomes 123 (47%) 196 (85%) 3 (60%) 69 (66%) 12 (55%) <0.001 

¹Median (IQR); n (%).

²Kruskal-Wallis rank-sum test; Fisher’s exact test.

Fig. 2.

Subgroup video length.

Fig. 3.

Subgroup analysis.


Table 4 summarises the video characteristics, utility, and quality for each specialty category. Demonstration of an educational technique was the most common purpose regardless of category. Video reproducibility was highest in the urogenital category (n = 80; 82%), followed by head and neck (n = 154; 74%). The proportion of videos rated subjectively as high quality ranged from 50 to 68% for video and from 50 to 80% for audio across the five categories.

Table 4.

Subgroup characteristics and analysis

Variable | Category 1 – gastrointestinal, N = 262¹ | Category 2 – head and neck, N = 230¹ | Category 3 – soft tissue/extremity, N = 5¹ | Category 4 – urogenital, N = 105¹ | Category 5 – cardiovascular, N = 22¹
Perspective 
 Cranioscopic 0 (0%) 1 (0.4%) 0 (0%) 0 (0%) 0 (0%) 
 Endoscopic 114 (44%) 36 (16%) 0 (0%) 3 (2.9%) 0 (0%) 
 Laparoscopic 89 (34%) 4 (1.7%) 0 (0%) 44 (42%) 3 (14%) 
 Multiple 44 (17%) 60 (26%) 0 (0%) 24 (23%) 8 (36%) 
 Open 11 (4.2%) 60 (26%) 5 (100%) 28 (27%) 6 (27%) 
 Other 0 (0%) 69 (30%) 0 (0%) 3 (2.9%) 5 (23%) 
 Robotic 4 (1.5%) 0 (0%) 0 (0%) 3 (2.9%) 0 (0%) 
Video format 
 AVI 2 (0.8%) 0 (0%) 0 (0%) 0 (0%) 0 (0%) 
 FLV 1 (0.4%) 0 (0%) 0 (0%) 0 (0%) 0 (0%) 
 M4V 3 (1.2%) 0 (0%) 0 (0%) 1 (1.0%) 0 (0%) 
 MMC 3 (1.2%) 5 (2.5%) 0 (0%) 3 (3.0%) 1 (5.3%) 
 MOV 12 (4.8%) 1 (0.5%) 0 (0%) 0 (0%) 0 (0%) 
 MP4 217 (88%) 128 (65%) 5 (100%) 87 (86%) 14 (74%) 
 MP5 1 (0.4%) 0 (0%) 0 (0%) 0 (0%) 0 (0%) 
 Other 7 (2.8%) 63 (32%) 0 (0%) 10 (9.9%) 4 (21%) 
 WMV 2 (0.8%) 0 (0%) 0 (0%) 0 (0%) 0 (0%) 
Purpose 
 Device demonstration 24 (9.2%) 6 (2.6%) 0 (0%) 2 (1.9%) 3 (14%) 
 Educational technique 163 (62%) 134 (58%) 4 (80%) 82 (78%) 11 (50%) 
 Educational topic 32 (12%) 43 (19%) 1 (20%) 8 (7.6%) 3 (14%) 
 Research discussion 1 (0.4%) 0 (0%) 0 (0%) 0 (0%) 0 (0%) 
 Unique case 42 (16%) 47 (20%) 0 (0%) 13 (12%) 5 (23%) 
Audience 
 Medical students 1 (0.4%) 1 (0.4%) 0 (0%) 2 (1.9%) 0 (0%) 
 Senior clinician 33 (13%) 26 (11%) 1 (20%) 9 (8.6%) 4 (18%) 
 Training clinician 228 (87%) 203 (88%) 4 (80%) 94 (90%) 18 (82%) 
Reproducible 
 No 47 (20%) 43 (21%) 1 (20%) 14 (14%) 2 (11%) 
 NA 28 (12%) 10 (4.8%) 1 (20%) 4 (4.1%) 4 (22%) 
 Yes 165 (69%) 154 (74%) 3 (60%) 80 (82%) 12 (67%) 
Objective video quality 
 2K 3 (1.1%) 1 (0.4%) 0 (0%) 0 (0%) 0 (0%) 
 HD 139 (53%) 82 (36%) 2 (40%) 42 (40%) 8 (36%) 
 Not available 49 (19%) 89 (39%) 1 (20%) 49 (47%) 10 (45%) 
 SD (270) 1 (0.4%) 13 (5.7%) 0 (0%) 0 (0%) 0 (0%) 
 SD (360) 16 (6.1%) 13 (5.7%) 0 (0%) 0 (0%) 1 (4.5%) 
 SD (480) 25 (9.5%) 10 (4.3%) 1 (20%) 6 (5.7%) 0 (0%) 
 SD (540) 29 (11%) 22 (9.6%) 1 (20%) 8 (7.6%) 3 (14%) 
Subjective video quality 
 High 173 (66%) 116 (50%) 3 (60%) 71 (68%) 13 (59%) 
 Low 21 (8.0%) 16 (7.0%) 0 (0%) 2 (1.9%) 1 (4.5%) 
 Medium 68 (26%) 98 (43%) 2 (40%) 32 (30%) 8 (36%) 
Audio quality 
 32,000 Hz 1 (0.4%) 0 (0%) 0 (0%) 0 (0%) 0 (0%) 
 44,100 Hz 60 (25%) 27 (14%) 0 (0%) 13 (14%) 1 (5.6%) 
 48,000 Hz 131 (55%) 76 (40%) 3 (60%) 39 (41%) 6 (33%) 
 NA 47 (20%) 88 (46%) 2 (40%) 43 (45%) 11 (61%) 
Subjective audio quality 
 High 146 (59%) 151 (66%) 4 (80%) 69 (66%) 10 (50%) 
 Low 43 (17%) 6 (2.6%) 0 (0%) 5 (4.8%) 0 (0%) 
 Medium 46 (19%) 69 (30%) 0 (0%) 28 (27%) 8 (40%) 
 NA 11 (4.5%) 2 (0.9%) 1 (20%) 2 (1.9%) 2 (10%) 

¹n (%).

To the best of our knowledge, this is the first scoping review to explore the modern utilisation, application, and quality of VoCIs in the peer-reviewed literature. With over 600 articles and 60 h of VoCIs reviewed, we have presented an extensive and broad assessment of current VoCI applications and standards. From the elements of the videos, we identified the nascent concepts in this area, which comprise the acronym “TEACHES” and which we have applied to characterise the VoCIs:

  • Technical

  • Ethics and consent

  • Accessibility

  • Content, accuracy, and relevance

  • How teaching is achieved

  • Evaluation and feedback

  • Suitability.

Technical

There was considerable consistency in the technical aspects of the studied videos, including pixel definition and formatting. Unfortunately, objective assessment of audio quality was limited by the poor availability of suitable metadata. Despite relatively positive objective technical findings, the subjective findings indicate a high prevalence of poor-quality VoCIs. Although each video had undergone peer-review, 40% of VoCIs were rated low or medium on visual quality assessment and 34% low or medium on audio assessment. Inappropriate lighting, poor camera work, and frequently incomprehensible audio were noted. Additionally, subtitles, which would have been particularly beneficial for viewers watching videos in a non-native language, were often missing. This highlights a fundamental lack of quality control and significant heterogeneity in VoCI reporting in the peer-reviewed literature, mirroring the existing literature on VoCIs available on commercial internet platforms such as YouTube [23, 25, 30‒32].

Ethics and Consent

It is broadly understood that failing to state conflicts of interest raises the risk of bias and can ultimately lead to patient harm [33]. It was therefore particularly alarming that just 3% of videos contained relevant disclosures. Conflicts of interest should be routinely disclosed, especially when a procedure is performed with dedicated equipment. Notably, the ethical statements and consent requirements of each journal varied significantly, and this inconsistency may have contributed to inadequate conflict-of-interest reporting.

Accessibility

Of the 823 videos brought forward to full-text screening, 21% were excluded due to practical inaccessibility, owing to institutional or industry access restrictions, password protection, costs to view, or embedded videos no longer available in the source material. VoCIs are a powerful tool with widespread applications, and it is essential that medical educational sources are easily accessible to clinicians for online learning [34]. It may therefore be sensible for journals to host data repositories for the storage of published VoCIs.

Content, Accuracy, and Relevance

As would be expected, the majority of VoCIs published within journals are educational, demonstrating either an emerging procedural technique or a unique case. The most widely publicised and early-adopting specialities include gastrointestinal and head and neck surgery. While the purpose of published VoCIs is generally uniform, their content and accuracy are varied. The name of the procedure performed, the nature of the pathology, and the case presentation were not universally specified. In addition, the position of the patient and of the surgical/anaesthetic teams in theatre, along with port placement, was often missing. The scoping review also revealed a lack of outcome data, whether time-based metrics or complications. Furthermore, it was evident that very few VoCIs mentioned patient participation or input. Patient advocates or members of the public, who have a strong interest in surgical training and its impact on patient outcomes, may be able to provide a valuable patient-centred perspective on maintaining patient confidentiality and enhancing transparency during the video development process.

Interestingly, the accuracy and reporting of relevant VoCI features were heterogeneous between surgical specialities. In general, head and neck and urogenital categories reported the case background and clinical outcome data with a greater frequency than the other specialities, as well as a higher use of audio commentary and text on the screen. This may suggest more stringent publication guidelines in their journals, which is beneficial for the viewer.

The prominence of poorly reported clinical details in VoCIs, even in the published literature, is surprising and concerning. As shown by de’Angelis et al. [34], trainees are less likely than senior colleagues to identify surgical errors. As the vast majority of VoCIs identified in this review are intended to be educational and targeted at trainees, this raises the alarming possibility of junior clinicians adopting dangerous practices, with inadequately reported clinical outcomes, from what should be reputable sources. Previous attempts have been made to improve and standardise the quality of surgical VoCIs. For example, the LAParoscopic Video Educational GuidelineS (LAP-VEGaS) video assessment tool was designed in 2021 to improve the standard of laparoscopic educational videos [35]. Despite its publication, the findings of this scoping review and those in the existing literature [36], including work from within our own group [37], demonstrate an ongoing unmet need to establish universal, multidisciplinary guidelines on the reporting of VoCIs to ensure that the content of published videos is of the highest quality and accuracy.

How Teaching Is Achieved

The vast majority of the surgical VoCIs identified used simple narration or on-screen text to inform and guide their audience. While this technique has been shown to benefit procedural trainees [38], there was little teaching innovation within the included dataset. The majority of VoCIs were designed for asynchronous teaching (i.e., similar to recorded lectures), and few adopted more recent approaches such as gamification, blended learning, or synchronous teaching. Emerging technologies such as Proximie and Touch Surgery provide interactive VoCIs for trainees, with some research demonstrating superior educational qualities [39, 40]. Arguably, there is a currently unexplored opportunity within journals for more advanced educational videos, perhaps using interactive elements. Furthermore, 3D virtual and augmented reality technologies are yet to be utilised in the published literature. Again, this would be an exciting and unique opportunity for publishers of educational material to exploit.

Evaluation and Feedback

Despite the primary goal of published VoCI being to educate, there was little opportunity for viewers to evaluate or provide feedback to the authors. The only options available to viewers are to contact authors directly or to submit a letter to the editor; in general, these approaches could be considered laborious for most training proceduralists. Publishers could perhaps consider a more innovative and direct rating scale on their platforms. Feedback can be provided in a variety of forms, including constructive or corrective feedback, and is an essential tool for learning and developing performance in surgical education [41]. It could also help encourage viewer engagement and improve the overall quality of published VoCIs.

Suitability

Overall, included VoCIs were largely felt to be suitable for their intended audience. The majority (88%) were designed for training clinicians, and 73% of VoCI provided enough detail for the procedure or case to be reproducible. However, this still leaves 27% of published VoCI within this review providing insufficient information and detail for the procedure to be reproduced, again highlighting the substantial inconsistency in published VoCI quality.

Following the above findings, and to combat the deleterious effects of heterogeneous and poor-quality VoCI reporting, the SPRINT group has been established. This group aims to derive a set of minimum items that should be reported by clinicians when uploading VoCI. SPRINT, which has been listed in the EQUATOR Network registry, will encourage high-quality educational videos that enhance surgical training and communication, provide an international benchmark to facilitate video reporting, and improve patient outcomes and safety. The SPRINT checklist will be versatile and can be adopted by several specialties and key stakeholders, including authors, peer reviewers, and journal editors.

In addition, the increasing number of published operative videos provides a rich global dataset for AI algorithms; it is therefore essential that all public videos are of the highest possible standard [41]. The establishment of SPRINT is therefore timely, as it may be essential to ensuring surgical safety in the age of evolving surgical AI.

Strengths and Limitations

To our knowledge, this is the first systematic scoping review to explore VoCI reporting in peer-reviewed literature, excluding non-peer-reviewed sources (e.g., social media and YouTube). The authors decided not to include non-peer-reviewed VoCIs for several reasons; for example, existing literature already outlines the inferior quality of VoCIs available on online platforms, so including them would add little novelty to the study’s findings. While the study findings are novel, there are important limitations that must be addressed. The decision was made to exclude videos published prior to the COVID-19 pandemic. There were also a limited number of videos in certain specialty categories, such as soft tissue/extremity and cardiovascular, which makes comparison across specialty categories somewhat difficult. In addition, we have presented a subjective assessment of video and audio quality, as well as of the reproducibility of the procedure based on the video.

Conclusion

The findings of this scoping review highlight significant heterogeneity in VoCI reporting, including the omission of critical procedural steps and poor visual quality. There is therefore an urgent need to develop guidelines and standardise VoCI reporting. The SPRINT guidelines will promote high-quality educational videos, improving surgical training and patient outcomes while encouraging reproducibility and transparency. They will also allow enhanced categorisation of VoCI for the application of digital and AI training and education tools, which may enhance the usability and applicability of VoCI in future clinical training. SPRINT will additionally provide a universal, validated checklist for reviewing videos submitted for publication or conference presentation. We therefore anticipate that the VoCI reporting guidelines will benefit authors, editors, trainees, patients, healthcare providers, policymakers, and other key stakeholders.

Conflict of Interest Statement

Mr. Henry Douglas Robb, Mr. Michael G. Fadel, Dr. Bibek Das, Mr. Laith Alghazawi, Ms. Olivia Ariarasa, Ms. Aksaan Arif, Ms. Ayda Alizadeh, Dr. Zohaib Arain, and Dr. Matyas Fehervari have no conflicts of interest to declare or financial ties to disclose. Dr. Hutan Ashrafian is Chief Scientific Officer at Preemptive Medicine and Health, Flagship Pioneering.

Funding Sources

This study was conducted without external funding.

Author Contributions

Mr. Henry Douglas Robb and Mr. Michael G. Fadel led the manuscript design and writing equally. Dr. Bibek Das provided statistical analysis. Mr. Laith Alghazawi, Ms. Olivia Ariarasa, Ms. Aksaan Arif, Ms. Ayda Alizadeh, and Dr. Zohaib Arain screened and scored the included videos. Dr. Matyas Fehervari was fundamental to study ideation and supervised the study. Dr. Hutan Ashrafian is the senior author and managed the project. Mr. Henry Douglas Robb, Mr. Michael G. Fadel, Dr. Bibek Das, Dr. Matyas Fehervari, Mr. Laith Alghazawi, and Dr. Hutan Ashrafian are the core team of the SPRINT collaboration.

Additional Information

Henry Douglas Robb and Michael G. Fadel contributed equally to this work.

Data Availability Statement

All data generated or analysed during this study are included in this article and its online supplementary material. Further enquiries can be directed to the corresponding author.

References

1. Kwiatkowska M. From computers to ubiquitous computing, by 2020. Introduction. Philos Trans A Math Phys Eng Sci. 2008;366(1881):3665–8.

2. Kuehner G, Wu W, Choe G, Douaiher J, Reed M. Telemedicine implementation trends in surgical specialties before and after COVID-19 shelter in place: adjusting to a changing landscape. Surgery. 2022;172(5):1471–7.

3. Mann DM, Chen J, Chunara R, Testa PA, Nov O. COVID-19 transforms health care through telemedicine: evidence from the field. J Am Med Inform Assoc. 2020;27(7):1132–5.

4. Howie F, Kreofsky BL, Ravi A, Lokken T, Hoff MD, Fang JL. Rapid rise of pediatric telehealth during COVID-19 in a large multispecialty health system. Telemed J e Health. 2022;28(1):3–10.

5. Flinspach AN, Sterz J, Neef V, Flinspach MH, Zacharowski K, Ruesseler M, et al. Rise of public e-learning opportunities in the context of COVID-19 pandemic-induced curtailment of face-to-face courses, exemplified by epidural catheterization on YouTube. BMC Med Educ. 2023;23(1):406.

6. Mascagni P, Alapatt D, Sestini L, Altieri MS, Madani A, Watanabe Y, et al. Computer vision in surgery: from potential to clinical value. NPJ Digit Med. 2022;5(1):163.

7. Cheikh Youssef S, Haram K, Noël J, Patel V, Porter J, Dasgupta P, et al. Evolution of the digital operating room: the place of video technology in surgery. Langenbecks Arch Surg. 2023;408(1):95.

8. Langerman A, Grantcharov TP. Are we ready for our close-up?: why and how we must embrace video in the OR. Ann Surg. 2017;266(6):934–6.

9. Adorisio O, Silveri M, Torino G. Evaluation of educational value of YouTube videos addressing robotic pyeloplasty in children. J Pediatr Urol. 2021;17(3):390.e1–4.

10. Abdel-Dayem MA, Brown DA, Haray PN. Empowering patients and educating staff: an online solution for the COVID era and beyond. Ann Med Surg. 2021;65:102238.

11. Hohenleitner J, Barron K, Bostonian T, Demyan L, Bonne S. Educational quality of YouTube videos for patients undergoing elective procedures. J Surg Res. 2023;292:206–13.

12. Ferhatoglu MF, Kartal A, Ekici U, Gurkan A. Evaluation of the reliability, utility, and quality of the information in sleeve gastrectomy videos shared on open access video sharing platform YouTube. Obes Surg. 2019;29(5):1477–84.

13. Starks C, Akkera M, Shalaby M, Munshi R, Toraih E, Lee GS, et al. Evaluation of YouTube videos as a patient education source for novel surgical techniques in thyroid surgery. Gland Surg. 2021;10(2):697–705.

14. van Dalen A, Legemaate J, Schlack WS, Legemate DA, Schijven MP. Legal perspectives on black box recording devices in the operating environment. Br J Surg. 2019;106(11):1433–41.

15. Dahodwala M, Geransar R, Babion J, de Grood J, Sargious P. The impact of the use of video-based educational interventions on patient outcomes in hospital settings: a scoping review. Patient Educ Couns. 2018;101(12):2116–24.

16. Sinclair PM, Kable A, Levett-Jones T, Booth D. The effectiveness of Internet-based e-learning on clinician behaviour and patient outcomes: a systematic review. Int J Nurs Stud. 2016;57:70–81.

17. Chhabra KR, Thumma JR, Varban OA, Dimick JB. Associations between video evaluations of surgical technique and outcomes of laparoscopic sleeve gastrectomy. JAMA Surg. 2021;156(2):e205532.

18. Jue J, Shah NA, Mackey TK. An interdisciplinary review of surgical data recording technology features and legal considerations. Surg Innov. 2020;27(2):220–8.

19. Grenda TR, Pradarelli JC, Dimick JB. Using surgical video to improve technique and skill. Ann Surg. 2016;264(1):32–3.

20. Pape-Koehler C, Immenroth M, Sauerland S, Lefering R, Lindlohr C, Toaspern J, et al. Multimedia-based training on Internet platforms improves surgical performance: a randomized controlled trial. Surg Endosc. 2013;27(5):1737–47.

21. Tulipan J, Miller A, Park AG, Labrum JT 4th, Ilyas AM. Touch surgery: analysis and assessment of validity of a hand surgery simulation “App”. Hand. 2019;14(3):311–6.

22. Cooper L, Din AH, Fitzgerald O'Connor E, Roblin P, Rose V, Mughal M. Augmented reality and plastic surgery training: a qualitative study. Cureus. 2021;13(10):e19010.

23. Rapp AK, Healy MG, Charlton ME, Keith JN, Rosenbaum ME, Kapadia MR. YouTube is the most frequently used educational video source for surgical preparation. J Surg Educ. 2016;73(6):1072–6.

24. Jackson HT, Hung CMS, Potarazu D, Habboosh N, DeAngelis EJ, Amdur RL, et al. Attending guidance advised: educational quality of surgical videos on YouTube. Surg Endosc. 2022;36(6):4189–98.

25. Duncan I, Yarwood-Ross L, Haigh C. YouTube as a source of clinical skills education. Nurse Educ Today. 2013;33(12):1576–80.

26. Peters MD, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. Int J Evid Based Healthc. 2015;13(3):141–6.

27. Levac D, Colquhoun H, O’Brien KK. Scoping studies: advancing the methodology. Implementation Sci. 2010;5(1):69.

28. Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

29. Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

30. Veritas Health Innovation. Covidence systematic review software. 2020.

31. Singh AG, Singh S, Singh PP. YouTube for information on rheumatoid arthritis: a wakeup call. J Rheumatol. 2012;39(5):899–903.

32. Farag M, Bolton D, Lawrentschuk N. Use of YouTube as a resource for surgical education-clarity or confusion. Eur Urol Focus. 2020;6(3):445–9.

33. Dunn AG, Coiera E, Mandl KD, Bourgeois FT. Conflict of interest disclosure in biomedical research: a review of current practices, biases, and the role of public registries in improving transparency. Res Integr Peer Rev. 2016;1(1):1.

34. de’Angelis N, Gavriilidis P, Martínez-Pérez A, Genova P, Notarnicola M, Reitano E, et al. Educational value of surgical videos on YouTube: quality assessment of laparoscopic appendectomy videos by senior surgeons vs. novice trainees. World J Emerg Surg. 2019;14:22.

35. Celentano V, Smart N, Cahill RA, Spinelli A, Giglio MC, McGrath J, et al. Development and validation of a recommended checklist for assessment of surgical videos quality: the LAParoscopic surgery Video Educational GuidelineS (LAP-VEGaS) video assessment tool. Surg Endosc. 2021;35(3):1362–9.

36. Rouhi AD, Roberson JL, Kindall E, Ghanem YK, Ndong A, Yi WS, et al. What are trainees watching? Assessing the educational quality of online laparoscopic cholecystectomy training videos using the LAP-VEGaS guidelines. Surgery. 2023;174(3):524–8.

37. Alghazawi L, Fadel MG, Chen JY, Das B, Robb H, Rodriguez-Luna MR, et al. Development and evaluation of a quality assessment tool for laparoscopic sleeve gastrectomy videos: a review and comparison of academic and online video resources. Obes Surg. 2024;34(5):1909–16.

38. Youssef SC, Aydin A, Canning A, Khan N, Ahmed K, Dasgupta P. Learning surgical skills through video-based education: a systematic review. Surg Innov. 2023;30(2):220–38.

39. Noël J, Moschovas MC, Patel E, Rogers T, Marquinez J, Rocco B, et al. Step-by-step optimisation of robotic-assisted radical prostatectomy using augmented reality. Int Braz J Urol. 2022;48(3):600–1.

40. Mandler AG. Touch surgery: a twenty-first century platform for surgical training. J Digit Imaging. 2018;31(5):585–90.

41. Kawka M, Gall TM, Fang C, Liu R, Jiao LR. Intraoperative video analysis and machine learning models will change the future of surgical training. Intell Surg. 2022;1:13–5.