Publications
2018
While standardized examinations and data from simulators and phantom models can assess knowledge and manual skills in ultrasound, an Objective Structured Clinical Examination (OSCE) can assess workflow understanding. We recruited 8 experts to develop an OSCE assessing workflow understanding in perioperative ultrasound. The experts used a binary grading system to score 19 graduating anesthesia residents at 6 stations. Overall average performance was 86.2%, and 3 stations showed acceptable internal reliability (Kuder-Richardson formula 20 [KR-20] coefficient >0.5). After refinement, this OSCE can be combined with standardized examinations and data from simulators and phantom models to assess overall proficiency in ultrasound.
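For readers unfamiliar with the reliability statistic above, the sketch below computes a KR-20 coefficient from a matrix of binary station scores. The score matrix is randomly generated for illustration and is not the study's data.

import numpy as np

def kr20(scores: np.ndarray) -> float:
    # Kuder-Richardson formula 20 for binary (0/1) item scores.
    # scores has shape (n_examinees, n_items).
    k = scores.shape[1]                          # number of items
    p = scores.mean(axis=0)                      # proportion correct per item
    q = 1.0 - p                                  # proportion incorrect per item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of examinee totals
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Hypothetical binary grades: 19 residents x 6 checklist items at one station,
# with roughly the 86% overall success rate reported above.
rng = np.random.default_rng(0)
station = (rng.random((19, 6)) < 0.86).astype(int)
print(f"KR-20 = {kr20(station):.2f}")  # a station was deemed reliable if > 0.5

KR-20 is the binary-item special case of Cronbach's alpha, which is why a single cutoff (here 0.5) can be applied uniformly across stations.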
Various metrics have been used in curriculum-based transesophageal echocardiography (TEE) training programs to evaluate the acquisition of proficiency. However, the quality of task completion, that is, the final image quality, was evaluated subjectively in these studies. Ideally, the endpoint metric should be an objective comparison of the trainee-acquired image with a reference ideal image. Therefore, we developed a simulator-based methodology for preclinical verification of proficiency (VOP) in trainees by tracking objective evaluation of the final acquired images. We used geometric data from the simulator probes to compare image acquisition by anesthesia residents who participated in our structured longitudinal simulator-based TEE educational program against ideal image planes determined by a panel of experts. Thirty-three participants completed the study (15 experts, 7 postgraduate year 1 [PGY-1] residents, and 11 PGY-4 residents). The results demonstrated a significant difference in image capture success rates between learners and experts (χ² = 14.716, df = 2, P < 0.001), while the difference between learner groups (PGY-1 vs. PGY-4) was not statistically significant (χ² = 0, df = 1, P = 1.000). Our results therefore suggest that novices (i.e., PGY-1 residents) can attain a level of proficiency comparable to those with modest training (i.e., PGY-4 residents) after completing a simulation-based training curriculum, whereas professionals with years of clinical training (i.e., attending physicians) exhibit superior mastery of these skills. It is hence feasible to develop a simulator-based VOP program for TEE performance in junior anesthesia residents.
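The group comparison above is a standard chi-square test of independence on success/failure counts. A minimal sketch with SciPy follows; the contingency table is hypothetical and illustrates only the shape of the analysis (three groups by two outcomes), not the study's actual counts.

from scipy.stats import chi2_contingency

# Hypothetical image-capture counts: rows = PGY-1, PGY-4, experts;
# columns = successful captures, failed captures.
table = [
    [38, 32],   # PGY-1 residents
    [60, 50],   # PGY-4 residents
    [135, 45],  # attending experts
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, df = {dof}, P = {p:.4f}")

Note that a 3 × 2 table gives df = (3 − 1)(2 − 1) = 2, matching the degrees of freedom reported in the abstract.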
OBJECTIVE: To test the feasibility and reliability of using a vendor-neutral platform to evaluate right ventricular (RV) strain. Reliability was determined by comparing intra- and inter-observer variability between RV strain assessments. The secondary objective was to assess the correlation of strain with conventional RV functional parameters, to evaluate its feasibility as an RV systolic function assessment tool.
DESIGN: Retrospective study.
SETTING: Tertiary hospital.
PARTICIPANTS: A total of 15 patients who underwent elective coronary artery bypass graft surgery were selected for inclusion.
INTERVENTIONS: None.
MEASUREMENTS AND MAIN RESULTS: Images obtained during routine intraoperative two-dimensional transesophageal echocardiography (2D TEE) were assessed with vendor-neutral software for longitudinal strain (LS) and conventional parameters, including fractional area change (FAC), tricuspid annular plane systolic excursion (TAPSE), Doppler tissue imaging (DTI)-derived tricuspid lateral annular systolic velocity wave (S'), and RV dimensions. Intra- and inter-observer reproducibility was good to excellent (intraclass correlation coefficient [ICC] from 0.75 to 1.00), with the exception of basal free wall longitudinal strain (FWLS; ICC = 0.670 and 0.749 for intra- and inter-observer reproducibility, respectively). FWLS and global longitudinal strain (GLS) showed moderate to strong positive correlations with FAC, TAPSE, and S' (correlation coefficients from 0.667 to 0.721). (A computational sketch of the ICC follows this abstract.)
CONCLUSION: It is feasible to assess RV strain across multiple platforms in a reproducible and reliable fashion. Furthermore, RV strain correlated well with conventional RV functional parameters, supporting its use as a sensitive RV function assessment tool.
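The reproducibility figures above are intraclass correlation coefficients. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single rater), one common choice for observer-agreement studies; whether the study used this exact ICC form is an assumption, and the ratings are simulated for illustration.

import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    # ICC(2,1): two-way random effects, absolute agreement, single rater.
    # ratings has shape (n_subjects, n_raters).
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()    # between raters
    ss_total = ((ratings - grand) ** 2).sum()
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Simulated example: two observers measure RV free wall strain (%) in 15 patients.
rng = np.random.default_rng(1)
true_strain = rng.normal(-20.0, 4.0, size=15)  # hypothetical true values
observers = np.column_stack([true_strain + rng.normal(0, 1.5, 15),
                             true_strain + rng.normal(0, 1.5, 15)])
print(f"ICC(2,1) = {icc_2_1(observers):.3f}")

When observer noise is small relative to the between-patient spread, the ICC approaches 1; larger observer disagreement pushes it toward 0.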
BACKGROUND: Obtaining reliable and valid information on resident performance is critical to patient safety and training program improvement. The goals were to characterize important anesthesia resident performance gaps that are not typically evaluated, and to further validate scores from a multiscenario simulation-based assessment.
METHODS: Seven high-fidelity scenarios reflecting core anesthesiology skills were administered to 51 first-year residents (CA-1s) and 16 third-year residents (CA-3s) from three residency programs. Twenty trained attending anesthesiologists rated resident performances using a seven-point behaviorally anchored rating scale for five domains: (1) formulate a clear plan, (2) modify the plan under changing conditions, (3) communicate effectively, (4) identify performance improvement opportunities, and (5) recognize limits. A second rater assessed 10% of encounters. Scores and variances for each domain, each scenario, and the total were compared. Low domain ratings (1, 2) were examined in detail.
RESULTS: Interrater agreement was 0.76; reliability of the seven-scenario assessment was r = 0.70. CA-3s had a significantly higher average total score (4.9 ± 1.1 vs. 4.6 ± 1.1, P = 0.01, effect size = 0.33; see the effect-size sketch after this abstract). CA-3s significantly outscored CA-1s for five of seven scenarios and for domains 1, 2, and 3. CA-1s had a significantly higher proportion of worrisome ratings than CA-3s (chi-square = 24.1, P < 0.01, effect size = 1.50). Ninety-eight percent of residents rated the simulations as more educational than an average day in the operating room.
CONCLUSIONS: Sensitivity of the assessment to CA-1 versus CA-3 performance differences for most scenarios and domains supports validity. No differences, by experience level, were detected for two domains associated with reflective practice. Smaller score variances for CA-3s likely reflect a training effect; however, worrisome performance scores for both CA-1s and CA-3s suggest room for improvement.
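The score comparison above reports a standardized mean difference. A minimal sketch of Cohen's d with a pooled standard deviation follows; the pooling convention is an assumption, and with the rounded summary statistics from the abstract it yields roughly 0.27 rather than the published 0.33, presumably because the paper used unrounded values or a slightly different effect-size formula.

import math

def cohens_d(mean1: float, sd1: float, n1: int,
             mean2: float, sd2: float, n2: int) -> float:
    # Standardized mean difference with a pooled standard deviation.
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Summary statistics from the abstract: 16 CA-3s scored 4.9 +/- 1.1 and
# 51 CA-1s scored 4.6 +/- 1.1 on the seven-point behaviorally anchored scale.
print(f"d = {cohens_d(4.9, 1.1, 16, 4.6, 1.1, 51):.2f}")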