Assessment Tools for Use During Anesthesia-Centric Pediatric Advanced Life Support Training and Evaluation

Scott Watkins, Paul J. Nietert, Elisabeth Hughes, Eric T. Stickles, Tracy E. Wester, Matthew D. McEvoy

Research output: Contribution to journal › Article

Abstract

Background: Pediatric perioperative cardiac arrests are rare events that require rapid, skilled and coordinated efforts to optimize outcomes. We developed a tool for assessing clinician performance during perioperative critical events, termed Anesthesia-centric Pediatric Advanced Life Support (A-PALS). Here, we describe the development and evaluation of the A-PALS scoring instrument.

Methods: A group of raters scored videos of a perioperative team managing simulated events representing a range of scenarios and competency levels. We assessed agreement with the reference-standard grading, as well as interrater and intrarater reliability.

Results: Overall, raters agreed with the reference standard 86.2% of the time. Rater scores for scenarios that depicted highly competent performance correlated better with the reference standard than scores for scenarios that depicted low clinical competence (P < 0.0001). Agreement with the reference standard was significantly (P < 0.0001) associated with scenario type, item category, level of competency displayed in the scenario, correct versus incorrect actions, and whether the action was performed versus not performed. Kappa values were significantly (P < 0.0001) higher for highly competent performances than for less competent performances (good: mean = 0.83 [standard deviation = 0.07] versus poor: mean = 0.61 [standard deviation = 0.14]). The intraclass correlation coefficient (interrater reliability) was 0.97 for the raters' composite scores on correct actions and 0.98 for their composite scores on incorrect actions.

Conclusions: This study provides evidence for the validity of the A-PALS scoring instrument and demonstrates that it can provide reliable scores, although clinician performance affects reliability.
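The abstract reports interrater agreement both as raw percent agreement and as Cohen's kappa, which corrects for agreement expected by chance. As a minimal illustrative sketch only (this is not the study's analysis code, which is not part of this record, and the rater labels below are hypothetical), kappa for two raters over the same items can be computed as:

```python
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance from each rater's label marginals.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's marginals.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)


# Hypothetical binary ratings (1 = action performed correctly, 0 = not):
# the raters agree on 3 of 4 items (75%), but kappa is only 0.5 after
# discounting chance agreement.
kappa = cohens_kappa([1, 1, 1, 0], [1, 1, 0, 0])
```

This is why the abstract's kappa values (0.61-0.83) are informative beyond the 86.2% raw agreement figure: kappa discounts the agreement two raters would reach by chance alone.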

Original language: English (US)
Pages (from-to): 516-522
Number of pages: 7
Journal: American Journal of the Medical Sciences
Volume: 353
Issue number: 6
DOI: 10.1016/j.amjms.2016.09.013
State: Published - Jun 1 2017
Externally published: Yes

Keywords

  • Anesthesia
  • Assessment
  • Interdisciplinary Education
  • Simulation
  • Teamwork

ASJC Scopus subject areas

  • Medicine(all)

Cite this

Watkins, Scott; Nietert, Paul J.; Hughes, Elisabeth; Stickles, Eric T.; Wester, Tracy E.; McEvoy, Matthew D. Assessment Tools for Use During Anesthesia-Centric Pediatric Advanced Life Support Training and Evaluation. In: American Journal of the Medical Sciences, Vol. 353, No. 6, 01.06.2017, p. 516-522.

@article{9849ef3c4e3547e692f4e49129be15dc,
title = "Assessment Tools for Use During Anesthesia-Centric Pediatric Advanced Life Support Training and Evaluation",
abstract = "Background Pediatric perioperative cardiac arrests are rare events that require rapid, skilled and coordinated efforts to optimize outcomes. We developed an assessment tool for assessing clinician performance during perioperative critical events termed Anesthesia-centric Pediatric Advanced Life Support (A-PALS). Here, we describe the development and evaluation of the A-PALS scoring instrument. Methods A group of raters scored videos of a perioperative team managing simulated events representing a range of scenarios and competency. We assessed agreement with the reference standard grading, as well as interrater and intrarater reliability. Results Overall, raters agreed with the reference standard 86.2{\%} of the time. Rater scores concerning scenarios that depicted highly competent performance correlated better with the reference standard than scores from scenarios that depicted low clinical competence (P < 0.0001). Agreement with the reference standard was significantly (P < 0.0001) associated with scenario type, item category, level of competency displayed in the scenario, correct versus incorrect actions and whether the action was performed versus not performed. Kappa values were significantly (P < 0.0001) higher for highly competent performances as compared to lesser competent performances (good: mean = 0.83 [standard deviation = 0.07] versus poor: mean = 0.61 [standard deviation = 0.14]). The intraclass correlation coefficient (interrater reliability) was 0.97 for the raters’ composite scores on correct actions and 0.98 for their composite scores on incorrect actions. Conclusions This study provides evidence for the validity of the A-PALS scoring instrument and demonstrates that the scoring instrument can provide reliable scores, although clinician performance affects reliability.",
keywords = "Anesthesia, Assessment, Interdisciplinary Education, Simulation, Teamwork",
author = "Watkins, {Scott} and Nietert, {Paul J.} and Hughes, {Elisabeth} and Stickles, {Eric T.} and Wester, {Tracy E.} and McEvoy, {Matthew D.}",
year = "2017",
month = "6",
day = "1",
doi = "10.1016/j.amjms.2016.09.013",
language = "English (US)",
volume = "353",
pages = "516--522",
journal = "American Journal of the Medical Sciences",
issn = "0002-9629",
publisher = "Lippincott Williams and Wilkins",
number = "6",
}

TY - JOUR

T1 - Assessment Tools for Use During Anesthesia-Centric Pediatric Advanced Life Support Training and Evaluation

AU - Watkins, Scott

AU - Nietert, Paul J.

AU - Hughes, Elisabeth

AU - Stickles, Eric T.

AU - Wester, Tracy E.

AU - McEvoy, Matthew D.

PY - 2017/6/1

Y1 - 2017/6/1

N2 - Background Pediatric perioperative cardiac arrests are rare events that require rapid, skilled and coordinated efforts to optimize outcomes. We developed an assessment tool for assessing clinician performance during perioperative critical events termed Anesthesia-centric Pediatric Advanced Life Support (A-PALS). Here, we describe the development and evaluation of the A-PALS scoring instrument. Methods A group of raters scored videos of a perioperative team managing simulated events representing a range of scenarios and competency. We assessed agreement with the reference standard grading, as well as interrater and intrarater reliability. Results Overall, raters agreed with the reference standard 86.2% of the time. Rater scores concerning scenarios that depicted highly competent performance correlated better with the reference standard than scores from scenarios that depicted low clinical competence (P < 0.0001). Agreement with the reference standard was significantly (P < 0.0001) associated with scenario type, item category, level of competency displayed in the scenario, correct versus incorrect actions and whether the action was performed versus not performed. Kappa values were significantly (P < 0.0001) higher for highly competent performances as compared to lesser competent performances (good: mean = 0.83 [standard deviation = 0.07] versus poor: mean = 0.61 [standard deviation = 0.14]). The intraclass correlation coefficient (interrater reliability) was 0.97 for the raters’ composite scores on correct actions and 0.98 for their composite scores on incorrect actions. Conclusions This study provides evidence for the validity of the A-PALS scoring instrument and demonstrates that the scoring instrument can provide reliable scores, although clinician performance affects reliability.

AB - Background Pediatric perioperative cardiac arrests are rare events that require rapid, skilled and coordinated efforts to optimize outcomes. We developed an assessment tool for assessing clinician performance during perioperative critical events termed Anesthesia-centric Pediatric Advanced Life Support (A-PALS). Here, we describe the development and evaluation of the A-PALS scoring instrument. Methods A group of raters scored videos of a perioperative team managing simulated events representing a range of scenarios and competency. We assessed agreement with the reference standard grading, as well as interrater and intrarater reliability. Results Overall, raters agreed with the reference standard 86.2% of the time. Rater scores concerning scenarios that depicted highly competent performance correlated better with the reference standard than scores from scenarios that depicted low clinical competence (P < 0.0001). Agreement with the reference standard was significantly (P < 0.0001) associated with scenario type, item category, level of competency displayed in the scenario, correct versus incorrect actions and whether the action was performed versus not performed. Kappa values were significantly (P < 0.0001) higher for highly competent performances as compared to lesser competent performances (good: mean = 0.83 [standard deviation = 0.07] versus poor: mean = 0.61 [standard deviation = 0.14]). The intraclass correlation coefficient (interrater reliability) was 0.97 for the raters’ composite scores on correct actions and 0.98 for their composite scores on incorrect actions. Conclusions This study provides evidence for the validity of the A-PALS scoring instrument and demonstrates that the scoring instrument can provide reliable scores, although clinician performance affects reliability.

KW - Anesthesia

KW - Assessment

KW - Interdisciplinary Education

KW - Simulation

KW - Teamwork

UR - http://www.scopus.com/inward/record.url?scp=85021860045&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85021860045&partnerID=8YFLogxK

U2 - 10.1016/j.amjms.2016.09.013

DO - 10.1016/j.amjms.2016.09.013

M3 - Article

C2 - 28641713

AN - SCOPUS:85021860045

VL - 353

SP - 516

EP - 522

JO - American Journal of the Medical Sciences

JF - American Journal of the Medical Sciences

SN - 0002-9629

IS - 6

ER -