Crowdsourcing for assessment items to support adaptive learning

Sean Tackett, Mark Raymond, Rishi Desai, Steven A. Haist, Amy Morales, Shiv Gaglani, Stephen G. Clyman

Research output: Contribution to journal › Article

Abstract

Purpose: Adaptive learning requires frequent and valid assessments for learners to track progress against their goals. This study determined whether multiple-choice questions (MCQs) “crowdsourced” from medical learners could meet the standards of many large-scale testing programs. Methods: Users of a medical education app (Osmosis.org, Baltimore, MD) volunteered to submit case-based MCQs. Eleven volunteers were selected to submit MCQs targeted to second-year medical students. Two hundred MCQs were subjected to duplicate review by a panel of internal medicine faculty who rated each item for relevance, content accuracy, and quality of response option explanations. A sample of 121 items was pretested on clinical subject exams completed by a national sample of U.S. medical students. Results: Seventy-eight percent of the 200 MCQs met faculty reviewer standards based on relevance, accuracy, and quality of explanations. Of the 121 pretested MCQs, 50% met acceptable statistical criteria. The most common reasons for exclusion were that the item was too easy or had a low discrimination index. Conclusions: Crowdsourcing can efficiently yield high-quality assessment items that meet rigorous judgmental and statistical criteria. Similar models may be adopted by students and educators to augment item pools that support adaptive learning.
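Note on the statistical screening: the abstract reports that pretested items were excluded mainly for being too easy or having a low discrimination index, but it does not state the formulas or cut-offs used. As a rough illustration only, the sketch below computes the standard classical-test-theory versions of these quantities (proportion correct as difficulty, corrected item–rest point-biserial correlation as discrimination) in Python, with entirely hypothetical thresholds; the study's actual criteria may differ.

import numpy as np

def item_statistics(responses: np.ndarray):
    """Classical item statistics from a 0/1 scored response matrix.

    responses: shape (n_examinees, n_items); 1 = correct, 0 = incorrect.
    Returns per-item difficulty (proportion correct) and a corrected
    point-biserial discrimination (item score vs. rest-of-test score).
    """
    n_examinees, n_items = responses.shape
    total = responses.sum(axis=1)              # total score per examinee
    difficulty = responses.mean(axis=0)        # p-value (proportion correct) per item

    discrimination = np.empty(n_items)
    for j in range(n_items):
        rest = total - responses[:, j]         # exclude the item itself from the total
        # Point-biserial = Pearson correlation between item score and rest score
        discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return difficulty, discrimination

if __name__ == "__main__":
    # Simulated data (Rasch-style) purely for demonstration; not study data.
    rng = np.random.default_rng(0)
    ability = rng.normal(size=(500, 1))
    item_loc = rng.normal(size=(1, 121))
    prob = 1.0 / (1.0 + np.exp(-(ability - item_loc)))
    sim = (rng.random((500, 121)) < prob).astype(int)

    p, r = item_statistics(sim)
    # Hypothetical screening thresholds (not from the paper): flag items that
    # are too easy (p > 0.90) or weakly discriminating (r < 0.15).
    keep = (p <= 0.90) & (r >= 0.15)
    print(f"{keep.sum()} of {len(keep)} simulated items pass the example screen")

In these terms, an item flagged as "too easy" has a high proportion-correct value, and "low discrimination" corresponds to a weak item–total correlation.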

Original language: English (US)
Journal: Medical Teacher
ISSN: 0142-159X
Publisher: Informa Healthcare
DOI: 10.1080/0142159X.2018.1490704
PMID: 30096987
State: Accepted/In press - Jan 1 2018

ASJC Scopus subject areas

  • Education

Cite this

Tackett, S., Raymond, M., Desai, R., Haist, S. A., Morales, A., Gaglani, S., & Clyman, S. G. (Accepted/In press). Crowdsourcing for assessment items to support adaptive learning. Medical Teacher. https://doi.org/10.1080/0142159X.2018.1490704
