Crowdsourcing for assessment items to support adaptive learning

Sean Tackett, Mark Raymond, Rishi Desai, Steven A. Haist, Amy Morales, Shiv Gaglani, Stephen G. Clyman

Research output: Contribution to journal › Article › peer-review


Abstract

Purpose: Adaptive learning requires frequent and valid assessments for learners to track progress against their goals. This study determined whether multiple-choice questions (MCQs) "crowdsourced" from medical learners could meet the standards of many large-scale testing programs.

Methods: Users of a medical education app (Osmosis.org, Baltimore, MD) volunteered to submit case-based MCQs. Eleven volunteers were selected to submit MCQs targeted to second-year medical students. Two hundred MCQs were subjected to duplicate review by a panel of internal medicine faculty, who rated each item for relevance, content accuracy, and quality of response option explanations. A sample of 121 items was pretested on clinical subject exams completed by a national sample of U.S. medical students.

Results: Seventy-eight percent of the 200 MCQs met faculty reviewer standards based on relevance, accuracy, and quality of explanations. Of the 121 pretested MCQs, 50% met acceptable statistical criteria. The most common reasons for exclusion were that the item was too easy or had a low discrimination index.

Conclusions: Crowdsourcing can efficiently yield high-quality assessment items that meet rigorous judgmental and statistical criteria. Similar models may be adopted by students and educators to augment item pools that support adaptive learning.
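The exclusion criteria named in the Results (items that were "too easy" or had a "low discrimination index") refer to standard classical test theory item statistics. The sketch below shows how such statistics are commonly computed from scored responses; it is illustrative only, and the flagging cutoffs (difficulty above 0.90, discrimination below 0.15) are assumptions, not the study's actual thresholds, which are not stated in the abstract.

```python
import numpy as np

def item_statistics(responses):
    """Classical test theory item statistics.

    responses: 2-D array of 0/1 item scores, shape (n_examinees, n_items).
    Returns per-item difficulty (proportion correct) and a point-biserial
    discrimination index (correlation of each item score with the
    rest-of-test score, excluding that item).
    """
    responses = np.asarray(responses, dtype=float)
    total = responses.sum(axis=1)

    difficulty = responses.mean(axis=0)          # p-value: higher = easier
    discrimination = np.empty(responses.shape[1])
    for j in range(responses.shape[1]):
        rest_score = total - responses[:, j]     # exclude the item itself
        discrimination[j] = np.corrcoef(responses[:, j], rest_score)[0, 1]
    return difficulty, discrimination

if __name__ == "__main__":
    # Simulated 0/1 responses; cutoffs below are illustrative assumptions.
    rng = np.random.default_rng(0)
    data = rng.integers(0, 2, size=(500, 10))
    p, r = item_statistics(data)
    flagged = (p > 0.90) | (r < 0.15)
    print("Flagged items:", np.flatnonzero(flagged))
```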

Original language: English (US)
Pages (from-to): 838-841
Number of pages: 4
Journal: Medical Teacher
Volume: 40
Issue number: 8
DOIs
State: Published - Aug 3 2018

ASJC Scopus subject areas

  • Education

