Ten methodological lessons from the Multi-Country Evaluation of Integrated Management of Childhood Illness

Jennifer Bryce, Cesar G. Victora

Research output: Contribution to journal › Article

Abstract

Objective: To describe key methodological aspects of the Multi-Country Evaluation of the Integrated Management of Childhood Illness strategy (MCE-IMCI) and analyze their implications for other public health impact evaluations. Design: The MCE-IMCI evaluation designs are based on an impact model that defined expectations in the late 1990s about how IMCI would be implemented at country level and below, and the outcomes and impact it would have on child health and survival. MCE-IMCI studies include: feasibility assessments documenting IMCI implementation in 12 countries; in-depth studies using compatible designs in five countries; and cross-site analyses addressing the effectiveness of specific subsets of IMCI activities. The MCE-IMCI was designed to evaluate the impact of IMCI, and also to see that the findings from the evaluation were taken up through formal feedback sessions at national, sub-national and local levels. Results: Issues that arose early in the MCE-IMCI included: (1) defining the scope of the evaluation; (2) selecting study sites and developing research designs; (3) protecting objectivity; and (4) developing an impact model. Issues that arose mid-course included: (5) anticipating and addressing problems with external validity; (6) ensuring an appropriate time frame for the full evaluation cycle; (7) providing feedback on results to policymakers and programme implementers; and (8) modifying site-specific designs in response to early findings about the patterns and pace of programme implementation. Two critical issues could best be addressed only near the close of the evaluation: (9) factors affecting the uptake of evaluation results by policymakers and programme decision makers; and (10) the costs of the evaluation. Conclusions: Large-scale effectiveness evaluations present challenges that have not been addressed fully in the methodological literature. Although some of these challenges are context-specific, there are important lessons from the MCE that can inform future designs. Most of the issues described here are not addressed explicitly in research reports or evaluation textbooks. Describing and analyzing these experiences is one way to promote improved impact evaluations of new global health strategies.

Original language: English (US)
Journal: Health Policy and Planning
Volume: 20
Issue number: Suppl. 1
DOI: 10.1093/heapol/czi056
State: Published - Dec 2005
Externally published: Yes

Keywords

  • Child health
  • Effectiveness evaluation
  • Evaluation
  • IMCI
  • Impact evaluation

ASJC Scopus subject areas

  • Nursing (all)
  • Health Policy
  • Health (social science)
  • Health Professions (all)
  • Public Health, Environmental and Occupational Health

Cite this

Ten methodological lessons from the Multi-Country Evaluation of Integrated Management of Childhood Illness. / Bryce, Jennifer; Victora, Cesar G.

In: Health Policy and Planning, Vol. 20, No. SUPPL. 1, 12.2005.

Research output: Contribution to journal › Article

@article{933f3f181f7f400889dff45c042ca470,
title = "Ten methodological lessons from the Multi-Country Evaluation of Integrated Management of Childhood Illness",
keywords = "Child health, Effectiveness evaluation, Evaluation, IMCI, Impact evaluation",
author = "Jennifer Bryce and Victora, {Cesar G.}",
year = "2005",
month = "12",
doi = "10.1093/heapol/czi056",
language = "English (US)",
volume = "20",
journal = "Health Policy and Planning",
issn = "0268-1080",
publisher = "Oxford University Press",
number = "SUPPL. 1",

}

TY - JOUR

T1 - Ten methodological lessons from the Multi-Country Evaluation of Integrated Management of Childhood Illness

AU - Bryce, Jennifer

AU - Victora, Cesar G.

PY - 2005/12

Y1 - 2005/12

KW - Child health

KW - Effectiveness evaluation

KW - Evaluation

KW - IMCI

KW - Impact evaluation

UR - http://www.scopus.com/inward/record.url?scp=28644449555&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=28644449555&partnerID=8YFLogxK

U2 - 10.1093/heapol/czi056

DO - 10.1093/heapol/czi056

M3 - Article

VL - 20

JO - Health Policy and Planning

JF - Health Policy and Planning

SN - 0268-1080

IS - SUPPL. 1

ER -