Ghana LEAP 1000 Impact Evaluation: Overview of Study Design

AUTHOR(S)
Richard de Groot

Published: 2016 Innocenti Research Briefs

Sharing of good, practical research practices and lessons learned from development and humanitarian contexts is in high demand not only within UNICEF but also in the broader international development and humanitarian community. ‘Impact Evaluation in the Field’ complements other methodological briefs by discussing how textbook approaches are applied in often challenging, under-resourced development contexts, as well as the innovative solutions needed to ensure that practical demands do not compromise methodological rigour. The series will grow over time, allowing UNICEF staff and partners to share new experiences and approaches as they emerge from applied research. The overarching aim is to contribute to strengthening capacity in research and evaluation, improving the ability of UNICEF and its partners to provide evidence-based, strategic, long-term solutions for children. This brief documents the impact evaluation design of the Ghana Livelihood Empowerment against Poverty (LEAP) 1000 programme, which is being piloted in ten districts across two regions and initially targets about 6,000 households.

Utilizing Qualitative Methods in the Ghana LEAP 1000 Impact Evaluation

AUTHOR(S)
Michelle Mills; Clare Barrington

Published: 2016 Innocenti Research Briefs

Sharing of good, practical research practices and lessons learned from development and humanitarian contexts is in high demand not only within UNICEF but also in the broader international development and humanitarian community. ‘Impact Evaluation in the Field’ complements other methodological briefs by discussing how textbook approaches are applied in often challenging, under-resourced development contexts, as well as the innovative solutions needed to ensure that practical demands do not compromise methodological rigour. The series will grow over time, allowing UNICEF staff and partners to share new experiences and approaches as they emerge from applied research. The overarching aim is to contribute to strengthening capacity in research and evaluation, improving the ability of UNICEF and its partners to provide evidence-based, strategic, long-term solutions for children. This methodological brief focuses on the qualitative component of the evaluation of the Ghana Livelihood Empowerment against Poverty (LEAP) 1000 programme. Quantitative measures will indicate whether LEAP 1000 reduces child poverty, stunting and other measures of well-being, while qualitative research explores in more depth why and how this may or may not be happening.

Evaluative Criteria: Methodological Briefs - Impact Evaluation No. 3

AUTHOR(S)
Greet Peersman

Published: 2014 Methodological Briefs
Evaluation relies on a combination of facts and values to judge the merit of an intervention. Evaluative criteria specify the values that will be used in an evaluation. While evaluative criteria can be used in different types of evaluations, this brief specifically addresses their use in impact evaluations.

Overview: Strategies for Causal Attribution: Methodological Briefs - Impact Evaluation No. 6

AUTHOR(S)
Patricia Rogers

Published: 2014 Methodological Briefs
One of the essential elements of an impact evaluation is that it not only measures or describes changes that have occurred but also seeks to understand the role of particular interventions (i.e., programmes or policies) in producing these changes. This process is known as causal attribution. In impact evaluation, there are three broad strategies for causal attribution: 1) estimating the counterfactual; 2) checking the consistency of evidence for the causal relationships made explicit in the theory of change; and 3) ruling out alternative explanations, through a logical, evidence-based process. The ‘best fit’ strategy for causal attribution depends on the evaluation context as well as what is being evaluated.
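For strategy 1, the counterfactual logic can be written as a simple worked equation. The notation below is illustrative rather than taken from the brief:

```latex
% Impact is the difference between the outcome observed with the
% intervention (Y_1) and the counterfactual outcome without it (Y_0).
% Y_0 is never observed for units that received the intervention, so
% a design such as an RCT or a matched comparison group is used to
% supply an estimate of it.
\[
\Delta = Y_1 - Y_0,
\qquad
\widehat{\Delta} = \bar{Y}_{\mathrm{treatment}} - \bar{Y}_{\mathrm{comparison}}
\]
```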

Randomized Controlled Trials (RCTs): Methodological Briefs - Impact Evaluation No. 7

AUTHOR(S)
Howard White; Shagun Sabarwal; Thomas de Hoop

Published: 2014 Methodological Briefs
A randomized controlled trial (RCT) is an experimental form of impact evaluation in which the population receiving the programme or policy intervention is chosen at random from the eligible population, and a control group is also chosen at random from the same eligible population. It tests the extent to which specific, planned impacts are being achieved. The distinguishing feature of an RCT is the random assignment of units (e.g., people, schools, villages) to the intervention or control group. One of its strengths is that it provides a very powerful response to questions of causality, helping evaluators and programme implementers to know that what is being achieved is a result of the intervention and not of anything else.
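The core logic is that random assignment makes a simple difference in mean outcomes an unbiased estimate of the intervention's effect. The following minimal sketch simulates this on hypothetical household data; all numbers, including the simulated true effect, are illustrative assumptions, not figures from the brief:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical eligible population of 1,000 households (illustrative).
n = 1000
baseline = rng.normal(loc=50, scale=10, size=n)

# Random assignment: a random half of the units receives the
# intervention. This is the distinguishing feature of an RCT.
treated = rng.permutation(np.arange(n)) < n // 2

# Simulate endline outcomes with an assumed true effect of +5
# for treated units (purely for demonstration).
true_effect = 5.0
endline = baseline + rng.normal(0, 5, n) + true_effect * treated

# Because assignment was random, the groups are comparable at
# baseline, and the difference in mean endline outcomes estimates
# the average effect of the intervention.
impact = endline[treated].mean() - endline[~treated].mean()
se = np.sqrt(endline[treated].var(ddof=1) / treated.sum()
             + endline[~treated].var(ddof=1) / (~treated).sum())

print(f"Estimated impact: {impact:.2f} (SE {se:.2f})")
```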

Quasi-Experimental Design and Methods: Methodological Briefs - Impact Evaluation No. 8

AUTHOR(S)
Howard White; Shagun Sabarwal

Published: 2014 Methodological Briefs
Quasi-experimental research designs, like experimental designs, test causal hypotheses. Quasi-experimental designs identify a comparison group that is as similar as possible to the intervention group in terms of baseline (pre-intervention) characteristics. The comparison group captures what would have been the outcomes if the programme/policy had not been implemented (i.e., the counterfactual). The key difference between an experimental and quasi-experimental design is that the latter lacks random assignment.
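The brief covers several quasi-experimental methods; one common approach is difference-in-differences, which uses the comparison group's change over time as the counterfactual. The sketch below uses simulated data under assumed values to show why this corrects for fixed baseline differences that a naive endline comparison would mistake for impact:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical baseline and endline outcomes for a programme group
# and a non-randomly selected comparison group (all values simulated
# for illustration only).
n = 500
base_t = rng.normal(40, 8, n)   # programme group, baseline
base_c = rng.normal(45, 8, n)   # comparison group starts higher:
                                # a pre-existing (selection) difference
trend = 3.0                     # assumed common time trend
effect = 5.0                    # assumed true programme effect
end_t = base_t + trend + effect + rng.normal(0, 4, n)
end_c = base_c + trend + rng.normal(0, 4, n)

# Difference-in-differences: subtracting each group's own baseline
# removes fixed pre-existing differences; subtracting the comparison
# group's change removes the common trend (the counterfactual).
did = (end_t.mean() - base_t.mean()) - (end_c.mean() - base_c.mean())
print(f"Difference-in-differences estimate: {did:.2f}")

# A naive endline-only comparison is biased by the baseline gap:
print(f"Naive endline difference: {end_t.mean() - end_c.mean():.2f}")
```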

Comparative Case Studies: Methodological Briefs - Impact Evaluation No. 9

AUTHOR(S)
Delwyn Goodrick

Published: 2014 Methodological Briefs
Comparative case studies involve the analysis and synthesis of the similarities, differences and patterns across two or more cases that share a common focus or goal, producing knowledge about causal questions that is easier to generalize: how and why particular programmes or policies work or fail to work. They may be an appropriate impact evaluation design when an experimental design is not feasible, and/or when there is a need to explain how context influences the success of programme or policy initiatives. Comparative case studies usually utilize both qualitative and quantitative methods, and are particularly useful for understanding how context influences the success of an intervention and how to better tailor the intervention to the specific context to achieve the intended outcomes.

Developing and Selecting Measures of Child Well-Being: Methodological Briefs - Impact Evaluation No. 11

AUTHOR(S)
Howard White; Shagun Sabarwal

Published: 2014 Methodological Briefs
Indicators provide a signal to decision makers by indicating whether, and to what extent, a variable of interest has changed. They can be used at all levels of the results framework, from inputs to impacts, and should be linked to the programme’s theory of change. At the lower levels of the causal chain, monitoring indicators are most important, covering inputs (e.g., immunization kits supplied), activities (e.g., immunization days held) and outputs (e.g., clinics built). For higher-level indicators of outcomes and impact, however, monitoring tells us what has happened but not why it happened. To understand why, impact evaluation must be used to examine the factors behind achieving or not achieving the goal.
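As a minimal sketch of the results chain described above, the monitoring indicators named in the abstract can be organized by level. The outcome and impact examples added here are illustrative assumptions, not taken from the brief:

```python
# Organize example indicators along the results chain, from inputs
# (monitored) up to impact (assessed through impact evaluation).
results_chain = {
    "inputs":     ["immunization kits supplied"],
    "activities": ["immunization days held"],
    "outputs":    ["clinics built"],
    "outcomes":   ["children fully immunized (%)"],  # assumed example
    "impact":     ["under-five mortality rate"],     # assumed example
}

for level, indicators in results_chain.items():
    print(f"{level:>10}: {', '.join(indicators)}")
```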

Overview of Impact Evaluation: Methodological Briefs - Impact Evaluation No. 1 (French edition)

Published: 2014 Methodological Briefs
Impact evaluation provides information about the effects produced by an intervention. It can be undertaken for a programme, a policy or upstream work, such as capacity building, policy advocacy and support for an enabling environment. This goes beyond simply examining goals and objectives to also consider unintended impacts.

Overview: Strategies for Causal Attribution: Methodological Briefs - Impact Evaluation No. 6 (French edition)

AUTHOR(S)
Patricia Rogers

Published: 2014 Methodological Briefs
One of the essential elements of an impact evaluation is that it not only measures or describes the changes that have occurred but also seeks to understand the role played by particular interventions (programmes or policies) in producing these changes. This process is known as causal attribution. There are three broad strategies for causal attribution in impact evaluations: 1) estimating the counterfactual; 2) checking the consistency of the evidence for the causal relationships made explicit in the theory of change; and 3) ruling out alternative explanations through a logical, evidence-based process. The best-fit strategy for causal attribution depends on the evaluation context as well as on what is being evaluated.

Overview: Data Collection and Analysis Methods in Impact Evaluation: Methodological Briefs - Impact Evaluation No. 10 (French edition)

AUTHOR(S)
Greet Peersman

Published: 2014 Methodological Briefs
Impact evaluations should not be confined to determining the magnitude of effects (i.e., the average impact) but should also identify who has benefited from these programmes or policies and how. What constitutes ‘success’, and how the data will be analysed and synthesized to answer the key evaluation questions, should be specified from the outset. Data collection must yield the full body of evidence needed to make sound judgements about the programme or policy.

Overview of Impact Evaluation: Methodological Briefs - Impact Evaluation No. 1 (Spanish edition)

AUTHOR(S)
Patricia Rogers

Published: 2014 Methodological Briefs
Impact evaluation provides information about the impacts produced by an intervention. An impact evaluation may be conducted of a programme or a policy, or of upstream work, such as capacity building, policy advocacy and support for an enabling environment. This involves examining not only the goals and objectives, but also the unintended impacts.