UNICEF Innocenti
Office of Research-Innocenti

Publications

UNICEF Innocenti's complete catalogue of research and reports
SPOTLIGHT

Places and Spaces: Environments and children’s well-being

Report Card 17 explores how 43 OECD/EU countries are faring in providing healthy environments for children. Do children have clean water to drink? Do they have good-quality air to breathe? Are their homes free of lead and mould? How many children live in overcrowded homes? How many have access to green play spaces, safe from road traffic? Data show that a nation’s wealth does not guarantee a healthy environment. Far too many children are deprived of a healthy home, irreversibly damaging their current and future well-being. Beyond children’s immediate environments, over-consumption in some of the world’s richest countries is destroying children’s environments globally. This threatens both children worldwide and future generations. To provide all children with safe and healthy environments, governments, policymakers, businesses and all stakeholders are called to act on a set of policy recommendations.
READ THE FULL REPORT

13 - 24 of 30
Theory of Change: Methodological Briefs - Impact Evaluation No. 2

AUTHOR(S)
Patricia Rogers

Published: 2014 Methodological Briefs
A theory of change explains how activities are understood to produce a series of results that contribute to achieving the final intended impacts. It can be developed for any level of intervention – an event, a project, a programme, a policy, a strategy or an organization. In an impact evaluation, a theory of change is useful for identifying the data that need to be collected and how they should be analysed. It can also provide a framework for reporting.
Evaluative Criteria: Methodological Briefs - Impact Evaluation No. 3

AUTHOR(S)
Greet Peersman

Published: 2014 Methodological Briefs
Evaluation relies on a combination of facts and values to judge the merit of an intervention. Evaluative criteria specify the values that will be used in an evaluation. While evaluative criteria can be used in different types of evaluations, this brief specifically addresses their use in impact evaluations.
Evaluative Reasoning: Methodological Briefs - Impact Evaluation No. 4

AUTHOR(S)
E. Jane Davidson

Published: 2014 Methodological Briefs
Decision makers frequently need evaluation to help them work out what to do to build on strengths and address weaknesses. To do so, they must know not only what the strengths and weaknesses are, but also which are the most important or serious, and how well or poorly the programme or policy is performing on them. Evaluative reasoning is the process of synthesizing the answers to lower- and mid-level evaluation questions into defensible judgements that directly answer the key evaluation questions.
Participatory Approaches: Methodological Briefs - Impact Evaluation No. 5

AUTHOR(S)
Irene Guijt

Published: 2014 Methodological Briefs
Using participatory approaches in impact evaluation means involving stakeholders, particularly the participants in a programme or those affected by a given policy, in specific aspects of the evaluation process. The term covers a wide range of types of participation, and stakeholders can be involved at any stage of the impact evaluation process, including its design, data collection, analysis, reporting and the management of the study.
Overview: Strategies for Causal Attribution: Methodological Briefs - Impact Evaluation No. 6

AUTHOR(S)
Patricia Rogers

Published: 2014 Methodological Briefs
One of the essential elements of an impact evaluation is that it not only measures or describes changes that have occurred but also seeks to understand the role of particular interventions (i.e., programmes or policies) in producing these changes. This process is known as causal attribution. In impact evaluation, there are three broad strategies for causal attribution: 1) estimating the counterfactual; 2) checking the consistency of evidence for the causal relationships made explicit in the theory of change; and 3) ruling out alternative explanations, through a logical, evidence-based process. The ‘best fit’ strategy for causal attribution depends on the evaluation context as well as what is being evaluated.
Randomized Controlled Trials (RCTs): Methodological Briefs - Impact Evaluation No. 7

AUTHOR(S)
Howard White; Shagun Sabarwal; Thomas de Hoop

Published: 2014 Methodological Briefs
A randomized controlled trial (RCT) is an experimental form of impact evaluation in which the population receiving the programme or policy intervention is chosen at random from the eligible population, and a control group is also chosen at random from the same eligible population. It tests the extent to which specific, planned impacts are being achieved. The distinguishing feature of an RCT is the random assignment of units (e.g., people, schools or villages) to the intervention or control group. One of its strengths is that it provides a very powerful response to questions of causality, helping evaluators and programme implementers to know that what is being achieved is a result of the intervention and not of anything else.
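The random-assignment step at the heart of an RCT can be sketched in a few lines of code. This illustration is not taken from the brief; the unit names and the simple 1:1 allocation are assumptions made for the example:

```python
import random

def randomize(units, seed=2014):
    """Randomly split units (e.g., villages) into treatment and control groups (1:1)."""
    rng = random.Random(seed)   # fixed seed so the allocation is reproducible and auditable
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

villages = [f"village_{i}" for i in range(10)]
treatment, control = randomize(villages)
```

Because every eligible unit has the same chance of ending up in either group, any systematic difference in outcomes between the two groups can be attributed to the intervention rather than to pre-existing differences.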
Quasi-Experimental Design and Methods: Methodological Briefs - Impact Evaluation No. 8

AUTHOR(S)
Howard White; Shagun Sabarwal

Published: 2014 Methodological Briefs
Quasi-experimental research designs, like experimental designs, test causal hypotheses. Quasi-experimental designs identify a comparison group that is as similar as possible to the intervention group in terms of baseline (pre-intervention) characteristics. The comparison group captures what would have been the outcomes if the programme/policy had not been implemented (i.e., the counterfactual). The key difference between an experimental and quasi-experimental design is that the latter lacks random assignment.
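One common quasi-experimental estimator, difference-in-differences, compares the change in the intervention group with the change in the comparison group over the same period. A minimal sketch follows; the outcome and the numbers are hypothetical, not drawn from the brief:

```python
def diff_in_diff(treat_pre, treat_post, comp_pre, comp_post):
    """Difference-in-differences: the intervention group's change minus the comparison group's change."""
    return (treat_post - treat_pre) - (comp_post - comp_pre)

# Hypothetical school-enrolment rates (%) before and after a programme
effect = diff_in_diff(treat_pre=50.0, treat_post=70.0, comp_pre=48.0, comp_post=58.0)
print(effect)  # 10.0
```

The comparison group's change (here 10 points) stands in for the counterfactual trend, so the remaining 10-point difference is attributed to the programme, under the assumption that the two groups would otherwise have moved in parallel.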
Comparative Case Studies: Methodological Briefs - Impact Evaluation No. 9

AUTHOR(S)
Delwyn Goodrick

Published: 2014 Methodological Briefs
Comparative case studies involve the analysis and synthesis of the similarities, differences and patterns across two or more cases that share a common focus or goal, producing knowledge that is easier to generalize about causal questions – how and why particular programmes or policies work or fail to work. They may be an appropriate impact evaluation design when it is not feasible to undertake an experimental design, and/or when there is a need to explain how context influences the success of programme or policy initiatives. Comparative case studies usually combine qualitative and quantitative methods and are particularly useful for understanding how to tailor an intervention to its specific context to achieve the intended outcomes.
Overview: Data Collection and Analysis Methods in Impact Evaluation: Methodological Briefs - Impact Evaluation No. 10

AUTHOR(S)
Greet Peersman

Published: 2014 Methodological Briefs
Impact evaluations need to go beyond assessing the size of the effects (i.e., the average impact) to identify for whom and in what ways a programme or policy has been successful. What constitutes ‘success’ and how the data will be analysed and synthesized to answer the specific key evaluation questions (KEQs) must be considered up front as data collection should be geared towards the mix of evidence needed to make appropriate judgements about the programme or policy. This brief provides an overview of the issues involved in choosing and using data collection and analysis methods for impact evaluations.
Developing and Selecting Measures of Child Well-Being: Methodological Briefs - Impact Evaluation No. 11

AUTHOR(S)
Howard White; Shagun Sabarwal

Published: 2014 Methodological Briefs
Indicators provide a signal to decision makers by indicating whether, and to what extent, a variable of interest has changed. They can be used at all levels of the results framework from inputs to impacts, and should be linked to the programme’s theory of change. Most important at the lower levels of the causal chain are monitoring indicators such as inputs (e.g., immunization kits supplied), activities (e.g., immunization days held) and outputs (e.g., clinics built). For higher-level indicators of outcomes and impact, however, monitoring tells us what has happened but not why it happened. To understand this, impact evaluation must be used to increase our understanding of the factors behind achieving or not achieving the goal.
Interviewing: Methodological Briefs - Impact Evaluation No. 12

AUTHOR(S)
Bronwen McDonald; Patricia Rogers

Published: 2014 Methodological Briefs
Interviews are easy to do badly and hard to do well: good planning, adequate time and appropriate skills are required. The type of interview should be carefully chosen to suit the situation rather than choosing a type of interview (such as focus groups) simply because it is commonly used. Interviews with children raise particular ethical issues that need to be carefully considered and fully addressed. This brief outlines key issues to consider in planning interviews for impact evaluation, taking into account the purpose of the evaluation, how interview data aim to complement other data for assessing impact, and the availability of resources.
Modelling: Methodological Briefs - Impact Evaluation No. 13

AUTHOR(S)
Howard White; Shagun Sabarwal

Published: 2014 Methodological Briefs
Modelling is an approach to impact evaluation that uses mathematical models to describe social and economic relationships and to infer causality from an intervention to an outcome, and/or between an outcome and its determinants. Models with more than one equation are most valuable, as they allow both direct and indirect effects, as well as two-way relationships, to be captured. Models can be used to examine the impact of a programme or policy by introducing the intervention as an exogenous change to some of the variables, parameters or equations.
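To illustrate how a multi-equation model captures indirect effects, here is a hedged sketch with two hypothetical linear equations; the variables, coefficients and values are invented for the example, not drawn from the brief:

```python
# Equation 1: a subsidy (the intervention) raises years of schooling
def schooling(subsidy, a0=4.0, a1=2.0):
    return a0 + a1 * subsidy

# Equation 2: schooling in turn raises later income
def income(subsidy, b0=10.0, b1=3.0):
    return b0 + b1 * schooling(subsidy)

baseline = income(0.0)                    # no subsidy: 10 + 3*4 = 22.0
with_policy = income(1.0)                 # with subsidy: 10 + 3*6 = 28.0
indirect_effect = with_policy - baseline  # 6.0, transmitted through schooling (a1 * b1)
```

Introducing the subsidy as an exogenous change in the first equation lets the model trace its effect through the causal chain to the outcome in the second, which a single-equation model could not separate into direct and indirect channels.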
Publication

Return on Knowledge: How international development agencies are collaborating to deliver impact through knowledge, learning, research and evidence

Effective collaboration around knowledge management and organizational learning is a key contributor to improving the impact of international development work for the world’s most vulnerable people. But how can it be proven? With only 10 years remaining to the target date for the Sustainable Development Goals, nine of the world’s most influential agencies set out to show the connection between the use of evidence, knowledge and learning and a better quality of human life. This book – a synthesis of stories, examples and insights that demonstrate where and how these practices have made a positive impact on development programming – is the result of the Multi-Donor Learning Partnership (MDLP), a collective effort to record the ways each of these organizations has leveraged intentional, systematic and resourced approaches to knowledge management and organizational learning in its work.
Publication

Gender Solutions: Capturing the impact of UNICEF’s gender equality evidence investments (2014–2021)

UNICEF has undertaken hundreds of gender evidence generation activities, supporting programmatic action, advocacy work and policymaking. The Gender Solutions project aims to draw together the knowledge, innovations and impacts of gender evidence work conducted by UNICEF offices since the first UNICEF Gender Action Plan was launched in 2014. A desk review identified over 700 gender-related UNICEF research, evaluation and data evidence generation activities since 2014. Twenty-five outputs were shortlisted because of their high quality and (potential for) impact and three were selected as Gender Evidence Award winners by an external review panel. By capturing the impact of this broad body of work, Gender Solutions aims to showcase UNICEF’s evidence investments, reward excellence and inform the rollout of the UNICEF Gender Policy 2021–2030 and Action Plan 2022–2025.
Publication

Annual Report 2021

The UNICEF Innocenti Annual Report 2021 highlights the key results achieved in research and evidence to inform policymaking and programming.
Publication

Responsible Innovation in Technology for Children: Digital technology, play and child well-being

Digital experiences can have significant negative impact on children, exposing them to risks or failing to nurture them adequately. Nevertheless, digital experiences also potentially yield enormous benefits for children, enabling them to learn, to create, to develop friendships, and to build worlds. While global efforts to deepen our understanding of the prevalence and impact of digital risks of harm are burgeoning – a development that is both welcome and necessary – less attention has been paid to understanding and optimizing the benefits that digital technology can provide in supporting children’s rights and their well-being. Benefits here refer not only to the absence of harm, but also to creating additional positive value. How should we recognize the opportunities and benefits of digital technology for children’s well-being? What is the relationship between the design of digital experiences – in particular, play-centred design – and the well-being of children? What guidance and measures can we use to strengthen the design of digital environments to promote positive outcomes for children? And how can we make sure that children’s insights and needs form the foundation of our work in this space? These questions matter for all those who design and promote digital experiences, to keep children safe and happy, and enable positive development and learning. These questions are particularly relevant as the world shifts its attention to emerging digital technologies and experiences, from artificial intelligence (AI) to the metaverse, and seeks to understand their impact on people and society. 
To begin to tackle these questions, UNICEF and the LEGO Group initiated the Responsible Innovation in Technology for Children (RITEC) project in partnership with the Young and Resilient Research Centre at Western Sydney University; the CREATE Lab at New York University; the Graduate Center, City University of New York; the University of Sheffield; the Australian Research Council Centre of Excellence for the Digital Child; and the Joan Ganz Cooney Center. The research is funded by the LEGO Foundation. The partnership is an international, multi-stakeholder and cross-sectoral collaboration between organizations that believe the design and development of digital technology should support the rights and well-being of children as a primary objective – and that children should have a prominent voice in making this a reality. This project’s primary objective is to develop, with children from around the world, a framework that maps how the design of children’s digital experiences affects their well-being, and to provide guidance as to how informed design choices can promote positive well-being outcomes.
