How studies were selected and data were collected

Ed. note: For more methodology details, see this post.

We currently have data for ten programs; these initial topics were selected through a Kickstarter campaign. Any study that appears to be an impact evaluation of the effects of a given program is included; as you can see by reading more about our meta-analyses, we are purposefully not restrictive. By coding a large amount of information about each study, we allow you to narrow down the studies at a later step, for example by focusing on only those school meals programs that were fortified with iron (sketched below). The outcome variables available for selection in any of our online tools depend on the program selected, as not every outcome variable is relevant to every program. More details on our methodology follow.
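
To make that later filtering step concrete, here is a minimal sketch in Python of how a researcher might narrow the study database. The file name and column names are hypothetical illustrations, not our actual schema.

```python
import pandas as pd

# Hypothetical export of the study database; the file name and the columns
# ("program", "iron_fortified", "outcome") are illustrative only.
studies = pd.read_csv("study_database.csv")

# Keep only school meals studies, then only those whose programs were
# fortified with iron ("iron_fortified" is assumed to be a boolean column).
school_meals = studies[studies["program"] == "school meals"]
iron_fortified = school_meals[school_meals["iron_fortified"]]

# The outcome variables on offer depend on the program selected.
print(iron_fortified["outcome"].unique())
```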

Literature Search

We initially searched the lists of impact evaluations and their references on the websites of the Abdul Latif Jameel Poverty Action Lab (J-PAL), at MIT; Innovations for Poverty Action (IPA), associated with Yale; and the Center for Effective Global Action (CEGA), at the University of California, Berkeley, looking for impact evaluations of each intervention. Following this, two research assistants independently searched titles, abstracts, and keywords in SciVerse, using the search terms in the table below, to augment the list of impact evaluations. SciVerse is an aggregator of scholarly articles, books, theses, abstracts, and other academic records. It searches other indexing databases, such as Scopus, MEDLINE, and PubMed Central, and it also searches many journals directly, including those from Elsevier, Springer, and SAGE Publications, as well as university sites, dissertation repositories, and professional organizations. Overall, it covers over 500 million records.

The research assistants were instructed to select for further screening each paper whose title suggested it was an impact evaluation of the effects of that particular intervention. Working papers were included. While the searches focused on RCTs, any paper that appeared to be an impact evaluation attempting to estimate the counterfactual was included. The two research assistants then checked the abstracts of the papers they had identified, again marking whether the paper appeared to be an impact evaluation of the effects of that intervention. They also collected the references of all studies that had passed the title check and put each reference through the same process. The full text of every study that at least one research assistant coded as passing the abstract check was gathered. For papers that went through many iterations, the most recent version was used. The full text was then checked to confirm the study was an impact evaluation of the effects of the intervention, and a third researcher arbitrated any disagreements between the two research assistants.
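
The screening rule above can be summarized in a few lines of code. This is a sketch of the decision logic only; the record structure and function names are ours, not part of any actual tooling we used.

```python
from dataclasses import dataclass

@dataclass
class ScreeningRecord:
    paper_id: str
    ra1_pass: bool  # RA 1's independent judgment: impact evaluation of this intervention?
    ra2_pass: bool  # RA 2's independent judgment of the same paper

def gather_full_text(record: ScreeningRecord) -> bool:
    # The full text is gathered if at least one RA passed the paper at the
    # abstract check, erring on the side of inclusion.
    return record.ra1_pass or record.ra2_pass

def needs_arbitration(record: ScreeningRecord) -> bool:
    # A third researcher arbitrates only where the two RAs disagree.
    return record.ra1_pass != record.ra2_pass
```

Taking the union at the abstract stage means disagreements do not eliminate papers early; they are instead resolved against the full text.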

Finally, while conducting the above searches, the research assistants were also instructed to compile a list of existing meta-analyses for each intervention whenever they came across one, whether in the primary search or while checking references. The full text of each of these meta-analyses was also searched for additional impact evaluations.

Screening

We used two sets of criteria for screening papers. The first determined the papers for which we collected data; the second determined which papers we used in our own meta-analyses.

Regarding the first set of criteria, our goal was to capture as many papers as possible while coding every characteristic of a paper that might be relevant, so that researchers can later use our database, select the kinds of papers they wish to use, and do the filtering as a separate step. Thus, we kept any paper that tried to use a counterfactual to estimate the effects of the particular intervention.

Our second set of criteria was informed by the literature on each intervention. For deworming, for example, we wanted to focus on studies that were randomized by cluster, given the evidence that deworming has large spillover effects; studies that ignore these spillovers underestimate treatment effects twice over, because untreated comparison individuals also benefit (shrinking the measured difference) and the spillover gains themselves go unmeasured.[1] We made sure to specify these filters in advance so as not to accidentally engage in the meta-analysis equivalent of specification searching. More details are included within each specific meta-analysis.
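
One way to picture "specifying filters in advance" is as a frozen configuration written down before any results are examined. The keys and values below are hypothetical illustrations, not our actual filter set.

```python
# Hypothetical pre-registered filters for the deworming meta-analysis,
# fixed before looking at any results.
DEWORMING_FILTERS = {
    "randomization_unit": "cluster",  # motivated by the spillover evidence in [1]
}

def passes_filters(study: dict, filters: dict) -> bool:
    # A study enters the meta-analysis only if it matches every
    # pre-specified filter; nothing is added or dropped after the fact.
    return all(study.get(key) == value for key, value in filters.items())
```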

Data Extraction

After compiling the list of studies for each intervention, a research assistant entered the main results from each paper into a spreadsheet in whichever form they were reported: the treatment coefficient, standard error, and number of observations; the treatment and control group means, standard deviations, and sample sizes; and so on. Other study characteristics were also coded in detail, such as specifics of the intervention, the paper's methodology, and the population studied. The characteristics collected varied by intervention, as different characteristics matter for different interventions. For example, whether the population was initially undernourished is particularly important for school meals programs.
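
Because results arrive in these different forms, a later analysis step has to convert them to a common effect size. The sketch below assumes the target is a standardized mean difference; the functions are illustrative, not our exact conversion procedure.

```python
import math

def smd_from_group_stats(mean_t: float, sd_t: float, n_t: int,
                         mean_c: float, sd_c: float, n_c: int) -> float:
    """Cohen's d from treatment/control means, SDs, and sample sizes."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def smd_from_coefficient(coefficient: float, outcome_sd: float) -> float:
    """When only a regression coefficient is reported, standardize it by
    the outcome's standard deviation, if the paper reports one."""
    return coefficient / outcome_sd
```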

Information that is specific to each meta-analysis, such as the number of studies identified and eliminated at every stage of the process, is provided in its own short working paper.

Again, for more details on the methodology used in our current round of meta-analyses, see this post.



[1] Miguel, E. and Kremer, M. (2004). "Worms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities." Econometrica 72(1): 159-217.
