A meta-analysis takes the results of several studies evaluating the same type of intervention and combines them into a single estimate of its effect.

An individual evaluation of the effectiveness of an aid program does not tell you as much as you’d like. Multiple studies are needed to reassure you that the results were not just a fluke. Ideally, you would also like to see how an intervention performs in different contexts.

Once a few studies have been done, a meta-analysis is the best way to combine them to get an estimate of the program’s typical effects, its effects in particular settings, and its effects on particular groups of people. Some effects might also have been too weak to have been picked up in a small study; a meta-analysis is more likely to catch these.

A meta-analysis is different from a summary that simply counts how many studies say one thing and how many studies say another thing. This is known as “vote counting” and it is not a statistically sound way of combining results.

For example, suppose there are 10 studies on a subject, 3 of which find a positive, significant effect and 7 of which find no statistically significant effect. One might treat those 7 studies as evidence of no effect and conclude that the intervention is not effective, or not as effective as the 3 studies alone would suggest. But perhaps those 7 studies were simply too small to detect the effect. Combined through meta-analysis, they might actually provide evidence in favour of the intervention having a positive effect.
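The arithmetic behind this can be sketched with a simple fixed-effect (inverse-variance) meta-analysis. The effect sizes and standard errors below are purely illustrative, not from any real studies: each of the seven hypothetical studies fails to reach significance on its own (|z| < 1.96), yet the pooled estimate is clearly significant.

```python
import math

# Hypothetical effect estimates and standard errors from 7 small studies.
# These numbers are invented for illustration; each study alone is
# non-significant at the 5% level (|z| < 1.96).
effects = [0.25, 0.10, 0.18, 0.22, 0.15, 0.30, 0.12]
ses     = [0.16, 0.14, 0.15, 0.17, 0.13, 0.18, 0.15]

for est, se in zip(effects, ses):
    assert abs(est / se) < 1.96  # no single study is significant

# Fixed-effect meta-analysis: weight each study by the inverse of its
# variance, so more precise studies count for more.
weights = [1 / se**2 for se in ses]
pooled_effect = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
z = pooled_effect / pooled_se

print(f"pooled effect = {pooled_effect:.3f}, SE = {pooled_se:.3f}, z = {z:.2f}")
# The pooled z-statistic exceeds 1.96: taken together, the studies show a
# significant positive effect even though none did individually.
```

Pooling shrinks the standard error roughly in proportion to the square root of the number of studies, which is why the combined estimate can cross the significance threshold when the individual estimates cannot. A vote count over these same seven studies would have recorded seven "no effect" results.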

Many, if not most, studies in development economics are underpowered: their samples are too small to reliably detect a statistically significant effect. As a result, combining studies through vote counting can be very misleading. Vote counting makes an error akin to “accepting the null”, when in fact one can only reject the null or fail to reject it. In contrast, a meta-analysis goes to the underlying data and combines the results in a statistically sound way.