Read more about our process or skip to the results.
What makes us different? First, we will cover all impact evaluations in international development. Second, we will share all our data. Data sharing is very rarely done for meta-analyses, yet it is especially important when so many results are synthesized into a single estimate.
For example, the status quo is that a team of researchers will scour hundreds if not thousands of papers, select a set to include, input their results, and from this set produce a single estimate of the effects of an intervention. Sometimes another team of researchers will think a different set of papers should have been used, and they will then have to scour the literature and build their set from scratch. If the two groups disagree, all the public sees are these two data points and the groups' reasoning for selecting different papers. AidGrade will instead cover the superset of all impact evaluations one might wish to include, along with a list of their characteristics (e.g. where they were conducted, or whether they were randomized by individual or by cluster), and let people set their own filters on the papers, or select individual papers, and view the entire space of possible results. The diagram below illustrates the difference.
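The filter-then-pool idea above can be sketched in a few lines. This is a hypothetical illustration, not AidGrade's actual code or data: the study names, effect sizes, and filter fields are invented, and the pooling method shown is a standard fixed-effect (inverse-variance weighted) estimator, one of several a real meta-analysis tool might offer.

```python
# Illustrative sketch: filter a superset of studies, then pool the subset.
# All studies and numbers below are made up for demonstration.

from dataclasses import dataclass

@dataclass
class Study:
    name: str
    effect: float            # estimated treatment effect
    se: float                # standard error of the effect
    country: str             # example filter: where the study was conducted
    cluster_randomized: bool # example filter: individual vs. cluster randomization

STUDIES = [
    Study("Study A", 0.20, 0.05, "Kenya", True),
    Study("Study B", 0.10, 0.04, "India", False),
    Study("Study C", 0.35, 0.10, "Kenya", False),
]

def pooled_effect(studies):
    """Fixed-effect pooled estimate: inverse-variance weighted mean."""
    weights = [1 / s.se ** 2 for s in studies]
    est = sum(w * s.effect for w, s in zip(weights, studies)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5
    return est, se

# A user picks a filter; only matching studies enter the pooled estimate.
subset = [s for s in STUDIES if s.country == "Kenya"]
est, se = pooled_effect(subset)
print(f"Pooled effect: {est:.3f} (SE {se:.3f})")  # → Pooled effect: 0.230 (SE 0.045)
```

Changing the filter (or the set of selected papers) changes which studies are pooled, which is exactly the "space of possible results" a user can explore.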
Further, our data set is not static: people will be able to add data as more becomes available, keeping results updated and relevant over time. We have seen the success of other crowd-sourced research projects such as Foldit, a protein-folding application with over 240,000 registered members that has led to numerous discoveries in biology. We believe we can similarly harness the goodwill and interest of many people to help identify studies and input data. More than 30 volunteers, most of them recent college graduates, have already helped, and by breaking the components of a meta-analysis down into small, simple tasks, we have confirmed that this model works.
We are starting with a dataset of studies collected and coded by volunteers, with over 60 filters available. Since this work is akin to building Wikipedia from scratch, it will take some time to upload all the results from impact evaluations in development. We are currently building a tool that will allow people to add data, and we will refine the process as we continue. (Read more about how our current studies were selected and how data were collected.) You can already build your own meta-analyses using our data. (Read more about our meta-analysis tool.)
Through a Kickstarter campaign to gauge interest in this project, we selected 10 initial topics on which to focus.
The Kickstarter book, What Works in Development: 10 Meta-Analyses of Aid Programs, covers the complete list of topics and gives plain-English examples of how these programs have worked in practice. This page will be updated with technical papers as they become available, and we plan to add shorter descriptive summaries here in the future. Until then (and even after), we recommend reading What Works in Development. You can get a copy by donating at least $12.99. Please contact us if you have any questions.