Blog Archives

Which topics do you want us to cover?

We’re asking for your help in selecting the next topics we will cover.
We plan to cover 20 of the topics on this list next, or roughly half of them.

Please vote and pass on the link! Voting will remain open for one week (until March 18).

(To view a full-page version of this survey, click here.)

Announcing AidGrade’s new team!

Thursday was the one-year anniversary of AidGrade’s founding, and we have also completed a new round of researcher hiring!

It is a great team, with decades of combined experience in impact evaluation and meta-analysis. Congratulations and welcome to Gautam Bastian, Timothy Catlett, David Mason, and Christine Shen!

Volunteer opportunities remain, and we expect other opportunities to arise over the next few months, including summer internships! Watch this space or follow us on Facebook or Twitter for updates!

Kickstarter book available

If you missed the Kickstarter book and want a copy, you can get it here. We’ll also be posting a short version and a long version of the results for each topic on this website. Some things in the papers are not in the book and vice versa; the book is more of a popular read and includes remarks about the state of the discipline and an introduction to the statistics behind the analyses.

Job Openings

Deadline for all postings: Jan. 18, 2013.

Research Analyst (2 positions available, 3-month to 1-year contracts)

The research analysts will be responsible for sifting through impact evaluations of development programs and collecting data from them. Training will be provided. This is a full-time position. Some work can be done remotely, but there is a preference for candidates willing to work in the Washington, DC office or in San Francisco. The ideal candidate has a good understanding of international development and of standard economic methods of causal attribution, as well as a passion for the field.

Please submit cover letter and resume or CV to info “at” aidgrade “dot” org with “Research Analyst” in the subject line to apply. Deadline is Jan. 18, 2013.

Research Manager (part-time)

The Research Manager will be responsible for managing research projects, coordinating volunteers, and fundraising/friendraising. Some work can be done remotely, but there is a preference for candidates willing to work in the Washington, DC office or in San Francisco. The ideal candidate has a good understanding of international development and experience managing teams. Experience in fundraising/friendraising is a plus.

Please submit cover letter and resume or CV to info “at” aidgrade “dot” org with “Research Manager” in the subject line to apply. Deadline is Jan. 18, 2013.

Development Manager (part-time)

The Development Manager will be responsible for coordinating and implementing the fundraising and friendraising of the organization. Some work can be done remotely, but there is a preference for candidates willing to work in the Washington, DC office or in San Francisco. The ideal candidate has experience in fundraising/friendraising, a good understanding of international development, and experience managing teams.

Please submit cover letter and resume or CV to info “at” aidgrade “dot” org with “Development Manager” in the subject line to apply. Deadline is Jan. 18, 2013.

Director of Organizational Development

The Director of Organizational Development will be responsible for coordinating and implementing the fundraising and friendraising of the organization and managing volunteers for our research projects. Some work can be done remotely, but there is a preference for candidates willing to work in the Washington, DC office or in San Francisco. The ideal candidate has experience in fundraising/friendraising, a good understanding of international development, and experience managing teams.

Please submit cover letter and resume or CV to info “at” aidgrade “dot” org with “Director of Organizational Development” in the subject line to apply. Deadline is Jan. 18, 2013.

Some friendly concerns with GiveWell

Post by Eva Vivalt.

Let me preface this by saying that GiveWell has been really great at bringing attention to the issue of effective donations. It really spearheaded this movement and improved the conversation about aid.

However, there are some things that could be improved:

– Their literature reviews frequently use “vote counting” when results of meta-analyses are not available.[1] What is vote counting? Suppose there are 10 studies on a subject, 3 of which find a positive, significant effect and 7 of which find no statistically significant effect. One might take those 7 studies as evidence of no effect and conclude that the intervention is not effective, or not as effective as the 3 studies alone would suggest. But perhaps those studies’ sample sizes were simply not large enough. Those 7 studies, if combined through meta-analysis, might actually provide evidence in favour of the intervention having a positive effect.

Given that a review of the literature suggests many if not most studies lack the sample size needed to reliably detect a true effect at conventional significance levels, this is a major flaw. Vote counting makes an error akin to “accepting the null”, when one can only reject the null or fail to reject it.
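To make this concrete, here is a minimal simulation sketch, with entirely hypothetical numbers (it assumes numpy and scipy are available), of ten underpowered studies of a real effect. Vote counting sees mostly null results, while pooling the same estimates with inverse-variance weights recovers the effect:

```python
# Minimal sketch: vote counting vs. fixed-effect meta-analysis on
# hypothetical data. All numbers here are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.2   # small but real treatment effect
n_per_arm = 50      # each study is underpowered for an effect this size
n_studies = 10

effects, ses = [], []
for _ in range(n_studies):
    treat = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    effects.append(treat.mean() - control.mean())
    ses.append(np.sqrt(treat.var(ddof=1) / n_per_arm +
                       control.var(ddof=1) / n_per_arm))

effects, ses = np.array(effects), np.array(ses)

# Vote counting: how many studies are individually significant at 5%?
p_values = 2 * (1 - stats.norm.cdf(np.abs(effects / ses)))
print(f"Vote counting: {np.sum(p_values < 0.05)}/{n_studies} studies significant")

# Fixed-effect meta-analysis: pool the estimates with inverse-variance weights.
w = 1.0 / ses**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
p_pooled = 2 * (1 - stats.norm.cdf(abs(pooled / pooled_se)))
print(f"Meta-analysis: pooled effect {pooled:.2f} (SE {pooled_se:.2f}), p = {p_pooled:.4f}")
```

With these parameters each study has only about 17% power, so most individual studies come back insignificant even though the pooled estimate, which draws on all 1,000 observations, is strongly significant.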

– They aren’t in a good position to evaluate studies that did not use randomization. Despite stating that they appreciate quasi-experimental methods, their Aug. 23, 2012 summary of methods of causal attribution doesn’t even include difference-in-differences or matching: http://blog.givewell.org/2012/08/23/how-we-evaluate-a-study/. It also oddly puts instrumental variables at the top of its list,[2] and it invents a new form of causal identification: “visual and informal reasoning”. Economists will be delighted to hear that they no longer have to bother with finding a valid counterfactual; they need merely follow these steps for causal attribution:

Visual and informal reasoning. Researchers sometimes make informal arguments about the causal relationship between two variables, by e.g. using visual illustrations. An example of this: the case for VillageReach includes a chart showing that stock-outs of vaccines fell dramatically during the course of VillageReach’s program. Though no formal techniques were used to isolate the causal impact of VillageReach’s program, we felt at the time of our VillageReach evaluation that there was a relatively strong case in the combination of (a) the highly direct relationship between the “stock-outs” measure and the nature of VillageReach’s intervention (b) the extent and timing of the drop in stockouts, when juxtaposed with the timing of VillageReach’s program. (We have since tempered this conclusion.)

We sometimes find this sort of reasoning compelling, and suspect that it may be an under-utilized method of making compelling causal inferences.
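For contrast with the informal reasoning quoted above, here is a minimal sketch, on hypothetical data, of difference-in-differences, one of the standard quasi-experimental methods the summary omits:

```python
# Minimal difference-in-differences sketch on hypothetical data: compare the
# treated group's before/after change with a comparison group's change.
import numpy as np

rng = np.random.default_rng(1)

# Columns are (before, after). The treated group improves by 3, of which 1 is
# a common time trend shared with the comparison group and 2 is the effect.
treated = rng.normal([10.0, 13.0], 1.0, size=(100, 2))
comparison = rng.normal([9.0, 10.0], 1.0, size=(100, 2))

change_treated = treated[:, 1].mean() - treated[:, 0].mean()
change_comparison = comparison[:, 1].mean() - comparison[:, 0].mean()

# Subtracting the comparison group's change nets out the common trend,
# which is valid under the parallel-trends assumption.
print(f"Difference-in-differences estimate: {change_treated - change_comparison:.2f}")
```

Unlike eyeballing a chart, this makes the identifying assumption (parallel trends) explicit and testable against pre-period data.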

– While they agree with the idea that people may wish to support different things (e.g. health, education), in the end they provide their own list of recommended organizations and interventions implicitly based on what they find important or what they assume others might find important.

In contrast, AidGrade doesn’t try to make comparisons without solid grounding. It doesn’t impart judgments about the value of a year of life versus the value of an education but focuses on specific outcome variables separately. In order to say anything about the relative value of these different outcomes, one needs a theory of well-being (read more on this here). GiveWell does look at DALYs, which is one way of aggregating health outcomes, but this doesn’t really apply to other things one might care about, such as education or income. When you start by focusing on outcomes separately, you can always aggregate them up again later, and work is underway to provide a variety of tools for people to do that aggregation themselves rather than have the decision made for them.
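As a sketch of what such a tool might let users do, suppose the per-outcome estimates below (the names, numbers, and weights are hypothetical, and the estimates are assumed to be on comparable standardized scales); each user supplies their own valuation:

```python
# Hypothetical per-outcome effect estimates, kept separate rather than
# pre-aggregated into a single ranking.
outcome_estimates = {"enrollment": 0.12, "test_scores": 0.08, "income": 0.05}

# Each user supplies their own relative valuation of the outcomes.
my_weights = {"enrollment": 0.2, "test_scores": 0.3, "income": 0.5}

aggregate = sum(my_weights[k] * outcome_estimates[k] for k in outcome_estimates)
print(f"Weighted aggregate under my valuation: {aggregate:.3f}")
```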

Again, I have a lot of respect for the people there, and GiveWell has been the best game in town for the past few years. AidGrade’s positioning in this space is different: it jumps to the rigorous end of the spectrum, even if that means its statements are all couched in terms of the limits of the data.

These views are my own. (Main blog: http://www.aideconomics.com/, twitter: @evavivalt.)



[1] See, for example, their most recent (2008) review of microfinance: http://www.givewell.org/international/economic-empowerment/microfinance/detail (accessed Jan. 3, 2013). The review contains a warning to refer to http://www.givewell.org/international/economic-empowerment/microfinance for up-to-date content, but this latter summary does not itself cite any impact evaluations or meta-analyses, and if you follow the links you are led back to more old vote counting.

[2] Instrumental variables are rarely used and have generally come to be viewed with suspicion; their heyday was the 1980s.

How will you provide information on context?

We’re continuing to answer the questions we received. Please use this link to send us your questions! This question comes from Paul Niehaus, assistant professor at the University of California, San Diego.

The full comment: “This is something we think about a lot in summarizing evidence on cash transfers – there is of course no such thing as “the effect” of cash transfers since by design they are flexible and people use them very differently in different contexts. Even within our own randomized controlled trial sample you see very clearly that poorer people with starving kids prioritize nutrition while less poor households invest more in land, livestock, housing.”

We’re trying to code up the different factors that describe a study’s setting. So, if someone wanted to look only at studies in India, for example, they could do that by going to our meta-analysis app and choosing to include only studies done in India in their analysis. (Note: right now you will only see a couple of sample filters in the demo version. Behind the scenes, we have coded an extensive list of variables for each study, including country and other characteristics of the sample studied. We are still discussing how to present these in a user-friendly way, as nobody wants to wade through dozens of options. Please feel free to send us your comments about which filters you most want to see added, or any usability suggestions.)
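As a rough sketch of the kind of filtering involved (the field names and numbers below are hypothetical, not our actual schema), one can restrict the coded studies to a context and then pool what remains with inverse-variance weights:

```python
# Minimal sketch: filter coded studies on context variables, then
# meta-analyze the remainder. Field names and values are hypothetical.
import numpy as np

studies = [
    {"country": "India", "effect": 0.15, "se": 0.06},
    {"country": "India", "effect": 0.22, "se": 0.09},
    {"country": "Kenya", "effect": 0.05, "se": 0.07},
    {"country": "Peru",  "effect": 0.30, "se": 0.12},
]

def pooled_effect(records, **filters):
    """Keep studies matching every filter, then pool with inverse-variance weights."""
    kept = [r for r in records if all(r.get(k) == v for k, v in filters.items())]
    effects = np.array([r["effect"] for r in kept])
    ses = np.array([r["se"] for r in kept])
    w = 1.0 / ses**2
    return np.sum(w * effects) / np.sum(w), np.sqrt(1.0 / np.sum(w)), len(kept)

est, se, n = pooled_effect(studies, country="India")
print(f"India only: {est:.3f} (SE {se:.3f}) from {n} studies")
```

Every filter applied shrinks the set of pooled studies, which is exactly the trade-off discussed next.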

Adding filters would substantially reduce the number of studies included, so users will have to make trade-offs. If they want the single study closest to their specific situation, they can find just that one study. If they think their situation lies somewhere in between several others, they can include all the studies from the situations they believe might be relevant.

While we’ve been collecting data on a long list of characteristics of studies and the samples they are based on, we’re not going to catch every characteristic one might care about on our first try. As we grow, you’ll be able to suggest missing filters, which we can go back and add. Again, the analogy is that this work is like building a Wikipedia from scratch: slow at the start, but, with the help of many individuals, it will become a good resource.
