Monthly Archives: December 2013

An intern at AidGrade

Post by Scott Weathers, @scott_weathers.

My first day at AidGrade came with a gift. In our fall team’s first meeting with Eva, we received owl-themed kitchen timers.

But these weren’t stocking stuffers. Eva brought up a set of studies—which everyone was already familiar with—arguing that working for short, fixed blocks of time was the most effective work strategy. Spend less time concentrating and you’re not reaching optimal productivity, spend more and you’re burning yourself out.

This is AidGrade. How can we use our time, money, and energy to the greatest effect? How can development aid be maximized to spur economic growth, cure disease, and save the greatest number of lives?

AidGrade is a risky endeavor. We are less than a dozen employees and interns working to refine the most promising solutions to global disease and poverty.

You’ll see the importance of AidGrade right away. Why isn’t there a database of studies on HIV education, anti-malarial bed nets, micronutrients, or microfinance? If you like to ask big questions, you’ll fit right in.

You’ll also learn quickly. The fine-grained details of randomized controlled trials (RCTs) may not be too familiar right now, but AidGrade presents an awesome opportunity to dive in.

AidGrade is a development start-up that I believe is working on some of the most important issues the world has today. If you have any aspiration to impact humanity, or at least understand the impact you may have, I encourage you to join.

——
Scott was a fall term social media intern. Interested in learning more about what opportunities are available? Get in touch: info@aidgrade.org.

What does $50k mean to us?

Post by Eva Vivalt, @evavivalt.

AidGrade is at risk of not reaching its Indiegogo target, which would mean the money raised to date is returned to the contributors.

We’re a small organization. We do everything on a shoestring, which actually makes us quite cost-effective. Everything I’ve done for the organization has been on a volunteer basis, and most of our work gets done that way.

We’re a young organization. We may seem so well-established that it looks like we don’t need the money, but tomorrow marks only one year since we launched the website.

$50k is a year’s worth of activities, if we were to ration ourselves to only the most essential parts. That’s all the meta-analyses, all the data collection, and all the massive coordination that requires. Perhaps we haven’t allocated enough in the past to fundraising, publicity, or putting our materials into media-friendly sound bites. That’s understandable – we’re researchers – but mea culpa. We are finishing up the last of our first 20 meta-analyses and expect the focus to shift to making more use of these data to put out new findings.

We aren’t planning to do any more crowdfunders, so this is your last chance to get one of the rewards – the crowdfunder ends Tuesday! Please help us get to our goal so we can squeeze these data this coming year and build sustainability.

Do randomized controlled trials engage in less specification searching?

Originally posted on Eva Vivalt’s blog.

An excerpt from ongoing work, drawing on AidGrade’s database of impact evaluation results in development economics.

These are results from caliper tests, which essentially compare the number of results just above a critical threshold (t = 1.96) with the number just below it. You can vary the width of the band; for example, a 5% caliper looks at the range 1.862 – 2.058. If you see a jump at 1.96, you might suspect specification searching is going on, i.e. researchers reporting only the results they like, which biases the literature.
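
As a rough illustration of the mechanics, here is a minimal sketch of a caliper test in Python. The caliper_test helper, the synthetic t-statistics, and the choice of a one-sided binomial test against a 50/50 split are illustrative assumptions for exposition, not AidGrade’s exact procedure.

```python
# A minimal sketch of a caliper test (illustrative; not AidGrade's exact code).
import numpy as np
from scipy.stats import binomtest


def caliper_test(t_stats, caliper=0.05, threshold=1.96):
    """Count |t|-statistics just over vs. just under the threshold and run a
    one-sided binomial test of whether 'over' is more common than 50/50."""
    t = np.abs(np.asarray(t_stats, dtype=float))
    lower, upper = threshold * (1 - caliper), threshold * (1 + caliper)
    over = int(np.sum((t >= threshold) & (t <= upper)))
    under = int(np.sum((t >= lower) & (t < threshold)))
    p_value = binomtest(over, over + under, 0.5, alternative="greater").pvalue
    return over, under, p_value


# Synthetic example: with no specification searching, the over and under
# counts should be roughly balanced and the p-value unremarkable.
rng = np.random.default_rng(0)
fake_t_stats = rng.standard_normal(5000) * 1.5
print(caliper_test(fake_t_stats, caliper=0.05))
```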

                   Over   Under   p-value   Sig. level
All studies
  2.5% Caliper       45      26      0.02    <0.05
  5% Caliper         73      51      0.03    <0.05
  10% Caliper       127     117      0.28
  15% Caliper       182     185      0.58
  20% Caliper       220     231      0.71
RCTs
  2.5% Caliper       24      14      0.07    <0.10
  5% Caliper         35      28      0.22
  10% Caliper        64      68      0.67
  15% Caliper        97     107      0.78
  20% Caliper       119     134      0.84
Non-RCTs
  2.5% Caliper       21      12      0.08    <0.10
  5% Caliper         38      23      0.04    <0.05
  10% Caliper        63      49      0.11
  15% Caliper        85      78      0.32
  20% Caliper       101      97      0.42
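
To see how the p-value column can be read, here is a quick check of the first row above (all studies, 2.5% caliper: 45 over, 26 under), under the assumption of a one-sided binomial test against a 50/50 split; that is a common convention for caliper tests and is an assumption here, not a claim about the exact test used in the working paper.

```python
from scipy.stats import binomtest

# All studies, 2.5% caliper: 45 results just over t = 1.96, 26 just under.
# Under the one-sided binomial-test assumption described above, this gives a
# p-value close to the 0.02 reported in the first row of the table.
print(binomtest(45, 45 + 26, 0.5, alternative="greater").pvalue)
```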

Okay, there seems to be a jump, possibly more among quasi-experimental studies than among RCTs.

Overall, though, this jump is actually quite small. Gerber and Malhotra did the same kinds of tests for political science and sociology. They used different selection criteria when gathering their papers, essentially maximizing the probability they would see a jump, but take a look at their numbers:

Political science:

                 Over   Under   p-value
A. APSR
Vol. 89-101
  10% Caliper      49      15    <0.001
  15% Caliper      67      23    <0.001
  20% Caliper      83      33    <0.001
Vol. 96-101
  10% Caliper      36      11    <0.001
  15% Caliper      46      17    <0.001
  20% Caliper      55      21    <0.001
Vol. 89-95
  10% Caliper      13       4     0.02
  15% Caliper      28      12     0.008
  20% Caliper      21       6     0.003
B. AJPS
Vol. 39-51
  10% Caliper      90      38    <0.001
  15% Caliper     128      66    <0.001
  20% Caliper     165      95    <0.001
Vol. 46-51
  10% Caliper      56      25    <0.001
  15% Caliper      80      45     0.001
  20% Caliper     105      66     0.002
Vol. 39-45
  10% Caliper      34      13     0.002
  15% Caliper      48      21    <0.001
  20% Caliper      60      29    <0.001

Sociology:

                          Over   Under   p-value
ASR (Vols. 68-70)
  5% Caliper                15       4     0.01
  10% Caliper               26      15     0.06
  15% Caliper               47      17    <0.001
  20% Caliper               54      19    <0.001
AJS (Vols. 109-111)
  5% Caliper                16       4     0.006
  10% Caliper               25      11     0.01
  15% Caliper               41      14    <0.001
  20% Caliper               48      18    <0.001
TSQ (Vols. 44-46)
  5% Caliper                13       4     0.02
  10% Caliper               22       7     0.004
  15% Caliper               26      11     0.01
  20% Caliper               30      20     0.1
Combined (recent vols.)
  5% Caliper                44      12    <0.001
  10% Caliper               73      33    <0.001
  15% Caliper              114      42    <0.001
  20% Caliper              132      57    <0.001
ASR (Vols. 58-60)
  5% Caliper                17       2    <0.001
  10% Caliper               22       5    <0.001
  15% Caliper               27      11     0.007
  20% Caliper               30      15     0.02

Wow! Economics is not doing so badly after all! (Some public health papers are also included, but the results are comparable if you break them down.) To match Gerber and Malhotra, these tables all report the number of results rather than the number of papers, and papers sometimes report more than one result, so there are some subtleties here that I get into in the longer working paper. Data are still being gathered, and there is much more to be said on this topic. If you’d like to see more of this kind of work on research credibility, please support us in the last few days of our Indiegogo campaign!

Want to make a difference in development?

Update: please see revised deadline of January 10, 2014.

Do you know someone interested in development? It’s that time of year again – with some people moving on to bigger and better things, we are looking for new analysts and interns interested in research and public outreach.

Because we are a very small organization, you will find you have a lot of autonomy working with us. While everyone here wears multiple hats, we are particularly looking for people to work in the following two areas:

Research: One of the main components of conducting a meta-analysis is reading through academic papers and coding up different characteristics of the papers. You would learn this process and likely progress to reconciling the coding work that others have done. Depending on your skills, you might also be involved in some of the analysis and writing of papers.

Publicity: We also need to strengthen our social media presence and publicity, and are seeking a social media intern and a director of development to improve our outreach, manage donor relations, and lead our fundraising activities.

There is a preference for applicants in Washington, DC, New York, or San Francisco, but the work can quite successfully be done remotely. We have internships available in both of these categories for people who would like to learn more but have less experience. To apply, please send a CV and a letter of interest indicating the role(s) for which you would like to be considered to info@aidgrade.org by January 10. Early application is encouraged.
