First, we don’t try to make comparisons without solid grounding. For example, we don’t impose judgments about the value of a year of life versus the value of an education; instead, we rate health programs and education programs separately. Saying more than this requires more evidence, and we are planning additional research to address this question, leaving individuals free to make their own judgments in the meantime.
Second, we focus on randomized controlled trials (RCTs) and other rigorous methods to evaluate a program’s effects. Other charity navigators that pay attention to methods tend also to weigh their own communications and relationships with the organizations in question, which we worry makes their evaluations less transparent.
Third, we have the quantitative background to do evaluations properly. For example, we combine the results of studies in statistically sound ways. Existing charity navigators instead use “vote counting”, which incorrectly treats a study that fails to find evidence of an effect as evidence of no effect. Given that reviews of the literature suggest many, if not most, studies are underpowered (they lack the sample size needed to reliably detect a real effect at conventional significance levels), this is a major flaw. Essentially, existing charity navigators treat statistically insignificant studies as evidence against an effect, when those same studies, combined with others, could actually provide evidence for an effect. This kind of basic statistical error makes us cautious about the quantitative ability of other charity navigators.
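The point about pooling studies can be made concrete with a small sketch. Stouffer’s Z-score method is one standard way to combine p-values across studies (offered here purely as an illustration, not necessarily the exact method we use): several studies that are each individually non-significant, and that vote counting would therefore discard, can jointly provide significant evidence of an effect.

```python
from math import sqrt
from statistics import NormalDist

def stouffer_combined_p(p_values):
    """Combine one-sided p-values across studies via Stouffer's Z-score method.

    Each p-value is converted to a standard-normal z-score; the z-scores are
    summed and rescaled by sqrt(k), and the result is converted back to a
    combined p-value.
    """
    nd = NormalDist()
    z = sum(nd.inv_cdf(1 - p) for p in p_values) / sqrt(len(p_values))
    return 1 - nd.cdf(z)

# Three hypothetical underpowered studies, each non-significant on its own
# (p = 0.10 > 0.05), so vote counting would score all three as "no effect":
combined = stouffer_combined_p([0.10, 0.10, 0.10])
print(round(combined, 4))  # ~0.0132: jointly significant at the 0.05 level
```

Vote counting throws away the magnitude and direction of each result, keeping only a pass/fail verdict; pooling methods like this one retain that information, which is why the combined evidence can be significant even when no single study is.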
Finally, we actively develop more RCTs to answer critical questions. We don’t just take the evidence of effectiveness that organizations provide; we ask them to let us, or another independent evaluator, assess their projects. We also work on research credibility and are building a database that should benefit researchers, NGOs and social enterprises, and donors alike.
Our main similarity with other charity navigators is that we help promote organizations for which we find evidence of strong performance. In summary, we are the most rigorous charity navigator, but we are also more than a charity navigator: we actively develop more impact evaluations, build new knowledge, and use our quantitative and technological skills to create tools that advance understanding.