Philanthropedia Blog

A Closer Look at Experts Part I: Who are our experts and why do we value them?

November 23rd, 2009 by Erinn Andrews

In my guest post on Tactical Philanthropy, I outlined how Philanthropedia is trying to create good information that is actionable for donors and available at scale across a number of social causes. However, I only briefly touched on why we think experts working in the sector are well suited to evaluate nonprofits. In light of the great questions Richard Marker posed about Philanthropedia in his blog post this weekend, I’d like to go into more detail about who our experts are and why we value them. In a future post, I’ll cover exactly how we find these experts and what kind of information we collect from them.

First, a recap on who our experts are: they are funders, nonprofit executives, academics, researchers, policy makers, and many others who have been working in a particular sector for a substantial period of time; ten or more years is generally considered to constitute deep expertise.[i] As our colleague Richard Marker noted in his blog post this weekend, expertise in a field is valuable, and we’re trying to capture that knowledge through our work. However, Richard also importantly noted that younger innovators have a valuable perspective to share. We couldn’t agree more, which is why we set the minimum for our experts at two years of experience rather than ten. In our climate change research, for example, our experts had anywhere from 2 to 40 years of experience.

Next, it’s important to explain why we think expert judgment is a better proxy for nonprofit effectiveness than any single metric. Experts are like doctors or admission officers at selective universities. When a doctor meets with a patient to diagnose a problem, she takes into consideration not only the patient’s height, weight, and blood pressure, but also the patient’s medical history, description of ailments, and perhaps even body language. Having worked with the patient before, the doctor can make a diagnosis based on a variety of factors.

In the example of the admission officer at the selective university, the applicant is not admitted based on SAT score or GPA alone, but on a variety of factors including personal statement, letters of recommendation, opportunities at the high school, and family context. In both of these examples, the doctor and admission officer follow well-thought-out guidelines, but do not rely on a formula or an equation to produce a diagnosis or an admission decision. Instead, the patient and applicant are given a holistic review.

We believe that in the nonprofit sector, funders, nonprofit executives, academics, researchers, and policy makers are best suited to evaluate nonprofit effectiveness for two reasons. First, these experts have access to unique and nonpublic data about the nonprofits, just as doctors and admission officers do in their respective fields. Too often, this information remains private, though it could be valuable to nonprofits, donors, and policy makers alike, as Paul Brest and Hal Harvey argue in Money Well Spent (specifically referring to the knowledge foundation professionals gather). As I explained in the guest post, the first challenge is capturing this knowledge; the next is disseminating it in a meaningful, useful way. The “value of knowledge can be multiplied many times over if there are good systems in place for disseminating it.”[ii]

Second, because of their expertise in the field, we believe these experts use advanced mental models to consider the many factors that go into measuring nonprofit effectiveness. Our core assumption is that expert judgment is a good proxy for evaluation when an equation alone cannot predict an outcome.[iii] “[T]he input assumptions, the range of applicability of the model, and the interpretation of the output are all subject to intuitive intervention by an individual who can bring the appropriate expertise to bear on the application of the model.”[iv]

We think that each type of professional brings a different yet valuable perspective to judging nonprofit performance. “Foundations in the United States have spent significant time and money on their performance measurement systems, and are probably as close a parallel in the nonprofit sector to the kind of for-profit financial analysts that work for investment banks.” (source) Nonprofit executives, on the other hand, having spent years in the “trenches,” know the intricacies of balancing competing interests, allocating resources, and working with multiple constituencies, and so bring their own valuable and unique perspective to evaluating nonprofit success. Academics, researchers, and policy analysts provide yet another important view, rooted in research, longitudinal measurement, and interdisciplinary scholarship.

We don’t believe that one of these perspectives is necessarily more correct or accurate than another—all add value and represent well-informed views of the nonprofit world. While admittedly this is still an imperfect measure of effectiveness, we believe that by bringing together the perspectives of these diverse groups of professionals, we can meaningfully capture the aggregated beliefs of a group of well-informed people and understand which organizations they currently think are impactful. And, as I explained in the guest post on Tactical Philanthropy, experts are only a starting point.
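As an illustration only (Philanthropedia hasn’t described its exact aggregation method here, and the group names and vote counts below are invented), combining recommendations from several expert groups might look like the following sketch, where each group’s recommendations are normalized before averaging so that no single profession dominates the combined ranking:

```python
from collections import defaultdict

# Hypothetical recommendation counts from three expert groups.
# Organization names and numbers are made up for illustration.
recommendations = {
    "funders":    {"Org A": 5, "Org B": 3, "Org C": 1},
    "executives": {"Org A": 2, "Org B": 4},
    "academics":  {"Org B": 3, "Org C": 2},
}

def aggregate(groups):
    """Average each nonprofit's share of recommendations across groups,
    so every expert perspective counts equally in the combined score."""
    totals = defaultdict(float)
    for votes in groups.values():
        group_total = sum(votes.values())
        for org, count in votes.items():
            totals[org] += count / group_total  # normalize within the group
    # Divide by the number of groups to get the average share per group.
    return {org: score / len(groups) for org, score in totals.items()}

scores = aggregate(recommendations)
ranking = sorted(scores, key=scores.get, reverse=True)
```

With the sample data above, the organization recommended most broadly across all three groups ranks first, even if one group alone favored another. A real system would of course weigh many more factors, which is exactly why the expert judgments themselves matter more than the arithmetic.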

[i] Herbert Simon & Kevin Gilmartin, “A simulation of memory for chess positions,” Cognitive Psychology 5 (1973): 29-46.

[ii] Paul Brest and Hal Harvey, “Money Well Spent: A Strategic Plan for Smart Philanthropy,” (New York: Bloomberg Press, 2008), 90.

[iii] Olaf Helmer, “Analysis of the Future: The Delphi Method,” The Rand Corporation, March 1967, 4.

[iv] Ibid.


  1. Great post, Erinn. Can you point to approaches similar to Philanthropedia that are used in other domains to source expert knowledge? Your point about SAT/GPA is well put. But in that case, an individual expert (the admissions officer) makes the call. Do other disciplines source expert knowledge the way that you are?

  2. Sean, that’s a great question. You know, the closest example that comes to mind is the Doomsday Clock. While I don’t know a whole lot about the Doomsday Clock, from what I understand, a group of scientists (i.e., experts) determines how the clock will change according to their understanding of current events, scientific developments, etc. The experts take a number of different factors into consideration, and more than just one person weighs in on the discussion.

    I can’t think of any other examples right now, though I welcome suggestions from others. In general, I think the way we’re sourcing expert knowledge (by including experts from different professions within the field) is rather new to the field. I’m going to try to think of some other examples, though! Again, great question and thanks for your feedback.



Philanthropedia is a registered 501(c)(3) organization. All of your donations are 100% tax-deductible.