Philanthropedia Blog

Archive for January, 2010

Summary of the GiveWell-Philanthropedia Conversation

January 13th, 2010

With this post, I want to close the loop on the recent conversation between GiveWell, Saundra, Ingvild, and us (among others) and provide a summary of the core issues and where we stand.

Issue 1: How are experts defined and who are they?

The answer to these questions is rather straightforward, as I explained here and here. In short, while there is certainly plenty of room to grow, especially in terms of getting more foundation professionals to participate, we have a good grasp of how to find and recruit experts for our surveys. These experts come from the places you would expect and where, we argue, a lot of nonpublic knowledge about nonprofit impact is aggregated: among nonprofit senior staff, foundation professionals, and academics.

The related issue of the transparency of our expert network is important, and we certainly plan to reveal more information in order to build credibility and trust, as well as to promote open discussion about nonprofits. In their last blog post, GiveWell made an additional, stronger claim: that we need to associate individuals with their quotes. I strongly disagree. As a matter of fact, evidence and research point in the opposite direction: providing a forum for anonymous feedback creates opportunities to surface candid opinions about strengths and weaknesses (for a good example, consult the literature on 360-degree evaluations). What is more, such a change would effectively prevent certain types of people from participating at all – most notably foundation professionals, who need to make daily decisions about which nonprofits to support with grants. That is why we have no plans to mandate that individual experts be linked to their recommendations, and we remain committed to providing a safe forum for candid feedback about the strengths and weaknesses of different nonprofits. That being said, we are planning changes to our survey and policies to increase the number of experts who are acknowledged as participants in our research. To us, it is a question of when rather than if, so stay tuned for changes as Philanthropedia continues to mature as an organization.

Issue 2: How are organizations recommended?

In a comment on an earlier blog post, Ingvild built on GiveWell’s questions and asked how our experts recommend nonprofits (focusing in particular on the difference between impact and effectiveness). As I have mentioned before, we believe that the experts themselves have criteria for evaluating nonprofits. We collect that information in one of the first parts of our survey. Our goal is then to get recommendations from the experts about nonprofits based on these very same criteria, which we then compile and standardize to make available to the public.

Today, we ask experts about their criteria for effectiveness, which to me includes impact in addition to other elements such as marketing ability, fundraising efficiency, leadership, and hiring. We did that to experiment and see how far we could push our methodology and how much information we could collect. However, our number one priority remains assessing nonprofits based on impact, which is why we are planning some changes to our survey to make sure we can collect enough specific information about the impact of nonprofits.

In summary, we have learned a lot from this debate – both in terms of how we could improve, as well as what we need to explain better. In future blog posts, I plan to address several topics including (1) our vision for the space, (2) specific changes motivated by this public discussion, (3) fundamental pros and cons of our model of using experts as a proxy (i.e. what we are and what we are not), (4) more information on our whitepaper and planned improvements to our methodology, (5) assessment of progress to date, and (6) planned social cause expansion for 2010.

Thank you all for your feedback!

Disaster Relief: the Haiti Earthquake

January 13th, 2010

Together with the whole world, we have been reading about the terrible disaster that has struck Haiti. Our thoughts are with the families that have been hurt and we hope for a quick international intervention to help the ailing country.

Please consider making a donation if you can. We have not performed any research on disaster relief to date, but our partner GiveWell has a few interesting notes about potentially good organizations here (even though they have not done specific research either). Of the few organizations that they mention, I would probably highlight Partners in Health, which seems to have moved rather quickly to provide aid in Haiti: http://www.pih.org/home.html.

A Response to Good Intentions: More on Philanthropedia’s Methodology

January 11th, 2010

This is another post in my series of articles explaining more about Philanthropedia. As always, I am extending an open invitation for others to comment and join the discussion. No point is too small or insignificant, so feel free to leave a comment! For those who prefer a private exchange, please email us at feedback@myphilanthropedia.org – as some of you already know, I answer almost all emails personally and promptly.

As a follow-up to the recent exchange between GiveWell and us, Saundra from Good Intentions blogged about her questions, saying that “[u]nfortunately, instead of helping me understand your processes better, it left me with even more questions than before.” While I hope my blog post brought some clarity for readers, as a philosopher once said, “the more I know, the more I realize how much more there is to know.” That is to be expected given the complexity of the subject matter and the fact that we ourselves are constantly coming up with new questions and searching for ways to improve our methodology. In any case, as I have said before, we are committed to transparency and plan to rework our webpage to better explain our approach.

Saundra had a number of excellent questions that I will attempt to answer below:

Choosing Experts

How do we choose experts? Our approach is rather straightforward and is based on three main categories: foundation professionals, academics, and nonprofit senior staff. When choosing experts, we currently look at a number of factors, including years of experience working in the sector, job title and occupation, and professional affiliations and/or academic background, with two years of experience and a relevant career as minimum criteria. In addition, we ask experts to self-rate their expertise on a scale from 1 to 5 (where 5 means “most” expert), with a self-rating of 3 (“Moderate”) as the minimum requirement. Here is the scale that we used for microfinance:

On average, how would you characterize your expertise in microfinance?

Limited: “I have limited knowledge of this issue area and do not feel qualified to identify outstanding organizations.”

Basic: “I have basic knowledge of this issue area, and might be able to make a directional assessment of the organizations with which I am familiar. My professional experience and training might qualify me to identify and evaluate a few of the most outstanding organizations in the sector.”

Moderate: “I have moderate knowledge of this issue area and am confident in making a directional assessment of the organizations with which I am familiar. My professional experience and training probably qualifies me to identify and evaluate a few of the most outstanding organizations in the sector.”

Strong: “I have strong knowledge of this issue area that is both broad and deep. My professional experience and training qualifies me to identify and evaluate some, but perhaps not most or all, of the most outstanding organizations in the sector.”

Expert: “I have expert knowledge of this issue area that is both broad and deep, including experience with multiple sub-issues. My professional experience and training qualifies me to confidently identify and evaluate most or all of the most outstanding organizations in the sector.”

The vast majority of our experts are US-based, reflecting the fact that the US is the primary focus of our research and operations (microfinance being an exception). In addition, we ask experts to comment on the organizations that they know best, and we do not suggest or influence their responses in any way (i.e., these are open-response questions). We also always try to err on the side of inclusiveness: we do not set specific limits on a foundation’s size, and we try to invite all relevant experts within an organization. However, we do reserve the right to disqualify experts on the basis of very poor response quality or other related factors.
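To make the screening just described concrete, here is a minimal sketch in Python of how these filters could be applied; the field names, data, and category labels are illustrative rather than our production code:

    # Minimal sketch of the expert screening criteria described above.
    MIN_YEARS = 2          # minimum years of relevant experience
    MIN_SELF_RATING = 3    # "Moderate" on the 1-to-5 scale above

    def is_eligible(expert):
        """Return True if a survey respondent meets the minimum criteria."""
        return (expert["years_experience"] >= MIN_YEARS
                and expert["self_rating"] >= MIN_SELF_RATING
                and expert["category"] in {"foundation professional",
                                           "academic", "nonprofit senior staff"})

    candidates = [
        {"name": "A", "years_experience": 6, "self_rating": 4, "category": "academic"},
        {"name": "B", "years_experience": 1, "self_rating": 5, "category": "academic"},
    ]
    eligible = [e for e in candidates if is_eligible(e)]   # keeps only "A"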

As for how many experts actually respond, here is a table that summarizes our response rates:

Social Cause              Education (pilot)   Climate change   Bay Area Homelessness   Microfinance
Invited Experts           N/A*                773              392                     1049
Participating Experts     39                  139              83                      131
Conversion rate           N/A                 18%              21%                     12.5%

* The education pilot relied only on referred contacts, so no invitation count applies.
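The conversion rates in the table are simply participating experts divided by invited experts. A quick Python illustration using the figures above:

    # Conversion rate = participating experts / invited experts.
    surveys = {
        "Climate change":        (773, 139),
        "Bay Area Homelessness": (392, 83),
        "Microfinance":          (1049, 131),
    }
    for cause, (invited, participating) in surveys.items():
        print(f"{cause}: {participating / invited:.1%}")
    # Climate change: 18.0%
    # Bay Area Homelessness: 21.2%
    # Microfinance: 12.5%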

These numbers reflect some of the evolution of the venture. First, the main challenge in education was “can expert crowdsourcing be done at all?” As Bob Ottenhoff (GuideStar CEO and member of our Advisory Board) has said, this is a challenge that the nonprofit sector has been trying to solve for the last 10 years.

Once we demonstrated some initial success with education, the next important issue was “can expert crowdsourcing be done at scale?” (i.e., beyond referrals from a few personal connections). Building on our initial momentum, we managed to overcome this challenge as well, building large and representative expert networks in these two national/international social causes. Microfinance was particularly difficult, given that it is very much an international issue. Even though we intend to focus on the US for the foreseeable future, we saw this social cause expansion as an important experiment in how far we could push our methodology, and we have drawn a lot of lessons from it. Finally, we wanted to see if our approach could work at the local level, which is why we decided to expand into the Bay Area, starting with homelessness. We are quite happy with these initial results as well and plan to continue our work at the local level.

To summarize, over the past two years we have put a lot of effort into creating a solid methodology that allows us to compile expert opinion about impactful organizations in different social causes. We now also have a really good grasp of how to find and recruit experts to participate in our research. Nevertheless, we see a lot of room for growth, continued iteration of our surveys, and experimentation around the best ways to excite and educate even more experts.

Processing expert opinions

Saundra also asked how we handle disagreement, which is a great question that we constantly ask ourselves as well. The short answer is that our goal is to identify all good approaches toward solving a particular problem rather than make judgment calls about what is “good” or “bad” (something that our experts are in a better position to answer). We believe that by publicly discussing the pros and cons of each solution, the sector will be able to collectively arrive at better ways to have more impact. For an example of this, you can see the organizations that we highlighted in climate change, which include policy advocacy, grassroots, and science-based organizations. I should repeat that in the next version of our website, we plan to better highlight the diversity of these approaches for donors and provide a forum for discussing the pros and cons of each.

On the question of whether some experts have disproportionate value, the answer is yes, as can be seen in the table below:

Social Cause                        Education (pilot)   Climate change   Bay Area Homelessness   Microfinance
Foundation professional avg votes   n/a                 4.19 (n=21)      4.50 (n=6)              (TBD)
Academic avg votes                  n/a                 4.18 (n=11)      3.67 (n=6)              (TBD)
Nonprofit staff avg votes           n/a                 3.55 (n=71)      2.98 (n=58)             (TBD)

Note: “avg votes” refers to the number of organizations recommended by the particular survey-taker group (out of 5). We have not yet computed this figure for microfinance.

As you can see, foundation professionals and academics do bring disproportionate value – at least in the research we have done so far. That means they have more influence over the final results as well. We do not adjust the scores in any other way, although we are continuously reevaluating this issue and looking for ways to improve our results.
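As an illustration of how the “avg votes” figures in the table could be computed, here is a minimal Python sketch; the data and field names are hypothetical:

    from collections import defaultdict

    # Each response records the expert's group and how many of the five
    # recommendation slots that expert actually filled (data is illustrative).
    responses = [
        {"group": "foundation professional", "orgs_recommended": 5},
        {"group": "foundation professional", "orgs_recommended": 4},
        {"group": "nonprofit staff",         "orgs_recommended": 3},
    ]

    votes_by_group = defaultdict(list)
    for r in responses:
        votes_by_group[r["group"]].append(r["orgs_recommended"])

    avg_votes = {g: sum(v) / len(v) for g, v in votes_by_group.items()}
    # {"foundation professional": 4.5, "nonprofit staff": 3.0}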

In summary, we recognize and celebrate the diversity of opinions that the different types of experts bring, and are committed to highlighting our research results in an unfiltered and actionable way on our website.

Criteria for evaluation of charities

Saundra also inquired about our criteria for evaluating charities. As our current FAQs state, we do not set our own criteria, but instead ask for the criteria that experts use to evaluate nonprofits. That is a fundamental difference between our model and GiveWell’s (both of which have their pros and cons in my opinion). Currently, we do not make the compiled criteria public, but this will change in the next iteration of our website.

In terms of charities, our methodology generates a long list of nonprofits (for example, in climate change our experts mentioned a total of 169 organizations). Out of this long list, we highlight only the few nonprofits that are consistently mentioned by our experts. And in the near future, we certainly plan to add a response box for nonprofits that want to reply to the identified areas for improvement.
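As a rough illustration of that winnowing step, here is a minimal Python sketch that tallies mentions across surveys and keeps only consistently mentioned organizations; the cutoff shown is hypothetical, not our actual rule:

    from collections import Counter

    # Each inner list holds the organizations one expert recommended.
    survey_recommendations = [
        ["Org A", "Org B"],
        ["Org A", "Org C"],
        ["Org A", "Org B"],
    ]

    mentions = Counter()
    for recs in survey_recommendations:
        mentions.update(recs)

    MIN_MENTIONS = 3   # hypothetical consistency cutoff
    highlighted = [org for org, n in mentions.most_common() if n >= MIN_MENTIONS]
    # highlighted == ["Org A"]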

Expert reviews & difference between impact and effectiveness

Saundra also pointed to the Alliance for Climate Protection expert review, asking whether a strong marketing ability should be a reason to recommend a nonprofit. I personally believe that it should not – we should be recommending nonprofits based on impact first and foremost. However, our previous research focused on effectiveness in order to study just how much good information we could extract. (We have now decided to zoom in and focus only on impact.)

Effectiveness includes a number of properties: the impact of an organization’s solution, efficiency in deploying resources, ability to attract top talent, good governance, etc. Along those lines, good marketing is another important characteristic of effectiveness that our experts legitimately pointed out, given the question that we asked. Of course, by zooming in on any particular organization, it is easy to miss the big point: namely, there are a number of organizations in any given social cause, each with its own strengths and weaknesses. By making these pros and cons visible, and by collaborating and learning together, organizations have a much better shot at coming up with better solutions that have more impact.

In summary, our intention is to compile expert opinions to highlight strengths and weaknesses of existing models to help make them better. Many of the social problems that nonprofits are trying to solve are incredibly complex and making progress takes time, effort, resources, and patience. That is why our goal is to acknowledge which organizations are doing well and then help put them on the right track towards further improvement through a combination of public scrutiny and constructive debate. I will further elaborate on this point, and how it fits into my previously discussed idea for continuous improvement, in a separate blog post.

Responding to GiveWell’s Analysis of our Microfinance Report – Part 2/2

January 4th, 2010

In the first part of this blog post, I gave some perspective on our progress so far and our plans for the future as a first step towards responding to GiveWell’s excellent analysis of our microfinance report. In this second part, I will attempt to address the specific concerns that Holden raised. If you are short on time, you can scroll to the bottom of each section, where I have summarized our plans for improvement (where it made sense).

Issue 1: the definition of “expert” is unclear

I believe that Holden is spot on in criticizing us for not having done a good job of defining an expert publicly. The very fact that he had to search in multiple places to find something so fundamental speaks volumes about the urgent need to bring more transparency and reorganize the way we present information. We intend to do that as soon as possible.

The more substantial issue that GiveWell raises is how we came up with these particular experts. Let me provide some details on our process here (a better explanation will most certainly become a central part of our webpage).

When we decide to expand into a new social cause, our first step is to prepare by researching and mapping out the space. In doing so, we identify – to the best of our abilities – all relevant experts in a given field. We list all foundations, nonprofits, and academic institutions and then find the experts’ contact information in these organizations. In addition, in some social causes we include journalists, policy makers, researchers, and other types of experts. The end result is a list of hundreds of experts who we then invite to participate in our surveys.

Of course, not every invited expert participates today. And of those who do participate, we don’t necessarily include every response, because we do additional sorting of experts during the data collection process. What is more, not all experts are “equal” in terms of their expertise or experience, and some – most notably foundation professionals and academics – bring disproportionate value to our research. However, we strongly believe in our current inclusive model, which also invites nonprofit executives and senior staff to participate, for several reasons. First, this group is simply much bigger than researchers or funders – which is also why nonprofit executives form such a large part of the final expert network – and not including them would be a missed opportunity to capture knowledge from those “working in the trenches.” Second, getting nonprofit executives involved opens the door to fixing broken feedback loops by using expert reviews as a starting point. For example, many nonprofits have expressed interest in being able to respond to the reviews (functionality that we will be developing soon), which would get us one step closer to increased transparency. Finally, to address the issue of the “others” category, we plan to develop more specific categories.

So, overall, I completely agree with GiveWell that we need to disclose substantially more information to build more trust among donors. We have internally identified the same issue and are working to redesign our webpage to address the concern.

Issue 2: who are the experts?

In Part 1 of this blog post, I already mentioned the challenge of answering the question of who our experts are while our research is still incomplete. Our goal is to continuously get more experts “on the record” and include their bios. Based on our experience so far, experts are excited to participate in the research, and I am confident that we will end up featuring more than the 31 people we have today in microfinance.

On the more specific issue of featuring microfinance skeptics, I want to make two points. First, both microfinance skeptics and proponents have recently been announcing results from different research projects that point in different directions with respect to the effects of microfinance. Regardless of who is right (which is often somewhat subjective), one obvious limitation of any crowdsourced approach (such as ours) is that it takes time to change the perceptions and opinions of a group of people. Second, our research actually features a number of microfinance skeptics at the urging of Dean Karlan, who has kindly provided us with critical feedback, served as a sounding board, and helped us connect with more researchers in the space.

Nevertheless, GiveWell is making a terrific point here: our ability to build trust and engage donors depends on being able to attract and feature credible experts. It is our intent to make substantial progress by being selective about our experts, asking them for more participation, and continuing to build large and representative networks of experts. In addition, we intend to provide better social cause summaries, which highlight the core issues as well as most important debates.

Issue 3: the recommended charities

GiveWell makes an interesting observation that the “list reads like a who’s who in large U.S. microfinance charities.” This statement is quite surprising to me, because our own experience when testing the expert mutual fund with donors is quite different. Perhaps to insiders, the list reads like the who’s who, but to the average donor that is certainly not the case. In fact, most people’s knowledge is limited to Kiva.

In some industries, it is perfectly fine for the “average” individual to not know the “who’s who.” But in most social causes in philanthropy that is not the case, because individuals provide the vast majority of funding – and they do it on the basis of very little good information. So for our national social causes, the lists might read like the who’s who to experts, but that is certainly not the case for our target user: the individual donor. And I would personally argue that perhaps an even bigger opportunity is our work on a local level – highlighting top nonprofits in different communities, which inspires people to join forces, support good organizations, and work towards solving the important societal problems of the day.

GiveWell also criticizes the expert-recommended charities, and I certainly respect their opinion as microfinance skeptics. However, the Philanthropedia vision of accelerating the pace of social change takes a different approach. We know that no organization out there is perfect – that is why we identify both strengths and areas for improvement. Our goal is to engage experts, donors, and nonprofits to collectively find and fund the best organizations that are solving some of the world’s toughest social problems. I believe that this can only happen through an inclusive process of continuous improvement, focused on bringing more transparency and demanding more impact from these nonprofits.

Issue 4: expert quotes / reviews quality

GiveWell also uses the Kiva expert quotes to analyze the usefulness of our expert reviews. The main concern that Holden identifies is whether experts are indeed doing a good job of helping donors increase their impact. This is a concern that I personally share and that is at the core of our push to increase the quality of our research in 2010 that I wrote about in Part 1. Some of the upcoming changes that we will be making in our methodology and survey design should help us get higher quality reviews. Further, as we gain more traction and educate more experts about our mission, this problem will gradually be reduced.

However, I should note that we continuously evaluate all aspects of Philanthropedia and look for ways to improve. We have performed research on our expert reviews in Bay Area homelessness and climate change (both with foundation professionals and donors), and the feedback we got was that the comments capture the essence of the different nonprofits’ strengths and weaknesses. For example, one of Kiva’s big accomplishments is that it has been able to capture the attention and imagination of the individual donor. This is a fascinating and rather unique achievement that our experts have legitimately pointed out. So we feel that the starting point is not as low as GiveWell might portray.

In summary, I believe that our expert reviews can be further improved by better surveying and educating experts on the importance of sharing their opinion. This issue is front and center for us as we look to expand into more social causes in 2010.

Issue 5: reinforcing bad incentives

Holden also hypothesizes that our methodology encourages experts not to disclose their names and encourages charities to do whatever it takes to get recommended by Philanthropedia’s set of experts. This is the main issue identified by GiveWell that I disagree with. In my opinion, Philanthropedia has the opposite effect from the one outlined in the GiveWell post. First, we are increasing the transparency of experts. Even if only 31 experts agreed to publish their bios and stand behind the results, that is exactly 31 more than before. And we plan to further raise the bar, both in terms of what experts disclose and in terms of how many experts participate.

Second, Philanthropedia is getting areas for improvement of different charities “on the record” and thus inspiring them to participate in the discussion, increase their transparency, and further improve. The idea that charities will be able to manipulate the results is certainly plausible, but there are a number of reasons why I think it is highly unlikely:

  • Experts have high morals and ethics – foundation professionals, nonprofit executives, and academics already operate in an environment characterized by conflicts of interest: researchers might need access and data to assess nonprofits’ impact, and foundation professionals might receive multiple grant proposals from competing nonprofits while collaborating with them on other projects. In our dealings with experts, we have found them to have very high personal and professional integrity.
  • The Philanthropedia methodology relies on a high number of experts, which makes gaming the system challenging.
  • We currently have a number of internal checks that makes manipulating the results difficult. Moreover, we will be adding other policies such as random checks, more due diligence, better conflict of interest management, etc. to further reduce such risks.
  • Of course, I fully expect that if someone is dedicated enough, they will be able to game the system. That’s why we plan to punish very aggressively any offenders who violate our trust, as well as the trust of the community.

In summary, I personally disagree that we are promoting the wrong incentives. I believe that the public discussions, engagement of new donors, and reactions that nonprofits have speak to that as well.

Conclusion

In conclusion, let me again thank GiveWell for their time and analysis. The rest of the Philanthropedia team and I very much agree with the vast majority of the issues that were identified, as well as the suggestions for improvement. We absolutely plan on raising the bar in terms of transparency and quality in order to be able to gain the trust of both donors and experts. Stay tuned for our reorganized webpage that addresses some of the concerns that GiveWell brought up.

GiveWell, in addition to many others, has kindly said that they see a lot of potential in our model. We at Philanthropedia certainly aspire to live up to that expectation, and with more analyses and constructive feedback, we have a real shot at getting there. So thanks again, Holden and Elie, for your leadership and for helping us get to the next level.

Responding to GiveWell’s Analysis of our Microfinance Report – Part 1/2

January 2nd, 2010

This is a response to GiveWell’s excellent analysis of our microfinance report, which you can find here. You are reading Part 1; Part 2 will be posted tomorrow.

About a month ago, I was having lunch at the Hewlett Foundation with President Paul Brest when he asked me what I thought the main challenge would be for Philanthropedia in 2010. Paul, as well as our other close advisors, has done a terrific job of reviewing our work, helping us improve, and being supportive – and we are very thankful for that.

That’s why I was very excited when I heard that GiveWell had decided to take a deeper look into our microfinance results. Philanthropedia the organization is built on several important principles, including feedback, continuous improvement, advice-seeking, and humility. And who better suited to provide feedback on our new and developing model than an organization known for incredible rigor and great analytical thinking? We are honored that GiveWell has taken the time to review our work and we urge you to do the same. Take a good look and let us know what you think!

In the next two blog posts, I will elaborate on our future plans and reply to Holden’s recent article (read it here). I have split the response into three parts (spread over two posts due to length):

  • Our vision – I take a step back and discuss how we see ourselves and our commitment to making the philanthropic sector better.
  • Intervening variables – I look at factors that I believe need to be taken into account when discussing results from microfinance (and all of our social cause research), as well as the GiveWell analysis.
  • Detailed response – I address the majority of the specific concerns that GiveWell brought up.

I will cover the first two points below, while in Part 2 I will focus on answering the specific concerns that were raised.

Our Vision

Philanthropedia’s vision is to make everyone give with impact. We envision a world:

  • where all experts openly share their knowledge about nonprofits,
  • where all donors give to impactful nonprofits in social causes they care about,
  • and where all nonprofits improve their practice because of increased transparency, accountability, and effectiveness within the nonprofit sector.

As I look forward to realizing that vision and in response to Paul Brest’s question over lunch a few weeks ago, I see several main challenges for Philanthropedia in 2010:

First, on our programmatic side, we need to raise the quality of our research even further. That can happen in three main ways:

  • Improve our methodology – so far, we have put 2.5 years into developing a solid methodology for compiling expert opinion. And yet, there is more work we can do to further improve our main tool for collecting information – our surveys (see below for more information). For the past four months, we have been analyzing our progress to date and writing a whitepaper that discusses the pros and cons of our methodology and outlines ways to improve. We are still reviewing it internally and sharing it with close advisors for critique. We look forward to gradually making much of the whitepaper public over the next two months. In addition, we have learned a lot from expanding into our first four social causes and plan to use this knowledge to get better results as we add more social causes in 2010.
  • Improve our selection & recruitment of experts – we already have a good grasp of how to select and recruit experts (in Part 2, I will address Holden’s concerns on this issue). Nevertheless, there is a lot of room to improve, especially in terms of exciting experts at academic institutions and foundations, which often have significant expertise and a great holistic perspective on social causes. In 2010, we plan to actively recruit “high quality” experts by focusing on sharing the Philanthropedia vision with major foundations, universities, and nonprofits, as well as partnering with other organizations that already have existing relationships.
  • Improve our expert reviews – we need to educate our experts on the importance of sharing their evaluations of nonprofits to help guide donors’ decisions. As straightforward as this sounds, it is also radically different from the status quo, which is characterized by lack of cooperation and information sharing.

Overall, I would summarize our aspiration to higher quality as being able to carry out research and credibly claim that we have recruited the right experts and compiled their knowledge of top-performing nonprofits in a specific social cause. I am confident that for our upcoming research (planned for the first 6-9 months of 2010), we will be able to push the bar much higher through a combination of our own efforts outlined above and the support and constructive feedback of GiveWell and others in the field.

Second, organizationally we need to continue building trust and become more transparent. We can do more to disclose our approach, provide more information about our experts (while respecting their privacy), and publish more of the information that we collect (such as what criteria experts use to evaluate impact in a specific social cause). We are committed to improving on this dimension as soon as possible, which will be reflected on our webpage soon. The blog post from GiveWell is another major inspiration to get there faster.

We have a number of other challenges related to our customers (individual donors), such as earning their trust, gaining their attention, and developing engaging products (such as our webpage, mutual funds, and gift cards). I will not elaborate on these topics here in order to keep this blog post focused. However, I will be blogging more about them in January, for those who are interested.

To sum up, we are committed to further improving our model and have already put a lot of work into identifying many areas for improvement, with a number of changes planned for the next couple of months. GiveWell’s recent blog post also highlighted many of the same concerns, which I acknowledge and elaborate on in Part 2 of this blog post. But don’t take my word for it – please do continue to monitor our progress and provide constructive feedback and suggestions for improvement.

However, GiveWell’s recent blog post is important not just because it gives me an opportunity to discuss our vision, strengths, areas for improvement, and commitments for the future, but because it is a perfect example of the future that we want to create in the philanthropic sector. We envision a future characterized by open discussions, active feedback cycles, and continuous improvement within the sector. I know that it is not enough for Philanthropedia to talk the talk – we have to walk the walk as well, and I hope this blog post serves as additional proof that we plan to.

Intervening Variables

Before I discuss the specifics of Holden’s blog post, I want to mention a few factors (“intervening variables,” if you will) that I think need to be taken into consideration when discussing our reports.

First, it is important to take into account the organizational lifecycle stage that Philanthropedia is in right now. I would call this stage “formative,” which means that a lot of things are still very much under development and not yet at the level where I ultimately see them. For example, our efforts to compile expert opinion are by and large groundbreaking (especially at mass scale). Despite some early success, our journey to improve our methodology and research is by no means over (even though our upcoming whitepaper should help a lot). To further complicate matters, as we have been working on our methodology, we have also discovered that qualitative analysis on a large scale is very poorly understood. In other words, even though the idea of crowdsourcing with experts sounds very logical and straightforward in theory, in practice it is extremely challenging to “get right.” It is due to this complexity that I advocate looking at Philanthropedia as a work in progress. We plan to continue rigorously researching the subject matter, collecting data, and then learning and improving continuously from it.

Let me give a specific example to illustrate this last point about continuously learning from collected data. One of the ways we grow our list of experts is by asking for referrals from experts whom we have identified in advance. Some of our advisors voiced a legitimate concern that this creates an opportunity to introduce bias and potentially “game” the system. Let me say right away that many of these concerns paint scenarios that are extremely unlikely – due to the large number of experts involved, the experts’ own honor system, and our monitoring and curating processes. Nevertheless, the concern is legitimate, which is why we collected data and demonstrated that there is no statistically significant difference between the predictive power of cold-called experts and that of referred experts.
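We have not published the details of that analysis here, but as an illustration, a standard two-sample t-test on per-expert scores would look something like the sketch below; the scores and the choice of test are assumptions for illustration only:

    from scipy import stats

    # Hypothetical per-expert "predictive power" scores for the two groups.
    cold_called = [0.62, 0.55, 0.71, 0.48, 0.66]
    referred    = [0.60, 0.58, 0.69, 0.52, 0.63]

    t_stat, p_value = stats.ttest_ind(cold_called, referred)
    if p_value > 0.05:
        print("No statistically significant difference between the two groups")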

It is because of this idea of continuous improvement on the basis of real data that I personally disagree with GiveWell’s conclusion that our model is “reinforcing bad incentives” (pretty much my only disagreement with their analysis). While there are plenty of ways that Philanthropedia could improve that I will write about in Part 2, I personally would argue that we have an approach that promotes the “right” incentives – in particular, educating experts and inspiring them to share actionable information with the world.

To sum up, the first intervening variable that I want to highlight is the notion of a “formative” stage in our organizational development, marked by continuous improvement. Wikipedia was initially heavily criticized for its defining strength/weakness – user-generated content that sometimes had errors – but it has since improved dramatically. I therefore urge the reader to see Philanthropedia likewise: not as a snapshot in time, but as a process of gradual improvement. Today, we have excited and educated some 400 experts on the virtues of sharing their knowledge. A year from now, that number will triple. And in five years, it will be in the high thousands. Imagine a world where everyone has access to the compiled knowledge of experts to make better giving decisions. That is the vision we are driving towards.

The second intervening variable that I think needs to be mentioned with respect to GiveWell’s blog post is microfinance itself. I almost wish GiveWell’s research had been on our report in Bay Area homelessness or perhaps climate change, because microfinance is such a unique field. Our initial thinking in choosing microfinance was (in addition to helping donors) to eventually be able to correlate measures of impact with insight from our experts, in order to better study how and whether experts are a good proxy for measuring impact. The reason I thought microfinance might be a good field for answering that question was that the financial nature of the sector makes it a bit easier to assess impact. Of course, in practice that is hardly the case: both GiveWell (here) and our friend at Yale, Dean Karlan (here, here, and here), as well as others, have written extensively about the challenges in measuring and achieving real impact. What makes this even harder for an organization such as Philanthropedia, which identifies top nonprofits, is that microfinance is in many respects a special case: it mixes nonprofits and for-profits in a complex chain of organizations that work together to deliver the complete product.

But an even more important consideration about microfinance is that our research is not yet complete. In particular, we collect biographies of experts after concluding the main part of our research. As a result (and due to the holidays), we currently have only 31/131, or 23%, of experts “on the record” and publicly identified – a figure well below the same indicator in both Bay Area homelessness and climate change, where slightly more than 40% of experts are publicly identified. Just to be clear: I am by no means content with a 40% rate – we are constantly pushing the number up and reconsidering our policies. However, the limited data set does present challenges from an analytical point of view and can lead to misleading or limited conclusions.

So, with these two notes in mind, I explore and respond to GiveWell’s analysis in Part 2 of this already very long post.

