This is a response to GiveWell’s excellent analysis of our microfinance report, which you can find here. You are reading Part 1; Part 2 will be posted tomorrow.
About a month ago, I was having lunch at the Hewlett Foundation with President Paul Brest when he asked me what I thought the main challenge would be for Philanthropedia in 2010. Paul, as well as our other close advisors, has done a terrific job reviewing our work, helping us improve, and being supportive – and we are very thankful for that.
That’s why I was very excited when I heard that GiveWell had decided to take a deeper look into our microfinance results. Philanthropedia is built on several important principles, including feedback, continuous improvement, advice-seeking, and humility. And who better to provide feedback on our new and developing model than an organization known for incredible rigor and great analytical thinking? We are honored that GiveWell has taken the time to review our work and we urge you to do the same. Take a good look and let us know what you think!
In the next two blog posts, I will elaborate on our future plans and reply to Holden’s recent article (read it here). I have split the response into three parts (spread out over two posts due to length):
- Our vision – I take a step back and discuss how we see ourselves and our commitment to making the philanthropic sector better.
- Intervening variables – I look at factors that I believe need to be taken into account when discussing results from microfinance (and all of our social cause research), as well as the GiveWell analysis.
- Detailed response – I address the majority of specific concerns that GiveWell brought up.
I will cover the first two points below, while in Part 2 I will focus on answering the specific concerns that were raised.
Philanthropedia’s vision is to make everyone give with impact. We envision a world:
- where all experts openly share their knowledge about nonprofits,
- where all donors give to impactful nonprofits in social causes they care about,
- and where all nonprofits improve their practice because of increased transparency, accountability, and effectiveness within the nonprofit sector.
As I look forward to realizing that vision and in response to Paul Brest’s question over lunch a few weeks ago, I see several main challenges for Philanthropedia in 2010:
First, on our programmatic side, we need to raise the quality of our research even further. That can happen in three main ways:
- Improve our methodology – so far, we have put 2.5 years into developing a solid methodology for compiling expert opinion. And yet, there is more work that we can do to further improve our main tool for collecting information – our surveys (see below for more information). For the past 4 months we have been analyzing our progress to date and writing a whitepaper that discusses the pros and cons of our methodology and outlines ways to improve. We are still reviewing it internally and sharing it with close advisors for critique. We look forward to gradually making much of the whitepaper public in the next 2 months. In addition, we have learned a lot from expanding into our first four social causes and plan to use this knowledge to get better results as we keep adding more social causes in 2010.
- Improve our selection & recruitment of experts – we already have a good grasp of how to select and recruit experts (in Part 2, I will address Holden’s concerns on this issue). Nevertheless, there is a lot of room to improve, especially in terms of exciting experts at academic institutions and foundations, which often have significant expertise and a great holistic perspective on social causes. In 2010, we plan to actively recruit “high quality” experts by focusing on sharing the Philanthropedia vision with major foundations, universities, and nonprofits, as well as partnering with other organizations that already have existing relationships.
- Improve our expert reviews – we need to educate our experts on the importance of sharing their evaluations of nonprofits to help guide donors’ decisions. As straightforward as this sounds, it is also radically different from the status quo, which is characterized by lack of cooperation and information sharing.
Overall, I would summarize our aspiration to higher quality as being able to carry out research and credibly claim that we have recruited the right experts and compiled their knowledge of top performing nonprofits in a specific social cause. I am confident that for our upcoming research (planned for the first 6-9 months of 2010) we will be able to raise the bar much higher through a combination of our own efforts outlined above as well as the support and constructive feedback of GiveWell and others in the field.
Second, organizationally we need to continue building trust and become more transparent. We can do more to disclose our approach, provide more information about our experts (while respecting their privacy), and publish more of the information that we collect (such as what criteria experts use to evaluate impact in a specific social cause). We are committed to improving on this dimension as soon as possible, which will be reflected on our webpage soon. The blog post from GiveWell is another major inspiration to get there faster.
We have a number of other challenges related to our customers (individual donors), such as earning their trust, gaining their attention, and developing engaging products (such as our webpage, mutual funds, and gift cards). I will not elaborate on these topics here in order to keep this blog post focused. However, I will be blogging more about this in January, for those who are interested.
To sum up, we are committed to further improving our model and have already put a lot of work into identifying many areas for improvement, with a number of changes planned for the next couple of months. GiveWell’s recent blog post also highlighted many of the same concerns, which I acknowledge and elaborate on in Part 2 of this blog post. But don’t take my word for it – please do continue to monitor our progress and provide constructive feedback and suggestions for improvement.
However, GiveWell’s recent blog post is important not just because it gives me an opportunity to discuss our vision, strengths, areas for improvement, and commitments for the future, but because it is a perfect example of the future that we want to create in the philanthropic sector. We envision a future characterized by open discussions, active feedback cycles, and continuous improvement within the sector. I know that it is not enough for Philanthropedia to talk the talk – we have to walk the walk as well, and I hope this blog post serves as additional proof that we plan to.
Before I discuss the specifics of Holden’s blog post, I want to mention a few factors (“intervening variables” if you will) that I think need to be taken into consideration when discussing our reports.
First, it is important to take into account the organizational lifecycle stage that Philanthropedia is in right now. I would call this stage “formative,” which means that a lot of things are still very much under development and not yet at the level where I ultimately see them. For example, our efforts to compile expert opinion are by and large groundbreaking (especially on a mass scale). Despite some early success, our journey to improve our methodology and research is by no means over (even though our upcoming whitepaper should help a lot). To further complicate matters, as we have been working on our methodology, we have also discovered that qualitative analysis on a large scale is very poorly understood. In other words, even though the idea of crowdsourcing with experts sounds very logical and straightforward in theory, in practice it is extremely challenging to “get right.” It is due to this complexity that I am advocating for looking at Philanthropedia as a work in progress. We plan to continue rigorously researching the subject matter, collecting data, and then learning and improving continuously from it.
Let me give a specific example to illustrate this last point on the importance of continuously learning from collected data. One of the ways that we grow our list of experts is by asking for referrals from experts whom we have identified in advance. Some of our advisors voiced a legitimate concern that this creates an opportunity to introduce bias and potentially “game” the system. Let me say right away that many of these concerns paint scenarios that are extremely unlikely – due to the large number of experts involved, the experts’ own honor system, and our monitoring and curating processes. Nevertheless, the concern is legitimate, which is why we collected data and demonstrated that there is no statistically significant difference between the predictive powers of cold-called experts vs. referred experts.
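As an illustration, a comparison like the one described (cold-called vs. referred experts) can be checked with a simple two-sided permutation test on a per-expert score. This is only a sketch of the general technique: the scores below are made-up placeholders, and the notion of a single "predictive accuracy" number per expert is an assumption, not Philanthropedia's actual data or method.

```python
import random
import statistics

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Repeatedly shuffles the pooled scores, re-splits them into two
    groups of the original sizes, and counts how often the shuffled
    mean difference is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical per-expert "predictive accuracy" scores for the two
# recruitment channels (placeholder numbers, not real data).
cold_called = [0.62, 0.71, 0.58, 0.66, 0.70, 0.64, 0.69, 0.61]
referred = [0.60, 0.68, 0.63, 0.72, 0.59, 0.67, 0.65, 0.62]

p = permutation_p_value(cold_called, referred)
# A large p-value (e.g. above 0.05) means no statistically significant
# difference between the two recruitment channels was detected.
print(f"p-value: {p:.3f}")
```

A permutation test is a reasonable choice here because it makes no distributional assumptions, which matters with the small, irregular samples typical of expert-survey data.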
It is because of this idea of continuous improvement on the basis of real data that I personally disagree with GiveWell’s conclusion that our model is “reinforcing bad incentives” (pretty much my only disagreement with their analysis). While there are plenty of ways that Philanthropedia could improve that I will write about in Part 2, I personally would argue that we have an approach that promotes the “right” incentives – in particular, educating experts and inspiring them to share actionable information with the world.
To sum up, the first intervening variable that I want to highlight is the notion of a “formative” stage in our organizational development marked by continuous improvement. Wikipedia was initially heavily criticized for what is both its strength and its weakness – user-generated content that sometimes contained errors – yet it has since improved dramatically. Likewise, I urge the reader to see Philanthropedia not as a snapshot in time, but as a process of gradual improvement. Today, we have excited and educated some 400 experts on the virtues of sharing their knowledge. A year from now, that number will triple. And in five years it will be in the high thousands. Imagine a world where everyone has access to the compiled knowledge of experts to make better giving decisions. That is the vision that we are driving towards.
The second intervening variable that I think needs to be mentioned with respect to GiveWell’s blog post is microfinance itself. I almost wish GiveWell’s research had focused on our report on Bay Area homelessness or perhaps climate change, because microfinance is such a unique field. Our initial thinking in choosing microfinance was (in addition to helping donors) to eventually be able to correlate measures of impact with insight from our experts, in order to better study how and whether experts are a good proxy for measuring impact. The reason why I thought microfinance might be a good field to try to answer that question was that the financial nature of the sector makes it a bit easier to assess impact. Of course, in practice that is hardly the case: both GiveWell (here) and our friend at Yale, Dean Karlan (here, here, and here), as well as others have written extensively about the challenges in measuring and achieving real impact. What makes this even harder for an organization such as Philanthropedia that identifies top nonprofits is that microfinance is in many respects a special case, because there is a mixture of nonprofits and for-profits and a complex chain of organizations that work together to deliver the complete product.
But an even more important consideration about microfinance is that our research is not yet complete. In particular, we collect biographies of experts after concluding the main part of our research. As a result (and due to the holidays), we currently still have only 31/131, or 23%, of experts that are “on the record” and publicly identified – a figure considerably lower than the same indicator in both Bay Area homelessness and climate change, where a bit more than 40% of experts are publicly identified. Just to be clear: I am by no means content with a 40% rate – we are constantly pushing the number up and reconsidering our policies. However, the limited data set does present challenges from an analytical point of view and can lead to misleading or limited conclusions.
So, with these two notes in mind, I explore and respond to GiveWell’s analysis in Part 2 of this already very long post.