
Measuring Our Grants' Impact

July 24, 2014
by Don Matteson, Chief Program Officer

A core issue in grantmaking is the role of evaluation. How do we know if our investments in initiatives, programs, and organizations have made a difference?

For at least as long as I've been at the Foundation (eight years), our grant review process has included an evaluation component. Where possible, I've encouraged applicants to identify outcomes that can be measured using data and statistics they're already tracking.

One thing that's always troubled me, however, is how to choose outcome targets. How do we (or our grantees) know that an "x%" improvement in a particular metric is a reasonable expectation? Where did that figure come from? In most cases, I suspect the target is arbitrary: ambitious enough to satisfy funders (probably) that their dollars have made an impact, yet modest enough to be achievable without extraordinary effort. After all, funders tend to get crabby if the stated target isn't achieved. Some are even punitive about it, "cutting their losses" and terminating what they perceive as a failed (or failing) grant.

We've started using a process called Results-Based Accountability (RBA) to frame our work. It formed the basis for our strategic planning process, and it will inform the way we assess our grantmaking's impact. There are quite a few moving parts to RBA (we'll have a guest blogger going into greater depth), but my focus here is on evaluating program outcomes (Program Accountability, in RBA-speak). Evaluating program performance boils down to answering three questions:

  1. How much did we do?
  2. How well did we do it?
  3. Is anybody better off?
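
To make this a bit more concrete, here's a rough sketch of how one program's performance measures might sort into the three questions. The program and the measures below are invented for illustration, not drawn from any actual grant.

```python
# A hypothetical set of performance measures for an imaginary counseling
# program, grouped under the three RBA questions.
performance_measures = {
    "How much did we do?": [
        "Number of clients enrolled",
        "Number of counseling sessions delivered",
    ],
    "How well did we do it?": [
        "Percent of clients seen within four days of referral",
        "Average client satisfaction rating",
    ],
    "Is anybody better off?": [
        "Percent of clients reporting reduced symptoms at six months",
        "Percent of clients still employed or in school at follow-up",
    ],
}

# Print the measures under their questions.
for question, measures in performance_measures.items():
    print(question)
    for measure in measures:
        print(f"  - {measure}")
```

The point of the exercise isn't the code, of course; it's that every program should have at least one measure under each question, especially the third.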

Aside from its appealing simplicity, there's a lot of value to approaching evaluation this way. First of all, applying this framework to program evaluation ensures that we don't mistake motion for progress. Many evaluation plans offer a lot of outputs (to use logic model language for a moment), and they track the activities performed and the number served, but they can't tell us whether anybody's actually benefited from the work. While it's tempting to presume that all work being done is good and benefits the people being served, it's not a presumption that holds up well when trying to demonstrate value to donors or funders.

Similarly, there's a presumption that a service delivered is a service delivered well. Again, this is something that warrants explicit attention. To the end user, there's a world of difference between a service delivered poorly and one delivered with knock-your-socks-off quality. And that's before considering that providers and the people served may perceive the quality of a service quite differently.

RBA provides some specific guidance on setting performance targets, drawing on the idea of baselines. We can compare a program's performance to its own previous performance (assuming it isn't new), to an external comparison, or to a standard. In all three cases, the goal is to see performance improve relative to the baseline.

When using a program's previous results as a baseline, we should be able to go back and find historical data for our performance measures and use it to project trends. If our metric is trending negatively and we do nothing at all, it's reasonable to expect that negative trend to continue. From there, we can see whether things began to trend positively from the point at which we made our grant.
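
For illustration, here's a rough sketch of what that comparison might look like, using entirely made-up numbers: fit a simple trend to the baseline period, project it forward as the "do nothing" scenario, and see whether post-grant results beat the projection.

```python
# A minimal sketch with hypothetical data: compare post-grant results
# to a baseline trend projected from historical performance.
import numpy as np

# Hypothetical quarterly values of a performance measure before the grant
# (e.g., percent of clients completing a program), trending downward.
history = np.array([62.0, 60.5, 59.0, 58.2, 56.9, 55.4])
quarters = np.arange(len(history))

# Fit a simple linear trend to the baseline period.
slope, intercept = np.polyfit(quarters, history, deg=1)

# Project the trend forward over the grant period: the "do nothing" scenario.
grant_quarters = np.arange(len(history), len(history) + 4)
projected = slope * grant_quarters + intercept

# Hypothetical observed results after the grant was made.
observed = np.array([56.0, 57.5, 59.3, 61.0])

# Are we doing better than the do-nothing projection?
for q, proj, obs in zip(grant_quarters, projected, observed):
    status = "better" if obs > proj else "not better"
    print(f"Quarter {q}: projected {proj:.1f}, observed {obs:.1f}, {status} than trend")
```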

Many grants are made to replicate existing programs, whether taken from the National Registry of Evidence-Based Programs and Practices or based on successes we've seen in our own back yards. Where this is the case, we can look to those previous implementations to get a sense of how the program should be performing. If our funded program falls short or succeeds wildly, we can investigate why and see what needs fixing, or possibly replicating.

Finally, we can establish an external standard for program performance. Maybe we believe that no person referred for mental health services should have to wait more than four days before being seen by a clinician. If that's the case, we can compare our funded program's wait times to our standard to see if it's performing to spec. When the standard is developed from scratch rather than derived from an externally established framework, we aren't necessarily much further ahead than with an arbitrarily defined target, but at least there's a basis for conversation.
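
Again, just to illustrate with made-up wait times, checking performance against a standard like that can be very simple:

```python
# A minimal sketch: compare hypothetical referral-to-appointment wait times
# against a four-day standard.
wait_days = [2, 5, 3, 1, 7, 4, 3, 6, 2, 4]   # hypothetical wait times, in days
STANDARD_DAYS = 4

within_standard = sum(1 for d in wait_days if d <= STANDARD_DAYS)
pct = 100 * within_standard / len(wait_days)
print(f"{pct:.0f}% of referrals were seen within {STANDARD_DAYS} days")
```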

Naturally, there are many factors that influence the performance of a particular program. Community needs, organizational changes, regulatory and compliance issues, changes to externally established performance measures -- all affect the programs we fund. With this framework, though, we're in a position to talk about these factors and how they affect these three questions.

We've started working with our applicants to build grant evaluations using this framework, and the simple exercise of ensuring that all three questions are addressed in our performance metrics has been valuable. It's fostered some good, thoughtful conversations about how we can assess the quality and effectiveness of services, and has let us keep the outputs/process measures (i.e., how much did we do?) in proper perspective.

Time will tell whether this approach will give us a better sense of how our grants are performing. If the conversations I've had with applicants about this framework provide any indication, though, I think it's going to be great.

What's been your experience with grant evaluation? What's worked? Does RBA seem like a framework that could benefit your programs or grants?

Photo by Tudor Barker  (Flickr: tudedude)
https://www.flickr.com/photos/tudedude/2900335966
https://creativecommons.org/licenses/by/2.0/
