Nick Randell, Program Officer
[This installment of a two-part series looks at the performance of Tower grants when measured against specific project goals; Part 2 will look at the broader impact of the same grants.]
So, are Tower grants effective? Most foundations ask themselves some variation of this question. A few years ago we developed an internal report form (we called it "Lessons Learned") that our program officers complete when we close out a grant. Between 2010 and the end of 2013, we performed a simple assessment on 54 grants. First, we wanted a very basic grade: were the key objectives of the initiative met? We looked at the objectives the grantee had described to us when applying for funding. To give a sense of what these objectives might look like: an agency may have identified a reduction in wait time for psychiatric consults as a project objective, or the number of staff trained in a particular intervention model, or a change in attitudes toward binge drinking as measured by questionnaires administered in schools. When we review final reports from our grantees, we look at progress toward these objectives.
The pie chart shows what we found for the 54 grants we reviewed.
Reasonably good results, I think. About 81% of grants met their objectives, either in full or mostly. Of course, that means about 19% fell fairly short of expectations. So the next piece of analysis looked at this subset, in hopes of learning from the experience of grants that, on the face of things, didn't seem very successful. Based on our review of project reports, our site visits, and our discussions with grantees, we tried to capture the factors that contributed most to less-than-hoped-for outcomes. We called these "failure drivers," and some of the ones we saw fairly frequently are given below.
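As a quick sanity check on the rounding above, here is a minimal sketch of the tally. The exact counts of 44 and 10 are an assumption inferred from the rounded percentages reported for the 54 grants, not figures given in the text.

```python
# Hypothetical breakdown of the 54 closed-out grants.
# Counts (44 met/mostly met, 10 fell short) are assumed from
# the ~81% / ~19% split reported in the post.
TOTAL_GRANTS = 54
met_or_mostly_met = 44  # assumed count
fell_short = TOTAL_GRANTS - met_or_mostly_met

met_share = met_or_mostly_met / TOTAL_GRANTS
short_share = fell_short / TOTAL_GRANTS

print(f"Met or mostly met: {met_share:.0%}")  # -> 81%
print(f"Fell short:        {short_share:.0%}")  # -> 19%
```

With these assumed counts, 44/54 rounds to 81% and 10/54 rounds to 19%, matching the shares in the pie chart.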
We also wanted to look at what made projects successful. Here are some of the "success drivers" we identified for projects that fully or mostly met objectives.
So what do we do with this information? In some ways, we are still trying to figure that out. We know we want to capture the stories (and the learning!) that our grantee partnerships tell us. But here is a short answer. When we work with prospective grantees or review grant proposals, we look for program designs that can harness success drivers. And, from the failure drivers, we know to caution grantees about the common pitfalls of less-than-stellar program design.
Coming Soon ... Part 2: What about the broader impact of our grants?