by Nick Randell, Program Officer
[A few years ago, a two-part series took a look at the performance of Tower grants when measured against specific project goals (Part 1) and at the broader impact of those same grants (Part 2). It is time to take a look at grants that have closed since that earlier analysis.]
So once again we ask the question: Are Tower grants effective? We looked at 22 program grants that have closed since we last conducted this analysis. Since most of our grants run three years, this represents more than 50 years of combined programming and project time. Closing dates ranged from January 2014 through June 2016.
When a program officer closes out a grant, they complete an internal assessment form that we call "Lessons Learned." Drawing on site visits, interim reports, final reports, and the relationships we build with grantees, we answer this question: Were overall grant objectives met? We looked at the objectives that each grantee described to us when they applied for funding. The scoring options are Fully Met, Mostly Met, Largely Unmet, and Not Met At All.
To give you a sense of what these objectives might look like: an agency may have identified a reduction in wait time for psychiatric consults as a project objective, or a change in attitudes toward binge drinking as measured by questionnaires administered in schools. When we review final reports from our grantees, we look at progress toward these objectives.
It should be said that we fully expect that some grants will miss their targets, sometimes widely. But there may be more to learn from those projects than from ones that go swimmingly from start to finish. What assumptions were disproven? How did unexpected environmental factors come into play?
This pie chart shows what we found for the 22 grants that we scored.
Recognizing that Tower staff score these grants, not a third-party evaluator, we feel pretty good about these results. Twenty-one of the 22 grants (about 95%) met their objectives, either in full or mostly. One grant (4.5%) fell fairly short of expectations. We didn't have any grants in the dreaded "Not Met at All" category.
As in our earlier analysis, we looked first at "failure drivers": the factors and conditions that contributed to grants underperforming. A few critical failure drivers emerged.
We also wanted to look at what made projects successful. Here are some of the "success drivers" we identified for projects that fully or mostly met objectives.
So what do we do with this information? We want to capture the stories (and the learning) that our grantee partnerships offer. When we work with prospective grantees or review grant proposals, we look for program designs that can harness the success drivers. And the failure drivers remind us to caution grantees about common pitfalls in both planning and execution.
In Part 2, we look at the broader impact of our grants. Do they have a pronounced, positive effect on the target population served, on the strength of the organization, or on the field as a whole?