
Assessing Program Grants: Key Factors in Program Outcomes [Part 1]

by Nick Randell, Program Officer
August 1, 2016

[A few years ago, a two-part series looked at the performance of Tower grants measured against specific project goals (Part 1) and at the broader impact of those same grants (Part 2). It is time to take a look at grants that have closed since that earlier analysis.]

So once again we ask the question: Are Tower grants effective? We looked at 22 program grants that have closed since we last conducted this analysis. Since most of our grants are three-year grants, this represents over 50 years of programming/project time. Closing dates ranged from January 2014 through June 2016.

When a program officer closes out a grant, they complete an internal assessment form that we call "Lessons Learned." Drawing on site visits, interim reports, final reports, and the relationships we build with grantees, we answer this question: Were overall grant objectives met? We assessed the objectives each grantee had described to us when they applied for funding. The scoring options are Fully Met, Mostly Met, Largely Unmet, and Not Met At All.

Just to give you an example of what these objectives might look like: an agency may have identified a reduction in wait time for psychiatric consults as a project objective, or a change in attitudes toward binge drinking as measured by questionnaires administered in schools. When we review final reports from our grantees, we look at progress toward these objectives.

It should be said that we fully expect grants to miss their targets sometimes, occasionally widely. But there may be more to learn from these projects than from ones that go swimmingly from start to finish. What assumptions were disproven? How did unexpected environmental factors come into play?

This pie chart shows what we found for the 22 grants that we scored.

Recognizing that Tower staff score these grants, not a third-party evaluator, we feel pretty good about these results. About 95% of grants fully or mostly met their objectives. One grant (4.5%) fell fairly short of expectations, and no grants landed in the dreaded "Not Met At All" category.

As before, we looked first at "failure drivers," the factors and conditions that contributed to grants underperforming. A few critical failure drivers emerged.

  • Little value assigned to consistent project management.
  • IT systems modifications took longer (a lot longer!) than anticipated.
  • Generally poor communication with us about project challenges.
  • Shallow project ownership; the work was not viewed as mission critical.

We also wanted to look at what made projects successful.  Here are some of the "success drivers" we identified for projects that fully or mostly met objectives.

  • Strong advisory boards.
  • Patience and perseverance with process change.
  • Focus on steady demand (e.g., manage the client waitlist aggressively).
  • For mentoring programs, focus on recruiting quality candidates (and bilingual candidates as needed).
  • Strong relationships with referral sources.
  • Internal training capacity built to sustain gains.
  • Services moved closer to the end user.
  • Co-located, cross-disciplinary staff.
  • Calm in the face of staff turnover, with short-term staffing back-up plans.
  • An effective coaching component.
  • Attention to the culture of project partners.
  • A commitment, made and kept, to engage the community.
  • Strong program planning.
  • Broad buy-in among clinicians before implementing new clinical models.
  • Responsive vendors/trainers prepared to offer post-implementation technical support.

So what do we do with this information? We want to capture the stories (and the learning) that our grantee partnerships offer. When we work with prospective grantees or review grant proposals, we look for program designs that can harness these success drivers. And from the failure drivers, we know to caution grantees about the common pitfalls in both planning and execution.

In Part 2, we look at the broader impact of our grants. Do they have a pronounced, positive effect on the target population served, on the strength of the organization, or on the field as a whole?


Photo by Mark Taylor
Flickr: 7587238186
Creative Commons 2.0

