
Are We Making Effective Grants? (Part 1)

October 2, 2014
Nick Randell, Program Officer

[This installment of a two-part series looks at the performance of Tower grants when measured against specific project goals; Part 2 will look at the broader impact of the same grants.]

So are Tower grants effective? Most foundations ask themselves some variation of this question. A few years ago we developed an internal report form (we called it "Lessons Learned") that our program officers complete when we close out a grant. Between 2010 and the end of 2013, we performed a simple assessment on 54 grants. First, we wanted a very basic grade: were the key objectives of the initiative met? We looked at the objectives that the grantee had described to us when they applied for funding. To give a sense of what these objectives might look like: an agency may have identified a reduction in wait time for psychiatric consults as a project objective, or the number of staff trained in a particular intervention model, or a change in attitudes toward binge drinking measured by questionnaires administered in schools. When we review final reports from our grantees, we look at progress toward these objectives.

The pie chart shows what we found for the 54 grants that we scored.

Reasonably good results, I think. About 81% of grants met their objectives, either in full or mostly. Of course, that means about 19% fell fairly short of expectations. So the next piece of analysis looked at this subset, in hopes of learning from the experience of grants that, on the face of things, didn't seem very successful. Based on our review of project reports, our site visits, and our discussions with grantees, we tried to capture the factors that most contributed to less-than-hoped-for outcomes. We called these "failure drivers," and some of the ones we saw fairly frequently are listed below.

• Clinical staff turnover was high
• Staff buy-in was lukewarm
• Other distractions (e.g., a renovation project) siphoned off resources
• Primary outcome measures were abandoned or not tracked
• Program champion or coordinator left the organization
• Resistance to the demands of a new therapy (e.g., videotaping or role play) was high

We also wanted to look at what made projects successful. Here are some of the "success drivers" we identified for projects that fully or mostly met objectives.

• Interventions/curricula were carefully chosen, generally robust and user-friendly, and fit well with organizational culture
• Staff felt ownership of new approaches
• Referral networks were educated and strengthened
• Management was visibly engaged and physically present
• Vendors/consultants were carefully vetted for quality and organizational fit
• Coaching and mentoring components were in place
• Staff were supported (and compensated) for the extra learning time required
• In-house capacity was strengthened (e.g., train-the-trainer model, in-house translator)

So what do we do with this information? In some ways, we are still trying to figure that out. We know we want to capture the stories (and the learning!) that our grantee partnerships tell us. But here is a short answer: when we work with prospective grantees or review grant proposals, we look for program designs that can harness the success drivers. And, drawing on the failure drivers, we know to caution grantees about the common pitfalls of weak program planning.

Coming Soon ... Part 2: What about the broader impact of our grants?

Photo: "On Target" (Flickr, 2655969483), licensed under Creative Commons 2.0
