Anne had a meeting a few weeks ago with a United Way exec to orient him to our firm’s services. During the conversation, she asked about the challenges he was facing, and he shared that his United Way was struggling to connect the programs it funds to community-level indicators. He believes he ought to be able to demonstrate that a gift to United Way contributes to a specific positive change in the community. Since I have had some involvement with similar processes, she suggested that he chat with me, and so we had a conversation. I hope I was helpful; I certainly intended to be. The issues that arose in that conversation do make me want to bang my head against a brick wall, however. Let me explain.
I completely understand that donors are tired of giving money to solve community problems, only to be told, year after year, that the problems are getting worse, not better. As another UW exec explained to me a while ago, “When I took this job, I was told by my corporate donors that I had better fix something. Didn’t matter what it was...just so something got better and not worse!”
The UW exec I spoke to this week expressed surprise that he was unable to find many examples of other United Ways that had succeeded in linking their funding to changes in community-level indicators. He has staff members working with groups of community volunteers who are trying to figure this out, and he was hoping they could draw upon the successes of other communities. He assured me that his volunteers are very well educated and very sincere, and he is confident that they will come up with something terrific. I am less confident, and I am not at all surprised that he is struggling to find success stories.
There is a set of interconnected problems that, in my view, are mostly ignored in this effort to tie funding to changes in community indicators. Let’s start with some things we all know about…like immunization. We know beyond a shadow of a doubt that if kids are immunized, their health outcomes over the course of childhood are better than if they are not. We also know that participation in quality early childhood education is a solid predictor of success in school…at least through the 4th grade. If we want to improve the community indicators that these interventions address, we are on fairly solid ground. But those instances of near certainty are the exception rather than the rule.

Now consider high-risk behaviors by adolescents. This has become an attractive target for community indicator projects because normed surveys of risky behavior are administered in many states. But when we look at the stage of development of the field of risky-behavior prevention, we see that we don’t have certainty about how to intervene. We have a wide array of programs being tried; some have been evaluated; some are labeled promising. But we really don’t have a large body of evidence-based practice that tells us to choose this intervention over that one.
Another area that seems popular is the elimination of pockets of extreme poverty in our cities. Here we can measure the existence of these pockets at least every ten years through the census, but that measurement is confounded by the fact that people move. If we succeed in helping some families out of poverty, they will probably move out of their poverty-stricken neighborhood (wouldn’t you?) and will be replaced by others drawn by the low rents typical of these areas. In this arena, we may have some confidence that job training, placement assistance, and job retention supports can help families move out of poverty, but using community-level indicators to track success is very challenging.
And then there are the issues of scope and intensity. If a United Way or foundation is responsible for an area as large as a city or perhaps a county, what level of intervention, at what level of intensity, is required to move a community-level indicator? Mostly, we don’t know.
So, what are we to do? Quit trying to improve community indicators? No, not at all. I do think, though, that we have to be honest about where we really are on this issue. If I were a funder in this position, I would make sure that my process of choosing community indicators included: (1) identification of indicators where good measurement is possible; then, from among that list, (2) identification of indicators where interventions rest on a solid base of evidence-based practice; and then, from among those remaining, (3) identification of indicators that my organization can afford to address at sufficient scope and intensity to make a difference. And if I found that we didn’t have enough money to address any of the remaining indicators? Well, I would seek funding partners among other funders in the area.
So, does that mean we should never tackle an indicator where proven interventions don’t yet exist? No, but if we are going to do that, we should be honest with our stakeholders, our donors, our Board, and our staff that we are experimenting. And if we are going to experiment, then we have to accept the obligation to evaluate what we are doing so that we can add to the knowledge base…even if that means only telling the story of a failure so that others can avoid failing in similar ways. And I can’t tell you how many times, when I reach this point in the conversation, someone says, “Oh, we don’t have either the money or the expertise to evaluate what we are doing.” And that is when my head banging begins anew.