by Anne Yurasek on July 03, 2008
How are you doing? More importantly -- how is your organization doing? If we locked you in a room for three months and didn't let you speak with anyone but gave you only the reports that routinely cross your desk, would you know?
Even without those drastic circumstances, as an Executive Director, how would you answer the question? What documents or reports would you look at? To whom would you speak? How would you gather enough pieces of information to know how your organization is doing? In my recent discussions with clients and others in the sector, there is no obvious textbook answer, and no standard set of tools or reports that can be used across organizations.
Executive Directors and program managers "know" how things are going. When pushed, they may say, "Well, our funders will let us know if we are not performing," or "I just have a sense," or "I speak to our clients and they are happy with our services." When they look internally at management processes or to their Board, EDs tend to lean toward the anecdotal: "My board seems to be functioning all right." "Our management processes get the job done." Wouldn't it be nice to know, from an objective perspective, how you are really doing?
I have realized that the tools our firm has to offer - including our NPO development framework, Board survey, and organizational climate survey (as well as our Mid-Manager Assessment and Organizational Learning Survey) - provide an objective basis of information about an organization. A neutral foundation. A foundation of information based not on anecdotal evidence or the "sense" of its leadership, but on the actual observation of behaviors and activities. Gathering the information is also a group effort, which gives voice to the various stakeholder groups involved in the organization. The information-gathering process can therefore be an engagement process for all involved.
You may see this as a pitch. Well, it is. I have been so struck by the lack of neutral information for decision-making in organizations that I believe it is imperative to figure out ways for Executive Directors and Board Members to truly understand what is REALLY going on in their organizations. Our tools are one option.
by Anne Yurasek on February 28, 2008
Anne had a meeting a few weeks ago with a United Way executive to orient him to our firm's services. During the conversation, she asked about the challenges he was facing, and he shared that his United Way was struggling to connect the programs it funds to community-level indicators. He believes he ought to be able to demonstrate that a gift to United Way contributes to a specific positive change in the community. Because I have had some involvement with similar processes, she suggested that he chat with me, and so we had a conversation. I hope I was helpful; I certainly intended to be. The issues that arose in the conversation make me want to bang my head against a brick wall, however. Let me explain.
I completely understand that donors are tired of giving money to solve community problems, only to be told, year after year, that the problems are getting worse, not better. As another UW exec explained to me a while ago, "When I took this job, I was told by my corporate donors that I had better fix something. Didn't matter what it was...just so something got better and not worse!"
The UW exec I spoke to this week expressed surprise that he was unable to find many examples of other United Ways that had succeeded in linking their funding to changes in community-level indicators. He has staff members working with groups of community volunteers who are trying to figure this out, and he was hoping they could draw upon the successes of other communities. He assured me that his volunteers are very well educated and very sincere, and he is confident they will come up with something terrific. I am less confident, and not at all surprised that he is struggling to find success stories.
There is a set of interconnected problems that, in my view, are mostly ignored in this effort to tie funding to changes in community indicators. Let's take some things we all know about, like immunization. We know beyond a shadow of a doubt that if kids are immunized, their health outcomes over the course of childhood are better than if they are not. We also know that participation in quality early childhood education is a solid predictor of success in school, at least through the 4th grade. If we want to improve community indicators that these interventions address, we are on fairly solid ground.

But those instances of near certainty are more the exception than the rule. Consider high-risk behaviors by adolescents. This has become an attractive target for community indicator projects because normed surveys of risky behaviors are administered in many states. But when we look at the stage of development of the field of risky-behavior prevention, we see that we don't have certainty about how to intervene. We may have a wide array of programs being tried; we may have some that have been evaluated and some labeled promising; but we do not have a large body of knowledge telling us there is evidence-based practice to support the use of this intervention rather than that one.
Another popular area is the elimination of pockets of extreme poverty in our cities. Here we can measure the existence of these pockets at least every ten years through the census, but that measurement is confounded by the fact that people move. If we succeed in helping some families out of poverty, they will probably move out of their poverty-stricken neighborhood (wouldn't you?) and be replaced by others drawn by the low rents typical of these areas. In this arena, we may have some confidence that job training, placement assistance, and job retention supports can help families move out of poverty, but using community-level indicators to track success is very challenging.
And then there are the issues of scope and intensity. If a United Way or foundation is responsible for an area as large as a city or perhaps a county, what kind of intervention, at what level of intensity, is required to move a community-level indicator? Mostly, we don't know.
So, what are we to do? Quit trying to improve community indicators? No, not at all. But I do think we have to be honest about where we really are on this issue. If I were a funder in this position, I would make sure that my process of choosing community indicators included: (1) identifying indicators where good measurement is possible; then, from that list, (2) identifying indicators whose interventions rest solidly on evidence-based practice; and then, from those remaining, (3) identifying indicators my organization can afford to address with sufficient scope and intensity to make a difference. And if I found that we didn't have enough money to address any of the remaining indicators? Well, I would seek funding partners among other funders in the area.
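For readers who think in procedural terms, the three-stage screen above amounts to a sequence of filters, each applied to the survivors of the previous stage. The sketch below makes that logic concrete; the indicator names and the yes/no judgments attached to them are hypothetical illustrations, not findings from this post.

```python
# A minimal sketch of the three-stage indicator screen described above.
# Each indicator carries three (hypothetical) judgments a funder would
# have to make: can it be measured well, is there evidence-based
# practice behind the interventions, and can we afford sufficient
# scope and intensity?

indicators = [
    {"name": "childhood immunization rate",
     "measurable": True, "evidence_based": True, "affordable": True},
    {"name": "adolescent risky behavior",
     "measurable": True, "evidence_based": False, "affordable": True},
    {"name": "pockets of extreme poverty",
     "measurable": False, "evidence_based": True, "affordable": False},
]

# Stage 1: keep only indicators where good measurement is possible.
measurable = [i for i in indicators if i["measurable"]]

# Stage 2: of those, keep indicators with evidence-based interventions.
evidence_based = [i for i in measurable if i["evidence_based"]]

# Stage 3: of those, keep indicators the funder can afford to address
# at sufficient scope and intensity.
fundable = [i for i in evidence_based if i["affordable"]]

print([i["name"] for i in fundable])  # → ['childhood immunization rate']
```

The point of the ordering is that each stage is cheaper and more defensible than the next: there is no reason to debate affordability for an indicator you cannot measure in the first place.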
So, does that mean we should never tackle an indicator where we lack solid knowledge? No, but if we are going to do that, we should be honest with our stakeholders, our donors, our Board, and our staff that we are experimenting. And if we are going to experiment, then we have to accept the obligation to evaluate what we are doing so that we can add to the knowledge base, even if we only tell the story of a failure so that others can avoid failing in similar ways. And I can't tell you how many times, when I reach this point in the conversation, someone says, "Oh, we don't have either the money or the expertise to evaluate what we are doing." That is when my head banging begins anew.
by Anne Yurasek on February 21, 2008
We have recently taken on a project that promises to answer questions I have had for a very long time. We are working in partnership with a capacity-building entity that is interested in encouraging systemic approaches to community problems. Over the last couple of years, we have realized that these kinds of solutions to community problems emerge only where there is a strong culture of data and evidence, and we now have a small grant to explore some of the organizational antecedents that have to be in place for such a culture to materialize.
We are recruiting agencies to participate, so I have been having conversations with CEOs of mid-size to large nonprofits about how data and evidence inform decision making in their organizations. Across the group of interviews, I see a disturbing pattern. The arena of data CEOs most often mention examining closely is financial. The least often mentioned, and the arena where the least data is routinely available, is program quality.
If you read the literature on organizational learning, you quickly discover that one of the most powerful facilitating factors in organizations that learn from their practice is the presence and leadership of the CEO in examining what the organization does well and what it does poorly. It is the CEO who has to make quality measurement a priority, and it is the CEO who ultimately determines whether the rest of the organization values it.
As a CEO, what do you look at routinely?