Talk of developing a system to evaluate the sustainability of non-profit organizations has been on the rise. Such a system would center on measuring outcomes in terms of percentages or ratios, and its goal appears to be twofold:

  1. To create a filter that will allow grantmakers to quickly assess and compare the overall ability of non-profit organizations to manage funds and deliver results.
  2. To express those assessments as a numerical ratio that will rank the worthiness of non-profits as grant recipients.

Calls for hard measurement of non-profit organizations’ efficiency have been made before, but I can’t remember a time when they have been so loud.

Attendees at a major conference heard the CEO of one of the United States’ largest foundations say that he is looking for a financial ratio to employ in the review of grant applications. The CEO of a major website serving non-profit organizations has publicly expressed his intent to create a numerical approval ranking system for the hundreds of non-profits listed on the site. A state association of non-profit organizations announced that it was planning to implement numerical “performance standards,” and that it was going to “raise the bar” for non-profit organizations in order to judge “whether or not they were doing a good job.”

From the statements I’ve heard and the comments I’ve read, it would appear that funders are in search of a litmus test for grant worthiness. An organization that failed to score high enough would be out of the box — automatically ineligible to be considered for a grant. One that fell within the box would then be “certified” for funding. But how meaningful would such certification be? Just what would organizations find themselves doing in order to stay within the box, and how compatible would that be with their missions?

The For-Profit And Non-Profit “Bottom Line”

This idea of applying financial ratios to non-profit organizations seems to have grown out of the accounting and business management practices of the for-profit world. It has been over three decades since I gave up a corporate career to become a development professional in the non-profit world. In those 30-plus years, I have never argued with the need for non-profits to be well managed, held accountable, and responsive to funders. I have always supported adopting sound accounting and management practices from the business world, when applicable. But I find it hard to understand how:

  1. Reducing an organization’s performance to a rating on a numerical scale will make it better at carrying out its mission.
  2. Summer camps, museums, hospitals, theaters, latchkey programs, and orchestras can be assessed under the same rubric.

Even when non-profit organizations perform missions in the same arena and are basically the same size, such comparisons, to my mind, are still of the apples-to-oranges variety.

There are those who disagree.

  • They will tell you that non-profits are no more diverse than for-profits. They are right.
  • They will tell you that such diversity has not prevented the implementation of valid financial ratios in the for-profit world. Again they are right.
  • They will tell you that financial ratios can be developed to compare non-profits in the same way that they have for businesses. Here they are wrong.

Every for-profit business works primarily to increase sales, profits, and the value of the company, but non-profit organizations, no matter how similar, seldom aim for the same outcome. The shared goals of income, profit, and value make it possible to develop ratios that succeed in comparing and rating for-profit businesses.

Similar shared intended outcomes just aren’t there in the non-profit world. It is a place where the edges are far more blurred and the tonal range far grayer. I have come to believe that it is impossible to cross-evaluate non-profit organizations.

Many ratios can be computed for aspects of a non-profit’s operations: earned income to contributed income, programming expense to total expense, fund-raising expense to contributions received, and current liabilities to current assets, to name a few. But with few exceptions, those ratios provide data that is valid only when viewed in the context of the single, specific organization for which they have been computed.
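To make that concrete, here is a minimal sketch in Python, using entirely hypothetical figures, of how such ratios might be computed and tracked. The point is that the comparison that carries meaning is year over year within one organization, not across organizations:

```python
def ratios(earned, contributed, program_exp, total_exp,
           fundraising_exp, current_liabilities, current_assets):
    """Return the four ratios named above; all inputs are dollar amounts."""
    return {
        "earned to contributed income": earned / contributed,
        "program to total expense": program_exp / total_exp,
        "fund-raising to contributions": fundraising_exp / contributed,
        "current liabilities to assets": current_liabilities / current_assets,
    }

# Entirely hypothetical figures for one organization across two fiscal years.
fy2023 = ratios(120_000, 480_000, 390_000, 520_000, 45_000, 60_000, 150_000)
fy2024 = ratios(150_000, 500_000, 430_000, 560_000, 48_000, 55_000, 170_000)

# Compare the same organization against itself, year over year.
for name in fy2023:
    print(f"{name}: {fy2023[name]:.2f} -> {fy2024[name]:.2f}")
```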

Playing The Numbers Game

The American Symphony Orchestra League labored for years gathering financial data from member orchestras in a search for direct comparisons and meaningful ratios. It never found them. In fund-raising for example, expenses are reported far differently from orchestra to orchestra. Some even charge the cost of fund-raising brochures and other “climate-making” expenses to the marketing budget.

Over the years, I have seen similar problems in comparisons and ratios reported by other umbrella or clearing-house organizations. Nor have universities and academics seeking workable and consistent formulas met with any notable degree of success.

If a single set of financial ratios and standards cannot be found for a group of organizations that on the surface appear to be as similar as symphony orchestras, what happens when you extend the search across the entire universe of non-profits? An organization developing affordable housing has a very different set of numbers from one providing youth outreach. And how do you compare a museum to a hospital?

Non-profit organizations’ costs, debt, funding sources, and infrastructure needs are so different that I simply do not believe that any meaningful benchmark can be applied from one non-profit to another.

To my mind, the only valid test of a non-profit’s efficiency is how well it carries out its specifically stated mission. Since each organization uniquely defines its mission, an organization is best measured against itself. Past performance of an individual organization should be the evaluative criterion used in determining that organization’s sustainability and worthiness to receive future grants.

It is ironic that foundations are now looking for formulaic ways to evaluate non-profit organizations. For years, foundations have failed to adequately evaluate how well the funds they awarded were used. I have helped organizations obtain many, many grants, but have rarely seen a grant that required an evaluation of outcomes that was more than mere lip service.

The Rush To Do “Something”

Now a new breed of program officers, executive directors, and board members is moving into the seats of power at foundations. They’re bringing with them business experience accumulated during the 1980s and ’90s, the formalized training of graduate programs in non-profit management, and the MBA mantra of success measured by the numbers. To their dismay, they are finding little or no data with which to measure and compare non-profits.

Foundations have long done a poor job of evaluating outcomes for grants. As the pressure for better measurements increases, they find themselves playing catch-up. In trying to develop systems that evaluate and compare non-profit organizations’ overall efficiency, stability, management practices, and success rates, they are looking for a shortcut to make up for their own past failures.

To paraphrase Shakespeare: The fault, dear foundations, is not in the non-profits, but in yourselves. If, in the past, foundations had rigorously evaluated the outcomes of the projects they supported and the stewardship of the grants they awarded, they would today have valid data to assist in grant-making decisions. They would not be turning to the false promise of across-the-board financial ratios and standards.

There, I’ve had my say about what is wrong with ratios and comparative evaluation. That said, the drive on the part of foundations and other concerned organizations to develop better data to use in evaluating grant requesters is not going to go away.

What non-profits need to do is help in that endeavor. They need to take the initiative. Instead of waiting for foundation evaluations, they need to perform their own. Non-profits that measure the effectiveness of their efforts will be better able to argue the validity of their grant requests. They also need to stand up and make the case against arbitrary ratios and comparisons.

It’s Up To The Development Professional

If you’re a development professional in a non-profit organization, you’re the person who needs to take the reins on this one. Just as your organization should not wait for funders to set the measurement scale, you should not wait for those who manage your programs to address the issue of evaluation. You’re the one charged with bringing in the grants, and evaluation is likely to become a make-or-break issue with foundations and other grant-making organizations.

So, to get you started, here are five actions your organization should take in regard to evaluation.

  1. Include a statement of organization and program evaluation in the strategic plan.
  2. Get buy-in on the part of everyone in the organization for the need to conduct meaningful evaluation.
  3. Build an evaluation component into every program activity.
  4. Include an evaluation component in every grant request and ask for funding to cover it.
  5. Work with other non-profits and with funders to address the issue of evaluation.

Evaluation of a non-profit program comes in two flavors. First, there is the traditional model, performed by a neutral external evaluator; its advantage is the objectivity of an evaluator with no stake in the outcome. Then there is the participatory model, in which staff, users/clients, and other stakeholders take part in a process that produces an evaluation of a project or program.

In general, external evaluation has been perceived to have greater credibility. However, most non-profit organizations lack the funding to engage external evaluators on an ongoing basis. The advantages of participatory evaluation include greater staff satisfaction with the process and, as a result, easier implementation of recommendations.

There are a number of different types of program evaluation, but the one most likely to resonate with funders is outcome evaluation. The goal of outcome evaluation is to determine whether the actions taken delivered the desired results, and whether they did so cost-efficiently.

When conducting an outcome evaluation, you must first identify the outcomes you wish to achieve. Then determine what criteria will be observed to gauge success or failure. Next develop a process for making those observations, and finally report the results of your analysis and findings.
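As a minimal sketch, here is one way those four steps might be represented in Python. The outcome, criterion, target, and monthly figures are all hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Outcome:
    name: str                 # step one: the desired result, stated up front
    criterion: str            # step two: what will be observed to gauge success
    target: float             # the threshold that counts as success
    observations: list = field(default_factory=list)

    def record(self, value: float) -> None:
        """Step three: log an observation made in the field."""
        self.observations.append(value)

    def report(self) -> str:
        """Step four: summarize findings against the stated target."""
        achieved = sum(self.observations)
        status = "target met" if achieved >= self.target else "target not met"
        return f"{self.name}: {achieved} of {self.target} ({status})"

# Hypothetical example: a job-training program's placement outcome,
# with one observation recorded per month over a year.
placements = Outcome(
    name="clients placed in jobs",
    criterion="placements confirmed 90 days after training ends",
    target=40,
)
for monthly_count in [3, 5, 4, 6, 5, 4, 7, 5, 6, 4, 3, 5]:
    placements.record(monthly_count)
print(placements.report())
```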

I’m not going to address here the identification of outcomes and determination of criteria for a participatory outcome evaluation. This is a planning process. Nor am I going to get into how to write a final report. However, I think a few suggestions on how to collect data are in order.

Surveys are a good way to collect a lot of information quickly. Unsigned questionnaires guarantee anonymity. They’re easy to manage, and multiple-choice responses can be easy to quantify. But you have to be careful not to write questions that bias responses. Questionnaires lack a personal touch, and both survey design and sample selection require a high level of expertise. At the very least, a professional should be involved in question creation.
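To illustrate why multiple-choice responses are easy to quantify, here is a minimal sketch with made-up answers to a single satisfaction question; a simple tally turns raw responses into percentages:

```python
from collections import Counter

# Made-up responses to one multiple-choice question on a questionnaire.
responses = [
    "very satisfied", "satisfied", "satisfied", "neutral",
    "very satisfied", "dissatisfied", "satisfied", "satisfied",
]

tally = Counter(responses)
total = len(responses)
for answer, count in tally.most_common():
    print(f"{answer}: {count} ({count / total:.0%})")
```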

Focus groups give you a chance to explore issues in depth. Putting six or seven people in a room with a video camera running and asking questions of the group as a whole can yield valuable information. Try it with users/clients, contributors, or just about any constituency. However, it is tough to get people to commit the time and then show up when expected. The group facilitator needs to be able to establish instant comfort for participants and keep them both engaged and on track. You will probably need a professional communicator as group facilitator. Focus groups should be scheduled on a continuing basis to establish benchmarks and measure change. Since the responses are freeform, results can be hard to analyze, and that analysis can be quite time consuming.

Interviews give you a chance to talk with users/clients one-on-one. They can yield some great information due to the give and take of the conversational process. Maybe you should be conducting an “exit interview” with each user/client or a cross section of them. However, the interview process is time consuming. The information acquired is often anecdotal in nature and can be very hard to quantify. It is easy for a less skillful interviewer to bias responses without intending to.

Observation is the most direct way to gather information about how a program is functioning if the observer is well trained and understands the nuances of what he or she is observing. The downside is that an observer is an intrusion in the normal process and therefore changes it. What is observed can be hard to categorize and interpret, and it is an expensive way to judge outcomes.

Case reviews are the most complete form of assessment of a given user’s or client’s experience. They are a great way to show exactly what the organization does, how it does it, and how an outcome is achieved. The problem with them is that they are very time consuming and really only show what happens for one person. They don’t provide quantifiable data. They don’t show breadth, and they only work for certain types of organizations.

A great deal more could be said about the process of evaluation and outcome measurement. However, that is a topic unto itself, and there are people better qualified than I to address it. My interest here is to encourage you to motivate your organization to implement a process of ongoing evaluation of its programs. The data yielded will go a long way toward heading off the mounting pressure for one-size-fits-all evaluation ratios.

Greater evaluation of organizations and their programs is coming. It is not going away. The question is: are the non-profit organizations going to define the process, or are the funders? I know which answer I prefer.

Those are my views on evaluating non-profit organizations’ outcomes. What are yours? I would be happy to hear from you.