It’s over. The campaign is finished. The thank-yous have been said and the money counted. However, before closing the book on a campaign for good, you should take one last look at it. The days immediately following a campaign are the time to analyze what went wrong and what went right, which fixes worked and which didn’t.
You should assess and review every fund-raising campaign, and you should make a record of what you find. Evaluation is the final procedure in a well-organized fund-raising campaign, and the report you write based on that evaluation is the organized record of the knowledge you acquired. File that report and it will be a database for you to draw on. Hindsight is 20/20. Turn it into foresight for the next campaign.
All the participants in a campaign should be asked to evaluate their area of responsibility and the volunteers with whom they worked. You want to determine what the campaign did well and not so well, which expectations were realistic and which weren’t, which tools worked and which didn’t, and who performed well and who didn’t. Solicitors, team captains, division chairs, and campaign chairs should each make their own evaluation, but no evaluation is more important than that of the staff members charged with designing, organizing, and running campaigns. They, after all, are the ones who are going to have to manage the next campaign, so it is from their perspective that we will look at the evaluation process.

The First Rule in Evaluating a Campaign Is Don’t Wait

The farther away you get from a campaign, the less you and others will remember, and every day you delay your evaluation is a brick in the wall of inertia you must climb to start the process. Begin your evaluation the day after the close of the campaign.

Let’s start with the things you want to learn. What is it you wish to know about a campaign that can help with the next one?

  1. Was the goal realistic?
  2. How well did the organizational structure of the campaign work?
  3. Did the solicitation kit materials do the expected job?
  4. Was the kickoff meeting effective?
  5. Were the progress meetings and reports to volunteers effective?
  6. Was the campaign able to fix problems and replace volunteers quickly and effectively?
  7. Did the development office function adequately?
  8. Which volunteers performed well, and who fell down?

The answers to those eight questions can be synthesized from an analysis of information from five different sources:

  1. Your own record and recollection of campaign events and occurrences
  2. Other campaign workers’ recollections
  3. Notes of progress meetings and progress reports
  4. The quantifiable results — who gave how much
  5. Prospects’ and donors’ experiences

Was the Goal Realistic?

At first blush, the question of whether the goal was realistic seems to be self-evident. If it was achieved, it was. If it was missed by a lot, it wasn’t. However, like most things in life, the issue is not that simple. A goal easily achieved could have been an unrealistically low goal or the result of a totally unexpected large gift, while a failure to meet a goal could have been due to a poorly designed or executed campaign, rather than the goal having been set too high. Whether or not the goal was realistic is a question that may have to await the answers to the other seven questions.

In assessing how well prospects were rated and evaluated, you need to determine whether the results were consistent. Did the vast majority of donors give substantially under their rated level? Did a large number of previous donors reduce their gifts or decline to contribute altogether? Was the ratio of new donors to new prospects contacted in line with that of previous campaigns? This information is easily obtained from an analysis of gifts and pledges.
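If your gift and pledge records can be exported to a spreadsheet, a short script will pull those ratios out for you. The sketch below is only an illustration, not a prescribed report: the file name gifts.csv, its column names, and the 75 percent "substantially under" threshold are hypothetical stand-ins for whatever your own records and rating standards actually use.

    # Minimal sketch of the gifts-and-pledges analysis described above.
    # Assumes a hypothetical CSV export, gifts.csv, with one row per prospect:
    #   donor_id, rated_amount, prior_gift, current_gift, is_new_prospect
    import csv

    UNDER_RATED = 0.75  # a gift below 75% of its rating counts as "substantially under"

    under_rated = reduced = lapsed = 0
    new_gifts = new_prospects = total_donors = 0

    with open("gifts.csv", newline="") as f:
        for row in csv.DictReader(f):
            rated = float(row["rated_amount"] or 0)
            prior = float(row["prior_gift"] or 0)
            current = float(row["current_gift"] or 0)

            if row["is_new_prospect"].strip().lower() == "yes":
                new_prospects += 1
                if current > 0:
                    new_gifts += 1
            if current > 0:
                total_donors += 1
                if rated and current < rated * UNDER_RATED:
                    under_rated += 1
            if prior > 0:
                if current == 0:
                    lapsed += 1
                elif current < prior:
                    reduced += 1

    print(f"Gave substantially under rating: {under_rated} of {total_donors} donors")
    print(f"Previous donors who reduced their gift: {reduced}")
    print(f"Previous donors who lapsed entirely: {lapsed}")
    if new_prospects:
        print(f"New-donor ratio: {new_gifts / new_prospects:.1%} of new prospects gave")

Compare that last ratio with the same figure from previous campaigns; a sharp drop points at the prospect list or the ratings rather than at the solicitors.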

How Well Did the Organizational Structure of the Campaign Work?

The best way to determine the effectiveness of the organizational structure of the campaign is to look at the campaign’s interim progress reports and to interview those who took part in it. What we are searching for here is not isolated instances of persons failing to perform their jobs, but a pattern of underperformance. Was there a division where a number of team captains came up short? If so, the division chair may have been spread too thin, and a division co-chair may have been needed. Do the progress reports show a pattern of solicitors having a hard time following up on their calls? If so, there may have been too many prospects assigned to each solicitor. Look at how the pyramid of campaign management was structured and determine whether that structure had weak points. If the weak points were many and spread throughout a single level of the structure, or if they were grouped vertically within one part of the pyramid, then it is likely that there was a problem in structure. If the weak points were random and no pattern can be identified, then they are more likely the result of poor individual performance.
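One quick way to see whether the weak points form a pattern is to tally the shortfalls you pulled from the progress reports by division. The snippet below is a rough sketch of that check; the sample records and the two-shortfall threshold are invented for illustration and would be replaced by your own notes.

    # Rough sketch of the "pattern or random?" check described above, using a
    # hypothetical list of shortfalls compiled from the progress reports.
    from collections import Counter

    # (division, team, problem) noted at progress meetings -- sample data only
    shortfalls = [
        ("Corporate", "Team A", "calls not completed"),
        ("Corporate", "Team B", "pledges below target"),
        ("Corporate", "Team C", "captain unresponsive"),
        ("Individual Gifts", "Team F", "calls not completed"),
    ]

    by_division = Counter(division for division, _, _ in shortfalls)

    for division, count in by_division.items():
        if count >= 2:  # several weak teams under one chair suggests a structural problem
            print(f"{division}: {count} shortfalls -- chair may have been spread too thin")
        else:
            print(f"{division}: isolated shortfall -- more likely individual performance")

The same tally, grouped by team captain or by level of the pyramid instead of by division, tells you whether the trouble ran horizontally across one level or vertically down one branch.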

Did the Solicitation Kit Materials Do the Expected Job?

To find out how well the tools in the solicitation kit worked, ask the solicitors. Query them individually either by phone or in a questionnaire, or convene a focus group or groups. If you go the route of a focus group, make sure that you keep the group on topic. Don’t let them rehash the entire campaign. Focus their attention, and they will provide a better, more detailed analysis. Also, be sure to draw your participants from a number of teams.

Was the Kickoff Meeting Effective?

Analysis of how the kickoff meeting went is best done right after the event. Your evaluation of it should be based largely on a review of your notes and the comments you collected from participants at the time. To this you can add new information, such as problems that occurred during the campaign with material that was covered during the kickoff meeting.

Were the Progress Meetings and Reports to Volunteers Effective?

Progress meetings and progress reports are the maintenance procedures of a fund-raising campaign. If they uncovered problems and helped you to track the progress of the campaign, they did their job. Did you do yours? Did you hold all scheduled progress meetings? Was attendance good? Did you issue reports and the campaign newsletter immediately following each meeting? Did you respond immediately to problems identified at the meetings?

Was the Campaign Able to Fix Problems and Replace Volunteers Quickly and Effectively?

What you do to fix the problems progress meetings uncover can make or break a campaign. To fix a problem, you have to be prepared to act as soon as it is identified. Go back to your progress reports and see how long it took you to solve problems. Were you able to respond quickly and effectively when you had to replace a missing volunteer, add new prospects to the list, get slow-acting solicitors moving, limit the damage of negative publicity, or handle any of the myriad other things that can go wrong in a campaign? If the problem came back at the next meeting or if results slipped because of it, the answer is no. If that is the case, ask yourself what went wrong and how you could have dealt with the problem differently.
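If you kept, or can reconstruct, a simple issue log from the progress meetings, measuring your response time is a few lines of arithmetic. The sketch below assumes such a log exists; the issues, dates, and one-week benchmark are hypothetical examples, not figures from any real campaign.

    # Minimal sketch: how long did it take to resolve each problem raised at a
    # progress meeting?  All entries below are invented for illustration.
    from datetime import date

    # (issue, date identified, date resolved)
    issue_log = [
        ("Replace dropped solicitor in Team B",  date(2024, 3, 4),  date(2024, 3, 6)),
        ("Respond to negative news story",       date(2024, 3, 11), date(2024, 3, 25)),
        ("Add prospects for Corporate division", date(2024, 3, 18), date(2024, 3, 20)),
    ]

    for issue, identified, resolved in issue_log:
        days = (resolved - identified).days
        note = "  <-- too slow; ask what blocked the fix" if days > 7 else ""
        print(f"{identified}  {issue}: resolved in {days} day(s){note}")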

Did the Development Office Function Adequately?

Evaluating the development office is a relatively straightforward process. Were gifts booked, calls made, and acknowledgements sent accurately and according to schedule? If not, you may need to change procedures or personnel before the next campaign.

Which Volunteers Performed Well and Who Fell Down?

Until this point, you have been looking for breakdowns in the organizational structure or procedures of the campaign. Now look for persons who failed. This is the most delicate and often the most important part of your post-campaign assessment. Campaigns live and die by the quality of the volunteers who work them. People who don’t do the job need to be weeded out. The last thing you want is a division chair who fell down on the job becoming next year’s annual campaign chair.

Actually identifying the persons who failed is the easiest part of the entire evaluation process. First, look at the results: did the solicitors, team captains, division chairs, and campaign chair deliver as expected? Then ask the campaign chair how the division chairs performed, the division chairs how the team captains worked out, and the team captains how well their solicitors performed.
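For that numbers-based first pass, a small script comparing what each volunteer’s assigned prospects were expected to give with what they actually gave will flag the names to ask about. The names, amounts, and 80 percent flag below are hypothetical, chosen only to show the shape of the comparison.

    # Small sketch of the results check: expected vs. actual by volunteer.
    # All names and amounts are made up for illustration.
    assignments = {
        # solicitor: (expected from assigned prospects, actually raised)
        "R. Alvarez": (25_000, 26_500),
        "J. Chen":    (40_000, 22_000),
        "M. O'Brien": (15_000, 14_000),
    }

    for solicitor, (expected, raised) in sorted(assignments.items()):
        pct = raised / expected if expected else 0
        flag = "  <-- ask the team captain what happened" if pct < 0.8 else ""
        print(f"{solicitor}: raised {pct:.0%} of expectation{flag}")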

Ready to Close the Book on This Campaign for Good

Once you have gathered all the above information and analyzed it, you are ready to write your report. For your own file, make it no-holds-barred. Evaluate everything and everyone ruthlessly, but without personal bias, and keep that version strictly confidential. However, you will need to share the results of the assessment with the campaign chair, the board’s standing development committee, and the chair of the board of trustees. For these audiences you need to write a report that accurately documents how the campaign progressed but does not point a finger at particular persons. Present the evidence, and indicate what actions were taken to solve problems. Let the readers draw their own conclusions.

If the first rule in evaluating a campaign is don’t wait, the second is get the evaluation done quickly. You should be finished within a week of the campaign’s close. It’s time to move on to other things.

Note: Additional resources are available on my website to help you make your next campaign your best campaign:

Those are my views on the subject. What are yours? I welcome your comments and suggestions.