Once upon a time there was a manager who was responsible for starting up a new pedestrian safety program. Because it was new, her boss asked her to evaluate the program to find out how well it worked. Alarm bells rang in her head; she had never done an evaluation and it seemed way beyond her ability. When she discussed this assignment in her regular staff meeting, one of the staff volunteered to take on the responsibility. Greatly relieved, she gave him free rein.
The staff member immediately busied himself designing data collection forms and survey instruments. He wrote instruction manuals for filling out the forms and distributed them to the folks who were involved in publicizing the program. His research designs called for dividing the city into four regions that would each receive different combinations of the program’s components. His weekly project reports were filled with detailed accounts of new forms, focus group protocols, new data collection and analytical procedures, and statistical tests. It seemed that everything was under control.
As the program reached its peak of activity, things took a turn for the worse. Data collectors weren’t filling out the forms correctly, and no one could get a handle on the mountains of data the survey produced. The evaluator spent most of his time analyzing the change in public perception of the program. The difference was statistically significant, but so small as to be practically negligible. The progress reports started documenting why it was impossible to conduct a valid evaluation, with terms like “changes in data definitions” and “confounding variables” leading the list of excuses.
The net result was that more than 20 percent of the project’s resources were spent on evaluation and no one could answer the simple question “did it work?” The project manager vowed “Never again!”
The term evaluation evokes similar nightmares for many who work in the public sector. We have all heard stories about expensive evaluation efforts that yield reams of complex data that end up confusing people. None of us wants an evaluation like that. We want to document the good parts of our program and find the things that need to be changed.
Evaluation is a term that refers to the process by which someone determines the value of something.
Value doesn’t mean only monetary value, so evaluation doesn’t necessarily involve converting something into a dollars-and-cents issue. It is simply examining, appraising, or judging the worth of a particular item or program.
We all conduct evaluations whenever we are contemplating a major purchase. If we are considering a new car purchase, we must decide if a vehicle is worth the price being asked for it. We go through three distinct evaluation processes to make that determination.
Once we have purchased the car, we probably continue to evaluate, but we sometimes call it “having second thoughts.” After the purchase is made, we try to determine if we made a good choice. Did the car deliver on the advertising promises? Did it meet our personal needs and wants? Did it actually cost what we planned, or did the car require a lot of expensive maintenance to keep it running? If I had it to do over, would I buy the same car? Would I recommend it to a friend?
When you are implementing a traffic safety program, you should be making the same types of judgments. You build evaluation into your program so that you can determine:
First, the evaluator identifies a specific problem. (The kids who died were not wearing bicycle helmets.) Next, there is one focused program approach to address this problem. (Increase bicycle helmet use.) Note that there is no mention of how you are going to do this: free helmets, school programs, bike safety events, or whatever. Finally, there is a practical measure of the progress your program made. (Document the change in bicycle helmet use.)
Why You Want to Read This Guide
A lot has been said over the years about the importance of program evaluation in traffic safety. At various times, program managers have been required to allocate a specified percentage of their program budgets to program evaluation. Training programs have been developed on how to evaluate traffic safety programs using such statistical tools as time series analysis and multiple regression analysis. Despite all of this attention, criticism continues to pour in that most traffic safety programs are never actually evaluated. And it is no wonder. Some program managers are convinced that program evaluation is too hot to handle, that it causes nothing but trouble, and that it costs a fortune to boot.
This Guide will convince you otherwise!
It is designed to alleviate your fears about program evaluation and convince you that conducting an appropriate evaluation actually makes your job easier rather than harder. The focus is on what evaluation can do for you, not the other way around.
The Guide provides an overview of the steps that are involved in program evaluations and gets you thinking about how these steps fit into your implementation plans. It also will provide you with some handy suggestions on how to find and work with an evaluation consultant. And finally, it will provide you with a glossary of evaluation terms and concepts so that you can speak with confidence when the topic turns to “proving results.” (When you encounter an underlined term such as Before and After Design, you can refer to the Glossary for its definition.)
It is equally important that you recognize what this Guide is not. It will not give you detailed, step-by-step instructions on how to evaluate a traffic safety program. Our assumption is that you are already too busy to take on a new career as an evaluation specialist. There are talented individuals in your own community who can help you design and conduct an appropriate evaluation. This Guide will tell you how to find and work with them.
The focus of this Guide is on using limited resources to maximum practical advantage. This means conducting an evaluation that is appropriate to the size and scope of the program you are implementing.
Who the Guide is for
Before we go any further, it’s time to share the assumptions we have made about who you are. If you are a state or local traffic safety project director with at least some curiosity about program evaluation, this Guide is for you. Our assumption is that you do not have a background in experimental design or statistics and have no intention of becoming an evaluation expert. (If you really want to become an expert, you should enroll in some college-level statistics courses—this is not one of those subjects you can teach yourself with a book!) You need to understand:
If that is what you are looking for, this Guide is for you!
How the Rest of the Guide is Organized
The remainder of this Guide is organized into six sections and an appendix. They are: