Community How To Guide on Evaluation

Myths & Facts About Evaluation
Formative Evaluation
Process Evaluation
Outcome Evaluation
Impact Evaluation
Appendix #1 Tracking Form
Appendix #2 Evaluation Plan Worksheet
Appendix #3 Sample Personal Interview Questions
Appendix #4 Sample Focus Group Questions
Appendix #5 Participant Observation Form
Resources Cited in Community How To Guide
Other Evaluation Resources
Evaluation Publications
To debunk the myth that evaluation is a process better left to people with a PhD, the Community How To Guide on Evaluation describes how organizations and coalitions can develop and use an evaluation strategy to make their programs successful and effective. This booklet first describes the purpose of evaluation, as well as the myths and facts about the evaluation process. For instance, contrary to popular opinion, evaluations do not have to be time consuming and expensive to be useful.
When evaluation is an integral part of the planning and implementation process, it includes four stages or types of evaluation that are described in detail: formative, process, outcome and impact. In addition to descriptions, the booklet provides examples of when each stage should be used. There are also two methods that can be used: quantitative and qualitative. In the appendices, the booklet provides samples of both methods, including a form to be used in counting numbers for quantitative methods, and sample interview and focus group questions and a participant observation form for use in qualitative methods.
Proper planning of an evaluation is critical to its success and the booklet describes the process to be used as well as what to look for if hiring an evaluator. After reading the booklet, organizations and coalitions will know how to integrate an evaluation into their overall plan, thereby making their efforts more targeted and effective.
The American Heritage Dictionary defines evaluation as:

1. To ascertain or fix the value or worth of. 2. To examine and judge; appraise; estimate. 3. To calculate or set down the numerical value of; to express numerically.
When program managers hear the word evaluation, it often conjures up unpleasant images of cold-eyed scientists scrutinizing their activities and declaring their program to be a failure. In reality, evaluation can be and should be an asset for program managers and their efforts. Strong, carefully designed evaluation can help program managers target their efforts, develop effective materials and programs, make mid-course adjustments, if necessary, and prove their success. Evaluation can transform guesswork into certainty and can help an organization thrive.
Well-designed evaluation programs can help underage drinking prevention programs demonstrate their effectiveness. Underage drinking prevention specialists know that many adults view underage drinking as an inevitable rite of passage for youth. Evaluation can disprove that myth and can demonstrate that carefully targeted programs do reduce youthful drinking.
Many of the people who cringe at the word evaluation mistakenly believe that evaluation begins when a program is nearly finished. That kind of evaluation offers very little useful information for the program planners. When evaluation is an integral part of the planning process, however, it can help program planners to do the following:
Evaluation is not something that should be an afterthought. It is an integral part of the project's overall plan and must start from day one. Following are some myths and facts about evaluation, taken from the booklet The Art of Appropriate Evaluation: A Guide for Highway Safety Program Managers, produced by NHTSA. Information on how to obtain this guide is listed in the Resource Section of this booklet.
Myth: Evaluation involves complex research methodologies and is too complicated and expensive for community-based programs.
In an ideal world, every community program would have the luxury of retaining a highly skilled evaluation specialist dedicated only to their effort. In reality, most community programs cannot afford and often do not require a highly complicated evaluation strategy, unless they receive funding from a private or government agency that requires a specific evaluation approach.
Community-based programs can develop evaluation strategies by taking advantage of the information in this booklet and other books that are available in bookstores, libraries or on the Internet, some of which are listed in the Resource Section of this booklet. In many cases, the program managers may also be able to solicit assistance from a college professor, graduate student, health department staff member or other individual with expertise in evaluation.
Myth: Evaluation only points out what is wrong and exposing difficulties with a program may cause funding sources or the public to withdraw their support.
If evaluation is incorporated into a planning process from the outset, it helps guide the process so that the result is a success, not a failure.
Also, most funding agencies and organizations do not penalize grantees if they test an approach that has a reasonable chance of success and then determine it has not achieved the desired goal. Progress is achieved through testing new strategies. Funding sources want to know what was learned from the experience and how the project managers intend to redirect their efforts in the future. Other underage drinking prevention programs may also benefit from lessons learned during an unsuccessful process.
Just as an evaluation may indicate what is not working, it can also point out what is effective. Effective strategies can be replicated and shared with other similar community-based projects. A track record of accomplishments makes a project more attractive to funding agencies, public officials, and the community. By learning about strengths and weaknesses, the coalition or organization can improve its efforts over time.
Myth: Evaluations are a lot of work, time consuming and expensive.
Evaluation does not have to be expensive or complicated to be useful. There are evaluations that involve little or no cost. For instance, contact with a college or university may result in identifying a professor or graduate student interested in conducting a program evaluation as part of a special project or course work. An evaluator from the state traffic safety, health or substance abuse prevention and treatment agencies or someone from a marketing, advertising or public relations agency might also assist the coalition.
When an evaluation strategy is an integral part of planning and implementing a prevention program, the evaluation includes four stages or types of evaluation: formative, process, outcome and impact. The following description of the key stages of evaluation has been adapted from the National Center for Injury Prevention and Control's book Demonstrating Your Program's Worth. Some of the examples have been changed to reflect the needs of an underage drinking prevention project.
Formative evaluation is most often used to test the appropriateness and effectiveness of project materials such as a video, public service announcement (PSA), brochure, poster, etc. For example, if an organization is planning a public information and education campaign aimed at underage youth that includes a video, it is critical to test the format, message and delivery systems prior to spending the money to produce the video. To pre-test the video, program organizers can bring together a focus group of young people and present them with the contents of the video, usually in the form of storyboards (a cartoon-like representation of the material to be presented). If the young people in the focus group respond to the video's format and message, planners will know their product can be effective and their money well spent. In evaluating written materials, program planners can determine whether the product is appropriate for the target population's reading level and ethnic background.
Use formative evaluation when a program:

- is being modified
- has problems with no obvious solutions
- is being used in a new setting, with a new population
- is targeting a new problem or behavior

[The original guide presents formative evaluation in a summary table covering its description, when to use, what it shows, why it is useful and measurement methods.]
Process evaluation is used to determine whether the project or coalition is working effectively, and may involve interviews with key members of the coalition or organization.
[The original guide presents process evaluation in a summary table covering its description, when to use, what it shows, why it is useful and measurement methods.]
Note: If evaluating the effectiveness of a media effort, it is important to determine whether the news outlet or news program reaches the intended target audience. Newspaper editorials, for instance, may not have a large teen readership. This is particularly important if an organization is conducting a radio campaign: a classical or jazz station format is unlikely to attract youth listeners, but probably will reach parents. Information on demographics is available from the radio station or from advertising agencies. In addition, media messages must also be culturally relevant and take into account any target populations with low literacy levels or those where English isn't the first language.
Large communities may also have several different weekly newspapers that are delivered to different geographic areas. In the evaluation process, the project should determine whether the information appeared in one paper or all. This is even more important for programs that are statewide. A group may want to have a map of the target area and place colored push pins to indicate where messages were received.
Outcome evaluation is used to determine the organization's progress toward achieving its goals and objectives. Assessing whether there has been an increase in the number of stories in newspapers and on radio and television that have raised public awareness of the seriousness of the underage drinking problem is an example of an outcome evaluation.
[The original guide presents outcome evaluation in a summary table covering its description, when to use, what it shows, why it is useful and measurement methods.]
Reviewing data on drinking and driving by minors five years after an underage drinking prevention program begins, and comparing that data to a baseline is one example of an impact evaluation.
[The original guide presents impact evaluation in a summary table covering its description, when to use, what it shows, why it is useful and measurement methods.]
Note: Although a decrease in the number of underage drinking-related incidents may be evidence of program success, impact may not be evident in two to three years, usually because the sample numbers are too small in that period of time. A steady decline over a period of years is needed in order to determine impact.
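The "steady decline over a period of years" test in the note above can be sketched as a simple check on yearly counts. This is a minimal illustration, not part of the original guide; the incident figures are invented.

```python
def steady_decline(counts):
    """True when each year's count is at or below the previous year's count."""
    return all(later <= earlier for earlier, later in zip(counts, counts[1:]))

# Hypothetical incident counts: baseline year through year four.
yearly_incidents = [120, 112, 104, 97, 90]
print(steady_decline(yearly_incidents))  # True: consistent with program impact
```

A single down year followed by a rebound would fail this check, which matches the note's caution that two or three years of data are usually too little to claim impact.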
Quantitative methods involve data gathering and counting numbers and may be used during process, outcome or impact evaluation. These methods include the following:
Tracking the Program
This approach involves keeping track of whatever the program is evaluating, which may include such things as:
In this guide, Appendix #1 is a Tracking Form to assist communities with this approach.
Note: When counting the number of people reached by a media message that appears in a newspaper, take the circulation number and multiply it by two; this is usually acceptable for most mainstream publications. For broadcast media, radio and television stations and advertising agencies have information on the demographics for each program and even for various segments of programs (drive time, second quarter of the news hour).
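The circulation rule of thumb in the note above is simple arithmetic; a brief sketch, with an invented circulation figure, shows how a tracking form entry might be converted into an estimated audience:

```python
def estimated_readers(circulation, readers_per_copy=2):
    """Estimate newspaper readers reached using the circulation-times-two rule of thumb."""
    return circulation * readers_per_copy

# Hypothetical example: a paper with a circulation of 15,000.
print(estimated_readers(15_000))  # 30000 estimated readers
```

The multiplier can be adjusted if a publication supplies its own readers-per-copy figure.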
Surveys
Surveys can be conducted on the telephone, in person, or the individual can complete the survey in private and deliver or mail it back to the organization conducting the evaluation. Surveys have a number of uses including:
The needs and budget of the organization and the objective of the evaluation determine the type of survey. In-person surveys, for instance, have the highest response rate but usually require a trained interviewer and are therefore costly. Telephone interviews are the quickest to conduct and are easily randomized, and mailed or delivered surveys offer the greatest amount of anonymity.
In designing survey instruments, use the following steps:
The Community How To Guide on Needs Assessment and Strategic Planning has two sample surveys, a household survey of adults and a survey for underage youth. The Pacific Institute for Research and Evaluation (PIRE) has also produced a booklet entitled Guide to Conducting Youth Surveys as part of the Office of Juvenile Justice and Delinquency Prevention's Enforcing Underage Drinking Laws Program. This guide includes several examples and information on how to structure and conduct youth surveys. (See the Resource Section of this booklet for contact information.)
Every two years, NHTSA conducts a National Survey of Impaired Driving Attitudes and Behavior, which is administered to a random national sample of the driving public, age 16 and older. The data is reported by age, and the survey asks such questions as how much people drink and drive, what their views are on the problem of drinking and driving and how they feel about enforcement of impaired driving laws. Information on obtaining a copy of this survey is included in the Resource Section of this booklet.
Since 1991, the Centers for Disease Control has also conducted a biennial national school-based survey called the Youth Risk Behavior Survey (YRBS) to assess the prevalence of health risk behaviors among high school students. National, state, territorial and local data is available on such highway safety topics as seat belt and bicycle helmet use, riding with an impaired driver and alcohol and other drug use.
Qualitative methods are generally open-ended and are another way to determine a person's attitudes, knowledge level and beliefs. These methods may be used during the formative and process stages of an evaluation. They can also help a coalition or organization correct a problem if one arises.
Personal Interviews
Personal interviews are one of the primary ways that reporters gather information for a news story. Qualitative personal interviews are different from an in-person response to a survey, since a survey is a standard set of questions. Personal interviews are more like a discussion in which the interviewer asks questions to obtain the desired information.
In a personal interview, the interviewer should not interject his or her beliefs or feelings into the discussion, but remain neutral to obtain the most accurate picture of the interviewee's point of view. The interviewee is asked to be a partner with the coalition or organization and to assist them in developing an effective program. The larger and more diverse the group of people who are interviewed, the better the results. Depending on the level of sophistication of the evaluation, personal interviews should be taped and transcribed verbatim, with the interviewee's permission. This allows an outside evaluator to analyze the results and then provide a written report on what was learned.
Focus Groups
Focus groups are gatherings of up to ten participants who are representative of the larger target audience. They are brought together in a forum to discuss their views under the leadership of a trained moderator. Focus groups are widely used by marketing and advertising research firms to obtain insights into target audience perceptions and beliefs early in a program or campaign. The group atmosphere provides greater stimulation and richness of information than can be obtained through individual interviews.
Focus group interviews provide insight not only into what is preferred in a specific program, but why it is preferred, which is particularly important. Carefully designed and implemented focus group research has the potential for providing valuable information on important communication elements (such as appeal and perceived usefulness) and establishes the opportunity for investigators to probe for detail that might not be available through more quantitative methods. Focus groups can be particularly useful in identifying unsuccessful approaches before significant effort and money are expended on them. As with personal interviews, focus groups should be taped and transcribed verbatim and/or videotaped. Videotaping, however, is costly, and the benefits should be weighed against the costs. The focus group process is discussed in greater detail in the Community How To Guide on Needs Assessment and Strategic Planning.
Participant Observation
Participant observation involves the evaluator participating in the event being observed. For instance, if the project is evaluating the effectiveness of its media outreach efforts, the evaluator may attend a news conference to determine if the event was successful. The evaluator may look for any barriers that prevent people from participating, the smoothness of the operation, the level of enthusiasm of participants, the areas of success and the areas of weakness. The number of events to observe is based on the objective of the evaluation. Participant observation can be direct, where people know they are being observed, or unobtrusive, where the observer does not directly participate in the event or activity.
1. Write a statement defining the purpose(s) of the evaluation. An unfocused evaluation cannot accomplish its intended goal.
A statement defining the evaluation purpose for an underage drinking prevention project may be the following:
To learn whether the enforcement, prevention and education and public policy initiatives undertaken by our project are changing the attitudes and behavior of both youth and adults in our community toward underage drinking.
In some cases, the evaluation may be focused on one or more of the activities of the overall underage drinking prevention project. For instance:
To learn whether increased compliance checks have had a measurable impact on sales to minors in the community.
2. Determine budgetary and other resource needs
The type of evaluation to be conducted will be determined by the amount of funding and other resources that are available. For instance, if the organization does not have enough money to hire an independent, outside evaluator, then other methods, such as finding a college or university that will donate these services, would be part of the plan. If the plan calls for surveys or focus group meetings, is there a sufficient number of people in the coalition who can assist with these tasks?
3. Define the target population.
The target population can vary depending on the objective. In the first example above, the target population would include all underage youth and adults. The target population for the second example would be liquor licensees and their employees.
4. Write down the type of information to be collected.
In the Community How to Guide on Needs Assessment and Strategic Planning, underage drinking prevention organizations are urged to conduct a careful needs assessment and then to use the findings as the basis for their strategic plan. This information can also serve as a baseline and be the starting point for an evaluation plan.
For instance, using the first example of the evaluation plans purpose, the information to be collected would include the following:
All of this information should be in the coalition or organization's needs assessment, which serves as both the basis of the strategic plan and the baseline for evaluation efforts. It is essential that projects have a baseline. The baseline documents the situation or problem before a project or activity is implemented. Once implementation has occurred, the project can then compare the evaluation results to the baseline to determine whether the effort had any effect. Without a baseline, it would be impossible to show actual improvement.
The second example, involving the effectiveness of compliance checks, would also require a baseline, established through a pre-test before and a post-test after the enforcement action. The pre-test would determine the number of retailers who sell to minors, and the post-test would determine how many sold to minors after the compliance check program was initiated.
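The pre-/post-test comparison described above amounts to comparing two rates. A minimal sketch, with invented counts standing in for real compliance check records:

```python
def sale_rate(sold_to_minors, outlets_checked):
    """Fraction of checked outlets that sold alcohol to a minor."""
    return sold_to_minors / outlets_checked

# Hypothetical figures: 40 outlets checked before and after the program.
pre = sale_rate(18, 40)   # pre-test, before compliance checks began
post = sale_rate(6, 40)   # post-test, after the program was initiated
print(f"Sales to minors fell from {pre:.0%} to {post:.0%}")
# prints "Sales to minors fell from 45% to 15%"
```

Checking the same number of outlets before and after keeps the two rates directly comparable.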
5. Choose the type of evaluation to be used.
6. Determine what methods will be used for collecting the information.
7. Collect the information and analyze the results.
Usually the individual who analyzes the data is a professional evaluator who has the ability to interpret what was learned from the information and data. A college or university professor or graduate student, a state or local health department evaluator or an evaluator from the state traffic safety or substance abuse prevention and treatment agency may be able to help. Advertising, marketing and public relations agencies may also conduct evaluations of their campaigns and may be willing to contribute their expertise.
8. Write an evaluation report describing the evaluation results.
The report can be simple or complex, depending upon the needs of the coalition or organization. If the evaluation is being used to justify further funding, then the information that is collected and analyzed should reflect the elements in the original funding request. In addition, the final report should be in a form consistent with what is required by the funding agency or organization.
The appendix of this guide includes Appendix #2, an Evaluation Plan Worksheet, to assist coalitions and organizations in developing their evaluation plan.
To obtain a truly objective view of the underage drinking prevention project's effectiveness, it is a good idea to hire an outside evaluator. An outside evaluator has no history with the organization and can offer a new perspective and provide fresh insights. The people who are actually planning and implementing a program often cannot see the forest for the trees and need an outside perspective. Hiring an outside evaluator, however, costs money, and it is important that the coalition or organization recognize this fact and ensure there is sufficient funding to complete the task. In some cases, the cost of evaluation is part of an overall budget request and is usually 20-25% of the total amount of the funding request.
When selecting an outside evaluator, consider the following:
When looking for an individual to do qualitative evaluation, make sure they have the following qualifications:
In some cases, coalitions and organizations may not have the resources for professional evaluators to perform qualitative evaluation. There may, however, be individuals in the coalition who have these skills and can perform the tasks just as well.
In this guide, see Appendix #3, Questions for Personal Interviews; Appendix #4, Questions for Focus Groups; and Appendix #5, Participant Observation Form.
The National Center for Injury Prevention and Control offers the following information on the characteristics of a suitable evaluator:
This guide is designed to explain the evaluation process in such a way that it is no longer an afterthought in an organization's planning process, but a critical first step that can help the coalition or organization continue to be successful. Following are some conclusions on evaluation.
Evaluation does not need to be difficult.
In order to obtain useable results, keep the evaluation as simple and straightforward as possible. Resist anyone who tries to expand the focus or complicate the design. Keep the level of evaluation consistent with the size of the project and the objectives you are trying to meet.
It does not have to be expensive.
A project can learn a great deal if it does the following:
Take advantage of the resources that exist in the community. The project might be able to convince a university professor or another professional from outside the group to assist with an evaluation. If an outside evaluator is hired, the project can recruit volunteer data collectors from members of the coalition or organization. Work with the evaluator to identify activities on which you can economize, and which areas are worth spending a little extra.
Investing in evaluation can save time and dollars over the long haul.
With the information learned from a thorough evaluation, the project can focus resources on the most critical problems and the most effective countermeasures. The project will also be able to adjust programs mid-stream to improve effectiveness. And most importantly, the project will be much more likely to convince funding sources that their dollars have been well spent, which means that the project is a good investment for the future.
PDF -- Evaluation Plan Worksheet
PDF -- Sample Personal Interview Questions
What about peers over 21 who purchase alcohol for youth under the age of 21?
What about bars/restaurants/liquor stores that sell to minors under age 21?
PDF -- Sample Focus Group Questions
1. Is underage drinking a serious problem in (name of the
community/town/county)?
If yes, why?
If no, why?
Probes
Does anyone know or come in contact with underage youth who drink?
Has there been an alcohol-related incident (crash, death, injury) involving an underage youth?
2. Do all youth engage in underage drinking or is it just
a few?
Probes
Is underage drinking more of a problem for some young people than others?
Is underage drinking just a common rite of passage?
3. What do you think causes underage drinking?
Probes
Is it the fault of parents?
Is it the youth?
4. Does the community send mixed messages to youth about
underage drinking?
Probes
Is there a lot of outdoor alcohol advertising?
Do adults permit underage drinking?
5. What are the barriers to solving the problem of
underage drinking?
Probes
Who or what would stand in the way of effective solutions?
What prevents the problem from being solved now?
6. What are your suggestions for solving the problem of
underage drinking?
Probes
Should there be more education in the schools?
Should there be stricter enforcement?
7. What do you think your agency/organization/institution's role is in addressing the problem of underage drinking?
Probes
What kinds of programs or activities does your agency/organization/institution do for youth?
Does your agency/organization/institution pay enough attention to the problem of underage drinking?
PDF -- Participant Observation Form
The Art of Appropriate Evaluation:
A Guide for Highway Safety Program Managers
National Highway Traffic Safety Administration
Office of Research and Evaluation
400 Seventh St., SW
Washington, D.C. 20590
202-366-9588
Fax: 202-366-2766
Web site: http://www.nhtsa.dot.gov
Demonstrating Your Program's Worth:
A Primer on Evaluation Programs
To Prevent Unintentional Injury
Centers for Disease Control
National Center for Injury Prevention and Control
Mailstop K65
4770 Buford Highway, NE
Atlanta, GA 30341-3724
770-488-1506
Fax: 770-488-1667
Web site: http://www.cdc.gov/ncipc
Guide to Conducting Youth Surveys
Underage Drinking Enforcement Training Center
Pacific Institute for Research and Evaluation
11140 Rockville Pike, 6th Floor
Rockville, MD 20852
301-984-6500
Fax: 301-984-6559
Web site: http://www.pire.org/udetc
National Survey of Drunk Driving Attitudes,
1997 (DOT HS 808-844)
National Highway Traffic Safety Administration
Office of Research and Traffic Records
400 Seventh St., SW
Washington, D.C. 20590
Fax: 202-366-7096
Web site: http://www.nhtsa.dot.gov
Understanding Evaluation: The Way
to Better Prevention Programs, 1993
By Lana Muraskin
U.S. Department of Education
Office of Elementary and Secondary Education
Safe and Drug Free Schools
400 Maryland Avenue, SW
Washington, DC 20202
800-USA-LEARN
Fax: 202-401-0689
Web site: http://www.ed.gov/offices/OESE/SDFS
Youth Risk Behavior Survey
Centers for Disease Control
4770 Buford Highway, NE
Atlanta, GA 30341-3724
770-488-1506
Fax: 770-488-1667
Web site: http://www.cdc.gov/nccdphp/dash/yrbs
American Evaluation Association
PO Box 704
Point Reyes CA 94956
888-311-6321
Web site: http://www.eval.org
The American Evaluation Association is an international
professional association of evaluators devoted to the application
and exploration of program evaluation, personnel evaluation,
technology, and many other forms of evaluation. The
association's mission is to improve evaluation practices and
methods, increase evaluation use, promote evaluation as a
profession and support the contribution of evaluation to the
generation of theory and knowledge about effective human action.
The Evaluation Exchange
Harvard Family Research Project
38 Concord Avenue, Cambridge, MA 02138
617-495-9108
Fax: 617-495-8594
Web site: http://gseweb.harvard.edu/~hfrp
The Evaluation Exchange is an interactive forum for the exchange
of ideas, lessons, and practices in the evaluation of family
support and community development programs, promoting discussion
among persons from a variety of organizational affiliations and
viewpoints. Vol. V, No. 1 1999 of the exchange focuses on
evaluating programs serving children and youth.
Innovation Network, Inc. (InnoNet)
1001 Connecticut Avenue, NW, #900
Washington, D.C. 20036
202-728-0727
Fax 202-728-0136
Web site: http://www.inetwork.org
The Innovation Network, Inc. (InnoNet) is an organization
dedicated to helping small- to medium-sized nonprofit
organizations successfully meet their missions. The purpose of
their web site is to provide the tools, instruction and guidance
framework to create detailed program plans, evaluation plans
and fund-raising plans.
Outcome Measurement
Resource Network
United Way of America
701 N. Fairfax Street
Alexandria, Virginia 22314-2045
703-836-7100
Web site: http://www.unitedway.org
The Resource Network's purpose is to provide United Way of
America's (UWA) and other organizations' outcome
measurement resources and learning. The network includes the
following: a section on FAQ (Frequently Asked Questions),
descriptions of UWA outcome measurement publications, pricing,
and ordering information and selected outcome and performance
measurement initiatives of United Ways, health and human service
agencies, governmental and other nonprofit organizations, and
links to other internet resources. The network is available for
use by the general public.
Board Assessment of the Organization: How
Are We Doing?
by Peter Szanton
National Center for Non-Profit Boards
1828 L Street, NW, Suite 900
Washington, DC 20036-5104
202-452-6262 or 800-883-6262
Fax: 202-452-6299
Web: http://www.ncnb.org
This booklet provides key questions that board members and
executive directors should ask when assessing their
organization's performance. It also explains how to
determine who should perform the evaluation, what it should
examine, when it should be performed, and how it should be
conducted.
Empowerment Evaluation:
Knowledge and Tools for Self-assessment and Accountability
by D.M. Fetterman, S. Kaftarian, and A. Wandersman (1996)
Sage Publications Ltd
6 Bonhill Street
London
EC2A 4PU
United Kingdom
Telephone: +44 (0)171 374 0645
Fax: +44 (0)171 374 8741
E-mail: online-info@sagepub.co.uk
Book orders hotline: +44 (0)171 330 1234
Web site: http://www.sagepub.co.uk
Empowerment evaluation, a method for using evaluation
concepts, techniques and findings to foster improvement and
self-determination, is the focus of this book. After an
examination of the method as it has been adopted in academic and
foundation settings, the book looks at the various contexts in
which empowerment evaluation is conducted, ranging from resistant
environments to responsive environments. Critical concerns in
empowerment evaluation, such as the role of empowerment theory
and multiple levels of empowerment from individual to societal,
are then discussed. The book also provides tools and technical
assistance needed to conduct empowerment evaluation. The
concluding section of the book serves to strengthen the links
between empowerment evaluation and community-capacity building.
Evaluator's Handbook
by Joan L. Herman, Lynn Lyons Morris, and Carol Taylor
Fitz-Gibbon (1997)
Sage Publications Ltd
6 Bonhill Street
London
EC2A 4PU
United Kingdom
Telephone: +44 (0)171 374 0645
Fax: +44 (0)171 374 8741
E-mail: online-info@sagepub.co.uk
Book orders hotline: +44 (0)171 330 1234
Web site: http://www.sagepub.co.uk
This volume is at the core of the Program Evaluation Kit. It
takes a step-by-step approach to evaluation, using non-technical
language to explain procedures to novice evaluators. This edition
reflects the current emphasis on continuous evaluation throughout
the process of program development. New references and the
inclusion of evaluation standards are also a feature. The
Evaluator's Handbook is illustrated with examples,
suggestions, worksheets and sample forms for the reader's
own use. At appropriate points, it refers readers to other
volumes in the Kit for further information.
Handbook of Practical Program Evaluation (1994)
by Joseph S. Wholey, Harry P. Hatry, and Kathryn E. Newcomer
Jossey-Bass Publishers
350 Sansome Street
San Francisco, CA 94104
888-378-2537 800-956-7739
Web site: http://www.josseybass.com
Experts in the field of program evaluation outline efficient and
economical methods of assessing program results and identifying
ways to improve program performance. From simple evaluation to
more thorough examinations, the authors describe the nuts and
bolts of how to create evaluation design and how to collect and
analyze data in a way that will result in low cost and successful
evaluations.
Promising Approaches in the Prevention of Underage
Drinking: A Final Report
National Association of Governors' Highway Safety
Representatives (NAGHSR)
750 1st Street, NE, Suite 720
Washington, DC 20002
202-789-0942
Fax: 202-789-0946
Web site: http://www.naghsr.org
The document, jointly developed by NAGHSR and the National
Association of State Alcohol and Drug Abuse Directors (NASADAD)
for the National Highway Traffic Safety Administration and the
Center for Substance Abuse Prevention, contains case studies of
state-wide activities to prevent underage drinking in nine
states. Case studies include: North Dakota: Alternative
Activities Through the Community Traffic Safety Program Network;
New York: Athletes Helping Athletes, Inc. of Long Island;
New Jersey: Smoke and Alcohol-Free Residence Halls and
Campus Entertainment Centers; Massachusetts: Working with
Servers and Sellers to Restrict Access; Virginia: Combating
Fraudulent Identification Use; Washington: Talking to Your
Kids About Alcohol; Maryland: Maryland Underage Drinking
Prevention Coalition; California: Teenwork; and Ohio:
None for Under 21. The report summarizes common themes and
includes contact names and NAGHSR and NASADAD membership lists.
Self-Assessment for Nonprofit Boards
by Larry Slesinger
National Center for Non-Profit Boards
1828 L Street, NW, Suite 900
Washington, DC 20036-5104
202-452-6262 or 800-883-6262
Fax: 202-452-6299
Web: http://www.ncnb.org
This book shows boards how to evaluate their overall performance
in a number of areas as well as each member's contribution
to the board's work. It includes a 50-page user's guide
and 15 copies of a 20-page questionnaire for each member to fill
out. Additional questionnaires are available separately.
W.K. Kellogg Foundation Evaluation Handbook
W.K. Kellogg Foundation
One Michigan Avenue East
Battle Creek, Michigan 49017-4058
616-968-1611
Web site: http://www.wkkf.org/Publications/evalhdbk/default.htm
This handbook is guided by the belief that evaluation should be
supportive and responsive to projects, rather than become an end
in itself. It provides a framework for thinking about evaluation
as a relevant and useful program tool. It is written primarily
for project directors who have direct responsibility for the
ongoing evaluation of W.K. Kellogg Foundation-funded projects.
However, it is the hope of the foundation that project directors
will use this handbook as a resource for other project staff who
have evaluation responsibilities, for external evaluators, and
for board members.
DOT HS 809 209
March 2001