Program managers can use a variety of methods for evaluating programs, which yield different results and have different uses and purposes. Evaluation design and scope dictate the required resources. Several common types of evaluation are:
Process evaluations, which assess the performance or completion of steps taken to achieve desired program outcomes. Process evaluation can occur throughout the project cycle and can guide managers in making changes to maximize effectiveness. Examples of process measures are the number of ads shown in a media campaign or the number of community partners.
Output measures, which are commonly used in process evaluations, help gauge a program's processes; they describe a program's activities (e.g., how many older adults participated in a walk or how many classes were convened), rather than the ultimate effect of the program (e.g., changes in health). Output measures allow program managers to plan appropriately for clients or classes. Program planners can also use outputs to identify a need to better tailor programs to a target population (for example, if older adults are not joining a walking group) or monitor changes in program outputs (fewer older adults in the walking group than before).
Outcome evaluations, which consider program goals to determine whether desired changes in attitudes, behavior, or knowledge have been attained as a result of the intervention. Outcome metrics are usually measured at the beginning and end of a project cycle or program. Examples of outcomes include positive changes in health status or a quantifiable increase in the number of seniors who walk.
Community outcomes are not visions or goals, but specific changes or benefits that involved organizations hold themselves accountable for influencing.
Impact evaluations, which seek to isolate a program's impact on participants and communities while filtering out effects from other potential sources (e.g., weather, other programs).1 Impact evaluations require a higher level of technical expertise, but they are considered the "gold standard" of evaluation. These evaluations (known as "experimental" or "quasi-experimental" studies) compare a group receiving services with a similar group that does not.
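To make that comparison concrete, the sketch below illustrates the arithmetic behind one simple quasi-experimental approach, a difference-in-differences estimate. The walking rates are hypothetical and do not come from any program described here.

```python
# Minimal difference-in-differences sketch with hypothetical data:
# shares of residents walking regularly, before and after a campaign,
# in the intervention community and in a comparison community.
intervention_before, intervention_after = 0.40, 0.55
comparison_before, comparison_after = 0.42, 0.45

# The comparison community's change approximates what would have
# happened anyway (weather, other programs, seasonal trends).
background_change = comparison_after - comparison_before  # 0.03
raw_change = intervention_after - intervention_before     # 0.15

# Filtering out the background change isolates the program's effect.
estimated_impact = raw_change - background_change         # 0.12
print(f"Estimated program impact: {estimated_impact:.0%} of residents")
```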
In Sacramento, California, program planners measured the number of participants in walking groups as well as walkers' satisfaction. Although not a formal evaluation, this basic assessment of program processes provided managers with information indicating a need to modify the program. Walkers told them that the group meeting place affected their willingness to participate.
Depending on the scope, design, and purpose of program evaluations, a range of staff and researchers can conduct them. For process-oriented, day-to-day evaluation, program staff can collect and monitor information on various program features and collaborate with managers to analyze and interpret data. This type of evaluation, sometimes referred to as an internal evaluation, is conducted on a routine basis to review objective aspects of a program.
For in-depth evaluations, the Centers for Disease Control and Prevention (CDC) recommends a team approach.2 The team may include technical experts, such as statisticians or epidemiologists; program staff and management; stakeholders; and trusted members of the community with no vested interest in the outcome of the evaluation. Participation from outside the program provides fresh insight and increases the credibility of the evaluation. These external evaluations often focus on the outcomes or impact of a program.
In Largo, Florida, active aging program managers will measure blood pressure and pulse before and after the intervention to determine the quantifiable effects of their program on senior health. Largo is also building new urban trails and could measure the number of people walking in the community before and after the trails are built. Managers could survey or count people using the trails, and businesses along the trails could compare sales or customer traffic before and after the trails are built. Because bus routes will be connected to the trail system and every bus is equipped with a bike rack, bus drivers could survey passengers going to urban trails or using bike racks. Any such evaluation should collect data both before and after the trails' completion.
Program managers can choose from an array of evaluation designs, methods, and evaluators. If it is not feasible for a program to conduct an external evaluation, program managers and staff can learn a great deal from regular program assessments. Conducting any evaluation of a program (judging the satisfaction of participants, the number of classes held, or the impact) is better than no evaluation at all. Program managers who do not assess the direction, methods, potential impact, and outcomes may have a limited understanding of their program and may lack the data to justify the program to funding agencies.
An impact evaluation of the Wheeling, West Virginia media campaign, Wheeling Walks, led program managers to document a 30 percent increase in walking in the community as a result of the campaign. The evaluation compared walking rates in Wheeling before and after the campaign to rates in a similar community without the intervention.
The CDC recommends a series of steps for program evaluation (Figure 1).3 This framework was developed for community initiatives. The steps are as follows:
Engage stakeholders: During this step, evaluators ask partners and stakeholders to provide input into evaluation design and data analysis. Stakeholders may include program managers, collaborators, members of the population served, and decision makers. In addition to informing program and evaluation efforts, stakeholders help ensure that the program meets the needs of the community and can identify issues that require consideration.
Describe the program: Evaluators next describe in detail the mission, goals, objectives, and program strategies. The description explains the needs addressed by the program, the expected outcomes of program activities and strategies, and the available resources. The program description forms the basis for a "logic model." The description also presents the program's developmental stage – whether it is a new or established program – which can affect the type of measures considered. A newer program that has been in the community for a short time will not have discernible long-term effects, and evaluation measures should reflect this.
Finally, evaluators consider external factors that can affect program success. Creating Communities for Active Aging lists external factors that commonly influence older adults' walking practices.
Focus the evaluation design: Managers must choose an evaluation methodology and measures to accurately assess the process or outcomes of the program while minimizing cost and time. To focus the evaluation design, managers should articulate the purpose of the evaluation, such as to improve the program's functioning (process evaluation) or to assess the effectiveness of the intervention (outcome or impact evaluation). Managers should also define the ultimate users (audiences). Methods should be directly connected to the planned use of the data. The next step is to design the evaluation methodology. Methods can include questionnaires or surveys, quasi-experimental studies, and structured qualitative interviews.
What Are Logic Models? How Are They Used?
Program planners and managers use logic models to outline the steps of a program. The models begin with the problem or opportunity in question and examine the critical steps the program will undertake to bring about a desired change. Logic models also identify the external influences at work in the community that could potentially affect the outcome, as well as the resources required to change the outcome.
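As an illustration only, the elements of a logic model can be captured as simple structured data. The entries below are hypothetical ones for a community walking program, not a standard or required format.

```python
# A hypothetical logic model for a community walking program,
# captured as structured data. Stages run from the problem through
# the desired change, with external influences and resources noted.
logic_model = {
    "problem": "low physical activity among older adults",
    "resources": ["volunteer walk leaders", "parks department support"],
    "activities": ["recruit walk leaders", "hold weekly group walks"],
    "outputs": ["number of walks held", "number of participants"],
    "outcomes": ["more weekly walking minutes", "improved blood pressure"],
    "external_factors": ["weather", "neighborhood safety", "other programs"],
}

# Print the model stage by stage for review with stakeholders.
for stage, items in logic_model.items():
    print(f"{stage}: {items}")
```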
Regardless of the chosen approach, the methodology should be well researched and adhere to the highest standards of science. All evaluations should actively ensure participants' confidentiality, and data should be reliable and accurate.
Evaluators should balance the following standards to ensure evaluation effectiveness: utility, feasibility, propriety, and accuracy.
Because community programs may have a small number of participants and key stakeholders may be easily identified, evaluators must ensure that all information is anonymous or that the identities of those who provided information are kept confidential. Doing so protects participants and maintains the integrity of the data.
After the method for data collection is determined, evaluators should plan for data analysis using accepted statistical and research methods. The type of analysis chosen will depend on the desired uses of the information. For a complex analysis, program planners may tap local experts for assistance.
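For example, a pre/post comparison like Largo's planned blood pressure measurements could be analyzed with a paired test. The sketch below uses hypothetical readings and assumes the scipy library is available; an actual analysis should be planned with a statistician.

```python
# Paired pre/post analysis with hypothetical systolic blood pressure
# readings (mm Hg) for the same eight participants before and after
# a walking program. Requires scipy (pip install scipy).
from scipy import stats

before = [148, 152, 139, 160, 145, 155, 150, 142]
after = [140, 147, 138, 151, 141, 150, 146, 139]

# A paired t-test compares each participant with themselves, which
# suits measures taken at the beginning and end of a project cycle.
result = stats.ttest_rel(before, after)
mean_change = sum(a - b for a, b in zip(after, before)) / len(before)
print(f"Mean change: {mean_change:.1f} mm Hg (p = {result.pvalue:.3f})")
```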
Involving stakeholders in the design of both the program and the evaluation ensures that all points of view are considered, that the program addresses the population's needs, and that the data will be meaningful to users; this involvement helps ensure that primary audiences will find the resulting data credible.
Justify conclusions: After data collection, evaluators synthesize, analyze, and interpret the data using the previously determined methods. Standards against which to compare data – such as a baseline measurement, a comparison to previous years or studies, or measures from comparable communities or the United States as a whole – help determine whether the program is functioning well or achieving the desired outcomes. Program planners, with stakeholder involvement, can then use the evaluation data to recommend program changes and/or create other programs.
Although many program managers appreciate the utility of evaluations, many programs evaluate neither process nor outcomes because of the perceived challenges associated with evaluation. The following are common evaluation challenges and suggested strategies for meeting them:
The Cost Challenge: Program evaluation can be expensive. A rigorous evaluation can cost more than a program has allotted for evaluation activities.
The Time Challenge: Evaluation efforts may be time-consuming and could divert staff from day-to-day program functioning.
The Expertise Challenge: Most evaluation efforts require only a minimal level of expertise. Although complicated analyses or study designs require expertise in research and statistics, many meaningful lessons can be learned from simple process evaluations.
The Robert Wood Johnson Foundation
The Robert Wood Johnson Foundation (RWJF), a philanthropic organization dedicated to improving health and health care in the US, currently funds several program evaluation projects. A sampling of evaluation projects from their website shows expenditures ranging from $32,000 for eight months to $671,000 over four years. Source: www.rwjf.org, accessed on August 12, 2002.