Program Evaluation 101

Contributed by
Robin Puett, MP


Definition of Program Evaluation

Evaluation is the systematic application of scientific methods to assess the design, implementation, improvement or outcomes of a program (Rossi & Freeman, 1993; Short, Hennessy, & Campbell, 1996). The term "program" may include any organized action such as media campaigns, service provision, educational services, public policies, research projects, etc. (Centers for Disease Control and Prevention [CDC], 1999).

Purposes for Program Evaluation

Demonstrate program effectiveness to funders
Improve the implementation and effectiveness of programs
Better manage limited resources
Document program accomplishments
Justify current program funding
Support the need for increased levels of funding
Satisfy ethical responsibility to clients to demonstrate positive and negative effects of program participation (Short, Hennessy, & Campbell, 1996).
Document program development and activities to help ensure successful replication

Barriers

Program evaluations require funding, time, and technical skills: requirements that are often perceived as diverting limited program resources away from clients. Program staff are often concerned that evaluation activities will inhibit timely access to services or compromise the safety of clients. Evaluation can also necessitate alliances between historically separate community groups (e.g., academia, advocacy groups, and service providers; Short, Hennessy, & Campbell, 1996). Mutual misperceptions regarding the goals and process of evaluation can result in adverse attitudes (CDC, 1999; Chalk & King, 1998).

Overcoming Barriers

Collaboration is the key to successful program evaluation. In evaluation terminology, stakeholders are defined as entities or individuals that are affected by the program and its evaluation (Rossi & Freeman, 1993; CDC, 1999). Involving these stakeholders is an integral part of program evaluation; they include, but are not limited to, program staff, program clients, decision makers, and evaluators. A participatory approach to evaluation based on respect for one another's roles and equal partnership in the process overcomes barriers to a mutually beneficial evaluation (Burt, Harrell, Newmark, Aron, & Jacobs, 1997; Chalk & King, 1998).

Identifying an evaluator with the necessary technical skills as well as a collaborative approach to the process is equally important. Programs have several options for identifying an evaluator: health departments, other state agencies, local universities, evaluation associations, and other programs can provide recommendations, and several companies and university departments providing these services can be located on the internet. Selecting an evaluator entails finding an individual who understands the program and the funding requirements for its evaluation, has demonstrated experience, and is knowledgeable about the issue the program is targeting (CDC, 1992).

Types of Evaluation

Various types of evaluation can be used to assess different aspects or stages of program development. Because terminology and definitions of evaluation types are not uniform, several common types are briefly introduced here.

Context Evaluation
Investigating how the program operates or will operate in a particular social, political, physical, and economic environment. This type of evaluation could include a community needs or organizational assessment (W.K. Kellogg Foundation, http://www.wkkf.org/Publications/evalhdbk/default.htm). Sample question: What are the environmental barriers to accessing program services?

Formative Evaluation
Assessing needs that a new program should fulfill (Short, Hennessy, & Campbell, 1996), examining the early stages of a program's development (Rossi & Freeman, 1993), or testing a program on a small scale before broad dissemination (Coyle, Boruch, & Turner, 1991). Sample question: Who is the intended audience for the program?

Process Evaluation
Examining the implementation and operation of program components. Sample question: Was the program administered as planned?

Impact Evaluation
Investigating the magnitude of both positive and negative changes produced by a program (Rossi & Freeman, 1993). Some evaluators limit these changes to those occurring immediately (Green & Kreuter, 1991). Sample question: Did participant knowledge change after attending the program?

Outcome Evaluation
Assessing the short- and long-term results of a program. Sample question: What are the long-term positive effects of program participation?

Performance or Program Monitoring
Similar to process evaluation, differing only by providing regular updates of evaluation results to stakeholders rather than summarizing results at the evaluation's conclusion (Rossi & Freeman, 1993; Burt, Harrell, Newmark, Aron, & Jacobs, 1997).

Evaluation Standards and Designs

Evaluation should be incorporated during the initial stages of program development. An initial step of the evaluation process is to describe the program in detail. This collaborative activity can create a mutual understanding of the program, the evaluation process, and program and evaluation terminology. Developing a program description also helps ensure that program activities and objectives are clearly defined and that the objectives can be measured. In general, the evaluation should be feasible, useful, culturally competent, ethical, and accurate (CDC, 1999). Data should be collected over time using multiple instruments that are valid, meaning they measure what they are supposed to measure, and reliable, meaning they produce similar results consistently (Rossi & Freeman, 1993). The use of qualitative as well as quantitative data can provide a more comprehensive picture of the program. Evaluations of programs aimed at violence prevention should also be particularly sensitive to issues of safety and confidentiality.

Experimental designs are defined by the random assignment of individuals either to a group participating in the program or to a control group not receiving the program. These ideal experimental conditions are not always practical or ethical within the "real world" constraints of program delivery. A possible solution that balances the need for a comparison group with feasibility is the quasi-experimental design, in which an equivalent group (e.g., individuals receiving standard services) is compared to the group participating in the target program. However, this design can make it difficult to attribute observed effects to the target program. Non-experimental designs may be the easiest to implement in a program setting and can provide a large quantity of data, but drawing conclusions about program effects from them is difficult.
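To make the random-assignment step that defines an experimental design concrete, the following is a minimal sketch in Python. It is an illustration only, not part of the original text; the participant labels, group sizes, and the fixed random seed are assumptions chosen purely for demonstration.

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle the participant list and split it into two equal-sized groups."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical participant identifiers, used only to illustrate the mechanics.
participants = ["participant_%02d" % i for i in range(1, 21)]
program_group, control_group = randomly_assign(participants, seed=42)

print(len(program_group), "assigned to the program group")
print(len(control_group), "assigned to the control group")
```

In a quasi-experimental design, by contrast, the comparison group would not be produced by a shuffle like this; it would be an existing, roughly equivalent group (such as clients receiving standard services), which is why attributing effects to the program is harder.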

Logic Models

Logic models are flowcharts that depict program components. These models can include any number of program elements, showing the development of a program from theory to activities and outcomes. Infrastructure, inputs, processes, and outputs are often included. The process of developing logic models can serve to clarify program elements and expectations for the stakeholders. By depicting the sequence and logic of inputs, processes and outputs, logic models can help ensure that the necessary data are collected to make credible statements of causality (CDC, 1999).
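As a simple illustration of how the sequence in a logic model can be written down explicitly, the sketch below records hypothetical program elements in order. The stage names and example entries are assumptions for demonstration only, not drawn from the original text.

```python
# A hypothetical logic model captured as a simple data structure.
logic_model = {
    "inputs": ["trained staff", "grant funding", "community partners"],
    "activities": ["outreach sessions", "support services"],
    "outputs": ["number of sessions held", "number of clients served"],
    "outcomes": ["increased participant knowledge", "sustained behavior change"],
}

# Reading the stages in order mirrors the left-to-right flow of the chart.
for stage in ("inputs", "activities", "outputs", "outcomes"):
    print(stage + ": " + ", ".join(logic_model[stage]))
```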

Communicating Evaluation Findings

Preparation, effective communication, and timeliness are necessary to ensure the utility of evaluation findings. Questions that should be answered at the evaluation's inception include: What will be communicated? To whom? By whom? And how? The target audience must be identified and the report written to address their needs, including the use of non-technical language and a user-friendly format (National Committee for Injury Prevention and Control, 1989). Policy makers, current and potential funders, the media, current and potential clients, and members of the community at large should be considered as possible audiences. Evaluation reports describe the process as well as findings based on the data (W.K. Kellogg Foundation, http://www.wkkf.org/Publications/evalhdbk/default.htm).

Recommendations

The National Research Council provides several recommendations for future violence prevention program evaluations. Some of these recommendations include: continued and expanded collaborations between evaluators/researchers and service providers, the use of appropriate measures and outcomes, the development and implementation of evaluations that address multiple services or multiple issues, and the allotment of resources to conduct quality evaluations (Chalk & King, 1998).

References

Burt, M. R., Harrell, A. V., Newmark, L. C., Aron, L. Y., & Jacobs, L. K. (1997). Evaluation guidebook: Projects funded by S.T.O.P. formula grants under the Violence Against Women Act. The Urban Institute. http://www.urban.org/crime/evalguide.html

Centers for Disease Control and Prevention. (1992). Handbook for evaluating HIV education. Atlanta, GA: Division of Adolescent and School Health.

Centers for Disease Control and Prevention. (1999). Framework for program evaluation in public health. MMWR Recommendations and Reports, 48(RR-11), 1-40.

Chalk, R., & King, P. A. (Eds.). (1998). Violence in families: Assessing prevention and treatment programs. Washington, DC: National Academy Press.

Coyle, S. L., Boruch, R. F., & Turner, C. F. (Eds.). (1991). Evaluating AIDS prevention programs: Expanded edition. Washington, DC: National Academy Press.

Green, L.W., & Kreuter, M. W. (1991). Health promotion planning: An educational and environmental approach (2nd ed.). Mountain View, CA: Mayfield Publishing Company.

National Committee for Injury Prevention and Control. (1989). Injury prevention: Meeting the challenge. American Journal of Preventive Medicine, 5(Suppl. 3).

Rossi, P. H., & Freeman, H. E. (1993). Evaluation: A systematic approach (5th ed.). Newbury Park, CA: Sage Publications, Inc.

Short, L., Hennessy, M., & Campbell, J. (1996). Tracking the work. In M. Witwer (Ed.), Family violence: Building a coordinated community response: A guide for communities (Chapter 5). American Medical Association.

W.K. Kellogg Foundation. W.K. Kellogg evaluation handbook. http://www.wkkf.org/Publications/evalhdbk/default.htm  
