Plan

Define evaluation questions

A key activity during evaluation planning is to develop the questions that you would like to answer through your evaluation. This should be done together with the entire evaluation team. Evaluation questions are broad; they are not the specific questions that might appear on a survey or interview guide, but rather reflect the priorities of the evaluation team.

Conduct literature review

Conducting a literature review is an important step in developing and implementing an evaluation framework. At the planning stage, it can help identify other evaluations of similar programs that have been conducted, their key evaluation questions, what outcomes were assessed, the measures used, and any barriers or challenges encountered.

This information can be used to help inform the development of your own evaluation framework. At the implementation stage, it is important to investigate evidence-informed practices as they relate to the program, as this will help inform optimal program delivery.

Decide on evaluation strategy

The evaluation strategy should be designed to provide the information needed to answer the evaluation questions of the intended users. Both complex and simple evaluations should be equally rigorous in relating the methodology to the intended use of the evaluation. For instance, if the intended use of the evaluation is to inform a decision-making process (e.g. to continue, expand or terminate funding), then the complexity of the evaluation methodology needs to reflect the impact of the decisions that will be taken.

Design (logic model)

Logic models were developed as tools to clarify the nature of a program and its intended effects. A logic model of the program is a useful planning tool that provides a diagrammatic description of a program by depicting its goals and objectives, the component activities needed to accomplish the goals, their outputs (countable by‐products of each component), short and long term outcomes (direct results or accomplishments) and impacts (effects for which the program can claim only partial responsibility).

Evaluation design is the logic model used to arrive at conclusions about outcomes. In selecting the evaluation design, the evaluator must simultaneously determine the type of information to be retrieved and the type of analysis this information will be subjected to. For example, to assess the extent to which a program has achieved a given objective, an indicator of achievement must be determined, along with an analytic technique for isolating the effect of the program. Evaluation designs provide the logical basis for measuring results and for attributing results to programs.
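To make this concrete, the sketch below works through one common analytic technique for isolating a program effect: a simple pre/post comparison against a group that did not receive the program (a difference-in-differences calculation). The groups and scores are hypothetical, and this is only one of several possible designs.

```python
# Illustrative only: isolating a program effect with a simple
# difference-in-differences calculation on hypothetical mean scores.

# Hypothetical mean knowledge scores (0-100) before and after the program.
program_group = {"pre": 58.0, "post": 74.0}      # received the program
comparison_group = {"pre": 57.0, "post": 63.0}   # did not receive the program

# Change observed in each group.
program_change = program_group["post"] - program_group["pre"]           # 16.0
comparison_change = comparison_group["post"] - comparison_group["pre"]  # 6.0

# The comparison group's change approximates what would have happened
# without the program; the remaining difference is attributed to the program.
program_effect = program_change - comparison_change  # 10.0 points

print(f"Estimated program effect: {program_effect:.1f} points")
```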

Logic Model Components

  • Goal: What is the overall purpose of your program? Why are you doing it?
  • Participants: For whom is your program designed? Who will benefit from it?
  • Activities: What activities are needed to run the program?
  • Outputs: What are the tangible products of your activities?
  • Outcomes: What changes do you expect to occur as a result of the program? How will you know that the intended participants benefit from the program?
  • Impacts: What longer term changes do you expect to result from the program (recognizing that they also may be influenced by factors external to the program)?

Example of a logic model.
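As a rough illustration, the components listed above could also be recorded in a simple data structure so they can be revisited and updated as the evaluation plan evolves. The example program, entries and field names below are hypothetical.

```python
# Illustrative only: one way to record the components of a logic model
# so they can be reviewed alongside the evaluation plan. All entries are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    goal: str
    participants: List[str] = field(default_factory=list)
    activities: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)    # countable by-products
    outcomes: List[str] = field(default_factory=list)   # short/long term changes
    impacts: List[str] = field(default_factory=list)    # partially attributable effects

model = LogicModel(
    goal="Improve hand hygiene practices among clinical trainees",
    participants=["First-year clinical trainees"],
    activities=["Monthly workshops", "Ward-based reminders"],
    outputs=["Number of workshops delivered", "Attendance counts"],
    outcomes=["Increased knowledge of hygiene protocols", "Observed behaviour change"],
    impacts=["Reduced hospital-acquired infections"],
)
print(model.goal)
```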


Outcome measures

Outcome evaluation focuses on measuring the intended effects of the program on the targeted population: short and/or intermediate outcomes such as changes in knowledge, skills, attitudes and behaviour. Although an important part of evaluating a program, measuring outcomes can be complex and time-consuming. When planning an evaluation, it is important to focus on the key outcomes that matter to stakeholders in order to keep the evaluation feasible.

To narrow the list of outcomes that you plan to measure, it is helpful to ask the following questions:

  • Is this outcome important to stakeholders? Different outcomes may have different levels of importance to different stakeholders. It will be important to arrive at some consensus.
  • Is this outcome within our sphere of influence?
  • Will the program be at the right stage of delivery to produce the particular outcome? Ensure that the intended outcomes are achievable within the timelines of the evaluation.
  • Will we be able to measure this outcome? There are many standardized measures with strong validity and reliability that are designed to measure specific outcomes.

Develop data collection tools (survey, interview guide, etc.) and procedures, and train data collectors. Consider whether incentives are appropriate and brainstorm ways to enhance response rates. To ensure validity, pilot test tools and procedures and closely monitor the data gathered. If issues arise, modify the tools and procedures and document the changes. Computerize data collection to facilitate analysis where appropriate.
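As one way to support pilot testing and monitoring of a computerized tool, the sketch below checks individual survey responses against simple validation rules. The question names, allowed ranges and sample response are hypothetical.

```python
# Illustrative only: a simple validation step for a computerized survey,
# useful when pilot testing tools and monitoring incoming data.
# Question names, allowed ranges, and the sample response are hypothetical.

survey_rules = {
    "age": {"type": int, "min": 18, "max": 99},
    "satisfaction": {"type": int, "min": 1, "max": 5},   # 5-point Likert item
    "attended_all_sessions": {"type": bool},
}

def validate_response(response: dict) -> list:
    """Return a list of problems found in a single survey response."""
    problems = []
    for question, rule in survey_rules.items():
        value = response.get(question)
        if value is None:
            problems.append(f"{question}: missing (possible non-response)")
            continue
        if not isinstance(value, rule["type"]):
            problems.append(f"{question}: expected {rule['type'].__name__}")
            continue
        if "min" in rule and not rule["min"] <= value <= rule["max"]:
            problems.append(f"{question}: {value} outside {rule['min']}-{rule['max']}")
    return problems

print(validate_response({"age": 17, "satisfaction": 6}))
```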

http://www.nccmt.ca and http://cbpp-pcpe.phac-aspc.gc.ca/resources/planning-public-health-programs are two databases with collections of measures for program evaluation.

Analysis technique

Implement strategies to review data quality during and after data collection. During data collection, look closely at the first wave of responses and at the number of 'no response' or refusal cases, and keep data collectors and the evaluation lead in close contact. After data collection, enter the data and double-check the quality and consistency of entry; sort to find missing, unusually high or unusually low values (quantitative); and check content by reviewing the transcripts entered (qualitative). Organize the data in a format that can be summarized and interpreted. Analyze by conducting statistical analysis of quantitative data and identifying themes in qualitative data. This is a technical step, so enlist expert support when possible; it sets the stage for interpretation.
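For the quantitative side, a minimal sketch of these quality checks using pandas might look like the following; the file name, column names and thresholds are assumptions for illustration.

```python
# Illustrative only: basic quality checks on collected quantitative data
# using pandas. The file name and column names are hypothetical.
import pandas as pd

df = pd.read_csv("survey_responses.csv")   # hypothetical exported data

# Count missing values per question to spot non-response patterns.
print(df.isna().sum())

# Sort and summarize to find unusually high or low values.
print(df["knowledge_score"].sort_values().head())
print(df["knowledge_score"].describe())

# Flag out-of-range entries for follow-up before analysis.
out_of_range = df[(df["knowledge_score"] < 0) | (df["knowledge_score"] > 100)]
print(f"{len(out_of_range)} responses need checking")
```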

Quantitative analysis

Quantitative analysis is useful for producing descriptive statistics and for making inferences from the variables used. The latter is done using statistical methods such as analysis of variance, which can determine whether differences in measurement scores are statistically significant. For quantitative data analysis, there are sophisticated software programs that you can purchase, such as IBM SPSS Statistics (formerly SPSS), SAS and others. Programs like R are free, but require a more sophisticated understanding of statistics and programming languages.
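As a small illustration of the kind of inference described above, the sketch below runs a one-way analysis of variance with SciPy, a free alternative to the commercial packages mentioned; the groups and scores are hypothetical.

```python
# Illustrative only: a one-way analysis of variance comparing scores
# across three hypothetical groups.
from scipy import stats

group_a = [72, 75, 78, 80, 74]   # e.g. workshop participants
group_b = [65, 68, 70, 66, 69]   # e.g. online module participants
group_c = [60, 62, 61, 64, 63]   # e.g. no intervention

f_statistic, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_statistic:.2f}, p = {p_value:.4f}")

# A small p-value (commonly < 0.05) suggests the group means differ
# by more than would be expected from chance alone.
```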

Qualitative analysis

Qualitative data, such as interview transcripts, open-ended questions and journals, can provide a holistic view of the data. Qualitative data is easy to gather but difficult to analyze, and is best analyzed in conjunction with quantitative data. A common qualitative analysis software program is NVivo.
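As a very rough illustration only, the sketch below tallies keyword-based themes across a few hypothetical transcript excerpts; genuine qualitative coding (for example in NVivo) is an interpretive process that goes well beyond keyword matching.

```python
# Illustrative only: a very simplified keyword-based tally to get a first
# sense of recurring themes in interview transcripts. The transcripts and
# theme keywords are hypothetical.
from collections import Counter

transcripts = [
    "The sessions were helpful but scheduling was a constant barrier.",
    "I learned a lot; the hands-on practice made it stick.",
    "Scheduling conflicts meant I missed two sessions.",
]

theme_keywords = {
    "barriers": ["barrier", "conflict", "missed"],
    "skill development": ["learned", "practice", "hands-on"],
}

counts = Counter()
for text in transcripts:
    lowered = text.lower()
    for theme, keywords in theme_keywords.items():
        if any(keyword in lowered for keyword in keywords):
            counts[theme] += 1

print(counts)  # e.g. Counter({'barriers': 2, 'skill development': 1})
```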

(Optional) Pilot new measures

If new, untested surveys or questionnaires have been created, this is the best time to deploy them. Select a small group of students and run a pilot study with the new measures. The data gathered from the pilot should be used to inform the rest of the evaluation process.
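One common check at this stage is the internal consistency of a new multi-item measure. The sketch below estimates Cronbach's alpha from hypothetical pilot responses.

```python
# Illustrative only: estimating internal consistency (Cronbach's alpha)
# for a new multi-item measure from pilot responses. The data are hypothetical;
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores).
import numpy as np

# Rows = pilot participants, columns = items on the new questionnaire (1-5 scale).
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])

k = responses.shape[1]                              # number of items
item_variances = responses.var(axis=0, ddof=1)      # variance of each item
total_variance = responses.sum(axis=1).var(ddof=1)  # variance of total scores

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")  # values near or above 0.7 are often considered acceptable
```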