What are evaluations?

Here, we mean the systematic investigation of the merit, worth, or significance of an object or effort. Evaluation involves carefully collecting information about a strategy or program in order to make necessary decisions about it. There are at least 35 different types of evaluation, including needs assessment, accreditation, cost/benefit analysis, effectiveness, efficiency, formative, summative, goal-based, process, and outcome evaluation.

Why are evaluations useful?

Evaluation may be used to generate information to improve a framework or program; this type is described as formative evaluation. Evaluation also supports “evidence-informed” decisions about how to improve program management. In addition, using evaluation for program improvement may promote a spirit of inquiry among program personnel, those most familiar with program function and the effects of policy. The opportunity to contribute to the further success of a program may improve employees’ morale and their commitment to an organization.

Evaluation may also be conducted to show accountability and to provide information about program effectiveness to decision makers; this type is described as summative evaluation. Program evaluation plays an important role in interpreting trends revealed through statistical analysis and in examining program implementation.

Outcome measurement and evaluation of the effectiveness of campus services are critical elements to understanding the needs of a student population and improving related programs and services.

When to evaluate?

As soon as possible! Ideally, the prospect of evaluating a framework or program should be integrated into its conception. When strategies are adopted on campus, being able to evaluate the effectiveness of those strategies should be part of the planning process. A simple pre-mid-post measurement design is useful in most cases to show whether a framework or program has had an effect on campus.
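To make this concrete, here is a minimal sketch of what data from a pre-mid-post design might look like and how a first, purely descriptive comparison could be run. The scores and group size are hypothetical placeholders, not real data.

```python
# Hypothetical outcome scores for the same group of students,
# measured before (pre), midway through (mid), and after (post)
# a campus program or strategy.
scores = {
    "pre":  [12, 15, 11, 14, 13],
    "mid":  [14, 16, 13, 15, 15],
    "post": [17, 18, 15, 17, 16],
}

def mean(values):
    return sum(values) / len(values)

# A first look: has the average score moved across the three waves?
for wave, values in scores.items():
    print(f"{wave}: mean = {mean(values):.1f}")

print(f"change from pre to post: {mean(scores['post']) - mean(scores['pre']):.1f}")
```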

Evaluation steps

Here is a brief summary of the main steps involved in evaluation. Jump to the bottom of the page to further explore these steps.

Prepare

  • Form an evaluation project team
    • Internal vs external evaluators
  • Identify stakeholders
  • Develop evaluation charter
  • Develop evaluation budget
  • Develop evaluation timeline and schedule
  • Research Ethics Board approval

Plan

  • Define evaluation questions
  • Conduct literature review
  • Decide on evaluation strategy
    • Design (logic model)
    • Outcome measures
    • Analysis technique
  • Pilot new measures (optional)

Share

  • Write evaluation report
  • Decide on dissemination method
  • Discuss implementation of change based on findings

Evaluation Tools

The sections below have been structured to provide a walk-through of how to support campuses in adopting performance criteria and data collection systems.

Start to explore tools and resources now!

Plan

Define evaluation questions

A key activity during evaluation planning is to develop the questions that you would like to answer through your evaluation. This should be done together with the entire evaluation team. Evaluation questions are broad; they are not the specific questions that might appear on a survey or interview guide, but rather reflect the priorities of the evaluation team.

Conduct literature review

Conducting a literature review is an important step in developing and implementing an evaluation framework. At the planning stages, it can help identify other evaluations of similar programs that have been conducted, their key evaluation questions, what outcomes were assessed, the measures used, and any barriers or challenges encountered.

This information can be used to help inform the development of your own evaluation framework. At the implementation stage, it is important to investigate evidence-informed practices related to the program, since these will help inform optimal program delivery.

Decide on evaluation strategy

The evaluation strategy should be designed to provide the information to answer the evaluation questions of intended users. Both complex and simple evaluations should be equally rigorous in relating the methodology to the intended use of the evaluation. For instance, if the intended use of the evaluation is to inform a decision-making process (e.g. continue, expand or terminate funding), then the complexity of the evaluation methodology needs to reflect the impact of the decisions that will be taken.

Design (logic model)

Logic models were developed as tools to clarify the nature of a program and its intended effects. A logic model of the program is a useful planning tool that provides a diagrammatic description of a program by depicting its goals and objectives, the component activities needed to accomplish the goals, their outputs (countable by‐products of each component), short and long term outcomes (direct results or accomplishments) and impacts (effects for which the program can claim only partial responsibility).

Evaluation design is the logic used to arrive at conclusions about outcomes. In selecting the evaluation design, the evaluator must determine simultaneously the type of information to be retrieved and the type of analysis this information will be subjected to. For example, to assess the extent to which a program has achieved a given objective, an indicator of that achievement must be determined, along with an analytic technique for isolating the effect of the program. Evaluation designs provide the logical basis for measuring results and for attributing results to programs.

Logic Model Components

  • Goal: What is the overall purpose of your program? Why are you doing it?
  • Participants: For whom is your program designed? Who will benefit from it?
  • Activities: What activities are needed to run the program?
  • Outputs: What are the tangible products of your activities?
  • Outcomes: What changes do you expect to occur as a result of the program? How will you know that the intended participants benefit from the program?
  • Impacts: What longer term changes do you expect to result from the program (recognizing that they also may be influenced by factors external to the program)?
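As an illustration only, the components above can be written out as a simple structure so that nothing is left implicit. The program, participants, activities, and outcomes named below are hypothetical placeholders, not drawn from any particular campus program:

```python
# A hypothetical logic model for an imagined peer-support program,
# expressed as a plain dictionary so each component is explicit.
logic_model = {
    "goal": "Improve student wellbeing through peer support",
    "participants": ["First-year undergraduate students"],
    "activities": ["Recruit and train peer supporters", "Run weekly drop-in sessions"],
    "outputs": ["Number of supporters trained", "Number of sessions held"],
    "outcomes": ["Increased help-seeking", "Improved coping skills"],
    "impacts": ["Healthier campus community over the long term"],
}

# Print the model component by component, one line per item.
for component, items in logic_model.items():
    print(component.upper())
    if isinstance(items, str):
        items = [items]
    for item in items:
        print(f"  - {item}")
```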

Example of a logic model.

This paper provides an in-depth understanding of evaluation design.

Outcome measures

Outcome evaluation focuses on measuring the intended effects of the program on the targeted population – short and/or intermediate outcomes such as changes in knowledge, skills, attitudes and behaviour. Although an important part of evaluating a program, measuring outcomes can be complex and time consuming. When planning an evaluation, it is important to focus on key outcomes that are important to stakeholders in order to ensure feasibility of the evaluation.

To narrow the list of outcomes that you plan to measure, it is helpful to ask the following questions:

  • Is this outcome important to stakeholders? Different outcomes may have different levels of importance to different stakeholders. It will be important to arrive at some consensus.
  • Is this outcome within our sphere of influence?
  • Will the program be at the right stage of delivery to produce the particular outcome? Ensure that the intended outcomes are achievable within the timelines of the evaluation.
  • Will we be able to measure this outcome? There are many standardized measures with strong validity and reliability that are designed to measure specific outcomes.

Develop data collection tools (survey, interview guide, etc.) and procedures and train data collectors. Consider whether incentives are appropriate and brainstorm ways to enhance response rates. To ensure validity, pilot test tools and procedures and closely monitor data gathered. If issues arise, modify tools and procedures and document changes. Computerize data collection to facilitate analysis if appropriate.

http://www.nccmt.ca and http://cbpp-pcpe.phac-aspc.gc.ca/resources/planning-public-health-programs are two databases with a collection of measures for program evaluation.

Analysis technique

Implement strategies to review data quality during and after data collection. During data collection, look closely at the first wave of responses and number of ‘no response’ or refusals, and keep data collectors and the evaluation lead connected. After data collection, enter data and double check quality and consistency of entry, sort to find missing, high or low values (quantitative), and check content by reviewing transcripts entered (qualitative). Organize data in a format that can be summarized and interpreted. Analyze by conducting statistical analysis of quantitative data; identify themes in qualitative data. This is a technical step – enlist expert support when possible. This sets the stage for interpretation.
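A hedged sketch of what some of these quality checks might look like for quantitative survey data, assuming the pandas library is available; the file name and column names (survey_responses.csv, respondent_id, satisfaction_score) are hypothetical placeholders:

```python
import pandas as pd

# Hypothetical survey export; column names are placeholders.
df = pd.read_csv("survey_responses.csv")

# Count missing ('no response') values per question.
print(df.isna().sum())

# Sort a quantitative item to spot implausibly high or low values.
print(df.sort_values("satisfaction_score")[["respondent_id", "satisfaction_score"]].head())
print(df.sort_values("satisfaction_score")[["respondent_id", "satisfaction_score"]].tail())

# Flag duplicate entries introduced during data entry.
print(df[df.duplicated(subset="respondent_id", keep=False)])
```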

Quantitative analysis

Quantitative analysis is useful for providing descriptive statistics and for making inferences from the variables used. The latter is done using statistical methods such as analysis of variance, which can determine whether differences in measurement scores are statistically significant. With respect to quantitative data analysis, there are some very sophisticated data analysis software programs that you can purchase, such as IBM SPSS Statistics (formerly SPSS), SAS and others. Programs like R are free, but require a more sophisticated understanding of statistics and programming languages.
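As a rough illustration of the kind of inference described above, the sketch below runs a one-way analysis of variance using the (assumed) SciPy library; the three groups and their scores are hypothetical:

```python
from scipy import stats

# Hypothetical outcome scores for three groups (e.g. three measurement
# waves or three program sites); replace with your own data.
group_a = [12, 15, 11, 14, 13]
group_b = [14, 16, 13, 15, 15]
group_c = [17, 18, 15, 17, 16]

# One-way analysis of variance: are the differences in group means
# larger than would be expected by chance?
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```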

Qualitative analysis

Qualitative data, such as interview transcripts, open-ended questions and journals, can provide a holistic view of the data. Qualitative data is easy to gather but difficult to analyze, and is best analyzed in conjunction with quantitative data. A common qualitative analysis software program is NVivo.
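Dedicated software such as NVivo is the usual route, but as a very rough sketch, even a few lines of code can tally how often analyst-assigned themes appear across transcripts once coding has been done by hand; the transcripts and theme labels below are hypothetical:

```python
from collections import Counter

# Hypothetical coded interview excerpts: each transcript is tagged
# with the themes an analyst assigned to it during manual coding.
coded_transcripts = {
    "interview_01": ["access to services", "stigma", "peer support"],
    "interview_02": ["stigma", "workload"],
    "interview_03": ["peer support", "access to services", "workload"],
}

# Tally how often each theme was applied across all transcripts.
theme_counts = Counter(theme for themes in coded_transcripts.values() for theme in themes)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```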

(Optional) Pilot new measures

If new, untested surveys or questionnaires have been created, this is the best time to try them out. Select a small group of students and run a pilot study with the new measures. The data gathered from the pilot should be used to inform the rest of the evaluation process.
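One common check on pilot data for a new multi-item measure is internal consistency. The sketch below computes Cronbach's alpha with NumPy (assumed to be available); the responses and the 4-item scale are hypothetical:

```python
import numpy as np

# Hypothetical pilot responses: rows are students, columns are items
# on a new 4-item questionnaire (e.g. rated 1-5).
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 3],
    [4, 4, 3, 4],
])

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
k = responses.shape[1]
item_variances = responses.var(axis=0, ddof=1)
total_variance = responses.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```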

 

Prepare

Form an evaluation project team

Typically, there is an individual accountable for ensuring an evaluation is planned and conducted, such as a Program Manager or Director. This individual will strike an Evaluation Project Team to complete the task. The Evaluation Project Team is composed of individuals who can offer expert program knowledge and other skills as necessary to complete the evaluation process. In addition to the Program Manager or Director, the Evaluation Project Team will likely include individuals who can fulfill the roles of:

  • Evaluator (see below)
  • Program expert
  • Stakeholders
  • Evaluation coordinator

The Evaluator may be an individual who is internal (e.g. an employee) or external (e.g. a consultant) to the organization. Internal evaluation is the process of using staff members who have the responsibility for evaluating programs or problems of direct relevance to managers.

If the Evaluator is an employee, she/he can offer:

  • Organizational knowledge to ensure evaluation methodology is relevant;
  • Potentially, a responsibility to use the evaluation to achieve on-going organizational improvement.

External evaluation refers to contracting with an external consultant to complete the evaluation. If the Evaluator is an external consultant, she/he can offer:

  • Necessary (and often specialized) expertise;
  • Objectivity

It is often advisable for an Evaluation Project Team to include both internal and external personnel.

Identify stakeholders

Evaluation stakeholders are individuals and groups (both internal and external) who have an interest in the evaluation, that is, they are involved in or affected by the evaluation.

Evaluation stakeholders include:

  • Program management (managers, team leaders, executive sponsors)
  • Funding agencies
  • Program personnel (first line leaders, support staff)
  • Volunteers and community representatives
  • Program participants

To identify stakeholders, consider:

  • Who is funding the program?
  • Who delivers the program (e.g., third party delivery agencies)?
  • Who has requested the evaluation (e.g. funding agency, decision makers)?
  • Who will use the results of the evaluation and how?
  • How will the organization, stakeholders and personnel respond to findings?
  • Who will the evaluation results be disseminated to?

Develop evaluation charter

An Evaluation Charter is a document that is developed to seek formal approval from internal management to proceed with an evaluation project. The evaluation project may be to develop an evaluation plan and/or conduct an evaluation.

The Evaluation Charter describes:

  • Goals of the evaluation project
  • Objectives of the evaluation project (concrete steps to completing the project)
  • Evaluation stakeholders and primary intended users
  • Assumptions about how the evaluation project will proceed
  • Known risks to completing the evaluation project
  • Roles and responsibilities of the evaluation project team members

Develop evaluation budget

An Evaluation Budget helps to consider what expenditures are needed in developing the evaluation.

Example of an evaluation budget.

Develop evaluation timeline and schedule

Deciding when to collect data is an important part of planning an evaluation. When you don’t plan for data collection, you often miss important opportunities to gather data. For example, once you begin a project, you may no longer have the opportunity to gather important baseline data about the sample. Basically, evaluation data can be collected at only three points in time: before the project, during the project, and after it has been completed. Frequently, you collect baseline data before the project to document the conditions that existed beforehand. Sometimes data are collected during the project to determine whether the effort is on course or needs changes. Data can also be collected after the project is completed to document what was accomplished.

A work plan organizes everything into one table: the WHO, WHAT, WHERE, WHEN, WHY and HOW of your intended evaluation activities.
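A minimal sketch of how work plan rows could be captured, with each entry answering one of those questions; the activities, people, and dates below are hypothetical placeholders:

```python
# Hypothetical work plan: each row describes one evaluation activity
# by answering the WHO/WHAT/WHERE/WHEN/WHY/HOW questions.
work_plan = [
    {
        "what": "Administer baseline survey",
        "who": "Evaluation coordinator",
        "where": "Online (campus survey platform)",
        "when": "September, before program launch",
        "why": "Capture baseline outcome data",
        "how": "Email invitation with two reminders",
    },
    {
        "what": "Conduct follow-up interviews",
        "who": "External evaluator",
        "where": "Student services office",
        "when": "March, after program completion",
        "why": "Explore participant experiences in depth",
        "how": "Semi-structured interview guide",
    },
]

# Print each activity as a single row of the work plan table.
for row in work_plan:
    print(" | ".join(f"{key.upper()}: {value}" for key, value in row.items()))
```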

Example of a work plan. 

Research Ethics Board approval

Depending on how the evaluation is conducted and who it involves, approval from a Research Ethics Board may be required. Please refer to your local institution for the usual process of application and approval, as every REB operates in its own way.

 

Share

Write evaluation report

Anchor the interpretation of the evaluation data to the original evaluation questions. Create a list of recommended actions that address your outcomes, and use this information to create the materials to communicate your findings. Presentation of findings can take many forms, such as a written report, a slide show presentation, and/or a short informational video. Visual aids can be powerful methods for communicating evaluation results.

The types of reports (e.g. written or oral) should be defined in the Evaluation Charter. The purpose of this section is to present ideas about style, format, content, and the process of reporting information. These characteristics also influence the utility of evaluation findings. Charts/graphics are essential to capturing attention and communicating quickly. The tone, content, and language of a key message need to be appropriate for its intended audience. Communicate sensitive information carefully. Develop clear, simple, action-oriented messages.
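For example, a single chart of the headline result can often carry a key message on its own. Below is a minimal sketch using the (assumed) matplotlib library, with hypothetical mean scores standing in for real findings:

```python
import matplotlib.pyplot as plt

# Hypothetical mean outcome scores across the three measurement waves,
# used here only to illustrate a simple visual for a report.
waves = ["Pre", "Mid", "Post"]
mean_scores = [13.0, 14.6, 16.6]

plt.bar(waves, mean_scores)
plt.ylabel("Mean outcome score")
plt.title("Change in outcome over the evaluation period")
plt.tight_layout()
plt.savefig("outcome_change.png")  # embed or attach in the written report
```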

Reports on the evaluation findings could follow a number of formats (written and oral). In fact, written and oral delivery could be combined, as appropriate.

Formats for written reports include:

  • Executive summary, followed by a full report
  • Executive summary, followed by a few key tables, graphs, and data summaries
  • Executive summary only, and make data available for those interested
  • Newsletter article for dissemination
  • Press release

Formats for oral presentation include:

  • Oral presentation with charts
  • Short presentation followed with question/answer period
  • Discussion groups based on prepared written material
  • Retreat-like session with intended users
  • Video or audio taped presentation
  • Debate session regarding certain conclusions/judgements
  • Involve selected primary users in reporting and facilitating any of the above modes of oral presentation.

Decide on dissemination method

One main goal of evaluation is to produce and disseminate information that is useful for primary intended users. The process to develop “useful” information started when primary intended users and other stakeholders were engaged in identifying the intended use of the evaluation and the evaluation methodology.

The likelihood that evaluation findings are used is improved when evaluation findings are communicated directly with intended users of the evaluation (e.g. managers, decision-makers). Make results available to various stakeholders and audiences. Tailor what is disseminated to their specific interest in the evaluation and how they plan to use the results.

Discuss implementation of change based on findings

The use of evaluation findings (which may include implementation of recommendations) is likely more of a process than a single event. The purpose and expected use of evaluation findings are explored as part of the evaluation planning process, and findings should be disseminated according to their intended use. Different purposes of evaluation lead to different uses of evaluation findings.

Evaluation findings can be used immediately in two ways:

Conceptual use: The evaluation produces new information about the program and this information changes how people understand the program and how it works (e.g. how it serves the intended target population). This information may be used to change the program (e.g. make adjustments to better meet the needs of the target population), but it is not directed at a particular decision about the future of the program.

Instrumental use: Evaluation findings are directed at a particular decision for a specific program at a concrete point in time (e.g. end or expand a program).

There are many factors that influence how (and if) evaluation findings are used (e.g. existing knowledge, beliefs, values, budget and time constraints). It is more likely that evaluation findings are used (and recommendations implemented) when:

  • Intended users and uses are accurately identified
  • Evaluation questions are answered in a clear way
  • Findings are accurate and relevant to intended users
  • Evaluation findings are communicated directly with intended users of the evaluation (e.g. managers, decision-makers)

Sharing and implementing changes based on evaluation findings and a review of best and promising practices will have important impacts on the quality and effectiveness of your program. Here are some suggestions:

  • Prepare a summary of your findings and lessons learned to share for discussion.
  • Seek stakeholder feedback on what you’ve learned about the program.
  • Organize a semi-annual staff and/or board meeting to discuss the outputs and outcomes of your program. Your original questions will help set priorities and guide your discussion.
  • Review your logic model (objectives and projected outcomes) and your results assessment questions.
  • Take time to explore the program design, systems and structures and discuss what is working and what is not working. Think about any modifications to the design of the program that would improve results.
  • Compare the costs and benefits of your program.