Program Evaluation
Evaluation is a specific form of research that focuses on a particular program's normal operations, typically without control groups or peer review. It is usually paid for by the organization in which it takes place, with the goal of benefiting the very people and communities from whom the data are collected rather than a wider audience. Lessons learned by integrating evaluation into program design are difficult to capture through any other means. In that sense, program evaluation is the collection and dissemination of feedback, conducted simultaneously with and in support of program design.
The American Evaluation Association recently updated its Program Evaluation Standards (http://www.eval.org/evaluationdocuments/progeval.html): Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.
Work-based learning programs may be required to evaluate the effectiveness of their work using specific methods defined by agencies, sponsors, or funders. When there are no specific guidelines, however, program developers may turn to resources published by the United States Department of Education. Among these are:
Work Based Learning Toolkit http://www.work-basedlearning.org/toolkit.cfm
Scorecard for Skills http://www.scorecardforskills.com/
Below, a few of the more common concepts used in program evaluation are described in more detail.
Summative vs Formative Evaluation
Summative (end-product) evaluation is an important piece of any project, and is often required by funders. It helps to answer questions like, "How many students took part?" "How many businesses were visited?" "How many staff hours were used?" Summative evaluation addresses total impact and total resources used, which can lead to changes the next time the program is offered: "We used large groups throughout, with mixed results. Let's try a few small groups next year."
Formative evaluation, by contrast, can capture sudden "ah-ha!" moments and lead to mid-course corrections, potentially making a difference for participants currently taking part in the program. Formative questions center on what is happening now: what is working well and what isn't. It can be as easy as a quick check-in question ("What worked better for you today, large- or small-group work?") leading to changes that can be implemented right away: "Let's use small groups again tomorrow!"
Needs Assessment/Goal Setting
Prior to initiating or restructuring a work-based learning program, a needs assessment should be conducted to identify resources already in place, as well as those needed to fill gaps between current conditions and desired outcomes. The "needs" that are assessed can be defined broadly, both as resources to improve current performance and as those that will turn the program's focus in new directions.
(examples of needs assessments are found at http://www.dpi.state.nd.us/grants/needs.pdf)
Action Research
Action (or contextual) research is evaluation conducted by program staff as a natural part of the program. It is generally cyclical, with results analyzed and changes implemented by those who deliver the program being evaluated. This "learning by doing" approach leads to improved performance, generally with a very specific question as the focus of the work: "This is what we are doing now (documented). How can we increase (participation, positive outcomes, parent involvement, etc.)?" Staff work together to define the question, design data collection and analysis efforts, and share their findings.
(see http://www.web.ca/robrien/papers/arfinal.html for examples of action research)
Data Collection -- Quantitative/Qualitative/Mixed Methods
Quantitative researchers most often deal with numbers, and typically begin with a hypothesis that they prove or disprove mathematically. Qualitative researchers' findings, by contrast, are more often built from the ground up through field observations, interviews, or personal narrative, and in some cases lead to a theory that may then be tested quantitatively. Likewise, quantitative researchers may see puzzling results in their data until they get out in the field to observe, qualitatively, factors that help explain those results. When an evaluation uses both qualitative and quantitative data, it is a "mixed-methods" evaluation. Mixed-methods evaluations are increasing in popularity, and it can be argued that they yield the most trustworthy findings.
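As an illustration, the two strands can be sketched in a few lines of Python. All data below are hypothetical: a quantitative comparison of average satisfaction ratings from two cohorts, alongside a simple tally of codes a staff member might apply to interview notes.

```python
from statistics import mean
from collections import Counter

# Quantitative strand: hypothetical satisfaction scores (1-5)
# from two delivery formats of the same program
large_group_scores = [3, 4, 2, 3, 3, 4]
small_group_scores = [4, 5, 4, 3, 5, 4]

print(f"Large-group mean: {mean(large_group_scores):.2f}")
print(f"Small-group mean: {mean(small_group_scores):.2f}")

# Qualitative strand: hypothetical codes applied to interview notes,
# tallied to surface the most frequently mentioned themes
interview_codes = ["mentoring", "scheduling", "mentoring",
                   "transport", "mentoring", "scheduling"]
theme_counts = Counter(interview_codes)
print(theme_counts.most_common(2))  # top two themes by frequency
```

In a mixed-methods design, a gap between the two means would prompt a look at the interview themes for an explanation, and vice versa.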
360-degree
Many organizations use a "360" approach to evaluation, on the theory that all stakeholder groups involved in a program have knowledge of, and deserve a voice in, the process of evaluating (valuing) it. In practice, this approach yields a great deal of good information, but collecting and analyzing that information is time consuming. It is not necessary to ask everyone in the organization a question that only affects full-time staff, for instance. However, if the question being asked is broad enough to affect the program as a whole, or policy relating to it, 360-degree evaluation should be considered.
SWOT
If your focus is the entire organization, then a SWOT (strengths, weaknesses, opportunities, and threats) analysis may be what you are looking for. In fact, SWOT is often the first step in developing a strategic plan. (For examples of SWOT for non-profits, see http://nonprofitchas.com/blog/2008/12/strategic-planning-swot-analysis-toolkit/.)
Tools of evaluation
Free online survey tools, like SurveyMonkey and Zoomerang, make it possible for programs to reach their stakeholders through live links embedded in websites or email, or in printed formats. However, be warned: the quality of information gained from any survey tool will only be as good as the thought put into designing the instrument. When it comes to design, Don A. Dillman's seminal guides, the newest being Internet, Mail, and Mixed-Mode Surveys, are essential reading.
More involved statistical analysis can be accomplished with software like SPSS and SAS. Likewise, qualitative tools such as NVivo and ATLAS.ti provide a technological solution for the tedium of text, tape, and video analysis. Free trials from the vendors' websites allow you to explore the software before purchasing. But before running out to buy anything, talk with other programs to learn what they are using, how they like it, and what advice they can offer!
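It is also worth noting that many routine summaries do not require commercial software at all. As a sketch, free, general-purpose tools such as Python's standard library can produce basic descriptive statistics; the figures below are hypothetical.

```python
from statistics import mean, median, stdev

# Hypothetical staff hours logged per week of a work-based learning program
weekly_hours = [42, 38, 45, 50, 41, 39, 44]

# Basic descriptive statistics, no commercial package required
print(f"mean:   {mean(weekly_hours):.1f}")
print(f"median: {median(weekly_hours)}")
print(f"stdev:  {stdev(weekly_hours):.1f}")
```

For simple summative questions like total staff hours used, a free tool of this kind may be all a small program needs before deciding whether a dedicated statistics package is worth the cost.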