
Post-Pre Survey Resources

This webpage provides an introduction to the nature, purpose, development, and administration of post-pre surveys, and to the analysis and reporting of the data they provide.

The information and materials included here were developed by Dr. Lannie Kanevsky during her tenure as a Dewey Fellow with the ISTLD (2015 - 2016). The workshop handout on which it is based can be downloaded by clicking here (PDF).

All information and materials here are free and can be reused and modified for educational purposes. If you use them, please credit the author (Lannie Kanevsky) and the Institute for the Study of Teaching and Learning in the Disciplines.

What is a Post-Pre Survey?

A post-pre survey is one of a number of tools that can be used to evaluate the impact of an instructional intervention (a course, program, workshop, etc.). Its purpose is to assess students' perceptions of changes in their knowledge and skills, their personal attributes, or the impact on their future behaviour and aspirations. Students rate themselves twice on each intended outcome: first as they were before beginning the instruction, and second, after completing it. They do so on one form that they complete after the learning experience has concluded. The difference between students' retrospective pre- and post-ratings reflects the perceived impact of their learning on each outcome. The primary focus of items on a post-pre survey is on changes that can be linked directly to students' participation in an instructional intervention (Hiebert, Bezanson, Magnusson, O'Reilly, Hopkins & McCaffrey, 2011).
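
For example (with hypothetical items and ratings, not taken from the sample surveys), a student might rate "I can design a survey" at 2 as they were before the course and at 4 now; the change score for that item is +2. A minimal sketch of this arithmetic in Python, assuming a 1-5 rating scale:

    # Hypothetical post-pre ratings for one student on a 1 (low) to 5 (high) scale.
    pre  = {"Can design a survey": 2, "Can analyse survey data": 1}
    post = {"Can design a survey": 4, "Can analyse survey data": 4}

    for item in pre:
        change = post[item] - pre[item]  # a positive value indicates perceived growth
        print(f"{item}: pre={pre[item]}, post={post[item]}, change={change:+d}")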

Two sample surveys are provided, one used in a career development workshop (Hiebert et al., 2011) and one used in a two-year graduate diploma program in Education. The first appears as it was presented to clients at the end of their workshop. The second sample is a working draft providing an expanded view of a survey presented to graduate students when they completed their two-year diploma program. The two columns on the left side, Capacity and Type of change, did not appear in the final version students completed. Their purpose is explained when the design process is described below.

Why Post-Pre and not Pre-Post?

Post-pre and pre-post designs both have their strengths and weaknesses. When using self-report surveys to measure changes in students' perceptions of what they know, traditional, separate pre- and post-designs have been found problematic. "By the end of the instruction, students' measuring stick has changed as they developed greater knowledge; thus the post-test scores end up being lower than the pre-test scores, even though positive change has occurred" (Hiebert et al., 2011, p. 9). In essence, at the beginning of the instruction, students didn't know what they didn't know, so they gave themselves higher ratings than they did by the end of the learning experience; i.e., they rated themselves lower after the instruction (Hiebert & Magnusson, 2014).

"Post-Pre Assessment addresses this problem by creating a consistent measuring stick for both pre and post assessments. This process is used ONLY at the end of a course or program. It asks people to use their current level of knowledge to create a common measuring stick for pre-course and post-course assessments" (Hiebert et al., 2011, p. 9).

Additional advantages of the post-pre method include the time saved by collecting pre- and post-intervention data in one session rather than two; this also avoids problems with attrition. Unfortunately, both formats (post-pre and pre-post) are vulnerable to the concerns associated with all self-report measures, such as social desirability bias, i.e., providing a socially appropriate response rather than an accurate one.

Designing a Post-Pre Survey

The process of developing a post-pre survey begins with clearly specifying the intended outcomes of an intervention (e.g., the goals, objectives, capacities). Hiebert and Magnusson (2014) recommend survey items be developed to assess three types of change related to each learning outcome:

Competence: changes in knowledge (what students know) and skill (what students can do).
Personal attributes: changes in (a) attitudes, beliefs or dispositions (e.g., attitude toward the subject, belief that change is possible); (b) intrapersonal factors (e.g., confidence, motivation, self-esteem); and (c) independence (e.g., self-reliance, initiative, independent use of the knowledge and skills provided) (based on Baudouin et al., 2007).
Future impacts: benefits or changes in students' lives, behaviour or aspirations in the future (e.g., better career opportunities, greater academic integrity, better collaborative problem solving, better lifestyle choices, etc.) that may emerge soon or long after the course concludes, although they are assessed at its end.

Click here for more examples of each.

Items in the second sample survey (Appendix B) are labelled in the far-left columns with the professional Capacity (learning outcome) the program was designed to develop, and with the type of change described above. Those columns did not appear on the version distributed to students but are included here to provide examples of the language that might be used to assess each type of change.

One of the major challenges in developing items is finding language that clearly and authentically represents the desired outcome, in terms that will be meaningful to respondents and align well with the rating scale you choose. Compare the phrasing of the items on the two sample surveys: they differ slightly, as do the meanings of the values on their rating scales. Many different rating scales are available, but it is best to use only one in your survey. Klatt and Taylor-Powell (2005) offer more possibilities.

Be prepared to spend significant time crafting the wording, asking others for feedback on clarity, revising, and field-testing or piloting your form. Expect to develop at least two or three drafts. The clarity of the items is a major determinant of the strength and quality of your data, and therefore of your findings.

Prepare clear directions that explain the meaning of each value in your rating scale and how to rate each item. If you administer the form face-to-face, read the directions to students and ask for questions before they complete the form, to ensure everyone understands the format and how to respond.

The final item on both sample surveys is essential. It asks students to indicate the extent to which they feel the differences between their pre- and post-ratings are due to the instruction they received or to other influences in their lives. An item like this must be included in your survey. Its data directly address the big question regarding the overall effectiveness of the intervention: to what extent do students feel the changes evident in their responses to the previous items are attributable to the intervention? Results from all of the other items should be interpreted within the context of students' responses to this final item.

Reporting Results

Post-pre data can be reported descriptively and analysed statistically. The sophistication, depth and length of your interpretation will vary depending on the questions you posed for your investigation, the size of your group, and the audience for your findings. Click here to see a written report based on data collected, using the second sample form, from students in the diploma program in Education. A table summarizing students' responses appears on the final page of the report. Again, these samples are offered as illustrations; your data may be better represented graphically or in some other format or medium. Select one or two formats that will communicate your findings most clearly.
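
As one possible starting point for a descriptive summary, the sketch below computes the mean pre-rating, mean post-rating, and mean change for each item. The data layout, item names, and values are invented for illustration (one row per student, paired pre/post columns per item):

    import pandas as pd

    # Hypothetical ratings on a 1-5 scale: one row per student,
    # with paired pre/post columns for each survey item.
    df = pd.DataFrame({
        "design_pre":  [2, 3, 1, 2], "design_post":  [4, 4, 3, 5],
        "analyse_pre": [1, 2, 2, 3], "analyse_post": [3, 4, 4, 4],
    })

    for item in ("design", "analyse"):
        pre, post = df[f"{item}_pre"], df[f"{item}_post"]
        print(f"{item}: mean pre = {pre.mean():.2f}, "
              f"mean post = {post.mean():.2f}, "
              f"mean change = {(post - pre).mean():+.2f}")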

Post-pre surveys assess only students' perceptions of their learning. This means interpretations of the differences between pre- and post-ratings must be limited to what or how much students think they learned or changed. Post-pre surveys do not directly measure learning; therefore, the differences cannot be reported as direct evidence of learning or change. Direct measures of learning include exam scores, course grades, and grades earned on assignments. Direct and self-report (indirect) data are complementary: the results of post-pre assessments tell you whether students think they learned, while direct evidence tells you if, what, or how much they actually learned. Plan to collect evidence of both.

Post-pre surveys provide a valuable means of capturing students' views of their growth, but they are not sufficient to support broad claims of success or effectiveness. Multiple sources of evidence will be required to determine what did and did not work, and how well. Those sources of evidence will need to be selected to suit the questions driving your project.

Frequently Asked Questions

1.  Should there be the same number of items for each type of change (competence, personal attributes, and future impacts)?

No, that's not necessary and often not appropriate. You may or may not need to assess all three types of change for each of your learning outcomes. That will depend on the questions to be answered by your project. It's most important that the items are clear, so they will mean the same thing to all respondents. Some research questions address personal attributes more than competence, or only future impacts; others are all about competence. Every project will be different.

2.  How many items should be on the survey?

The rule for the number of items is: as few as possible but as many as necessary. Include only those that are essential to addressing the learning outcomes impacted by your intervention and the questions included in your project. The longer the survey, the less likely potential respondents are to complete it or to give careful consideration to their responses. That said, students tend to enjoy responding to post-pre surveys because the items enable them to see evidence of growth they might not otherwise have appreciated.

3.  How many respondents are needed?

Any data are better than no data, so no matter how small your class, your data can address questions you have about the effects of your instruction and thus be meaningful and helpful to you. The larger the sample, the more likely it is to be representative of future cohorts of students. It's important to think carefully about how to optimize the response rate. Surveys distributed and completed in class have higher response rates than those students are asked to complete outside of class. If you use a paper survey, you will need to manually transfer the data into a spreadsheet. If students complete the survey outside of class, consider offering incentives for completion and/or sending reminders. A higher response rate is worth the effort: your sample will be more complete, and therefore your findings more accurate.

Resources: Samples & templates

Sample surveys. More sample post-pre surveys developed by SFU instructors (in addition to those mentioned above) are provided to offer options for wording your items, different rating scales, ideas for wording directions, and more.

REM 601 Social Science of Resource Management: Theories of Cooperation developed by J. Welch and S. Jamshidian (2015)
BUS 361 Project Management developed by K. Masri (2015)

Survey with participation consent form templates (Word format). These Word documents provide suggested wording for participant consent as well as a sample survey template that includes a title, directions, items, and a rating scale. Use one of these Word formats if you want to receive advice on your draft questionnaire from ISTLD staff before administering it. If you intend to have students complete the questionnaire online, the post-first format works best with the online survey tools available at SFU (Survey Monkey and Websurvey). (See Nimon, Zigarmi, & Allen, 2011 in the recommended readings below regarding the validity of side-by-side versus post-first formats.)

Survey template (Excel format). This Excel file provides a template into which the text of your title, directions, items, and rating scale can be inserted. It is useful for printed versions of the survey that you wish to hand out to students, but its side-by-side format will not work well with online survey tools.

Spreadsheet for data entry (Excel format). This Excel worksheet is an empty spreadsheet ready for post-pre data. Each student's response to each item is entered as a value in a cell in the spreadsheet. Once entered, the data should be checked (verified) by another individual. The data can then be analysed in Excel or imported into any program for statistical analysis.
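
If you analyse the data outside Excel, one common approach is the Wilcoxon signed-rank test, a paired non-parametric test suited to ordinal rating-scale data. The sketch below assumes the spreadsheet has been saved with one row per student and paired pre/post columns per item; the file name and column names are hypothetical, so adapt them to your own spreadsheet:

    import pandas as pd
    from scipy.stats import wilcoxon

    # Hypothetical file and column names; one row per student.
    df = pd.read_excel("postpre_data.xlsx")

    for item in ("design", "analyse"):
        pre, post = df[f"{item}_pre"], df[f"{item}_post"]
        # Tests whether the paired changes (post - pre) differ from zero.
        stat, p = wilcoxon(post, pre)
        print(f"{item}: median change = {(post - pre).median():+.1f}, p = {p:.3f}")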

References

Baudouin, R., Bezanson, L., Borgen, B., Goyer, L., Hiebert, B., Lalande, V., . . . Turcotte, M. (2007). Demonstrating value: A draft framework for evaluating the effectiveness of career development interventions. Canadian Journal of Counselling, 41(3), 146-156.

Haché, L., Redekopp, D. E., & Jarvis, P. S. (2000). Blueprint for life/work designs. Memramcook, NB: National Life/Work Centre.

Hiebert, B., & Magnusson, K. (2014). The power of evidence: Demonstrating the value of career development services. In B. C. Shepard & P. S. Mani (Eds.), Career development practice in Canada: Perspectives, principles and professionalism (pp. 489-530). Toronto: Canadian Education and Research Institute for Counselling.

Hiebert, B., Bezanson, M. L., Magnusson, K., O'Reilly, E., Hopkins, S., & McCaffrey, A. (2011). Assessing the impact of labour market information: Preliminary results of Phase Two (field tests). Final report to Human Resources and Skills Development Canada. Toronto: Canadian Career Development Foundation.

Kanevsky, L., Rosati, M., Schwarz, C., & Miller, B. (2014). Post-pre assessment of effectiveness of a two-year graduate diploma program offered in a blended format. Unpublished report. Burnaby, BC: Simon Fraser University.

Klatt, J., & Taylor-Powell, E. (2005). Program Development and Evaluation: Designing a retrospective post-then-pre question, Quick Tips #28. University of Wisconsin-Extension, Madison, WI.

Very Friendly, Basic, How-to Resources

Klatt, J., & Taylor-Powell, E. (2005). Program Development and Evaluation: Using the retrospective post-then-pre design, Quick Tips #27. University of Wisconsin-Extension, Madison, WI.

Klatt, J., & Taylor-Powell, E. (2005). Program Development and Evaluation: Designing a retrospective post-then-pre question, Quick Tips #28. University of Wisconsin-Extension, Madison, WI.

Klatt, J., & Taylor-Powell, E. (2005). Program Development and Evaluation: When to use the retrospective post-then-pre design, Quick Tips #29. University of Wisconsin-Extension, Madison, WI.

Lamb, T. (2005). The retrospective pretest: An imperfect but useful tool. The Evaluation Exchange, 11(2).

Schaaf, J., Klatt, J., Boyd, H., & Taylor-Powell, E. (2005). Program Development and Evaluation: Analysis of retrospective post-then-pre data, Quick Tips #30. University of Wisconsin-Extension, Madison, WI.

Recommended Readings

Aiken, L. S., & West, S. G. (1990). Invalidity of true experiments: Self-report pretest biases. Evaluation Review, 14(4), 374-390.

Allen, J. M., & Nimon, K. (2007). Retrospective pretest: A practical technique for professional development evaluation. Journal of Industrial Teacher Education, 44(3), 27-42.

Coulter, S. E. (2012). Using the retrospective pretest to get usable, indirect evidence of student learning. Assessment and Evaluation in Higher Education, 37(3), 321-334.

Hill, L. G., & Betz, D. L. (2005). Revisiting the retrospective pretest. American Journal of Evaluation, 26(4), 501-517.

Howard, G. S. (1980). Response-shift bias: A problem in evaluating interventions with pre/post self-reports. Evaluation Review, 4(1), 93-106.

Klatt, J., & Taylor-Powell, E. (2005). Synthesis of literature relative to the retrospective pretest design. University of Wisconsin-Extension, Madison, WI.

Lam, T. C. M., & Bengo, P. (2003). A comparison of three retrospective self-reporting methods of measuring change in instructional practice. American Journal of Evaluation, 24(1), 65-80.

Nimon, K. (2014). Explaining differences between retrospective and traditional pretest self-assessments: Competing theories and empirical evidence. International Journal of Research and Method in Education, 37(3), 256-269.

Nimon, K., Zigarmi, D., & Allen, J. (2011). Measures of program effectiveness based on retrospective pretest data: Are all created equal? American Journal of Evaluation, 32(1), 8-28.

Pratt, C. C., McGuigan, W. M., & Katzev, A. R. (2000). Measuring program outcomes: Using retrospective pretest methodology. American Journal of Evaluation, 21(3), 341-349.

Sprangers, M. (1989). Subject bias and the retrospective pretest in retrospect. Bulletin of the Psychonomic Society, 27(1), 11-14.

Sprangers, M., & Hoogstraten, J. (1989). Pretesting effects in retrospective pretest-posttest designs. Journal of Applied Psychology, 74(2), 265-272.

Sprangers, M., & Hoogstraten, J. (1991). Subject bias in three self-report measures of change. Methodika, 5, 1-13.

Taylor, P. J., Russ-Eft, D. F., & Taylor, H. (2009). Gilding the outcome by tarnishing the past. American Journal of Evaluation, 30(1), 31-43.