Treischl, E., and T. Wolbring. 2017. “The Causal Effect of Survey Mode on Students’ Evaluations of Teaching: Empirical Evidence from Three Field Experiments.” Research in Higher Education 58: 904–921.
What is the most effective way to evaluate teaching in the classroom? Edgar Treischl and Tobias Wolbring address this question through a series of well-designed field experiments that yield important answers about the delivery mode (online vs. paper) of course evaluations and its influence on the ratings themselves. Results indicated that response rates for paper-based evaluations were significantly higher than those for online evaluations, but the gap narrowed when students were first emailed an invitation to evaluate courses and then given time in class to complete the evaluations. In addition, results indicated that paper-based evaluations painted a somewhat more positive picture of teaching than those administered online.
The authors framed this study from a methodological perspective, suggesting that most previous course evaluation studies used less-than-ideal research designs to address issues related to evaluation delivery mode and its potential influence on overall course ratings. Building on this critique, the authors claim that their study's design (randomized experiments across three field trials) enables them to make causal claims about the role delivery mode plays in course evaluation response rates and in overall assessments of instructional quality for any given course. Although the experiments were performed at a single institution (a limitation the authors note), the study design merits consideration of its findings in other institutional contexts, including CIC member institutions.
DISCUSSION OF THE FINDINGS
Results of this study show that while paper administration of course evaluations yields slightly more positive results, online administration of course evaluations is also an effective means of assessing instructor quality as long as students are given time in class for completion. Only when students are given ample in-class time to complete online surveys do response rates mirror those of paper administrations. Inviting students to take online course evaluations without providing in-class time to complete them is ineffective: response rates drop, on average, by 21.6 percent.
The study results also suggest that delivery mode may be related to overall course ratings, with modest evidence that paper administration trends toward more positive evaluations. The efficiency trade-off involved in choosing between modes, however, was not tested empirically and thus remains an open question: Is it worth the costs associated with moving to paper evaluations, which are often more cumbersome to administer and produce data that are more difficult to analyze and report, if doing so improves instructors' overall course ratings?
IMPLICATIONS FOR ACTION BY CAMPUS LEADERS
CIC campus leaders should feel confident in knowing that the efficiencies of online platforms for gathering course-evaluation data do not necessarily compromise student response rates or significantly influence the nature of the evaluation itself.
With effective and innovative pedagogies central to the branding CIC institutions use to distinguish themselves from their competitors, questions about instructor quality remain critical. By extension, practices for assessing instructor quality should also be important, not only in terms of what constitutes "quality" but in terms of the mechanisms used to gather related information. Results of this study show that online administration of course evaluations is an effective means of assessing instructor quality as long as students are given time in class for completion.
About the Authors
Edgar J. Treischl is a research assistant in the School of Business and Economics at Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg.
Tobias Wolbring is chair of empirical economic sociology in the Institute of Labor Market and Socioeconomics, School of Business and Economics, at FAU Erlangen-Nürnberg.
Literature Readers May Wish to Consult
Carini, R. M., J. C. Hayek, G. D. Kuh, J. M. Kennedy, and J. A. Ouimet. 2003. “College Student Responses to Web and Paper Surveys: Does Mode Matter?” Research in Higher Education 44(1): 1–19.
Cook, C., F. Heath, and R. L. Thompson. 2000. “A Meta-Analysis of Response Rates in Web- or Internet-Based Surveys.” Educational and Psychological Measurement 60(6): 821–836.
Vasey, C., and L. Carroll. 2016. “How Do We Evaluate Teaching? Findings from a Survey of Faculty Members.” Academe (May–June). Washington, DC: American Association of University Professors (AAUP).