Student evaluations of teaching (SETs) often suffer from low response rates, especially when the evaluation is conducted online. The authors of this article argue that this may distort results, limiting the interpretation of course evaluations as a measure of teaching quality in any given course and rendering comparisons across courses, teachers, departments, and institutions problematic when the response rate varies. This problem has not yet been sufficiently considered in the literature, despite SET scores commonly being used by departments to award faculty teaching prizes and make promotion decisions, by students in course selection decisions, and for institutional rankings and accountability.
The authors conducted a study of SETs at a large European university. They found that evaluations somewhat misrepresent student opinion about teaching quality: in particular, the study shows a positive selection bias on average, indicating that the true evaluation score is lower than the observed one. Furthermore, the SET-based ranking of courses is inaccurate because the response rate varies widely across courses. As an overall implication of these findings, the authors conclude that institutions should devote serious effort to increasing SET response rates. They offer some strategies that universities could adopt to improve the quality of SETs.
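To illustrate the mechanism behind such a positive selection bias, here is a minimal simulation sketch (not taken from the article; the rating distribution and response model are illustrative assumptions). If students who liked a course are more likely to fill in the evaluation, the mean of the observed responses exceeds the true mean rating across all enrolled students, even though every individual response is honest.

```python
import random

random.seed(42)

# Hypothetical true ratings on a 1-5 scale for all enrolled students.
true_ratings = [random.choice([1, 2, 3, 4, 5]) for _ in range(10_000)]

# Assumed response model: satisfied students respond more often.
# A student with true rating r responds with probability 0.1 * r,
# so a "5" is five times as likely to respond as a "1".
observed = [r for r in true_ratings if random.random() < 0.1 * r]

true_mean = sum(true_ratings) / len(true_ratings)
observed_mean = sum(observed) / len(observed)

print(f"true mean rating:     {true_mean:.2f}")      # about 3.0
print(f"observed mean rating: {observed_mean:.2f}")  # about 3.7, inflated upward
print(f"response rate:        {len(observed) / len(true_ratings):.0%}")
```

Under these assumed numbers the observed mean lands near 3.7 against a true mean of 3.0, with only about 30% of students responding, which mirrors the article's point: the lower and more selective the response, the less the published SET score reflects the opinion of the full class.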