With an increasing number of rating systems now online, the question of who completes those surveys (since not all students do) has important implications. Are students who are dissatisfied with the course and the instruction they received more likely to fill out the online surveys? If so, that could bias the results downward. But if students who are satisfied with the course are more likely to evaluate it, that could introduce bias in the opposite direction.
This question was explored in a study that involved a 4,000-student population and 848 undergraduate courses. The students had a two-week window during which they could electronically submit their anonymous course evaluations, one for each course in which they were registered. During that two-week period, they received three email reminders.
The collected data enabled the faculty researcher to identify several characteristics that differentiated students who completed the course evaluations from those who did not. First-term beginning students responded at higher rates, as did students evaluating a course required for their major. The author suggests that new students may be more enthusiastic about participating in university life, while more seasoned students may believe the evaluations are not taken seriously by the instructor or institution and are therefore less motivated to complete them. It makes sense that students would consider courses in their major more important than other courses. Interestingly, course size did not reliably predict who would complete the surveys.
The data also revealed that men, students with light course loads, and students with low cumulative GPAs and low course grades were less likely to evaluate the course. Why are women more likely to evaluate their courses? The researcher calls this result “puzzling” (p. 22). The course load variable “appears to be a measure of student attachment to the university” (p. 23): those taking fewer courses tend to be less committed to the institution.
Certainly the most interesting finding is that students doing poorly in a course are less likely to complete its evaluation: “A matched pairs test that completely controls for class- and instructor-invariant student characteristics confirms the finding that students who do better in the course are more likely to participate in SET [student evaluation of teaching]” (p. 28). Add to this another finding: students presumed to hold strong opinions about the course (as indicated by how quickly within the two-week window they submitted their evaluations) had, on average, positive views of the course and instructor. The author concludes that, based on these data, online course evaluations do not attract disproportionately more dissatisfied students. In fact, they do the opposite, giving credence to the contention that the results may be biased in favor of the instructor and course rather than against them.
Reference: Kherfi, S. (2011). Whose opinion is it anyway? Determinants of participation in student evaluation of teaching. Journal of Economic Education, 42(1), 19–30.
Reprinted from “Who Participates in End-of-Course Ratings?” The Teaching Professor, 25.9 (2011): 4–5.