At the end of each semester, we give students the opportunity to provide feedback about their courses and instructors. Such feedback can be valuable for improving future sections of the course. However, a growing body of research shows that student evaluation of teaching is not a reliable indicator of teaching effectiveness.
NMU-AAUP President Brent Graves agrees. “A great deal of research is now available showing that student ratings do not correlate with student learning and are biased on the basis of many factors, including race, gender, age, personality, attractiveness, topic, and even how a professor dresses.” It has also been argued that over-reliance on student feedback is a primary cause of grade inflation (https://www.ncbi.nlm.nih.gov/pubmed/27899725) and of student disengagement from the educational process (https://www.chronicle.com/article/Students-Evaluating-Teachers/245169).
The national AAUP organization recommends that student feedback not be the basis of any personnel decisions (https://www.aaup.org/article/student-evaluations-teaching-are-not-valid#.XAfuGuJOlEY). This perspective on student evaluations of teaching is gaining traction at many universities. For example, in a recent article in Tomorrow’s Professor, Ginger Clark (Assistant Vice Provost for Academic and Faculty Affairs and Director of the University of Southern California Center for Excellence in Teaching) explained that student surveys will no longer be used directly in faculty performance reviews; instead, USC will move toward a peer-review system based on classroom observation and review of course materials, design, and assignments. Student surveys will continue to provide important feedback to help faculty adjust their teaching practices.
“In the past, some departments at NMU relied solely on student feedback to evaluate teaching effectiveness, probably because handing out questionnaires and tabulating bubble sheets is the least time-consuming way to do this,” said Graves. To protect faculty from the unfair impact of unreliable indicators of teaching effectiveness, the representatives of the NMU-AAUP have negotiated contract language that puts student feedback about teaching into perspective. In the 2015-2020 contract, the term “student evaluations” was changed to “student ratings” throughout the contract. “We argued that student feedback is not sufficient to evaluate course content nor pedagogy. Student perceptions may provide useful information, but they are not an evaluation of teaching,” said Graves.
More substantively, section 5.4.1.5.c requires faculty to include in their annual/5-year evaluations at least three types of information about their teaching: 1) colleague evaluation information (i.e., what your peers think), 2) student ratings (i.e., what your students think), and 3) an appraisal of student learning (i.e., what your students are learning). Sections 5.4.1.6 through 5.4.1.8 require that evaluation committees and administrators use all of this information to evaluate teaching. Especially important is section 5.4.1.2.1.1, which explicitly states, “Evaluations of teaching effectiveness shall not rely solely upon student ratings.”
To use currently popular terminology, the AAUP argues that student ratings should provide formative assessment of teaching rather than summative assessment. Given this research, the NMU-AAUP will attempt to move closer to the position of the national organization and of other universities on this issue in our next contract negotiation.