Keeping Students Engaged: How to Rethink Your Assessments Amidst the Shift to Online Learning
Keeping students engaged in course content is a challenge for all faculty, whether you're a seasoned online teaching pro or a newcomer to this space. Perhaps
I admit that I’m an assessment geek, nerd, or whatever name you’d like to use. I pore over evaluations, rubrics, and test scores to see what kinds of actionable insights I can glean from them. I’ve just always assumed that it’s part of my job as a teacher to do my very best to make sure students are learning what we need them to learn.
Engagement in a continuous, systematic, and well-documented student learning assessment process has been gaining importance throughout higher education. Indeed, implementation of such a process is typically a requirement for obtaining and maintaining accreditation. Because faculty need to embrace learning assessment in order for it to be successful, any misconceptions about the nature of assessment need to be dispelled. One way to accomplish that is to "rebrand" the entire process, that is, to change how it is perceived.
Classroom Assessment Techniques, or CATs, are simple ways to evaluate students’ understanding of key concepts before they get to the weekly, unit, or other summative-type assessment (Angelo & Cross, 1993). CATs were first made popular in the face-to-face teaching environment by Angelo and Cross as a way to allow teachers to better understand what their students were learning and how improvements might be made in real time during the course of instruction. But the same principle can apply to online teaching as well.
The current state of student assessment in the classroom is mediocre, vague, and reprehensibly flawed. In much of higher education, we educators stake a moral high ground on positivistic academics. Case in point: assessment. We claim that our assessments within the classroom are objective, not subjective. After all, you wouldn't stand in front of class and say that your grading is subjective and that students should just deal with it, right? Can we honestly examine a written paper or virtually any other assessment in our courses and claim that we grade completely devoid of bias? Let's put this idea to the test. Take one of your assessments previously completed by a student. Grade the assignment using your rubric. Afterwards, have another educator in the same discipline grade the assignment using your exact rubric. Do your colleague's grade and yours match? How far off are the two grades? If your assessment is truly objective, the grades should be exact. Not close, but exact. Anything else reduces the reliability of your assessment.
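If you want to make that comparison concrete, a minimal sketch like the one below tallies how far two graders' scores diverge on the same rubric. The criterion names, point scale, and scores are all hypothetical illustrations, not from any real assignment; the point is simply that anything short of exact agreement on every criterion is evidence of subjectivity.

```python
# Hypothetical reliability check: two educators score the same assignment
# with the same rubric, and we quantify how far apart their grades are.
# Criterion names and scores are invented for illustration.

RUBRIC_CRITERIA = ["thesis", "evidence", "organization", "mechanics"]

# Hypothetical scores (0-5 per criterion) from two graders using one rubric.
grader_a = {"thesis": 4, "evidence": 3, "organization": 5, "mechanics": 4}
grader_b = {"thesis": 3, "evidence": 3, "organization": 4, "mechanics": 4}

def compare_grades(a: dict, b: dict) -> None:
    # Count criteria where both graders gave the identical score.
    exact_matches = sum(a[c] == b[c] for c in RUBRIC_CRITERIA)
    # Sum the absolute point gap across all criteria.
    total_gap = sum(abs(a[c] - b[c]) for c in RUBRIC_CRITERIA)
    print(f"Criteria in exact agreement: {exact_matches}/{len(RUBRIC_CRITERIA)}")
    print(f"Total points of disagreement: {total_gap}")
    print(f"Grader A total: {sum(a.values())}, Grader B total: {sum(b.values())}")

compare_grades(grader_a, grader_b)
```

Run over the sample scores, the sketch reports two of four criteria in exact agreement and a two-point total gap, which by the "exact, not close" standard above already signals a reliability problem.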
Flipped learning environments offer unique opportunities for student learning, as well as some unique challenges. By moving direct instruction from the class group space to the individual students' learning spaces, time and space are freed up for the class as a learning community to explore the most difficult concepts of the course. Likewise, because students are individually responsible for learning the basics of new material, they gain regular experience with employing self-regulated learning strategies, experience they would not get in an unflipped environment.
Measuring student learning is a top priority for ensuring the best possible student outcomes. Through the years, instructors have implemented new and creative strategies to assess student learning in both traditional and online higher education classrooms. Assessments can range from formative assessments, which monitor student learning with quick, efficient, and frequent checks on learning, to summative assessments, which evaluate student learning with "high stakes" exams, projects, and papers at the end of a unit or term.
I’m “reflecting” a lot these days. My tenure review is a few months away, and it’s time for me to prove (in one fell swoop) that my students are learning. The complexity of this testimonial overwhelms me because in the context of the classroom experience, there are multiple sources of data and no clear-cut formula for truth.
Sometimes feedback leads to better performance, but not all the time and not as often as teachers would like, given the time and effort they devote to providing students feedback. It’s easy to blame students who seem interested only in the grade—do they even read the feedback? Most report that they do, but even those who pay attention to it don’t seem able to act on it—they make the same errors in subsequent assignments. Why is that?
Sometimes, in informal conversations with colleagues, I hear a statement like this: "Yeah, not a great semester, I doled out a lot of C's." I wonder, did this professor create learning goals that were unattainable for most of the class, or did this professor lack the skills to facilitate learning? I present this provocative lead-in as an invitation to reflect upon our presuppositions regarding grading.