The faculty members in the chemistry department are confused.
Last semester, the campus teaching center held a series of workshops to familiarize faculty with the anti-plagiarism tool that the university adopted and linked into everyone's online course environment. The teaching center showed everyone who attended the training sessions 51 ways to help catch cheaters, based on work by two researchers at the University of Texas TeleCampus (McNabb and Anderson, 2009). But the 51 strategies are not why the chemistry faculty are confused.
The main focus of the teaching center's training was how to send student-created materials to the anti-plagiarism checking service, which would then report on how similar student work was to sources on the Internet, in library databases, and in a growing pool of student work from other institutions. It sounded great. Faculty members would see a color-coded report marked with a "similarity percentage." Higher percentages and "hotter" colors indicated potential copying.
Except, the chemistry professors who sent their students' lab reports and class essays through the anti-plagiarism service found that nearly all of their students' work came back "hot" red, with similarity percentages reaching into the 80-95% range. Yet, when the professors looked at the students' work, they didn't think it was so bad; in fact, the students were doing exactly what the assignments asked them to do: perform known experiments and report on their results. The chemistry professors asked their colleagues in other departments if they, too, were seeing lots of supposed cheating, but the English and history faculty members said no: a few cases, sure, but not everybody.
So why was the “cheater catcher” returning so many false hits? And why weren’t other departments seeing similar flag rates with the tool?
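To see why formulaic lab reports can light up such a report, it helps to look at a toy version of the matching itself. Commercial services rely on proprietary fingerprinting against enormous databases, so the Python sketch below (a plain word-shingle overlap, with invented example texts) is only an illustration of the mechanism, not any vendor's algorithm: when every student describes the same known procedure, shared phrases pile up and the similarity percentage climbs whether or not anyone copied.

```python
# Toy illustration of shingle-overlap similarity scoring.
# Commercial tools use proprietary fingerprinting and far larger
# databases; this sketch only shows why formulaic writing scores "hot."

def shingles(text: str, n: int = 3) -> set:
    """Return the set of n-word sequences (shingles) in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(submission: str, source: str, n: int = 3) -> float:
    """Percentage of the submission's shingles that also appear in the source."""
    sub = shingles(submission, n)
    if not sub:
        return 0.0
    return 100 * len(sub & shingles(source, n)) / len(sub)

# Two honest students describing the same well-known titration procedure.
report_a = ("We titrated 25 mL of HCl with 0.1 M NaOH until the "
            "phenolphthalein indicator turned faint pink and recorded the volume.")
report_b = ("We titrated 25 mL of HCl with 0.1 M NaOH until the "
            "indicator turned faint pink and recorded the final volume.")

print(f"{similarity(report_a, report_b):.0f}% similar")  # "hot" despite honest work
```

Two essays arguing different theses would share almost no such shingles; two correct lab reports on the same experiment can hardly avoid them.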
What Is Originality?
Instructors and administrators use many tactics to help ensure that the work their students submit is produced under rigorous conditions and is actually created or performed by the students themselves. Especially with the rise of online learning, academic integrity has spawned an industry, with services like Turnitin and SafeAssign (now part of Blackboard) offering to compare student submissions against large databases of previous students' work.
Many campuses, though, are finding that the big-database approach to academic integrity isn't a one-size-fits-all solution (Lancaster and Culwin, 2007). That's because definitions of "originality" vary across the units of a higher-education institution.
For example, liberal arts disciplines such as English and philosophy place a premium on originality of content, where learners incorporate research materials within an argument largely of their own devising. However, in the sciences, such as biology and chemistry, the goal is originality of design, with experimentation being the area where learners demonstrate originality. In disciplines like education and psychology, academic integrity is measured by originality of method, where learners rely on—and sometimes duplicate—previous inquiries in order to build on the body of knowledge in the discipline. These are purposely broad categories. There are probably many more definitions of “original work,” and that raises a challenge.
Most institutions' academic integrity policies state clearly what constitutes a violation. Copying directly from a source without attribution, having someone else take an exam or write a paper, and sharing answers to tests are all definite violations (McCabe et al., 2012).
We get into murkier waters when we try to define precisely what we mean by "the student's own original work," especially at institutions where the artifacts we ask students to create to demonstrate their skills are so varied. Blanket originality-reporting tools like Turnitin are useful for only part of the institution, because what we call "original work" varies from discipline to discipline.
Originality of Content
In the liberal arts, originality is usually defined most closely to the model adopted by those of us who create strategies for combating academic dishonesty: originality of content. Learners are expected to create arguments, essays, reports, and presentations that draw on research into what others have said or done. The learner's structure, logic, and ideas, however, are expected to be his or her own, and not simply a re-telling, re-ordering, or review of source materials. Originality-report tools are very good at catching learners who rely too heavily on source material, since the tools compare not just the exact words of submissions but also their structure against source materials.
Originality of Design
In the sciences, originality focuses more on the design of an experiment. Learners create experiments in order to test hypotheses or to confirm results obtained by others. They work with existing data or interpret data they have collected. The original part of the work lies in analyzing outcomes and predicting next steps. Originality-report tools often over-report academic integrity issues in design-based disciplines, since well-known experiments are frequently repeated and learners are expected to produce write-ups that closely resemble existing material.
Originality of Method
In the social sciences, methodological originality is most common: learners create new ways to test hypotheses. Rather than re-run existing experiments, learners experiment in new and original ways; they rely on, but seldom duplicate, previous inquiries in order to expand the existing body of knowledge in the field. They create logical ties to previous research and point to potential future directions for the field. Originality-report tools often over-report academic integrity issues in method-based disciplines as well, since most work created by learners in these areas contains a literature review or another lengthy set of foundational citations.
Conclusion
Institutions can take several specific actions to foster academic integrity across campus. First, create campus-wide definitions and decision processes, including an academic integrity policy with consistent definitions of, and penalties for, infractions. Such documents should reflect input from all campus stakeholders (e.g., faculty, students, and support areas).
Next, set up regular communication among faculty who teach the same students, and consider creating an academic integrity reporting and review board to handle cases formally.
There are also several course-level best practices. First, know your students, either through personal contact or through "introduce yourself" icebreaker exercises. Next, model correct and incorrect processes as part of the course, providing examples of both well-executed and poorly executed work.
Adopt Universal Design for Learning (UDL), which allows for multiple means of representing content and multiple methods for students to demonstrate skills. Use the assessment randomization, pooling, and rotation tools in the learning management system; well-pooled and well-randomized assessments make it very unlikely that any two students receive the same questions in the same order. Build a library of examples from former students (ask their permission to use their work as models for future learners) and share both the strong and the weak examples.
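Most learning management systems provide pooling and randomization natively, so no custom code is needed; still, a minimal sketch can make the idea concrete. The Python below uses hypothetical question pools and helper names (not any particular LMS's API) to show how drawing each student's assessment from topic pools and shuffling the order makes it unlikely that two students see the same questions in the same sequence.

```python
import random

# Hypothetical question pools; a real LMS stores these in its item banks.
POOLS = {
    "stoichiometry": ["q1", "q2", "q3", "q4"],
    "titration":     ["q5", "q6", "q7", "q8"],
    "lab_safety":    ["q9", "q10", "q11", "q12"],
}

def build_assessment(student_id: str, per_pool: int = 2) -> list:
    """Draw `per_pool` questions from each pool, then shuffle the overall order.

    Seeding the generator with the student ID keeps each student's draw
    stable across sessions while still differing between classmates.
    """
    rng = random.Random(student_id)
    questions = []
    for pool in POOLS.values():
        questions.extend(rng.sample(pool, per_pool))
    rng.shuffle(questions)
    return questions

print(build_assessment("student-001"))
print(build_assessment("student-002"))  # almost certainly a different set and order
```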
Finally, make sure that the expectations for originality in student work match the tools used to help students be academically honest (Sutherland-Smith, 2010). As these tactics demonstrate, it is far more effective to communicate expectations for ethical conduct clearly than it is to catch cheaters after the fact. Moving from "I caught you" to "I taught you" involves more than just sending student work to a database, but it's worth the effort to understand originality in terms of the expectations of faculty members across your campus.
References:
Lancaster, T., and Culwin, F. (2007). Preserving Academic Integrity—Fighting Against Nonoriginality Agencies. British Journal of Educational Technology, 38(1), 153-157.
McCabe, D. L., Butterfield, K. D., and Trevino, L. K. (2012). Promoting Academic Honesty: Students, Cheating, and a Culture of Integrity. Baltimore: The Johns Hopkins University Press.
McNabb, L., and Anderson, M. (2009). 51 Ways to Maintain Academic Integrity in an Online Course. Distance Education Report, 13(3), 8.
Sutherland-Smith, W. (2010). Retribution, Deterrence and Reform: The Dilemmas of Plagiarism Management in Universities. Journal of Higher Education Policy & Management, 32(1), 5-16.
Tom Tobin is a researcher, author, and speaker on issues related to quality in higher education. He has been designing and teaching online courses for 20 years, and he lectures and publishes on academic integrity, accessibility, copyright, and online teaching evaluation. His latest book, Evaluating Online Teaching, will be published by Jossey-Bass in spring 2015.