Defying Expectations: Researchers Find Little Evidence of Cheating With Online, Unsupervised Exams

Researchers found that unsupervised online exams produced scores very similar to in-person, proctored exams, indicating that cheating was either rare or ineffective and supporting the validity and reliability of online assessments. Despite initial concerns, the consistency of results across disciplines and course levels has encouraged the continued use of online exams, though the professors remain cautious and employ strategies to further deter cheating.

When Iowa State University transitioned from in-person to remote learning mid-spring semester in 2020, psychology professor Jason Chan was concerned. Would unsupervised, online exams unleash rampant cheating?

His initial reaction flipped to surprise as test results rolled in. Individual student scores were slightly higher but consistent with their results from in-person, proctored exams. Students who had been scoring B’s before the COVID-19 lockdown were still earning B’s in the online, unsupervised testing environment. This pattern held true for students up and down the grading scale.

“The fact that the student rankings stayed mostly the same regardless of whether they were taking in-person or online exams indicated that cheating was either not prevalent or that it was ineffective at significantly boosting scores,” says Chan.

To find out whether this pattern held more broadly, Chan and Dahwi Ahn, a Ph.D. candidate in psychology, analyzed test score data from nearly 2,000 students across 18 classes during the spring 2020 semester. Their sample ranged from high-enrollment, lecture-style courses, like Introduction to Statistics, to advanced courses in engineering and veterinary medicine.
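
The full analyses are in the paper, but the core idea, that a student's rank should persist across exam formats if cheating is rare or ineffective, can be illustrated with a rank correlation. The sketch below uses simulated scores and scipy's spearmanr; the data and numbers are hypothetical, not the study's.

```python
# Minimal illustration: if cheating is rare or ineffective, a student's
# rank on proctored exams should persist on unproctored ones.
# Simulated data only; not the study's actual scores.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

ability = rng.normal(75, 10, size=100)             # latent skill per student
in_person = ability + rng.normal(0, 5, size=100)   # proctored exam scores
online = ability + 3 + rng.normal(0, 5, size=100)  # slightly higher, same ordering

rho, p = spearmanr(in_person, online)
print(f"Spearman rank correlation: {rho:.2f} (p = {p:.1e})")
# rho near 1 means B students stayed B students across formats.
```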

Across different academic disciplines, class sizes, course levels, and test styles (i.e., predominantly multiple choice or short answer), the researchers found the same pattern: unsupervised, online exams produced scores very similar to in-person, proctored exams, indicating that they can provide a valid and reliable assessment of student learning.

The research findings were recently published in the Proceedings of the National Academy of Sciences.

Students work on laptops above “Gene Pool,” a tile mosaic by Andrew Leicester inside the Molecular Biology Building at Iowa State University. Credit: Christopher Gannon/Iowa State University

“Before conducting this research, I had doubts about online and unproctored exams, and I was quite hesitant to use them if there was an option to have them in-person. But after seeing the data, I feel more confident and hope other instructors will, as well,” says Ahn.

Both researchers say they’ve continued to give exams online, even for in-person classes. Chan says the format provides more flexibility for students who have part-time jobs or travel for sports and extracurriculars. It also expands options for teaching remote classes. Ahn taught her first online course over the summer.

Why might cheating have had a minimal effect on test scores?

The researchers say students more likely to cheat might be underperforming in the class and anxious about failing. Perhaps they’ve skipped lectures, fallen behind with studying, or feel uncomfortable asking for help. Even with the option of searching Google during an unmonitored exam, students may struggle to find the correct answer if they don’t understand the content. In their paper, the researchers point to evidence from previous studies comparing test scores on open-book and closed-book exams, which suggests that access to materials does little to boost the scores of students who haven’t learned the content.

Another factor that may deter cheating is academic integrity or a sense of fairness, something many students value, says Chan. Those who have studied hard and take pride in their grades may be more inclined to protect their exam answers from students they view as freeloaders.

Still, the researchers say instructors should be aware of potential weak spots with unsupervised, online exams. For example, some platforms have the option of showing students the correct answer immediately after they select a multiple-choice option. This makes it much easier for students to share answers in a group text.

To counter this and other forms of cheating, instructors can:

  • Wait to release exam answers until the test window closes.
  • Use larger, randomized question banks (see the sketch after this list).
  • Add more options to multiple-choice questions and make the right choice less obvious.
  • Adjust grade cutoffs.
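
As one concrete illustration of the question-bank idea, the sketch below draws a per-student random exam from a larger bank. The bank layout, function name, and seeding scheme are hypothetical choices for illustration, not tooling from the study or any particular exam platform.

```python
# Illustrative sketch: assemble a per-student exam by sampling from a
# larger question bank, so students see different items in different orders.
# Bank contents and names are made up for illustration.
import random

QUESTION_BANK = {
    "topic_A": [f"A{i}" for i in range(1, 21)],  # 20 questions per topic
    "topic_B": [f"B{i}" for i in range(1, 21)],
    "topic_C": [f"C{i}" for i in range(1, 21)],
}

def build_exam(student_id: str, per_topic: int = 5) -> list[str]:
    """Sample questions per topic, seeded by student ID so a regrade
    can reproduce exactly the exam that student saw."""
    rng = random.Random(student_id)
    exam = []
    for questions in QUESTION_BANK.values():
        exam.extend(rng.sample(questions, per_topic))
    rng.shuffle(exam)  # randomize question order as well
    return exam

print(build_exam("student_42"))
```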

COVID-19 and ChatGPT

Chan and Ahn say the spring 2020 semester provided a unique opportunity to research the validity of online exams for student evaluations. However, there were some limitations. For example, it wasn’t clear what role stress and other COVID-19-related disruptions played for students, faculty, and teaching assistants. Perhaps instructors were more lenient with grading or gave longer windows of time to complete exams.

The researchers say another limitation was not knowing whether the 18 classes in the sample normally get easier or harder as the semester progresses. In an ideal experiment, half of the students would have taken online exams for the first half of the semester and in-person exams for the second half.

They attempted to account for these two concerns by looking at older test score data from a subset of the 18 classes during semesters when they were fully in-person. The researchers found the distribution of grades in each class was consistent with the spring 2020 semester and concluded that the materials covered in the first and second halves of the semester did not differ in their difficulty.
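
The article doesn’t say how that consistency was tested; one plausible way to run such a check, offered here purely as an assumption-laden sketch, is a two-sample Kolmogorov-Smirnov test on the two semesters’ grade distributions.

```python
# Hedged sketch: compare a class's grade distribution across two semesters.
# Grades are simulated; the authors' actual comparison method may differ.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
grades_prior = rng.normal(80, 8, size=150)  # fully in-person semester
grades_2020 = rng.normal(81, 8, size=150)   # spring 2020, half online

stat, p = ks_2samp(grades_prior, grades_2020)
print(f"KS statistic = {stat:.3f}, p = {p:.2f}")
# A large p-value indicates no detectable shift in the distribution,
# consistent with the authors' conclusion of comparable difficulty.
```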

At the time of data collection for this study, ChatGPT wasn’t available to students. However, the researchers acknowledge that AI writing tools are a game-changer in education and could make it much harder for instructors to evaluate their students. Understanding how instructors should approach online exams in the era of ChatGPT is something Ahn intends to research.

Reference: “Unproctored online exams provide meaningful assessment of student learning” by Jason C. K. Chan and Dahwi Ahn, 24 July 2023, Proceedings of the National Academy of Sciences.
DOI: 10.1073/pnas.2302020120

The study was supported by a National Science Foundation Science of Learning and Augmented Intelligence Grant.
