By Jason Lim
On June 10th and 11th of this year, I spent 20 hours working on a final. I worked from 10:00am Thursday to 7:48am Friday, breaking only occasionally and briefly. When I finished, I felt exhausted, defeated, and physically sick.
This test experience, a final for an upper-division math class, was not an anomaly. Rather, it was the unintended result of a math department policy. The COVID-19 pandemic moved UCLA’s entire 2020-21 school year online. In response, the UCLA math department created a policy requiring all instructors to allow a window of at least 24 hours for students to work on their exams. The intention of the policy was to equalize testing in the online environment by ensuring that all students, regardless of time zone, would have some convenient daytime window in which to take the test. Instructors also permitted students to use their notes and online resources during tests. The overall policy had the additional goal of reducing academic dishonesty. On one hand, a single 24-hour window prevents students who took the test earlier from communicating answers to those who took it later, as often happens when exams occur across multiple time windows. On the other hand, open-note policies simply permitted what was previously considered cheating rather than making it more difficult to cheat. Students were still prohibited from collaborating on tests, but there was practically no enforcement to discourage it. So while students were prevented from seeing exam questions early, a simpler form of cheating was allowed to run rampant.
In practice, the 24-hour exam policy was an almost immediate failure. In spring quarter 2020, when students were given use of their notes, ample time to check their answers, and easy ways to collaborate without being caught, average scores skyrocketed far beyond what standard grading curves could handle. I remember the average score on my first 24-hour math midterm being over 90%, while most math test averages lie in the 75-85% range. When everyone scores an A, you have to as well just to end up with an “average” grade. With this inflated score distribution, one small error was enough to drop a near-perfect score to below average, so I found it essential to thoroughly check my answers using my extra exam time. Moreover, math professors responded to the jump in class averages by making tests harder. To clarify, I have no problem with hard tests under normal circumstances: you get beat up for 50 minutes, pull out a B-, then get curved up to an A. Hard tests are a legitimate way to avoid a skewed, half-the-class-got-100 distribution that is harder to curve generously. But when given 24 hours with access to notes and internet resources, one could feasibly tackle every multi-part, proof-based question on a hard math test, scoring close to perfect with enough hours of work. Thus, averages remained high, but students spent drastically more time on exams. By spring quarter of 2021, when I worked on my final for 20 hours, it was not uncommon to hear of peers spending upwards of 15 hours on a final.
The 24-hour exam policy created unnecessary hardship for students in the name of equality; in fact, it created greater inequality in testing for the groups most impacted by the COVID-19 pandemic. Once students began to spend significant time on each exam, students in international time zones had to start their exams in the middle of the night to spend as much time on the test as their California-based peers. The 24-hour exam policy ended up hurting the very students it was designed to benefit. Furthermore, students with obligations beyond schoolwork, like a job or caring for family members, weren’t able to dedicate as much time to the exam as their peers. Lastly, students who used support services like the Center for Accessible Education (CAE) were denied their previous accommodations under the 24-hour exam policy, on the belief that every student would have more than enough time to work on the exam. In short, the students the policy disadvantaged were largely the same groups hit hardest by COVID-19. But this inequality is not inherent to online testing. In fact, many professors avoided the online testing question altogether by assigning projects or essays in place of traditional tests. Even when restructuring an assessment is difficult, UCLA could look to our existing in-person testing as a model for strengthening equity in online exams.
The in-person collegiate testing environment works because, more than anywhere else, equal testing conditions can be achieved: every student is in the same room at the same time taking the same test. Furthermore, support systems like CAE exist to further equalize the testing experience. The closest online substitute for traditional testing is a digital proctoring service like Respondus LockDown Browser, which times students while locking down the computer and recording the student to check for cheating. At first glance, Respondus is textbook Orwellian, but it’s important to note its similarities to standard test environments: no notes, time limits, and monitoring by proctors. There are many areas in which Respondus could be improved, and there are alternative online proctoring systems that trade some security for greater ease of use. But for the sake of argument, Respondus was one option readily available to UCLA that could have mimicked our in-person testing procedures during the online year.
The issue with an online testing solution like Respondus is inequality in students’ access to technology at home. In timed, high-stakes testing environments, technical difficulties are disastrous, making the tech disparity more problematic than it is in standard teaching tools like CCLE and Zoom. Furthermore, online proctoring tools are difficult to set up and have greater hardware requirements: a stable internet connection, a working webcam and microphone, and administrator access to the testing computer. While the Bruin Tech Award provided limited funding for students to purchase technology for their home learning environments, there was, in my experience, a lack of guides and assistance on staying connected to campus digitally. We needed UCLA IT services to provide purchasing guides and setup assistance for instructors and students using virtual learning tools. These extensive requirements and the lack of guidance placed students without easy access to tech resources at a serious disadvantage. Equalizing technology access deserves the same effort we put into creating equal testing environments in person. With that technological support in place, UCLA could consistently replicate a traditional testing environment online.
I sincerely hope that another campus closure can be avoided for a long time to come. But should UCLA ever move online again, the math department should know that its supposedly helpful exam policy created unnecessary stress for many students during an already-challenging time without deterring cheating or bolstering equity. Considering the secondary effects of a 24-hour exam policy, UCLA would have reduced student distress and promoted equal testing by replicating the in-person testing experience with proctored online exams. While Respondus and similar tools are surrounded by negative student opinion, a supportive UCLA could have made such a testing tool feasible and accessible. Among the undesirable online assessment options, Respondus presents the best choice for fairness and student wellbeing: one in which a student undergoes standard test-related stress for a reasonably short period, instead of anxiety, exhaustion, and exam-related stress for an entire day.