Online Testing: Comparison of Online and Classroom Exams in an Upper-Level Psychology Course

Sara A. Brallier1, Kerry A. Schwanz2, Linda J. Palm2, Laura N. Irwin2

1Department of Sociology, Coastal Carolina University, Conway, South Carolina, USA

2Department of Psychology, Coastal Carolina University, Conway, South Carolina, USA

Abstract

The purpose of this study was to compare student performance on unproctored, open-book online exams with performance on traditional proctored paper-and-pencil exams. Data were collected over 12 semesters from students enrolled in a hybrid upper-level psychology course at a southeastern liberal arts university (N = 274). In each semester the course was taught by the same instructor. During six semesters, students completed the exams online and were allowed to use their textbooks and notes; during the other six semesters, students took the exams in the classroom, with the instructor present and without access to their books and notes. Students scored significantly higher on the online exams than students who took the paper-and-pencil exams, but they scored significantly lower on the other course assignments. Additionally, students who took the online exams earned more As and Bs in the course than students who took the classroom exams. Implications for effectively incorporating online testing into a hybrid course are discussed.

1. Introduction

Over one-third of two- and four-year colleges and universities offer hybrid courses [16]. Hybrid courses blend the traditional face-to-face classroom method of instruction with online formats [6, 15]. Students enrolled in hybrid courses typically spend less time in the classroom in exchange for more time spent outside of the classroom engaged in academic pursuits such as online discussions, activities, assignments, or exams [6]. Hybrid courses are viewed as a way to create more engaging and student-centered learning environments [15].

One component of hybrid courses is online testing. College instructors are increasingly incorporating online testing into their face-to-face courses [13, 18, 20]. Faculty members cite the numerous advantages of online exams as one reason for this increase: online exams can be administered more efficiently, are less costly than paper-and-pencil exams, allow for automatic recording of grades, and offer more scheduling flexibility for both students and faculty [2, 7, 10, 11, 20]. Additionally, the immediate scoring of online exams allows students to receive rapid feedback on their performance, which is beneficial to learning [13].

2. Literature Review

Findings have been mixed regarding the equivalency of exams administered online and exams given face-to-face in the classroom, and regarding the importance of proctoring in the administration of online exams. Alexander, Bartlett, Truell, and Ouwenga [1] found no significant difference in student performance between proctored online exams and proctored paper-and-pencil exams in a computer technology course. Likewise, Hollister and Berenson [12] found no significant difference between proctored and unproctored online exams in an Introduction to the Computer in Business course. However, proctored exams are not an option for educational institutions with limited resources and/or widely dispersed students [13]. As a result, students often complete online exams outside of the classroom without supervision and therefore with access to their course textbook and notes. A primary concern regarding unsupervised online exams is that exam results will be inflated if students are allowed to consult course materials, each other, or other resources [5, 11, 17, 19]. There has not been a great deal of research examining the effects of allowing students to use their notes or other resources during unproctored online exams compared to proctored in-class exams where students cannot access notes, books, or other resources. Most studies of online testing either do not consider the effects of outside materials, because they compare proctored online exams to proctored paper-and-pencil exams (e.g., Alexander, Bartlett, Truell, & Ouwenga [1]), or they compare the exam performance of students in online courses to that of students in traditional courses. Comparisons of distance courses to traditional lecture-based courses capture more than the effects of testing; they also reflect the effects of different modes of instruction and differences between students who complete online courses and students in traditional courses [4].

To better understand the differences between unproctored online exams and proctored in-class exams, Brallier and Palm [3] compared performance on unproctored online exams to performance on proctored classroom exams in a lecture-based introductory sociology course. Students scored an average of four percent higher on the unproctored online exams than on the proctored classroom exams. A comparison of exam performance as a function of exam mode was also conducted for students taking introductory sociology as a distance course; those students' scores were almost ten percent higher on unproctored online exams than on proctored classroom exams. In both the lecture-based and distance sociology courses, the difference in exam performance as a function of exam mode was statistically significant. Carstairs and Myors [5] also reported a significant difference in performance on proctored and unproctored exams in a lecture-based upper-level psychology course, with students scoring an average of five points higher on an unproctored exam than on a proctored paper-and-pencil exam. Schultz, Schultz, and Gallogly [19] had comparable findings; they examined the performance of students in distance marketing, management, and accounting classes on proctored paper-and-pencil exams and unproctored online exams and found that students scored three to four points higher on the unproctored exams.

Other researchers have found no significant difference between proctored classroom exams and unproctored online exams. Frein [9] compared student performance on proctored paper-and-pencil in-class exams, proctored online exams, and unproctored online exams in a lecture-based introductory psychology course; student performance did not differ significantly across the exam formats. Escudier, Newton, Cox, Reynolds, and Odell [8] also found that students in an undergraduate dental program in the United Kingdom performed similarly on online and paper-and-pencil exams.

Given these mixed findings regarding the equivalency of unproctored online exams and proctored in-class exams, the purpose of this study was to determine whether exam performance in an upper-level psychology course would differ significantly between students who took the exams online, with open book and open notes allowed, and students who took the exams in the classroom using the traditional proctored paper-and-pencil format, with no access to books or notes. The researchers hypothesized that students who took the online exams would score significantly higher than students who took the classroom exams. The researchers also examined performance on other course assignments and course grade as a function of exam mode (unproctored online exams versus proctored classroom exams).

3. Method

3.1. Course Design

This study was conducted within an upper-level psychology course, Principles of Psychological Testing (PSYC 483), at a four-year public university in the southeastern United States. The course is one of two courses that fulfill a major requirement for psychology majors and is generally filled to capacity, primarily with psychology students. The course consists of lecture and lab components, with students receiving three hours of lecture instruction and three hours of lab instruction each week. The course provides a survey of the psychometric process; topics covered include the principles of measurement and test score interpretation, the variety of group and individual tests available to psychologists, and the criteria for selecting and evaluating tests. The course objectives were as follows: (1) help students gain an understanding of the principles of testing, measurement, and assessment; (2) provide students with experience administering, scoring, and interpreting tests used by psychologists; (3) guide students through the process of integrating assessment information into a written psychological report; (4) familiarize students with the criteria for selecting and evaluating assessment tools; (5) help students understand the legal and ethical standards related to testing and assessment; and (6) identify and explain ways in which assessment tools are used in the applied fields of school, clinical, counseling, forensic, and neuropsychology. Student mastery of these goals was assessed through four unit exams and a series of assignments.

3.2. Participants

The participants in this study were 274 students who completed the Principles of Psychological Testing course in one of 12 semesters in which the course was taught by the same instructor. The sample consisted of 233 women and 41 men. The mean age of the students was 23.63 (SD = 5.02). The racial distribution consisted of 233 Caucasian students and 41 Non-Caucasian students. The class rank distribution contained 236 seniors and 38 juniors. One hundred thirty-six students were assigned to the classroom exam mode (proctored classroom exams without the use of books or notes) and 138 students were assigned to the online exam mode (unproctored online exams with open book and open notes allowed).

3.3. Materials

Course materials were kept consistent for all sections of the class and labs over the 12 semesters of the study. In each class, the same book, lab assignments, and exams were used. Every semester, all students were given access to copies of class handouts and PowerPoint presentations through the Blackboard course delivery system.

The director of the university's Department of Institutional Research and Assessment provided the researchers with demographic and academic information for the students enrolled in the Principles of Psychological Testing course. Students' age, class rank, gender, race, and cumulative GPA were provided in accordance with the university's privacy policies.

3.4. Procedure

Data for this study were collected across a total of 12 semesters. In each semester, the lecture and lab components of the Principles of Psychological Testing course were taught by the same instructor using a traditional on-ground format that combined lecture, discussion, and lab activities. Four exams were administered every semester. The exam content remained the same over the 12 semesters of the study, and each exam contained multiple-choice questions worth a total of 100 points. During six of the semesters, students were administered proctored course exams with the instructor present and were not allowed to use the textbook or notes (classroom exam mode). These exams were technically not timed, but all students completed them during the 75 minutes allotted for the class period. During the other six semesters, students completed unproctored online course exams and were allowed to use their textbooks and notes (online exam mode). The online exams were timed: students had 75 minutes to complete each exam before the course delivery system forced submission. Students could complete the exams using a computer and in a location of their choosing. They were told that they were allowed to use their textbook and/or course notes, but that because of the 75-minute time limit they would not have much time to look up specific answers, so studying the material before taking the exam remained very important. Additionally, students were told that they were not allowed to help one another with the exams; doing so would be considered cheating and would result in an "F" in the course. These instructions were communicated to the class verbally prior to each exam and were also written in the course syllabus.

Assignment of students to each of the exam mode conditions (classroom or online) was not random; the particular semester students enrolled in the course determined whether they took the exams in the classroom or online. The course grading system was standardized across the 12 semesters of the study. In addition to the exams, students were graded on the following five course assignments: a composite profile report, a written test evaluation, a test evaluation presentation, a research proposal presentation, and class participation. A total of 600 points could be earned in the course, with the exams contributing 400 points (67%) and the course assignments contributing 200 points (33%). Final grades were based on the percent of points each student earned in the course and were assigned as follows: 90-100% = A, 80-89% = B, 70-79% = C, 60-69% = D, below 60% = F.
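
To make the grading scheme concrete, the sketch below encodes it as a small function. This is a hypothetical helper for illustration only; the point weights and letter-grade cutoffs are taken from the description above.

```python
# Hypothetical illustration of the grading scheme described above:
# 400 exam points + 200 assignment points = 600 total; letter grade by percent.
def course_grade(exam_points: float, assignment_points: float) -> str:
    percent = (exam_points + assignment_points) / 600 * 100
    if percent >= 90:
        return "A"
    if percent >= 80:
        return "B"
    if percent >= 70:
        return "C"
    if percent >= 60:
        return "D"
    return "F"

print(course_grade(exam_points=340, assignment_points=185))  # 87.5% -> "B"
```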

4. Results

Preliminary analyses were conducted to assess whether students who completed online exams (n = 138) and students who completed classroom exams (n = 136) were comparable with respect to their academic and demographic characteristics. Using cumulative grade point average (GPA) as a measure of academic achievement, there was no significant difference in mean GPA between online exam-takers (M = 3.14, SD = .53) and classroom exam-takers (M = 3.12, SD = .62), t(272) = .30, p = .76, two-tailed. The two groups also did not differ significantly in age, t(272) = .19, p = .85, two-tailed; the mean age of students in the online exam mode was 23.69 (SD = 5.55) and the mean age of students in the classroom exam mode was 23.57 (SD = 4.43). Additionally, as shown in Table 1, chi-square tests for independence revealed no significant relationship between exam mode and the demographic variables of gender, class rank, and race.
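
As a check on these preliminary comparisons, the reported GPA t test can be reproduced from the summary statistics above. The sketch below uses SciPy's `ttest_ind_from_stats`, which computes an independent-samples t test from means, standard deviations, and group sizes; it is an illustrative verification, not the authors' analysis code.

```python
# Illustrative only: reproduce the reported GPA comparison (t(272) = .30,
# p = .76) from the summary statistics in the text, using a pooled-variance
# independent-samples t test.
from scipy.stats import ttest_ind_from_stats

result = ttest_ind_from_stats(
    mean1=3.14, std1=0.53, nobs1=138,  # online exam mode
    mean2=3.12, std2=0.62, nobs2=136,  # classroom exam mode
    equal_var=True,                    # pooled variance, df = 272
)
print(result.statistic, result.pvalue)  # ~0.29, ~.77 (rounding differences)
```

Substituting the age means and standard deviations into the same call reproduces the reported t(272) = .19 in the same way.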

Table 1. Relationship between Exam Mode and the Demographic Variables of Gender, Class Rank, and Race

An exam score was calculated for each student by computing the total number of points earned on the four 100-point course exams and dividing the total by the 400 possible points. An independent t test revealed a significant effect of exam mode on exam scores, t(272) = 6.85, p < .001, two-tailed. Students who took online exams scored higher (M = 87.18, SD = 6.28) than students who took classroom exams (M = 81.83, SD = 6.63). The size of the effect of exam mode was medium, with exam mode accounting for 15% of the variance in exam scores.

The performance of students on the course assignments was also examined as a function of exam mode. An assignment score was calculated for each student by dividing the total points earned on the five assignments by the 200 possible points. An independent t test indicated that students who took classroom exams scored significantly higher on assignments (M = 94.46, SD = 5.80) than students who took online exams (M = 92.27, SD = 6.03), t(272) = 3.06, p = .002, two-tailed. The size of the effect was small, with exam mode accounting for 3% of the variance in assignment scores.
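
Both variance-explained figures follow from the standard eta-squared formula for an independent-samples t test, eta^2 = t^2 / (t^2 + df). A minimal check using the t values and df = 272 reported above (the helper function is illustrative, not from the article):

```python
# Illustrative check of the reported effect sizes (eta squared) from the
# t statistics and degrees of freedom given in the text.
def eta_squared(t: float, df: int) -> float:
    """Eta squared for an independent-samples t test: t^2 / (t^2 + df)."""
    return t**2 / (t**2 + df)

print(round(eta_squared(6.85, 272), 2))  # 0.15 -> 15% of exam-score variance
print(round(eta_squared(3.06, 272), 2))  # 0.03 -> 3% of assignment-score variance
```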

A final analysis was conducted to examine the relationship between exam mode and the course grade distribution. The grade distribution for the two exam modes is shown in Table 2. A chi-square test for independence revealed a significant relationship between exam mode and grades, with a higher percentage of students in the online exam mode earning As and Bs than students in the classroom exam mode, χ²(3, N = 274) = 9.27, p = .03.

Table 2. Course Grade Distribution as a Function of Exam Mode

5. Discussion

The findings, not surprisingly, indicated that when students were allowed access to their books and notes they performed significantly better on exams than when they took exams without access to these resources. While the difference was statistically significant, its size was moderate: students scored only about 5% higher on exams when they had access to books and notes. The researchers believe the difference was moderate because the exams were timed; students had 75 minutes to answer 50 multiple-choice questions. Additionally, the questions were conceptual in nature, requiring students to apply prior knowledge and have a deep understanding of the material in order to answer them correctly.

Students who took the exams in the classroom performed better on the other course assignments than students who took online exams. It is possible that some of these students put more time and effort into the assignments to compensate for lower exam grades, especially if they perceived themselves as weak at studying or test-taking. Alternatively, some of the students who took online exams may have put less effort into the other course assignments because they were already obtaining high exam scores. Again, although the difference in assignment scores between the two exam mode groups was significant, it was quite small: only three percent of the variance in course assignment grades could be accounted for by exam mode.

Finally, the findings suggest that when online exams account for a significant portion of the course grade, there is the potential for course grades to be slightly inflated. A higher percentage of students who completed the online exams earned As and Bs compared to students who took exams in the classroom. A considerable body of literature suggests that unproctored online exams should be viewed more as a learning activity and be used in conjunction with other assessments of student learning such as papers, portfolios, and assignments [13, 14]. The next time the course is taught, the instructor will weight the other assessment measures (i.e., papers and other written assignments) more heavily to counteract the effect of the higher scores on the online exams.

A limitation of the study was that the participants were not randomly assigned to the exam mode. In the future, if the instructor could offer two sections of the same course within a semester, then the exam modes could be randomly assigned to the course sections. Additionally, the sample consisted of mostly senior-level psychology students. Further research is needed to determine whether online exams lead to the same type of grade inflation in courses with students from different majors and class standings.

References

[1] Alexander, M. W., Bartlett, J. E., Truell, A. D., and Ouwenga, K. "Testing in a computer technology course: An investigation of equivalency in performance between online and paper and pencil methods." Journal of Career and Technical Education, 18. 69-80. Fall. 2001.

[2] Bonham, S. "Reliability, compliance and security of web-based pre/post testing." In Proceedings of 2006 Physics Education Research Conference, American Institute of Physics, 133-136. January. 2007.

[3] Brallier, S. A., and Palm, L. J. "Proctored and unproctored test performance in traditional and distance courses." International Journal of Teaching and Learning in Higher Education, forthcoming, 2015.

[4] Campbell, M., Floyd, J., and Sheridan, J. B. "Assessment of student performance and attitudes for courses taught online versus onsite." The Journal of Applied Business Research, 18 (2). 45-51. March. 2002.

[5] Carstairs, J., and Myors, B. "Internet testing: A natural experiment reveals test score inflation on a high-stakes, unproctored cognitive test." Computers in Human Behavior, 25. 738-742. May. 2009.

[6] Caufield, J. How to Design and Teach a Hybrid Course. Stylus Publishing, LLC, Sterling, Virginia, 2011.

[7] DeSouza, E., and Fleming, M. "A comparison of in-class and online quizzes on student exam performance." Journal of Computing in Higher Education, 14 (2). 121-134. Spring. 2003.

[8] Escudier, M., Newton, T., Cox, M., Reynolds, P., and Odell, E. "University students' attainment and perceptions of computer delivered assessment; a comparison between computer-based and traditional tests in a 'high-stakes' examination." Journal of Computer Assisted Learning, 27 (5). 440-447. October. 2011.

[9] Frein, S. "Comparing in-class and out-of-class computer-based tests to traditional paper-and-pencil tests in introductory psychology courses." Teaching of Psychology, 38 (4). 282-287. October. 2011.

[10] Graham, J. M., Mogel, L. A., Brallier, S. A., and Palm, L. J. "Do you online?: The advantages and disadvantages of online education." Bridges, 2. 27-36. Winter. 2008.

[11] Harmon, O. R., and Lambrinos, J. "Are online exams an invitation to cheat?" Journal of Economic Education, 39 (2). 116-125. Spring. 2008.

[12] Hollister, K. K., and Berenson, M. L. "Proctored versus unproctored online exams: Studying the impact of exam environment on student performance." Decision Sciences Journal of Innovative Education, 7 (1). 271-294. March. 2009.

[13] Khare, A., and Lam, H. "Assessing student achievement and progress with online examinations: Some pedagogical and technical issues." International Journal on E-Learning, 7. 383-402. July. 2008.

[14] Kinney, N. "A guide to design and testing in online psychology courses." Psychology Learning and Teaching, 1 (1). 16-20. 2001.

[15] Marks, D. "The hybrid course: Leaning into the 21st century." Journal of Technology Integration in the Classroom, 5 (1). 35-40. Spring. 2013.

[16] Parsad, B., and Lewis, L. Distance Education at Postsecondary Institutions: 2006-07. First Look. NCES 2009-044. National Center for Education Statistics, Washington, D.C., 2008.

[17] Rovai, A. P. "Online and traditional assessments: What is the difference?" The Internet and Higher Education, 3 (3). 141-151. 3rd Quarter. 2000.

[18] Rowe, N. C. "Cheating in online student assessment: Beyond plagiarism." Online Journal of Distance Learning Administration, 7 (2). 1-10. Summer. 2004.

[19] Schultz, M. C., Schultz, J. T., and Gallogly, J. "The management of testing in distance learning environments." Journal of College Teaching & Learning, 4 (9). 19-26. September. 2007.

[20] Stowell, J. D., and Bennett, D. "Effects of online testing on student exam performance and test anxiety." Journal of Educational Computing Research, 42 (2). 161-171. March. 2010.