
Implementation of a Technology-supported Three-stage Classroom Feedback System for Promotion of Self-regulation and Assessment of Student and Teacher Performance

Volker Patzel

Department of Microbiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore


Classroom feedback is essential to facilitate self-regulation and assessment of student and teacher performance. Here, the implementation of a technology (clicker)-supported classroom feedback system is described that provides students with three different levels of feedback: first, direct computerized quantitative feedback; second, dialogical external feedback from peers; and third, class-wide qualitative external feedback from the teacher. This easy-to-set-up, three-stage classroom feedback system, which enables the application of several principles of good feedback practice, triggered measurable learner and teacher self-regulation, thereby improving the quality of learning and teaching.


Cite this article:

  • Patzel, Volker. "Implementation of a Technology-supported Three-stage Classroom Feedback System for Promotion of Self-regulation and Assessment of Student and Teacher Performance." American Journal of Educational Research 3.4 (2015): 446-449.

1. Introduction

1.1. Motivation/Problem

Experience with teaching medium-sized and large classes often reveals insufficient classroom feedback from students on their understanding of the topics taught and on whether the learning outcomes have been achieved. As a result, the potential for assessment, including self-assessment, and for promotion of student and teacher performance may be limited.

1.2. Hypothesis

It was hypothesized that the implementation of a regular technology-supported classroom feedback system can help to manage this problem, as such a system can promote (i) assessment/self-assessment of student performance by gaining qualitative and quantitative information on students’ understanding/self-regulation of learning with support from internal and external feedback, and (ii) assessment/self-assessment of teacher performance by identifying and closing the gaps between the learning goals defined by the teacher and the standards met by the students [1]. The successful implementation of a classroom feedback system would help to continuously improve the quality of teaching and learning as well as the final teaching/learning outcome. Commercially available classroom feedback or classroom response systems appear to be very similar and typically consist of transmitters that students use to send their responses, a receiver to collect the students’ input, and a computer to analyze the data and display the results.

2. Methodology and Tools

The clicker technology along with the TurningPoint 2008 software package was selected as the classroom feedback system [2, 3, 4]. Clickers allow teachers to introduce multiple-choice questions during lectures and to instantly collect and tabulate the answers given by the students via radio-frequency remote transmitters. The instructor can choose to collect and display the students’ input signals either publicly or anonymously. Though the use of clickers takes some initial investment by the teacher, the technology can be made available to all students at no cost to them. The clicker-based classroom feedback system was implemented for the life sciences module “RNA Biology and Technology” under the curriculum offered by the Faculty of Science and the Yong Loo Lin School of Medicine at the National University of Singapore. A total of 42 and 45 third-year undergraduate students attended the module in 2013 and 2014, respectively. Clicker sessions consisting of 14 to 18 questions were implemented to gain multiple levels of feedback. For each question, students were asked to select one best answer out of two to eight answer options predetermined by the instructor. Response data were saved and exported to Microsoft Excel™ for analysis, and the results were monitored based on three performance indicators as described in detail below.

3. Results

3.1. Three-stage Classroom Feedback System Provided Students with Three Different Levels of Feedback

To gain maximum benefits from the selected clicker technology, a three-stage classroom feedback system was established. Three clicker sessions were held: the first after three lectures, the second before the mid-term exam, and the third in preparation for the final exam. The clicker sessions were structured so that each session consisted of two identical rounds of questioning. The first round required students’ immediate responses, while the second round required students to discuss each question with their neighbours before answering; this was intended to enhance active student engagement and to prompt students to think more deeply and critically about the module content. Peer instruction has been repeatedly reported to facilitate deeper comprehension and to help students actively build knowledge [5, 6]. Together, this set-up provided the students with three different levels of feedback (Figure 1). First, the externally observable outcome of each clicker session delivered direct computerized quantitative feedback (refer to 1a or 1b of Figure 1). Second, the critical dialogue with classmates prior to submitting the answers in each second round provided students with dialogical external feedback (2). Finally, as a result of facilitated in-class analysis and discussion of the outcomes of the first and second rounds, the teacher was able to provide students with class-wide qualitative external feedback (3a) from which each student could derive his/her individual feedback. The clicker-response questions were designed not only to (i) gauge students’ comprehension of the module content, but also to identify (ii) areas of difficulty/confusion as well as (iii) areas of interest, which could then be addressed during subsequent lectures and tutorials, fostering audience-paced instruction [7].
The outcome of repeated clicker sessions provided valuable information to the teacher that could be used to shape the level, speed, and content of subsequent teaching, thus indirectly providing the students with additional external feedback (3b) that was actively influenced by them. Three performance indicators were monitored: first, the percentage of correct and false answers; second, the percentage of questions answered correctly by 100% of the students (CA100); and third, the difference in the percentage of correct answers between round 1 (R1) and round 2 (R2) questioning (ΔR2-R1).
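The three indicators can be computed directly from the exported response data. The following is a minimal sketch in Python, assuming a hypothetical data layout in which each round of a session is represented as a list of per-question percentages of students who answered correctly (the function name and layout are illustrative, not taken from the original analysis):

```python
# Sketch: computing the three performance indicators of a clicker session.
# Assumed (hypothetical) layout: one value per question giving the
# percentage of students (0-100) who answered that question correctly.

def performance_indicators(round1_pct, round2_pct):
    """Return (mean correct % R1, mean correct % R2, CA100 %, mean dR2-R1)."""
    n = len(round1_pct)
    assert n == len(round2_pct), "both rounds must cover the same questions"
    # Indicator 1: overall percentage of correct answers, per round
    mean_r1 = sum(round1_pct) / n
    mean_r2 = sum(round2_pct) / n
    # Indicator 2: CA100 - share of questions answered correctly by all students
    ca100 = 100 * sum(1 for p in round2_pct if p == 100) / n
    # Indicator 3: dR2-R1 - average gain in correct answers from round 1 to round 2
    delta = sum(r2 - r1 for r1, r2 in zip(round1_pct, round2_pct)) / n
    return mean_r1, mean_r2, ca100, delta

# Example: a session with four questions
r1 = [50.0, 75.0, 100.0, 60.0]
r2 = [70.0, 90.0, 100.0, 80.0]
print(performance_indicators(r1, r2))  # (71.25, 85.0, 25.0, 13.75)
```

A positive ΔR2-R1, as in this example, indicates that peer discussion between the two rounds was associated with a net gain in correct answers.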

Figure 1. Structure of a technology (clicker)-supported three-stage classroom feedback system to promote self-regulated student learning
3.2. Dialogical Feedback from Peers and Computerized First Round Feedback Triggered Measurable Learner Self-regulation

Self-regulation of student learning triggered by dialogical external feedback was directly reflected in the difference between the outcomes of the first and second rounds of questioning at each clicker session (Figure 2). In all cases, the percentage of correct answers increased from the first to the second round, as reflected by positive ΔR2-R1 values. However, the computerized first-round feedback might have influenced the second-round responses as well. Notably, the difference in the percentage of correct answers between the first and second rounds of questioning steadily decreased from clicker session 1 to 3 in the 2013 class data (Figure 2A). Under the assumption that students followed the given instructions, i.e. not to talk with their classmates during round 1 questioning and to exchange ideas during round 2 questioning, this result might indicate that the dialogical external feedback students received from their neighbors in the classroom became increasingly less relevant because they might have developed a higher level of confidence and self-esteem regardless of whether their answers were correct or wrong. However, if students did not follow the given instructions, this observation might alternatively be attributed either to increased conversation during round 1 questioning or decreased conversation during round 2 questioning.

Figure 2. Analysis of students’ feedback gained from two rounds of questioning at each of three clicker sessions, indicated by the percentage of correct and false answers. Identical questions were asked in the 2013 (A) and 2014 (B) classes. Sessions were conducted on different days; rounds 1 and 2 were performed on the same day at the beginning and end of the respective lecture. ΔR2-R1: increase of correct answers (%) from round 1 to round 2. CA100: percentage of questions correctly answered by all students

The analysis of the 2014 class data (Figure 2B) sheds light on this question. While the ΔR2-R1 value decreased from clicker session 1 to session 2, an increase was observed from session 2 to session 3. Notably, in the 2014 class, round 1 of clicker session 3 was exceptionally announced as a competitive quiz whose winners were awarded a prize. Hence, students had no interest in talking with their classmates during this round 1 questioning, which presumably resulted in a lower percentage of correct answers and a higher ΔR2-R1 value. This implies that students increasingly talked to their neighbors from session to session during round 1 questioning, except for the quiz of clicker session 3 in 2014.

3.3. Classroom Feedback from Students Triggered Measurable Teacher Self-regulation

Self-regulation of the teacher’s questioning, during the clicker sessions but also during the mid-term and final exams, in response to the students’ feedback was reflected indirectly in the qualitative and quantitative outcomes of clicker sessions 1 to 3. The percentage of correct answers steadily decreased from clicker session 1 to 3 for both the first and second rounds of questioning, indicating an increasing degree of difficulty of the questions. The only exception was a slight increase from clicker session 1 to session 2 for the 2014 class.

As a second measure, the percentage of questions that were answered correctly by all the students (CA100) significantly dropped from clicker sessions 1 to 3 in both years. Finally, student feedback also fostered self-regulation of the teacher’s lecturing as the design of the clicker-response questions revealed areas of difficulty/confusion as well as areas of interest, which were then addressed during subsequent lectures and tutorials.

3.4. Student and Teacher Self-regulation Triggered Improved Student Learning and Understanding

The quality of student feedback, i.e. the ability of the students to understand and correctly answer more difficult questions, increased as the quantity of correct student feedback decreased. Notably, the quality of student feedback was a function of the complexity of the questions, based on a classification by the teacher, and not a numerically measurable quantity.

A comparison of the overall class performance, as reflected by the percentage of correct answers during each round of questioning, revealed a much faster learning progress for the 2014 class compared with the 2013 class. The questions asked in both years were identical. While the 2014 class started weaker in clicker session 1, it outperformed the 2013 class during sessions 2 and 3 (Figure 3). This observation might be explained by a higher willingness of the 2014 class to learn and/or an improvement in the quality of teaching from 2013 to 2014 supported by the classroom feedback system.

Figure 3. Comparison of class performance. Identical questions were asked during the respective clicker sessions in 2013 and 2014
4. Conclusions

The three-stage classroom feedback system provided students with three levels of feedback, thereby enabling the teacher to apply several principles of good feedback practice [8, 9]. The set-up of the clicker sessions and the design of the clicker-response questions, together with the feedback given, (i) helped clarify what good performance is and (ii) supported students’ self-assessment. The class-wide discussion of results (iii) encouraged students to engage in teacher and peer dialogue, (iv) provided them with high-quality feedback, and (v) gave them opportunities to close performance gaps. Finally, this system was helpful to the teacher, as the students’ feedback (vi) helped address areas of confusion and areas of interest, thereby improving subsequent lectures and preparing the students for the multiple-choice question parts of the exams.


Acknowledgements

I thank YU Han, Evalin, and Danielle Hui Ru Tan for logistical assistance. This work was supported by the NUS-Cambridge Start-up Grant and the Leadership in Academic Medicine Grant, both from the National University of Singapore (NUS), and AcRF Tier 1 FRC Grant T1-2014Apr-02 from the Ministry of Education (MOE) of Singapore.


References

[1]  Fies, C. and Marshall, J., Classroom response systems: A review of the literature, Journal of Science Education and Technology, 15(1). 101-109. 2006.
[2]  Caldwell, J.E., Clickers in the large classroom: Current research and best practice tips, Life Sciences Education, 6(1). 9-20. 2007.
[3]  Dangel, H.L. and Wang, C.X., Student response systems in higher education: Moving beyond linear teaching and service learning, Journal of Educational Technology Development and Exchange, 1(1). 93-104. 2008.
[4]  Hodges, L., Engaging students, assessing learning: Just a click away, Essays on Teaching Excellence, 21(3). 2010.
[5]  Crouch, C.H. and Mazur, E., Peer instruction: Ten years of experience and results, American Journal of Physics, 69(9). 970-977. 2001.
[6]  Smith, M.K., Wood, W.B., Adams, W.K., Wieman, C., Knight, J.K., Guild, N. and Su, T.T., Why peer discussion improves student performance on in-class concept questions, Science, 323(5910). 122-124. 2009.
[7]  Hall, R.H., Collier, H.L., Thomas, M.L. and Hilgers, M.H., A student response system for increasing engagement, motivation, and learning in high enrollment lectures, Proceedings of the Americas Conference on Information Systems. 621-626. 2005.
[8]  Nicol, D. and Milligan, C., Rethinking technology-supported assessment practices in relation to the seven principles of good feedback practice, in C. Bryan and K. Clegg (Eds.), Innovative Assessment in Higher Education, Taylor and Francis Group Ltd, London. 2006. 1-13.
[9]  Nicol, D. and Macfarlane-Dick, D., Formative assessment and self-regulated learning: A model and seven principles of good feedback practice, Studies in Higher Education, 31(2). 199-218. 2006.