Research Article
Open Access Peer-reviewed

The Effectiveness of Peer/Self-assessment Approach in Urban Planning Studio-based Academic Education

Mohamed Anwer Zayed
American Journal of Educational Research. 2017, 5(6), 588-605. DOI: 10.12691/education-5-6-1
Published online: June 13, 2017

Abstract

Urban planning education is an important discipline that mainly depends on problem-solving learning activities. These activities are targeted at developing student skills in the design and planning of urban areas. The traditional approach to managing the urban planning education studio sets out a unilateral role for each party. Students submit their plans and tutors evaluate them. Then a feedback or discussion session takes place. The efficiency of this approach is somewhat limited: it consumes time and often relies on a paper-based rather than oral exchange of ideas. In comparison, peer/self-assessment has proved to be a powerful learning tool, especially in developing formative skills. It helps learners to objectively review their own work and that of their colleagues. As a result, it bridges the thinking gap between students and tutors. It offers the opportunity to look at the work from another perspective. This calls for wider consideration of the application of peer/self-evaluation as a learning tool in urban planning studios, and this paper explores the potential of such application. It analyzes the impact of peer/self-assessment on the development of urban planning education. An experiment was conducted with students and staff as part of the urban planning course at Cairo University. Students performed peer/self-assessment during various learning activities. Compared with their counterparts in previous semesters, this group of students achieved higher levels of quality in their deliverables.

1. Introduction

Urban planning is an important field of education that focuses on the development of human settlements. It is a broad and interdisciplinary field. Physical urban planning is part and parcel of architecture education. It mainly depends on learning projects that simulate real physical planning projects. These learning projects are problem-solving activities that require broad knowledge, analytical thinking, spatial awareness, numeracy and design skills. In terms of Bloom's taxonomy, they utilize a higher level of skills: synthesis skills. The projects are teamwork activities that require time management, interpersonal organization and communication skills. An educational physical urban planning project is a multifaceted large-scale learning activity that requires a lot of time and effort from both parties in the education process: students and tutors. By contrast, self-assessment is an evaluation tool that is widely used in many fields and offers people the opportunity to review and evaluate their own work. Peer/self-assessment enables students to play the role of their tutors, teachers and educators, and encourages them to view their own work through the eyes of those who are responsible for teaching them. This paper introduces the application of peer/self-assessment as a learning tool in the field of urban planning education. It argues that peer/self-assessment could be used throughout the stages of the associated learning projects to enhance the educational process and increase its efficiency. In this regard, it is important to differentiate between self-assessment and peer assessment. There is quite a difference between assessing one's own work and assessing that of a colleague. Competitiveness, sympathy, personal characteristics, differing measures and comprehension disparities can all give rise to inconsistency in such assessments.
This is a very important issue in the application of peer/self-assessment to the field of urban planning education where teamwork is involved to a significant extent and where the university is, in any case, a collective learning environment.

The main goal of this paper is to identify the impact of peer/self-assessment on the learning outcome of a physical urban planning studio. It aims to answer three main questions, which are: (1) What are the positive impacts of applying peer/self-assessment to learning activities in urban planning education? (2) Is it possible to use peer/self-assessment as a tool for accurate evaluation of students' deliverables? (3) What are the determinants (influential factors) of accuracy in peer/self-assessment?

This research adopts a quantitative approach. Empirical investigation is used to examine the effect of peer/self-assessment on the learning outcomes of an urban planning education studio. Thus, a different learning technique was experimented with in one of the urban planning courses at Cairo University. This technique was the peer/self-assessment of students' deliverables for the Site Planning and Development course. Various data collection tools were used, and mathematical and statistical analyses were then applied.

The article continues with a literature review that further introduces the peer/self-assessment concept, illustrating its general benefits and challenges, and the prerequisites for success. The next part of the article describes the experimental methodology by which peer/self-assessment was applied to the Site Planning and Development course. This part introduces information about the experimental sample, instruments, procedures, data gathering and analyses. The results of the statistical and mathematical analyses are presented in the following section. Finally, these results are discussed and conclusions are then drawn.

2. Literature Review

Recently, university education has witnessed the emergence of the application of peer/self-assessment [1, 2]. Both self-assessment and peer assessment are student assessment tools that have an important educational function: they actively involve students in the educational process. By the turn of the 21st century, such student-based assessment started to become a worldwide phenomenon in many disciplines, including science, engineering, humanities and social sciences [3]. This part of the article introduces the theoretical background of peer/self-assessment.

2.1. Definition

The literature has many definitions of self-assessment and peer assessment. Self-assessment could be defined as the evaluation, critical perception and judgment of one's own knowledge, skills and attitudes [4]. It is a learning tool whereby students can review and assess their own learning achievement and outcomes [5, 6]. Self-assessment is one's own feedback on one's own work. It is a formative assessment through which students revise their own work or performance and determine how it matches the stated goals [7]. Peer assessment, on the other hand, is described as a collaborative learning activity in which peers participate in judging and assessing each other's work [8, 9]. It is a process of feedback provision, and sometimes grade assignment, in order to determine the quality of work submitted by colleagues [10]. It is also a situation in which students are very much in a teacher-like role in order to quantitatively assess their peers' products [11]. According to these definitions, it could be concluded that both self-assessment and peer assessment are learning tools through which a student has the opportunity to review, evaluate and judge work submitted by themselves, in self-assessment, or by colleagues, in peer assessment. Peer/self-assessment is a type of active learning activity that helps in developing students as independent learners [12]. It offers students the opportunity to consider their work from a different perspective: that of the teacher or tutor.

2.2. Benefits and Challenges

Current research work supports the benefits of self-assessment and peer assessment. Generally, both types of assessment help to achieve better learning outcomes, both in the short term and the long term [12]. At the level of students, peer/self-assessment can realize the following benefits:

1. Development of lifelong learning skills in self-evaluation, feedback giving and negotiation [1].

2. Raising of student awareness of their own progress, strengths and weaknesses [13].

3. Enhancement of student comprehension of the subject matter and the tasks required of them [14, 15, 16].

4. Provision of further opportunities for students to develop and even change their ideas and deliverables [13].

5. Improvement of student motivation [17, 18] and personal responsibility [19], as well as conversion from passive to active participation in the learning process.

6. Development of higher-order thinking skills [20].

In addition, the literature identifies many other positive impacts of peer assessment. Improving self-confidence and reducing stress are among the principal benefits [21]. Peer assessment broadens the scope of feedback and ensures that students' deliverables are reviewed from more perspectives [22]. Peer review helps in improving a student's deliverables in accordance with the feedback given [23], which will result in improvement of the student's grades [24]. Previous research agrees on the positive impacts of developing transferable personal, professional [19] and social skills [20], as well as fostering the student's insight and ability to identify the future development required of them.

At the level of educators (teachers or tutors), peer/self-assessment facilitates the grading of students' deliverables and enhances the accuracy of staff grading, thanks to the additional information derived from student assessment [14]. There is also a saving of teacher time because feedback discussion and re-explanation are minimized; this is an important positive impact that is reflected in acceleration of the feedback process [1]. Self-assessment helps teachers to evaluate a student's performance [25], while peer assessment encourages teachers to scrutinize the learning contents and the required tasks [19].

In terms of the learning process, peer/self-assessment alters the traditional process of learning into a more productive, cooperative and friendly one [14]. This alteration creates a heightened sense of belonging and ownership. Research has shown the ability of self-assessment to make learning a more meaningful and useful process [17]. Peer assessment raises the level of confidence in the processes of institutional assessment [19]. Furthermore, it promotes the learning process [3] by academically, cognitively and emotionally engaging students in the development of their own learning systems [1]. One of the most important positive impacts is that it changes how students perceive the learning process. After peer assessment, students consider assignments to be more a part of the learning process than of the grading process, so that mistakes are perceived as opportunities for development rather than signals of failure [15].

On the other hand, a number of challenges are also recorded in the existing research work, the most important being the accuracy and validity of evaluation [18]. This is attributed to systematic bias [14, 26], lack of experience, or lack of evaluation standards [27, 28]. In general, higher academic achievers tend to underrate deliverables and, by contrast, lower-performing students tend to overrate them [5].

In peer assessment, concern is raised about interference with the relationships between colleagues [29]. Students believe that evaluating the work of their fellow students may have adverse impacts on their relationships. Furthermore, many negative preconceptions are associated with peer assessment, such as that assessment is the exclusive responsibility of teachers [30] and/or that fellow students lack the necessary seriousness [31].

2.3. Prerequisites for Success

In order to mitigate such adverse impacts, the literature recommends some measures. The first is to design the process of peer/self-assessment to be systematic, structured and guided [32]. The availability of a facilitator during the peer/self-assessment process is crucial [33]. A teacher should supervise students during peer/self-assessment in order to keep the process on the right track and prevent inappropriate behavior, time-wasting and the mistakes of inexperience. Another measure is to encourage students to participate efficiently in the peer/self-assessment process by adopting an incentives strategy [34]. In such a strategy, students are rewarded for accurate assessment. This should be clearly announced to students before starting the process of peer/self-assessment. Finally, it is recommended that students are provided with clear and specific (i.e. detailed and comprehensive [36]) assessment criteria [35], which may be explicit or tacit.

2.4. Peer/self-assessment as a Learning Tool

Despite the challenges to the validity and accuracy of peer/self-assessment, it has a potential role to play in the education process [37]. The literature highlights the importance of peer/self-assessment as a learning tool more than as a grading tool. It is considered an invaluable procedure in the education context [38], because it helps learners to engage appropriately with the learning process and to solve problems effectively in later life. Peer/self-assessment helps in building a student's responsibility for her/his own learning [39]. As a result, the student becomes a more active participant in the learning process rather than a passive receiver of content. This enhances learner autonomy [40], with the learner participating in shaping the learning process [41]. Adopting peer/self-assessment in the learning process effectively assists students to achieve the intended learning outcomes [42]. It enables learners to identify various opinions, receive immediate multi-source feedback and practice several roles (proposer, critic and assessor). Deep learning is identified as an explicit and effective impact of peer/self-assessment [43]; it is associated with how much the learner develops during the process. Self-assessment has been reported to help students assimilate the content of the learning process [44]. If applied regularly, peer/self-assessment becomes an important formative learning tool that enables students to align their work with the learning objectives [45]. As a result, greater focus is placed on the development of the learner experience and the achievement of the learning outcomes.

3. Methodology

Peer/self-assessment was experimented with in one of the urban planning courses. The following section of this article introduces the experiment. It presents the sample, the procedure, the instruments used, the data collected and the tools of analysis.

3.1. Sample

The sample for this study was composed of students enrolled in the Architecture Engineering and Technology program at Cairo University. The students were registered in a module entitled Site Planning and Development in the fall of 2016, one of the courses in the urban planning discipline. All students in the class were involved in the peer/self-assessment activities during the course. A total of 52 students participated. The sample included both males and females. Participating students varied in terms of academic achievement level. Their grade point averages (GPAs) ranged between 2.15 and 3.93 out of 4.0. The students all had similar backgrounds because all had passed at least two architecture design modules and the Site Planning and Development module was their first in the discipline of urban planning.

3.2. Procedure

The study and data collection were conducted during class time throughout the Fall semester of 2016, in which two main stages could be identified. Stage 1 involved undertaking peer/self-assessment in relation to certain class activities. Three main educational activities were incorporated into the study:

1. Analysis-based activity 1. This was a comparative analysis between two urban areas. One of them was a planned urban area and the other an unplanned urban area. It was a one-time submission activity.

2. Analysis-based activity 2. This was a comparative analysis between three different types of human settlements: city, village and suburb. It was also a one-time submission activity.

3. Synthesis-based activity. This was a residential area planning project. Students were given a vacant plot in one of the new cities in the Greater Cairo region and were required to propose a master plan for a residential compound based on site constraints, potential and problems. This was a multiple-submission activity, comprising three draft submissions and then a final plan submission.

Although the main focus of this article is urban planning projects, the experiment also encompassed another type of learning activity: analysis-based activities (reports). This broadened the evaluation of the proposed new tool and made it possible to judge the effects of interrelations between different learning activities.

Students experienced peer/self-assessment a total of six times in the activities described; once for each of the two analysis-based activities and four times for the synthesis-based activity. Before starting, the instructor introduced the process of peer/self-assessment and provided some general guidelines. During the process, students were free to discuss matters with each other and to compare their deliverables. The process ended with the assignment of a grade for each submission. This encompassed self-grading and peer grading as students evaluated all submissions. Figure 1 presents some photos showing the study context.

In Stage 2 of this study, three questionnaire-based surveys were undertaken. One was dedicated to the students who underwent the peer/self-assessment process six times during the course. This survey focused on student perceptions of the peer/self-assessment. The second survey was targeted at course staff members who had participated in the class activities as tutors. It focused on the opinions of the staff about the peer/self-assessment. The last survey was conducted with experts in the discipline of urban planning, who played the role of external evaluators in order to guarantee objectivity in evaluation. This survey was mainly focused on identifying the quality of the students’ deliverables when compared to those from previous semesters. Figure 2 presents a flow chart of the process of the whole study.

3.3. Instruments

For data collection, grading sheets were received from students on each occasion of peer/self-assessment. Staff also evaluated the quality of students’ deliverables through grading of the submissions according to certain criteria. This was a standard aspect of the course activities. For the synthesis-based activities, the students' draft proposals and final master plans were digitally captured. In addition, questionnaires were used, as described above, to gather data from stakeholders and experts. The first focused on students’ perceptions and consisted of four main parts that included 17 questions. The questionnaire was written in two languages (English and Arabic) in order to facilitate student response. The second questionnaire was a short survey to identify the opinions of course staff. This consisted of 12 questions categorized in three main parts. These two questionnaires were paper-based. The third questionnaire was an online survey that targeted experts in the discipline of urban planning. It included a total of 18 master plans, gathered from the current Fall semester (2016) and the two preceding it. After presenting each plan, six criteria were used to evaluate it.

3.4. Collected Data

Two main outputs were collected from Stage 1. The first output was the peer/self-assessment grades. For each of the six assessments, students assigned grades to their own submission as self-graders, and grades to their peers' submissions as peer graders. This was in addition to the staff grades that were used as benchmarks. The second output from this stage was the deliverables of the four rounds of the synthesis-based planning activity, that is, the three draft proposals and the final plan. For Stage 2, responses from the three separate questionnaires were collected. Here, the number of respondents was 46 in the case of the students' survey, six for the staff survey, and three for the expert survey. Figure 3 presents samples of the responses given to the students' questionnaire.

3.5. Analyses

This study was primarily interested in five aspects: (1) accuracy of peer/self-assessment grading; (2) determinants of peer/self-assessment grading; (3) effect of peer/self-assessment on quality of deliverables; (4) student perception of peer/self-assessment; (5) staff opinion of peer/self-assessment. Various analysis tools were used and Table 1 summarizes these for each area of interest.

SPSS (Statistical Package for the Social Sciences) software (IBM Corporation, New York, U.S.) was used for the correlation and ANOVA analyses, and Microsoft Excel was used for the descriptive analysis and statistical chart production. It is worth mentioning that, due to the sample size of the experiment, the staff survey included some of the questions that were asked of students. This served to validate students' responses on some important issues related to the peer/self-assessment experiment. In addition, the expert survey provided an overall assessment of the experiment by comparing student products across three successive semesters, of which only the last involved peer/self-assessment.

4. Results

Performing the analyses described gave rise to large sets of numerical results, a detailed description of which follows.

4.1. Accuracy of Peer/Self-assessment Grading

This section presents the results of the analysis of grading accuracy. It focuses on a comparison of the grades assigned by staff with those assigned by students. In the case of self-assessment, the grade determined by each student for her/his own work was used. In the case of peer assessment, each submission was graded by all other students. Thus, the average peer grade was calculated for each submission and was treated as representative.
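The averaging step described above can be sketched as follows. This is a minimal illustration with invented grades (the study's raw grading data are not published here): each submission is graded by every other student, and the mean of those grades is treated as the representative peer grade.

```python
# Illustrative sketch with invented grades: the mean of all peer grades
# for a submission is taken as its representative peer grade.
peer_grades = {
    "submission_A": [7.0, 8.0, 7.5, 8.5],  # hypothetical peer grades
    "submission_B": [6.0, 6.5, 7.0, 6.5],
}

representative = {
    submission: sum(grades) / len(grades)
    for submission, grades in peer_grades.items()
}
# representative["submission_A"] → 7.75
```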


4.1.1. Self-assessment Grading

Project-based tasks. Differences in grading between staff and students decreased steadily through the four rounds of self-assessment. The sum of squared differences per capita was 0.46 by round four, compared to 5.73 in round one. Table 2 presents the results of the grading comparison between staff assessment and self-assessment.
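The accuracy metric used here, the sum of squared differences per capita, can be read as the squared staff-vs-student grade differences summed and then divided by the number of students. A minimal sketch with invented grades (only the metric itself is taken from the text):

```python
def ssd_per_capita(staff_grades, student_grades):
    """Sum of squared staff-vs-student grade differences,
    divided by the number of students (i.e. per capita)."""
    assert len(staff_grades) == len(student_grades)
    squared_diffs = [
        (s - t) ** 2 for s, t in zip(staff_grades, student_grades)
    ]
    return sum(squared_diffs) / len(squared_diffs)

# Hypothetical grades for four students in one assessment round:
staff = [8.0, 7.0, 9.0, 6.0]
self_graded = [9.0, 7.5, 8.0, 7.0]
# ssd_per_capita(staff, self_graded) → 0.8125
```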

The results of correlation analysis showed very low (negligible) correlation between staff assessment grades and those of self-assessment in the first three rounds. Then, a quite different result was recorded in the fourth round. The correlation coefficient rose to 0.75. This is a moderate-to-high level of correlation. Table 3 shows the evolution of the correlation coefficients across the four rounds.

Reporting-based tasks. Differences in grading between staff and students decreased over the two rounds of reporting-based tasks. The sum of squared differences per capita was 1.37 in round two, compared to 2.32 in round one. Table 4 presents the results of the grading comparison between staff assessment and self-assessment. It is noteworthy that each round encompassed a different task.

In addition, the correlation coefficient between staff grades and self-assessment grades increased across two rounds of self-assessment, scoring 0.36 in the second round, following a score of 0.11 in the first.

Overall, the correlation between staff assessment and self-assessment at the level of all tasks is considered to be weak, and sometimes negligible, because its coefficients did not exceed the threshold of 0.25. Nevertheless, the correlation in assessments of reporting-based tasks was higher than that of project-based tasks.
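The correlation coefficients reported throughout this section are presumably Pearson's r (the default correlation measure in SPSS; the paper does not name the statistic explicitly). As a self-contained sketch:

```python
from statistics import mean, pstdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length
    grade series (e.g. staff grades vs. self-assessment grades)."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

# Perfectly linearly related series give r = 1.0:
r = pearson_r([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```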


4.1.2. Peer Assessment Grading

Project-based tasks. Differences in grading between staff and peers decreased erratically across the four rounds of peer assessment, as shown in Table 5. Nonetheless, the sum of squared differences per capita was 0.19 by the fourth round, compared to 1.96 in the first.

The results of correlation analysis showed moderate correlation between staff assessment grades and those of peer assessment in the first three rounds. However, in the fourth round, there was strong correlation. In the first three rounds, the coefficients fluctuated between 0.53 (in the third round) and 0.79 (in the second), before increasing to 0.83 in the fourth round. Table 6 shows the evolution of the correlation coefficient across the four rounds.

Reporting-based tasks. As shown in Table 7, differences in grading between staff and peers increased significantly from 1.53 points per capita for the first assignment to 5.65 points per capita for the second. This trend was completely different from all others observed, and may be a result of the different natures of the two reporting-based tasks involved.

The correlation between staff grades and peer grades was shown to be weak for the first assignment with a coefficient of 0.379, and negligible for the second with a coefficient of 0.024.

Overall, the correlation between staff assessment and peer assessment was stronger in project-based tasks than reporting-based tasks.

4.2. Determinants of Peer/Self-assessment

ANOVA analysis was conducted to explore the impact on peer/self-assessment accuracy of potential influencing factors such as GPA, student’s grade for the course, academic level, class attendance and gender. In each case, students were divided into groups according to the given factor. The analysis was conducted at two levels. At the first level, two rounds were conducted: one included all cases of self-assessment, and the other included all cases of peer assessment. At the second level, four rounds were conducted: two covered project-based tasks, one round for self-assessment and one for peer assessment, and the other two rounds covered reporting-based tasks on the same basis.
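Each ANOVA round described above compares an accuracy measure (for example, the absolute staff-vs-student grading difference) across groups of students formed by one factor. A minimal sketch of the underlying F statistic, with invented group data; in the study itself SPSS computes the F statistic and p-value directly:

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: the between-group mean square
    divided by the within-group mean square."""
    all_values = [x for g in groups for x in g]
    grand_mean = mean(all_values)
    k, n = len(groups), len(all_values)
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented grading-accuracy values (absolute staff-vs-self differences)
# for three hypothetical GPA groups:
low_gpa = [2.1, 1.8, 2.4, 1.9]
mid_gpa = [1.2, 1.0, 1.4, 1.1]
high_gpa = [0.4, 0.6, 0.5, 0.3]
f_stat = one_way_anova_f(low_gpa, mid_gpa, high_gpa)
# A large F (here ≈ 66) yields a small p-value from the F(k-1, n-k)
# distribution, indicating a significant between-group difference.
```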


4.2.1. Self-assessment Determinants

Generally, there was a significant difference in the accuracy of self-assessment grading for different categories in three areas of potential influence: student’s grade for the course, student’s attendance of the course classes, and student’s GPA. The resulting p-values were far below the significance level of 0.05, generating values of 0.000, 0.005 and 0.008, respectively. In contrast, the results showed that neither academic level nor gender made a significant difference to the accuracy level.

In terms of different categories of student's course grade, the ANOVA analysis indicated a significant difference in self-assessment accuracy for both project and reporting-based tasks, with p-values of 0.001 and 0.035, respectively. However, when it came to different categories of both class attendance and GPA, a significant difference in self-assessment accuracy was only seen for reporting-based tasks, where the resulting p-values were 0.019 and 0.028, respectively. Table 8 presents the detailed results of the 15 rounds of ANOVA analysis.

To investigate further, a correlation analysis was conducted to explore the existence of a correlation between the differences in staff grading and self-grading and students’ characteristics in relation to GPA, course grade, academic level, class attendance and gender.

In terms of project-based tasks, the results showed relatively high correlations in the third and fourth rounds of activity. A significant correlation with both student course grade and class attendance rate existed, with coefficient scores of between -0.486 and -0.350 for these last two rounds. In addition, a correlation with GPA in the third round and one with academic level in the fourth round also existed. Figure 4 presents a chart of the evolution of the correlation coefficients between the differences in grading and the potential characteristics of students across the four rounds.

For reporting-based tasks, a higher level of correlation existed between the differences in staff grading and self-grading and both class attendance rate and student grade for the course. The correlation coefficients for these two factors in the second reporting-based activity were -0.640 and -0.580, respectively. Furthermore, the correlations with both GPA and academic level are also worthy of mention because the coefficient scores were, respectively, -0.407 and -0.404.


4.2.2. Peer Assessment Determinants

The ANOVA analysis results of peer assessment accuracy at the level of project-based and reporting-based tasks did not indicate any significant difference due to the effect of any of the five potential influencing factors. Table 9 presents the detailed results of the 15 rounds of ANOVA analysis.

To investigate further, a correlation analysis was conducted to explore the existence of a correlation between the differences in grades assigned by each peer in each round of peer assessment and his/her characteristics in relation to GPA, course grade, academic level, class attendance and gender.

In terms of project-based tasks, all correlation coefficients were either in the negligible or weak zone. However, a notable increase in the correlation coefficient value occurred in the fourth and final round of activity between grading differences and both student’s GPA and course grade. Figure 5 illustrates the evolution of the coefficient values across the four rounds of peer assessment.

In the case of reporting-based tasks, the peer grading differences did not correlate with any of the potential factors of influence: all of the associated correlation coefficients fell into the negligible zone.

4.3. Quality of Deliverables

One of the main purposes of applying peer/self-assessment to the urban planning discipline is to assist the learning process and enhance the development of the educational projects. Thus, comparing the staff grades of the students’ deliverables during the project stages is very important. The first step was to calculate the averages of the staff grades in relation to the Fall 2016 class that undertook peer/self-assessment across four submissions (three drafts and one final) of the project. By comparing the averages, it was found that the average grade rose continuously from the first submission to the final one: the average index grade scores were 1.00, 1.11, 1.30 and 1.32, respectively. The rate of increase was not consistent across the three periods represented. The second step involved a comparison of the staff grading evolution seen in Fall 2016 with that of previous classes (Fall 2009, Fall 2010, Fall 2011 and Fall 2014). As might be expected, the evolution of grades in each previous semester was different, but all of them exhibited a decline in the final grading compared to the first. Figure 6 illustrates the progression of average staff grades for these different classes. It is clear that the progression in the case of the application of peer/self-assessment is an upwards one, whereas the other cases show fluctuating or steady decline.
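The index grade scores above appear to be average staff grades normalized to the first-submission average. A sketch of that reading, with hypothetical raw averages chosen so that the normalization reproduces the reported index values (the actual raw grades are not given in the text, so this interpretation is an assumption):

```python
# Hypothetical average staff grades for the four Fall 2016 submissions
# (three drafts + final), chosen so that normalizing to the first
# submission reproduces the reported index scores 1.00, 1.11, 1.30, 1.32.
avg_staff_grades = [7.0, 7.77, 9.1, 9.24]

index_scores = [round(g / avg_staff_grades[0], 2) for g in avg_staff_grades]
# index_scores → [1.0, 1.11, 1.3, 1.32]
```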

As further analysis, the progression in staff grading for the Fall 2016 class was analyzed at the level of individual student performance. Out of 52 students, only three did not achieve relative progress in their final submission, scoring 0.95 of the index for their first submission grade. The remaining 49 students could be classified into three categories. The first category comprised those students who achieved a steady rise across all four submissions. There were 15 such students, representing 31% of the sample. The second category covered those students who recorded one relative decline through their four submissions. There were 27 of these, representing 55% of the sample. The third category comprised the students who recorded two declines through their four submissions, of which there were seven, representing 14% of the sample. Figure 7 presents a comparison of the progression seen in each of these three categories (C1, C2 and C3), respectively.

Finally, the residential master plans of the best three projects of each semester – Fall 2016, Fall 2015 and Fall 2014 – were blindly evaluated by specialists in the field. The external evaluators were asked to grade each plan according to six criteria: (1) the general concept of the master plan; (2) road network planning; (3) residential lot division; (4) services center planning; (5) urban detailing; (6) presentation style. In general, the overall evaluations of the Fall 2014 and Fall 2015 projects were very close to each other, with average overall grades of 58% and 56%, respectively. By comparison, the average overall grade of the Fall 2016 projects was 71%: a clear indicator of the improved quality of the Fall 2016 deliverables. At the level of the individual criteria, the greatest improvement happened in relation to the general concept, road network planning and presentation style, which scored about 1.29 times higher than the corresponding values for Fall 2014 and Fall 2015. Improvement was also seen in the other criteria of urban detailing, services center planning and residential lot division, but to a relatively lower extent, with scores 1.26, 1.25 and 1.24 times higher, respectively. Figure 8 presents a histogram of the scores for each criterion for the three semesters.

4.4. Students' Perceptions

This section focuses on an investigation of how students perceived the process of peer/self-assessment. The associated questions can be broken into two main categories. The first category focuses on the perception of the process itself and includes questions about the assessment duration and venue, the use of assessment criteria and obtaining assistance. The second category sheds light on the impacts of peer/self-assessment and includes questions about the benefits accrued, the scope of such benefits, and recommendations for future application.

The results showed that the average estimation of the duration of the peer/self-assessment process was 12.5 minutes. The majority of respondents thought it took 10 minutes to complete the assessment process. Others suggested timings from 3 minutes to 30 minutes. In terms of assessment venue, 62% of students believed that the current venue was suitable for the process, while 37% disagreed. Figure 9 presents the replies of the respondents as to the estimated duration and venue suitability for the peer/self-assessment process.

In the course of six peer/self-assessments, 22% of students reported that they always used criteria, and 63% that they sometimes did. Overall assessment without specific criteria was adopted by only 7% of students. Those who adopted assessment criteria reported four main ones. The quality of drawings, including both neatness and presentation, was the most adopted criterion. Both the use of the design principles explained in the lectures and the workload (the volume of work evident in the submission) were the next most used. Finally, a small proportion of respondents reported using the staff evaluation criteria. These four elements scored 42%, 26%, 26% and 6%, respectively, among those using criteria. In addition, 46% of students reported that they had discussions with their peers during the assessment process.

At the level of the perceived impacts of the peer/self-assessment, 89% of respondents believed that there was a positive impact on their learning outcomes from peer/self-assessment. A further 9% disagreed with this notion and reported that there was no positive impact, and 2% did not answer this question.

Eight potential benefits were listed in the questionnaire for respondents to vote upon, with respondents asked to rank these benefits according to what they had experienced. There was strong agreement on two benefits having been realized through the peer/self-assessment. These were (1) viewing the ideas and work of other colleagues, and (2) recognizing the level of their own work. These two benefits scored 80% and 73% respectively. According to this, the peer/self-assessment process enabled students to have a good look at the work of others, which is a form of idea sharing and self-learning. Furthermore, it helped students to recognize the level of their own work in comparison to that of their colleagues. Another four benefits of the eight listed achieved moderate agreement among participants. These benefits were (1) identifying colleagues' opinions regarding the work submitted, (2) discussing the work with colleagues, (3) gaining more understanding of the task required, and (4) predicting their own grade for the assignment. These scored 58%, 51%, 49% and 45%, respectively. These scores are sufficiently close for the four benefits to be classified in one group. The remaining two potential benefits were not widely recognized by the respondents. These were (1) enhancing the understanding between workgroup members, and (2) relieving work pressure and tension, which scored only 29% and 19%, respectively. Figure 10 presents a detailed chart of the respondents’ perceptions of the benefits accrued from peer/self-assessment.

The questionnaire identified six areas in the discipline of residential area planning as subjects of potential benefit deriving from the peer/self-assessment, as perceived by the students. These areas were: (1) general planning concepts; (2) road network planning; (3) residential lot division; (4) services center planning; (5) presentation techniques; (6) urban detailing. Participants were asked to rank these areas according to the benefits realized. According to the rankings, students obtained most benefit in relation to the division of residential lots and the planning of the road network, which scored 74% and 72% agreement, respectively. In fact, these two areas were the focal elements of the project, with the pattern of residential lot distribution and the geometry of the lots having a direct effect on the residents, and the road network having a direct effect on accessibility to them. As the project progressed, these two elements witnessed notable improvement. Following these, the areas of general planning concepts and presentation techniques scored next, with 57% and 50% agreement, respectively. The least benefit was attributed to the areas of urban detailing and services center planning, which scored less than 50% agreement. Figure 11 presents a detailed chart of the respondents’ perception of the subject areas of benefit realized from peer/self-assessment.

Finally, 93% of respondents recommended that peer/self-assessment be performed in the next semester, against 7% who did not. The following are some of the comments that were written by respondents, highlighting the importance of the peer/self-assessment:

Student 1: Self-assessment made us concentrate on some of our mistakes that weren’t visible at first. It makes you feel competitive to achieve more and better.

Student 2: Self-assessment won’t [be] particularly beneficial in all courses but since this was our first time in urban planning then, yes, it’s beneficial.

Student 3: Self-assessment has many benefits.

It is worth mentioning that these student perceptions of the positive impacts of peer/self-assessment matched the results of another independent survey conducted as part of the college monitoring and quality assurance system. According to that survey, the level of agreement on the positive impact of peer/self-assessment was 78%.

4.5. Staff Opinion

An independent questionnaire was conducted to investigate the opinion of the course staff about the effect of the peer/self-assessment. The results showed that the staff estimate of the average time taken for peer/self-assessment was 20 minutes. The suitability of the current venue for carrying out peer/self-assessment met with approval from 67% of staff.

In terms of the student benefits realized through the peer/self-assessment, the responses effectively classified the eight potential benefits into three groups. The first group of highest ranked benefits included students recognizing the level of their own work and viewing the ideas and work of others, which scored 92% and 90%, respectively. The second group covered four student benefits, which were more understanding of the work required, seeing their work through the eyes of others, discussing work with colleagues, and predicting their own grade for the work. These benefits scored 69%, 60%, 54% and 50%, respectively. The third and last group included the two potential benefits of relieving work pressure and tension, and increased understanding between team members, which scored only 35% and 33%, respectively. Figure 12 presents a detailed chart of the respondents’ perception of the benefits accrued from peer/self-assessment.

In terms of areas in the discipline of residential area planning, staff perceived the benefits of peer/self-assessment to students as best realized in four areas: road network planning, general concepts, residential lot division and presentation style. These scored 88%, 85%, 79% and 71%, respectively. The other two areas of services center planning and urban detailing had respective scores of 54% and 50%. There was a clear gap between the two groups. Figure 13 presents a detailed chart of the staff opinion of the benefits accruing to students from peer/self-assessment.

In relation to the benefits that were realized by the staff, a list of five was put forward for agreement. The results showed that two benefits were classified as the most realized. These were (1) enabling students to accept criticism, and (2) facilitating the delivery of feedback to students, which both scored 80% agreement among respondents. The other three possible benefits achieved comparatively low scores. Thus, the benefit of encouraging students to accept their grading scored 60%, the benefit of enhancing staff–student communication scored 47%, and the benefit of saving studio/discussion time scored just 43%. In addition, 67% of staff respondents agreed that the realization of benefits from peer/self-assessment was more significant in the synthesis-based project activity than the other assignments.

Only one staff member, out of six, reported that students asked for staff assistance during the peer/self-assessment process. There was consensus among staff members regarding the positive impact realized by peer/self-assessment, with all recommending that peer/self-assessment be retained for subsequent semesters.

In addition, during the six-week experiment, discussions were held with the staff concerning the application of peer/self-assessment in the academic urban planning studio. They agreed that students' attitude towards the studio had improved, finding that students began to play a more active role during the project feedback sessions. Compared with previous semesters, students who experienced peer/self-assessment raised new questions, ideas and proposals concerning both their own projects and those of their peers. Staff found that the feedback sessions became more efficient and fruitful. The following are some of the comments given by staff:

Staff 1: Self-assessment was a great way to present all the ideas and work to the whole class. In my opinion, discussing the projects through all teams, self-criticism and accepting feedback was the most important advantages for this activity.

Staff 2: ARCN208 is the first course to carry it out in a somehow methodological process that yields outcomes that can be quantified & described in the form of bonuses added to students who gave grades approaching the actual one given by Teaching Assistants. This assured that students take the assessment seriously without any bias to certain colleagues.

4.6. Observations

During the peer/self-assessment process, some important observations were recorded by the researcher. First, students' attitude clearly changed. In the first rounds of peer/self-assessment the majority of students focused only on assigning a grade to each deliverable. After two or three rounds, they shifted to discussing the deliverables with each other, criticizing the work and sometimes proposing solutions and modifications. Second, some staff members were not convinced of the potential of peer/self-assessment at the start of the experiment, claiming that studio time was very limited and that they could not spare part of it for other activities. Once they noticed the development in the quality of the deliverables, however, they came to encourage students to carry out the assessment effectively, describing peer/self-assessment as a good investment of studio time. Finally, a positive, cooperative and engaged attitude emerged among students in the studio.

5. Discussion

This section considers the results of the analyses described above, which focused on five main aspects: accuracy of peer/self-assessment grading, determinants of peer/self-assessment grading, quality of deliverables, students’ perceptions, and staff opinion. It draws together the overall analysis of the application of peer/self-assessment to an urban planning course.

• In general, there are clear differences between self-grading and staff grading in the two types of activities. This suggests that the accuracy of self-grading is poor and, as a result, that it cannot replace staff grading. This is also addressed as one of the main challenges of peer/self-assessment in the literature. Despite this, the experience gained played an important role in progressively enhancing the accuracy of self-grading, especially in the synthesis-based activity (the project). Across the four rounds of self-assessment in the project, students achieved successively better levels of accuracy in grading their own deliverables.

• According to the correlation analysis results, the accuracy of peer grading is higher than that of self-grading, especially in synthesis-based activities. But it too is not accurate enough to replace staff grading. Again, however, the experience plays an important role in progressively enhancing the peer grading accuracy in relation to synthesis-based activities.

• Peer assessment is more accurate in synthesis-based activities than in other activities. In this case, an average peer grade is calculated from the grades recorded by all other colleagues.

• The accuracy of self-assessment is predominantly affected by the student’s grade in the course, their class attendance and their GPA, with a clear variance in accuracy recorded between the corresponding categories of these factors. The literature likewise addresses the effect of a student’s achievement level on the accuracy of peer/self-grading. After two rounds of self-assessment in the synthesis-based activities, students who achieved higher grades in the course and/or who attended most classes achieved more accurate self-grading than others. These two determinants, together with GPA and academic level, had an effect on self-grading accuracy only in the analysis-based activities.

• The ANOVA results for the peer-assessment determinants did not show a clear effect on accuracy from any of the proposed factors: course grade, course attendance, student GPA and student academic level had no clear effect on the accuracy of peer grading. Despite this, a weak negative correlation was recorded between both the course grade and student GPA and the differences in grading accuracy for the synthesis-based activities after multiple rounds of peer/self-assessment.

• Gender factors did not have any effect on the accuracy of peer/self-assessment for either type of learning activity.

• Peer/self-assessment had a positive impact on the quality of deliverables in synthesis-based activities. The comparative analysis of staff grading across the successive submissions of the residential area planning project (three drafts and a final presentation) showed a notable enhancement of students’ grades relative to the first draft. This was realized in the Fall 2016 project and was not seen in previous semesters, where peer/self-assessment was not applied. Furthermore, the enhancement in grades was steady across all four submissions for 31% of students, while most others witnessed fluctuation before eventually attaining higher index scores.

• Both students and tutors agreed that peer/self-assessment had positive impacts on the learning process in the urban planning discipline. The benefits realized could be classified into three main groups. The first group, the most prominent benefits, included (1) exploring the ideas and work of other students, and (2) recognizing the level of students' own submitted work. These two benefits produced agreement scores above 70% in the questionnaires for both students and staff. The second group included benefits that scored agreement levels of more than 40% among both students and tutors. These benefits were (1) identifying the opinion of colleagues concerning the submitted work, (2) discussing the work with colleagues, (3) improved understanding of the work required, and (4) prediction of students' own grade for the work. The final group included two benefits with agreement scores of less than 35%, which were (1) greater understanding between team members, and (2) relief of work pressure and tension.

• There is agreement between the quantitative results and the literature on the realization of three main positive impacts of applying peer/self-assessment: raising self-awareness, providing opportunities for improving deliverables, and enhancing comprehension of the subject matter.

• It is notable that prediction of students' own grade for the work submitted was ranked only sixth. This can be seen as consistent with the relatively low accuracy of peer/self-assessment grading recorded. The most important positive impact of peer/self-assessment is the student educational development that results from exploring new ideas, comparing work submissions and discussing work with peers.

• The benefits of peer/self-assessment were not limited to students; some were realized by staff too. The latter regarded the most important benefits as encouraging students to accept criticism and facilitating the delivery of feedback to students. It is worth mentioning that the current literature agrees on these benefits, especially the saving of tutors’ time and the facilitation of feedback.

• With regard to the discipline of residential area planning, there was agreement between the students’ and tutors’ opinions on the one side, and the experts’ evaluation on the other, concerning the realization of benefits to students in three main areas as part of the associated project. These were general planning concepts, road network planning and residential lot division. Three other areas – services center planning, urban detailing and presentation techniques – witnessed less benefit.

• The peer/self-assessment process typically took between 10 and 20 minutes. This time was deducted from the time allocated for the studio, which was 180 minutes per week for the class. It could be considered a good investment of time as the positive impacts realized were very beneficial to the students.

• For most participants, the venue was suitable for the process of peer/self-assessment. This means that there were no specific physical provisions needed to perform the peer/self-assessment other than what was originally required by the associated learning activity.

• The real added value of applying peer/self-assessment to the urban planning course is realized in widening a student’s experience and exposure to other ideas, concepts and works. This results in self-review of the student’s own work in the light of a broader experience that incorporates the varied experience of colleagues. Although peer/self-assessment grading lacks sufficient accuracy to replace staff grading, it could be used as a preliminary indicator of the quality of the work submitted.

• Finally, both students and tutors agreed on the recommendation that peer/self-assessment be adopted as an effective learning tool for subsequent semesters.
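The accuracy comparisons discussed in the points above rest on correlating student-assigned grades with staff grades. A minimal sketch of that measure is shown below; all grade vectors here are hypothetical, and only the qualitative pattern (peer grades tracking staff grades more closely than self grades do) follows the paper:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical grades for five submissions (not the study's data).
staff = [60, 70, 80, 90, 75]
self_grades = [80, 78, 85, 88, 84]   # compressed, optimistic self-grading
peer_grades = [62, 72, 78, 88, 74]   # closer to the staff pattern

print(round(pearson(peer_grades, staff), 2))  # near 1: higher peer accuracy
print(round(pearson(self_grades, staff), 2))  # lower: weaker self accuracy
```

A higher correlation with staff grades is read here as higher grading accuracy, which is the sense in which peer grading outperformed self-grading in the synthesis-based activities.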

6. Conclusion

Peer/self-assessment is an effective learning tool that has emerged in many disciplines of university education. It enables students to review, evaluate and judge the work submitted either by themselves or by their peers. The literature agrees on the positive impacts of applying this tool, not only in the interests of students but also in the interests of both staff and the learning process. Peer/self-assessment enhances students’ awareness of their own personal standing in addition to broadening their knowledge and experience. For staff, it facilitates the grading process and enhances its accuracy. In addition, it raises the efficiency of the learning process and converts it into a more cooperative process. On the other hand, challenges such as grading accuracy and psychological or social bias may exist. These challenges demand some mitigation measures. The availability of facilitators, adoption of an incentives strategy, and the use of specified criteria represent some such measures. In this context, peer/self-assessment holds much potential to enhance urban planning education. The already time-constrained urban planning education studio could benefit from such an effective tool to facilitate the learning process and enhance its outcomes.

This statistical and comparative analysis of experimental peer/self-assessment in the Site Planning and Development course at Cairo University reveals many important results. Peer/self-assessment offers a great opportunity to develop the quality of students’ output for the course to a higher level and within a shorter timescale. Both students and staff agree on the benefits realized by peer/self-assessment, which include exploring new ideas, recognizing the quality of their own deliverables, understanding the opinions of colleagues, predicting their own work grade and, to a lesser extent, enhancing understanding between team members and relieving work pressures. There is agreement with the literature on the effective role of peer/self-assessment in saving educational studio time and facilitating the delivery of feedback to students. On the other hand, the accuracy of peer/self-assessment grading is relatively low. Differences between peer/self-assessment grades and the corresponding staff grades are clear. As a result, students’ grading cannot replace staff grading. The course grade, class attendance and GPA of a student all influence the accuracy of their self-grading but have no effect on the accuracy of peer grading. In addition, a student's academic level and gender do not affect the accuracy of either self-grading or peer grading. Nevertheless, the evidence is clear that experience gained from performing peer/self-assessment multiple times results in enhancement of the accuracy of self-grading. Given the benefits realized, the time deducted from the class to perform peer/self-assessment is considered to be a good investment, and no extra physical provisions are needed to conduct peer/self-assessment beyond those already provided for the original learning activity. Students and staff recommend continuing the practice of peer/self-assessment in the urban planning educational studio.

Further studies are required in order to establish how the accuracy of peer/self-assessment evolves in the course of its continuous application and to confirm the current results.


This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/

Cite this article:

Normal Style
Mohamed Anwer Zayed. The Effectiveness of Peer/Self-assessment Approach in Urban Planning Studio-based Academic Education. American Journal of Educational Research. Vol. 5, No. 6, 2017, pp 588-605. https://pubs.sciepub.com/education/5/6/1
MLA Style
Zayed, Mohamed Anwer. "The Effectiveness of Peer/Self-assessment Approach in Urban Planning Studio-based Academic Education." American Journal of Educational Research 5.6 (2017): 588-605.
APA Style
Zayed, M. A. (2017). The Effectiveness of Peer/Self-assessment Approach in Urban Planning Studio-based Academic Education. American Journal of Educational Research, 5(6), 588-605.
Chicago Style
Zayed, Mohamed Anwer. "The Effectiveness of Peer/Self-assessment Approach in Urban Planning Studio-based Academic Education." American Journal of Educational Research 5, no. 6 (2017): 588-605.
  • Figure 5. Bar chart of the correlation coefficients between differences in peer grading and student’s course grade and student’s GPA
  • Figure 9. Estimations of peer/self-assessment duration (right); student responses as to venue suitability for peer/self-assessment (left)
  • Table 2. Comparative analysis results of differences in grades across four rounds of self-assessment of project-based tasks
  • Table 4. The results of differences in grades across two rounds of self-assessment of reporting-based tasks
  • Table 5. The results of differences in grades across four rounds of peer assessment of project-based tasks
  • Table 7. The results of differences in grades across two rounds of peer assessment of reporting-based tasks
[1]  Bloxham S, Boyd P. Developing Effective Assessment in Higher Education: A Practical Guide. Maidenhead, UK: Open University Press; 2007.
[2]  Van Den Berg I, Admiraal W, Pilot A. Designing Student Peer Assessment in Higher Education: Analysis of Written and Oral Peer Feedback. Teaching in Higher Education. 2006; 11(2): 135-47.
[3]  Falchikov N, Goldfinch J. Student Peer Assessment in Higher Education: A Meta-Analysis Comparing Peer and Teacher Marks. Review of Educational Research. 2000; 70(3): 287-322.
[4]  Aitchison RE. The Effects of Self-Evaluation Techniques on the Musical Performance, Self-Evaluation Accuracy, Motivation, and Self-Esteem of Middle School Instrumental Music Students [doctoral dissertation]. Iowa City, United States: The University of Iowa; 1995.
[5]  Lindblom-Ylänne S, Pihlajamäki H, Kotkas T. Self-, Peer- and Teacher-Assessment of Student Essays. Active Learning in Higher Education. 2006; 7(1): 51-62.
[6]  Boud D, Falchikov N. Quantitative Studies of Student Self-Assessment in Higher Education: A Critical Analysis of Findings. Higher Education. 1989; 18(5): 529-49.
[7]  Andrade H, Du Y. Student Responses to Criteria-Referenced Self-Assessment. Assessment & Evaluation in Higher Education. 2007; 32(2): 159-81.
[8]  Kollar I, Fischer F. Peer Assessment as Collaborative Learning: A Cognitive Perspective. Learning and Instruction. 2010; 20(4): 344-48.
[9]  Jones I, Alcock L. Peer Assessment without Assessment Criteria. Studies in Higher Education. 2014; 39(10): 1774-87.
[10]  Spiller D. Assessment Matters: Self-Assessment and Peer Assessment. Hamilton, New Zealand: Teaching Development, University of Waikato; 2012.
[11]  Topping KJ. Trends in Peer Learning. Educational Psychology. 2005; 25(6): 631-45.
[12]  Thomas G, Martin D, Pleasants K. Using Self- and Peer-Assessment to Enhance Students’ Future-Learning in Higher Education. Journal of University Teaching & Learning Practice. 2011; 8(1): Article 5.
[13]  Crowell TL. Student Self Grading: Perception vs. Reality. American Journal of Educational Research. 2015; 3(4): 450-55.
[14]  Sadler PM, Good E. The Impact of Self- and Peer-Grading on Student Learning. Educational Assessment. 2006; 11(1): 1-31.
[15]  Planas Lladó A, Soley LF, Fraguell Sansbelló RM, Pujolras GA, Planella JP, Roura-Pascual N, et al. Student Perceptions of Peer Assessment: An Interdisciplinary Study. Assessment & Evaluation in Higher Education. 2014; 39(5): 592-610.
[16]  Crowe JA, Silva T, Ceresola R. The Effect of Peer Review on Student Learning Outcomes in a Research Methods Course. Teaching Sociology. 2015; 43(3): 201-13.
[17]  Kersh N, Evans K, Kontiainen S, Bailey H. Use of Conceptual Models in Self-Evaluation of Personal Competences in Learning and in Planning for Change. International Journal of Training and Development. 2011; 15(4): 290-305.
[18]  Tighe-Mooney S, Bracken M, Dignam B. Peer Assessment as a Teaching and Learning Process: The Observations and Reflections of Three Facilitators on a First-Year Undergraduate Critical Skills Module. All Ireland Journal of Teaching and Learning in Higher Education. 2016; 8(2): 28301-18.
[19]  Topping K. Peer Assessment Between Students in Colleges and Universities. Review of Educational Research. 1998; 68(3): 249-76.
[20]  Carlson PA, Berry FC, Voltmer D. Incorporating Student Peer-Review into an Introduction to Engineering Design Course. In: FIE '05. Proceedings 35th Annual Conference Frontiers in Education. New York: IEEE; 2005. p. 20-25.
[21]  Topping K. Self and Peer Assessment in School and University: Reliability, Validity and Utility. In: Segers M, Dochy F, Cascallar E, editors. Optimising New Modes of Assessment: In Search of Qualities and Standards. Dordrecht, The Netherlands: Springer; 2003. p. 55-87.
[22]  Arnold L, Willoughby L, Calkins V, Gammon L, Eberhart G. Use of Peer Evaluation in the Assessment of Medical Students. Journal of Medical Education. 1981; 56(1): 35-42.
[23]  Levine RE, Kelly A, Karakoc T, Haidet P. Peer Evaluation in a Clinical Clerkship: Students’ Attitudes, Experiences, and Correlations With Traditional Assessments. Academic Psychiatry. 2007; 31(1): 19-24.
[24]  Bloxham S, West A. Understanding the Rules of the Game: Marking Peer Assessment as a Medium for Developing Students’ Conceptions of Assessment. Assessment & Evaluation in Higher Education. 2004; 29(6): 721-33.
[25]  Das M, Mpofu D, Dunn E, Lanphear JH. Self and Tutor Evaluations in Problem-Based Learning Tutorials: Is There a Relationship? Medical Education. 1998; 32(4): 411-18.
[26]  Foley S. Student Views of Peer Assessment at the International School of Lausanne. Journal of Research in International Education. 2013; 12(3): 201-13.
[27]  Hanrahan SJ, Isaacs G. Assessing Self- and Peer-Assessment: The Students’ Views. Higher Education Research & Development. 2001; 20(1): 53-70.
[28]  Kaufman JH, Schunn CD. Students’ Perceptions about Peer Assessment for Writing: Their Origin and Impact on Revision Work. Instructional Science. 2011; 39(3): 387-406.
[29]  Cestone CM, Levine RE, Lane DR. Peer Assessment and Evaluation in Team-Based Learning. New Directions for Teaching and Learning. 2008; 2008(116): 69-78.
[30]  Wen ML, Tsai CC. University Students’ Perceptions of and Attitudes toward (Online) Peer Assessment. Higher Education. 2006; 51(1): 27-44.
[31]  Cotrell WH, Weaver RL. Peer Evaluation: A Case Study. Innovative Higher Education. 1986; 11(1): 25-39.
[32]  Cassidy S. Assessing ‘Inexperienced’ Students’ Ability to Self-Assess: Exploring Links with Learning Style and Academic Personal Control. Assessment & Evaluation in Higher Education. 2007; 32(3): 313-30.
[33]  O’Brien SO, McNamara G, O’Hara J. Supporting the Consistent Implementation of Self-Evaluation in Irish Post-Primary Schools. Educational Assessment, Evaluation and Accountability. 2015; 27(4): 377-93.
[34]  Gielen S. Peer Assessment as a Tool for Learning [doctoral dissertation]. Leuven, Belgium: Katholieke Universiteit Leuven; 2007.
[35]  De Grez L, Valcke M, Roozen I. How Effective Are Self- and Peer Assessment of Oral Presentation Skills Compared to Teachers’ Assessments? Active Learning in Higher Education. 2012; 13(2): 129-42.
[36]  Miller PJ. The Effect of Scoring Criteria Specificity on Peer and Self-Assessment. Assessment & Evaluation in Higher Education. 2003; 28(4): 383-94.
[37]  Tan K, Keat LH. Self and Peer Assessment as an Assessment Tool in Problem-Based Learning. In: Tan K, editor. Problem-Based Learning: New Directions and Approaches. Singapore: Learning Academy; 2005. p. 162-75.
[38]  Bickmore DK. The Effects of Student Self-Evaluation and Pupil-Teacher Conferences on Student Perceptions, Self-Concepts and Learning [doctoral dissertation]. Utah, United States: Brigham Young University; 1981.
[39]  Cho K, Schunn CD, Wilson RW. Validity and Reliability of Scaffolded Peer Assessment of Writing from Instructor and Student Perspectives. Journal of Educational Psychology. 2006; 98(4): 891-901.
[40]  Salehi M, Masoule ZS. An Investigation of the Reliability and Validity of Peer, Self-, and Teacher Assessment. Southern African Linguistics and Applied Language Studies. 2017; 35(1): 1-15.
[41]  Van Zundert M, Sluijsmans D, Van Merriënboer J. Effective Peer Assessment Processes: Research Findings and Future Directions. Learning and Instruction. 2010; 20(4): 270-9.
[42]  Willey K, Gardner AP. Investigating the Capacity of Self and Peer Assessment to Engage Students and Increase Their Desire to Learn. In: Attracting Young People to Engineering, Proceedings of the SEFI 37th Annual Conference. Rotterdam; 2009.
[43]  Li Y, Chen L. Peer- and Self-Assessment: A Case Study to Improve the Students’ Learning Ability. Journal of Language Teaching and Research. 2016; 7(4): 780-7.
[44]  De Sande JCG, Murthy A. Including Peer and Self-Assessment in a Continuous Assessment Scheme in Electrical and Electronics Engineering. In: FIE '14. Proceedings 44th Annual Conference Frontiers in Education. Madrid; 2014. p. 1-5.
[45]  Schuessler JN. Self Assessment as Learning: Finding the Motivations and Barriers for Adopting the Learning-Oriented Instructional Design of Student Self Assessment [doctoral dissertation]. Minneapolis, United States: Capella University; 2010.