The study was carried out to determine the extent to which teachers of English in secondary schools in Ebonyi State validate their test items. To guide the study, three research questions were formulated and one null hypothesis was postulated and tested at the 0.05 level of significance. The design of the study was the descriptive survey. The population consisted of all the teachers of English in all the government-owned secondary schools in the three education zones of Ebonyi State. A purposive sampling technique was used to select the sample of 367 teachers of English, drawn from 50% of the schools. A 22-item researcher-developed questionnaire entitled Test Item Validation Questionnaire (TIVQ) was constructed, validated, trial-tested and used to elicit data from the respondents. The data obtained were presented and analyzed using mean and standard deviation to answer the research questions, while the t-test was used to test the hypothesis. The study revealed, among other things, that the majority of teachers of English in public secondary schools in Ebonyi State do not validate their test items before administration. The researchers recommended that test item review committees be set up, that training programmes be provided for teachers of English, and that there be rigorous supervision of academic activities in secondary schools in Ebonyi State by both internal and external authorities.
A test is a measurement device used by assessors to gather information about testees in order to make important decisions. The test is the most commonly used instrument for assessing cognitive achievement [1]. Tests also connote the presentation of a standard set of items to be responded to, and the responses provide a basis for determining the level of achievement. A language test, therefore, is a measurement device used for measuring an individual's proficiency in using a particular language or in a language course.
Types of tests include diagnostic tests, proficiency tests, achievement tests, and aptitude tests. In classroom situations (with which this study is concerned) the achievement test is the most commonly used. The achievement test is used to measure the degree of success attained in a specific area of learning [2]. It is the type of ability test that is concerned with what a person has learnt, and its importance lies in its use to find out the progress made in aspects of language that have been taught. Achievement tests, thus, are closely tied to particular school subjects. The merit of achievement tests lies in their provision of objective, independent and accurate measurement of what has been learned [3]. Achievement tests are also known to provide norms for comparing students' performance with that of their counterparts within and outside their schools.
Achievement tests are of two types: the standardized test and the teacher-made test (also called the classroom test). Teacher-made tests are locally developed by subject teachers in schools to assess the achievement of their students in areas covered by instruction. While the standardized test is valid, reliable and has a table of norms, the teacher-made test does not possess any form of norms [2]. The standardized test is designed to be used on a much larger scale than the teacher-made test; as a result, its items are subjected to a series of standardization processes before they are administered to testees. The teacher-made test, on the other hand, has no specific method of assuring its quality; the method of assuring the quality of teacher-made tests varies from institution to institution.
In the selection of a measuring instrument, two fundamental questions arise. These are:
Does the instrument measure a variable consistently?
Is the instrument a true measure of the variable?
The former is an indication of reliability while the latter raises issues of validity. The adequacy (and quality) of a measuring instrument is determined by its reliability and validity. Reliability refers to the consistency or stability of measurement [4]. Reliability is also viewed as the degree of consistency between two sets of scores or observations obtained with the same instrument or with equivalent forms of an instrument [2]. Validity, on the other hand, has to do with the ability of test items to measure what they are meant to measure. In other words, while reliability is concerned with the consistency of scores, validity is closely tied to the adequacy of test items in testing a specified area of instruction. This study is particularly interested in the validity of teacher-made language tests in the study area.
Validity, basically, is the assessment of whether a test measures what it aims to measure. A test is valid when it measures what it sets out to measure; for instance, a reading comprehension test which asks, “What is the difference between a tropical climate and a temperate climate?” may not be valid, especially as the question looks like a Geography question; it would, however, be valid if it is tied to the passage [4]. Validity is believed to be the most important characteristic of a test, and a test that lacks validity is worthless [1]. The standards of validity are content, criterion-related and construct validity, but content validity is the most vital for the classroom teacher [1]. Further, a teacher-made test must fulfil two conditions to be termed valid. These are:
It must measure achievement in the subject for which it was prepared.
It must measure achievement in the learning objective defined for the subject.
A test that satisfies these two conditions would be seen to possess content validity. The question, however, is to what extent classroom teachers in secondary schools subject their test items to validity checks.
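To make the two conditions above concrete, the sketch below shows one hypothetical way a teacher could tabulate a test blueprint (table of specifications) and compare drafted items against it before administration. The content areas, objective levels and item counts are invented for illustration and are not drawn from the study's instrument.

```python
# A minimal sketch of a content-validity check against a hypothetical blueprint.
# Each entry maps a content area taught in class to the number of test items
# planned at each objective level.
blueprint = {
    "Reading comprehension": {"knowledge": 2, "comprehension": 4, "application": 2},
    "Grammar (concord)":     {"knowledge": 3, "comprehension": 2, "application": 1},
    "Essay writing":         {"knowledge": 1, "comprehension": 2, "application": 3},
}

# Items actually drafted by the teacher, tagged with the area and level they test.
drafted_items = [
    ("Reading comprehension", "comprehension"),
    ("Reading comprehension", "comprehension"),
    ("Grammar (concord)", "knowledge"),
    ("Essay writing", "application"),
]

def coverage_report(blueprint, items):
    """Compare drafted items against the blueprint and flag any shortfall."""
    drafted = {}
    for area, level in items:
        drafted.setdefault(area, {}).setdefault(level, 0)
        drafted[area][level] += 1
    for area, levels in blueprint.items():
        for level, planned in levels.items():
            actual = drafted.get(area, {}).get(level, 0)
            if actual < planned:
                print(f"Short by {planned - actual} item(s): {area} / {level}")

coverage_report(blueprint, drafted_items)
```

A shortfall flagged by such a check would indicate that the draft test does not yet cover the subject and its learning objectives, i.e. that its content validity is in doubt.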
Tests are very important in the school system. This is because they give insight into how much the objectives of learning have been achieved, how well the method(s) of teaching have worked, and how worthwhile a programme is. Teachers construct and administer tests, and the learners' performance determines the level of achievement. Thus, test construction should not be taken lightly. One may argue that the teacher who taught a subject has the ability to develop valid test items; however, it has been observed that the manner in which tests are developed in schools often presents problems in the scoring and grading of achievement [5]. Also, many teachers do not use correct procedures in preparing classroom tests [6]. From personal observation, classroom teachers in Ebonyi State sometimes construct test items on the day they are required to be taken. Even when the items are constructed earlier, there is often no time or resources to ensure their content validity before administration. This suggests that much is left to be desired. If the language teacher, who is the key agent of the implementation of the language policy in Nigeria, is not performing at optimum level, there is a problem [7]. Effort should be made to ensure quality in the education system, and it should start from the classroom.
This study sets out to determine the extent to which language teachers in secondary schools in Ebonyi State subject their test items to validation processes (especially for content validity). Also, the study seeks to discover the characteristics of validity that secondary school teachers in Ebonyi State use to test the validity of their test items. Finally, the study proffers suggestions that will help to ensure the quality of teacher-made tests in secondary schools in Ebonyi State.
The findings of this study are significant in that they expose the extent to which secondary school teachers in Ebonyi State subject their test items to validation processes. They also reveal the characteristic of validity that is employed by secondary school teachers in Ebonyi State to validate their test items. In essence, teachers, curriculum planners, students and parents will greatly benefit from the findings of this study. Teachers will see the need to improve the quality of their test items; curriculum planners will identify areas of need and pay attention to the right areas rather than perceived ones; students and parents will benefit most as quality education will be provided and better qualified students will be produced. The findings also provide accurate information that will enable informed decisions on educational policies. Finally, the study serves as a reference material for future researchers who may wish to carry out research in similar areas.
The quality of a test is determined by its reliability, validity and usability; this study is delimited to the validity of teacher-made tests in secondary schools. Also, the study focused on government-owned (public) secondary schools in the three education zones of Ebonyi State.
One null hypothesis was formulated and tested at 0.05 level of significance.
HO1: There is no significant difference in the mean rating of male and female teachers of English in secondary schools in Ebonyi State on the extent to which they validate their test items.
Quality in the education system has to do with the standards enforced in the implementation of programmes. Quality in education, thus, connotes standard of education, standard of service, management, relevance, significance, and efficiency of product. Quality is an inalienable index of education programmes, and it is imperative that every segment of the system establishes and maintains quality [8]. To achieve the objectives of education, therefore, the quality of tests must be assured. Quality assurance in testing is the systematic construction, administration and scoring of teacher-made tests using competent teachers and appropriate test items, among other things [6]. Thus, assuring the quality of tests will enable a high educational standard. Quality assurance is the process of setting, maintaining and improving standards in all aspects of the school system [9]. It is an all-embracing, ongoing and continuous process of improving the education system, institutions and programmes [10].
Quality assurance is aimed at preventing faults from occurring. It is designed to ensure that products or services meet predetermined specifications, and it aims at providing products and services completely devoid of defects by doing things right at all times [11]. By implication, quality will be assured in the education system by feeding quality inputs into the system in order to get quality outputs.
3.2. Concept and Importance of Tests in Education

The importance of tests in education cannot be overemphasized: tests help the teacher to take decisions on course improvement, identify the needs of students, and help educational administrators and curriculum planners to judge how good the school system is. Tests also help the evaluator to evaluate human ability, personality characteristics, as well as adjustment and mental health [12]. The purposes of tests include giving direction to instructional activities; measuring achievement; providing an empirical basis for curricular activities; determining the merits and limitations of the instructional programme; and supplying the data for a comprehensive judgement of the learners [13]. Further, tests help the education administrator to make decisions in educational planning, determine strengths and weaknesses of instructional programmes, identify areas where supervision is needed, and determine the overall effectiveness of schools [3]. Tests also help teachers to gain an understanding of the achievement and ability levels of individual students and classes, determine whether to adjust instructional practices, diagnose students' learning difficulties, measure students' attainment, and make decisions regarding the grouping of students within subject matter areas [3].
From the foregoing, tests are devices used to evaluate whether learners are coping with the lessons being taught [14]. Tests are standard sets of questions administered to testees to determine the extent to which they have attained previously identified objectives [12]. A test is also a procedure used to evaluate human ability, personality characteristics, adjustment and mental health. This means that before tests are administered to learners, the learners must have been exposed to the relevant learning experiences [15].
3.3. Concept of Validity in Testing

The importance of validity in testing has been likened to the accuracy of a wristwatch: a wristwatch that is consistently late by five minutes may be reliable but not valid, as it is consistently behind the accurate time [16]. In other words, validity assesses the relevance of an instrument to its purpose. Validity is also seen as the extent to which a test adequately measures what it is supposed to measure [1]. Validity, which could be content-, construct- or face-related, is the most important characteristic of a test [1]. A test possesses content validity if it contains items that measure and cover the area intended to be covered by instruction. Usually, test content and classroom instruction are closely related.
This study is a descriptive survey. Survey research is the systematic collection of data or information from a population (sometimes referred to as the universe) or a sample of a population (considered to be representative of the entire group of interest) through the use of personal interviews and/or questionnaires [17]. This design was considered appropriate as the study collected data from the sample, with the aid of a researcher-developed questionnaire, in order to describe the entire population under study.
The area of the study comprised all the government-owned secondary schools in the three education zones of Ebonyi State: Abakaliki, Onueke and Afikpo. Data were also obtained from the state Secondary Education Board (SEB) at Abakaliki to support the study, since the government-owned secondary schools are centrally controlled by the Board.
The study was interested in the methods of ascertaining the validity of teacher-made language (English) test items, so the population of the study comprised all teachers of English in the two hundred and forty-three (243) government-owned (public) secondary schools in the three (3) education zones of Ebonyi State. The choice of the population was based on the fact that these institutions all offer English as a compulsory subject at all levels, while other languages such as Igbo and French are compulsory only in junior classes and taken as optional subjects in the senior classes. Data obtained from the Secondary Education Board in Abakaliki revealed that there were six hundred and fifty-seven (657) teachers of English in the government-owned secondary schools. Thus, the population of the study was 657 secondary school teachers of English.
The purposive sampling technique was used to select one hundred and twenty-two (122) schools, representing 50% of the public secondary schools in Ebonyi State. All the teachers of English in the 122 schools were used, since their number was not very large, giving a sample of three hundred and sixty-seven (367) teachers of English. The purposive sampling technique was deemed appropriate because there are often few teachers of English in each school; moreover, matters of quality should not be trifled with, and the more responses sought, the better the result obtained.
The instrument for data collection was a researcher-developed teachers' questionnaire entitled Test Item Validation Questionnaire (TIVQ). The questionnaire items were generated from data gathered in the review of related literature. The questionnaire had two parts: Part A, which solicited information on respondents' personal data, and Part B, which contained items on knowledge and practice of validation processes. Part B was further divided into three sections, with the clustered items in each section addressing one of the three research questions.
The face and content validity of the instrument were determined by two experts from the Department of Arts Education, Ebonyi State University, Abakaliki, and two other experts from the same university. Copies of the questionnaire were given to these experts, and their corrections and suggestions were incorporated. As a result, the instrument was adjudged to possess both content and face validity.
The reliability of the instrument was determined by pre-testing it on thirty (30) teachers of English in public secondary schools in Enugu State. The scores obtained from the respondents were collated and analyzed to determine the reliability coefficient of the set of scores for the items in each of the sections. Cronbach's alpha was used, yielding reliability coefficients of 0.85, 0.82 and 0.87 for Sections 1, 2 and 3 respectively.
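For readers who wish to see how such a reliability estimate is obtained, the following minimal sketch computes Cronbach's alpha from a respondents-by-items matrix of scores. The trial data are randomly generated stand-ins; the study's actual trial-test scores are not reproduced here.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    k = scores.shape[1]                              # number of items in the section
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses from 30 trial-test respondents to a 10-item section,
# scored on a 4-point scale (1 = strongly disagree ... 4 = strongly agree).
rng = np.random.default_rng(0)
trial_scores = rng.integers(1, 5, size=(30, 10))
print(round(cronbach_alpha(trial_scores), 2))
```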
The researchers employed the services of six (6) research assistants to help in the administration and on-the-spot collection of the questionnaires to avoid loss. Two (2) research assistants covered the schools in each of the three education zones of the state. All the questionnaires administered were returned and used in the study.
Data collected were analyzed using mean scores and standard deviations, complemented by frequency counts and simple percentages. The YES option implies that the respondent(s) agree with the statement, while the NO option implies that they disagree. An item mean of 2.50 and above indicates acceptance; an endorsement of fifty percent (50%) and above indicates approval, while forty-nine percent (49%) and below indicates disapproval. The t-test was used to test the single hypothesis at the 0.05 level of significance.
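As a rough illustration of these decision rules, the sketch below computes one item's mean and standard deviation against the 2.50 benchmark, and the percentage of respondents endorsing the statement against the 50% criterion. The response values are hypothetical, and treating "agree"/"strongly agree" as YES is an assumption made only for this example.

```python
import statistics

# Hypothetical responses to one questionnaire item on a 4-point scale
# (4 = strongly agree, 3 = agree, 2 = disagree, 1 = strongly disagree).
responses = [1, 2, 1, 3, 2, 1, 1, 2, 4, 1, 2, 1]

mean = statistics.mean(responses)
sd = statistics.stdev(responses)
decision = "accepted" if mean >= 2.50 else "rejected"
print(f"Item mean = {mean:.2f} ± {sd:.2f} -> {decision}")

# Dichotomous (YES/NO) view of the same item: 50% or more YES indicates approval.
yes_responses = [r >= 3 for r in responses]            # agree/strongly agree counted as YES
percent_yes = 100 * sum(yes_responses) / len(responses)
print(f"{percent_yes:.1f}% YES -> {'approval' if percent_yes >= 50 else 'disapproval'}")
```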
Data presented and analyzed here are based on the research questions guiding the study. The items are clustered according to the research questions and analyzed thus.
Research Question 1: To what extent do secondary school teachers in Ebonyi State validate their test items?
The results in Table 1 show that items 1-10 had mean scores of 1.46 ± 0.55, 1.29 ± 0.47, 2.04 ± 1.06, 1.41 ± 0.62, 1.72 ± 0.91, 1.38 ± 0.57, 1.59 ± 0.87, 1.63 ± 0.92, 1.48 ± 0.75 and 1.35 ± 0.49. This indicates that the respondents disagreed with the statements that test items are constructed on the day of the test, that test items are sent to test experts for scrutiny before administration, that test items are analyzed before being administered to testees, that test items are submitted to the HOD for assessment, that corrections and inputs are made by the HOD before administration, that the HOD is often too busy to look at the test items before administration, that a committee reviews test items before administration, that every teacher handles his/her test items alone to avoid leakage, and that individual teachers are at liberty to do as they see fit with test items in their subjects. The grand mean score of all the respondents is 1.56 with a standard deviation of 0.72. This mean score is below the 2.50 benchmark of acceptance. Therefore, secondary school teachers in Ebonyi State do not validate their test items before administration.
Research Question 2: What characteristic of validity do secondary school teachers in Ebonyi State use to test the validity of their test items?
The results in Table 2 show that items 11-17 had mean scores of 3.07 ± 0.84, 1.26 ± 0.45, 3.00 ± 0.89, 1.51 ± 0.73, 1.46 ± 0.73, 2.54 ± 0.75 and 2.72 ± 0.91. This indicates that the respondents disagreed with items 12, 14 and 15, which state that test items cover only some aspects of instruction, that test items do not necessarily look like language tests, and that test items are taken from any area of the content of instruction. The data also revealed that the greater number of respondents accepted items 11, 13, 16 and 17, which state that test items cover every aspect of instruction, that test items look like language tests, that test items are constructed using a test blueprint, and that test items correspond with the goals of instruction. The grand mean score of all the respondents was 2.22, which is lower than the 2.50 benchmark. Therefore, teachers in Ebonyi State secondary schools do not consistently test the validity of their test items.
Research Question 3: What can be done to improve the quality of teacher-made tests in secondary schools in Ebonyi State?
The results in Table 3 show that items 18-22 had mean scores of 2.99 ± 0.83, 2.69 ± 0.91, 2.70 ± 0.86, 2.94 ± 0.87 and 2.92 ± 0.91. This indicates that the respondents accepted items 18, 19, 20, 21 and 22: that teachers should construct test items with a test blueprint, that teachers must periodically attend seminars and workshops, that a committee should be formed to review test items before administration, that principals should supervise teachers to make sure they are doing the right thing, and that HODs must moderate test items before administration. The grand mean score of all the respondents is 2.84, which is higher than the 2.50 benchmark. Therefore, the respondents endorsed all the suggested strategies for improving the quality of teacher-made tests in secondary schools in Ebonyi State.
Data in Table 4 show that the mean ratings of male and female teachers on the extent to which they validate their test items were 2.6377 and 2.4723, with standard deviations of 0.28449 and 0.23637 respectively. This indicates that male teachers in Ebonyi State secondary schools validate their test items to a slightly greater extent than female teachers. The analysis also yielded a p-value of 0.000, which is lower than the chosen level of significance of 0.05. The null hypothesis, which states that there is no significant difference in the mean rating of male and female teachers of English in secondary schools in Ebonyi State on the extent to which they validate their test items, was consequently rejected.
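The sketch below shows how such an independent-samples t-test could be run on the two groups' validation-extent ratings. The arrays are synthetic stand-ins generated to mirror the reported group means and standard deviations, and the group sizes (180 and 187) are assumptions; the study's raw data are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic ratings mirroring the reported summary statistics
# (male: mean 2.6377, SD 0.28449; female: mean 2.4723, SD 0.23637).
male_ratings = rng.normal(loc=2.6377, scale=0.28449, size=180)
female_ratings = rng.normal(loc=2.4723, scale=0.23637, size=187)

t_stat, p_value = stats.ttest_ind(male_ratings, female_ratings, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the mean ratings differ significantly.")
else:
    print("Fail to reject the null hypothesis.")
```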
The first research question sought information on the extent to which teachers of English in public secondary schools in Ebonyi State validate their test items. The findings indicate that teachers of English generally do not validate their test items, as 78.5% accepted that they construct test items on the day of administration; only 21.5% of the respondents indicated that their test items are constructed earlier than the administration day. If teachers hurriedly construct test items on the actual day of administration, there is no time to subject them to validation processes beforehand. To give further credence to this claim, 95.9% rejected the statement that test items are analyzed before being administered to testees. This shows that no form of item analysis is done to determine the effectiveness of test items before administration.
Furthermore, it was discovered that test items were not submitted to any authority, such as an examination committee, the head of department or the dean of studies, for scrutiny. The data show that 55.9% rejected the assertion that test items are submitted to the HOD for assessment, as many as 97.3% rejected the claim that a committee reviews test items before administration, and 88.8% disagreed that test items are sent to test experts for scrutiny before administration. Finally, it was discovered that every teacher handles his/her test items alone. The available data revealed that 83.7% affirmed that every teacher handles his/her test items alone to avoid leakage, and 92.9% concurred that individual teachers are at liberty to do as they see fit with test items in their subjects.
Table 2 presented data that answered the second research question, which sought the characteristics of validity applied by teachers of English in their assessment of test items. Although Table 1 revealed that no formal measures are taken to validate language test items, the data here showed that, in the teachers' own assessment, their test items possess face and content validity. For instance, 56.1% of the respondents affirmed that their test items cover every aspect of instruction, 80.7% conceded that their test items resemble language tests, and 77.7% agreed that their test items correspond with the set goals of instruction. In other words, most of the respondents held that their test items look like language tests and agree with the objectives of instruction.
However, these claims are one-sided, since no other person was required to verify them. Moreover, no table of specifications or test blueprint was used in the construction of test items by teachers of English in public secondary schools in Ebonyi State. This is seen from the 51% of the respondents who rejected the statement on the use of a test blueprint in preparing test items.
Table 3 presented data on the strategies to be employed to improve the quality of teacher-made tests in secondary schools in Ebonyi State; all five items scored above 50%, indicating that they were all accepted. Specifically, 86.4% affirmed that test items should be constructed with a test blueprint; 80.4% agreed that teachers must periodically attend seminars and workshops if they are to improve in test item development and ensure quality in the system; 84.7% accepted that committees be set up to oversee test construction and administration in schools; 88% felt that principals should engage in active supervision; and 79% accepted that heads of departments must moderate test items before administration. This implies that teachers want the best for the system and are prepared to do the right thing in a bid to enhance the quality of their products.
The study explored the extent of validation of teacher-made language tests in secondary schools in Ebonyi State. Given the importance of evaluation in schools, it is appalling that little effort is made to ensure that tests are constructed and administered properly. Language test items are constructed in a hurry, and no time is taken to analyze them and ascertain their effectiveness. Even though test items are constructed with the objectives of instruction in mind, they are neither sent to any expert for validation nor constructed with the aid of a test blueprint or table of specifications. As a result, outcomes may go either way: students may pass too well or they may fail drastically. Either way, the results will not be a true reflection of instruction.
Reports over the years have shown massive failure in English in external examinations such as the West African Senior School Certificate Examination (WASSCE). One begins to wonder whether the failure to validate test items is a major cause of this. Ensuring quality in teacher-made language tests is a good way of improving the performance of students in secondary schools. Strategies that can help to ensure that language test items are validated in secondary schools in Ebonyi State include having heads of departments adequately supervise the teachers under them to ensure that they construct their test items with the aid of a test blueprint, and setting up committees to review and validate test items.
Based on the findings of the study, the following recommendations are made.
Without proper supervision, activities in schools will be chaotic. The study therefore recommends rigorous supervision of academic activities in secondary schools in Ebonyi State by both internal and external authorities.
Teachers of English can only give what they have. In this dispensation, when graduates of English are employed to teach English without teaching qualifications, adequate training in test item construction should periodically be provided for teachers of English to remedy this inadequacy.
Departments of English in public secondary schools in Ebonyi State must, as a matter of urgency, set up test item review committees to vet test items before administration.
[1] Maduabum, M. A. (1996). Handbook for effective continuous assessment. Owerri: Versatile Publishers.
[2] Okpala, P. N., Onocha, C. O. & Oyedeji, O. A. (1993). Measurement and evaluation in education. Ibadan: Stirling-Horden Publishers.
[3] Akinpelu, O. F. (2005). "Appraisal of students: test and non-test devices." In A. I. Idowu (Ed.), Guidance and counseling in education, pp. 164-189.
[4] Williams, D. (1990). English language teaching: an integrated approach. Ibadan: Spectrum Books Limited.
[5] Nworgu, B. G. (2003). Educational measurement and evaluation: theory and practice (3rd ed.). Nsukka: University Trust Publishers.
[6] Ikoro, S. I. & Opa, F. A. (2014). "Quality assurance in teacher-made tests for sustainable development: the way forward." Ebonyi State College of Education, Ikwo Journal of Educational Research (EBSCOEIJER), 2(2), 120-128.
[7] Izuagba, A. C. & Ezenwa, P. C. N. (2011). "Teaching language in large classes: the teacher's strategies." Nigerian Journal of Curriculum Studies, 18(2), 120-126.
[8] Odo, E. E., Nwambe, R. N. & Emeh, C. O. (2016). "Management strategies that enhance quality assurance of science education in secondary schools in Ebonyi State." Ebonyi State College of Education, Ikwo Journal of Educational Research (EBSCOEIJER), 4(1), 143-166.
[9] Federal Republic of Nigeria (2004). National policy on education (4th ed.). Abuja: Nigerian Educational Research and Development Council (NERDC).
[10] Sanyal, B. C. (2013). "Quality assurance of teacher education in Africa." In UNESCO: fundamentals of teacher development. Addis Ababa: UNESCO.
[11] Okoroma, N. S. (2006). "A model for funding and ensuring quality assurance in Nigerian universities." National Journal of Educational Administration and Planning (NAEAP), 6(1), 1-15.
[12] Kazdin, A. E. (2000). Encyclopedia of psychology. Washington, DC: American Psychological Association.
[13] Singh, Y. K. (2008). Education and measurement. New Delhi: APH Publishing Corporation.
[14] Anene, G. U. & Ndubuisi, O. G. (2003). "Tests development process." In B. G. Nworgu (Ed.), Educational measurement and evaluation: theory and practice (3rd ed.), pp. 112-118. Nsukka: University Trust Publishers.
[15] Herbor, P. V. F. (1999). Noteworthy points on measurement and evaluation. Enugu: Snaap Press Limited.
[16] Jimoh, S. A. (1995). "Introduction: a statement of intention." In S. A. Jimoh (Ed.), Research methodology in education: an interdisciplinary approach, pp. vii-x. Ilorin: University of Ilorin Library and Publication Committee.
[17] Abdullahi, O. E. (1995). "Typology of research." In S. A. Jimoh (Ed.), Research methodology in education: an interdisciplinary approach, pp. 13-23. Ilorin: University of Ilorin Library and Publication Committee.