Research Article
Open Access | Peer-reviewed

Developing an Evidence-Based Evaluation Tool for Continuous Improvement Program of the Department of Education

Melvin A. Garcia, Ernie C. Cerado
American Journal of Educational Research. 2020, 8(7), 502-507. DOI: 10.12691/education-8-7-8
Received June 16, 2020; Revised July 17, 2020; Accepted July 27, 2020

Abstract

The study aimed to formulate a credible evaluation tool for the Continuous Improvement (CI) Program of School-Based Management (SBM) schools in the Philippines. The participants were 6 CI Masters and 30 program implementers of the Department of Education in the SOX Region. Self-survey questionnaires and an evidence-based evaluation tool were utilized in data gathering. The research was carried out through mixed methods that pooled quantitative and qualitative data. Findings indicated that the developed evaluation tool demonstrated high validity, suggesting its suitability and accuracy. Generally, the CI teams viewed their programs as very well implemented along these dimensions: strategic management, operational management, change adoption, stakeholder analysis, and CI sustainability. Using the evidence-based tool, the researcher's assessment showed that the schools achieved remarkable implementation, as manifested by their well-presented, organized, and functional documents across all dimensions. The statistical test showed that the researcher's program evaluation was significantly higher than the CI teams' perceptions. The researcher's direct observation and knowledge of the actual evidence are more independent, accurate, and objective than the mere perceptions of the implementers themselves. Finally, the best practices of the schools in effectively executing and sustaining their continuous improvement initiatives were articulated.

1. Introduction

As education sectors move toward the future, they have to face new challenges to remain sustainable in providing quality and equitable education by adding value to what is offered. Such effort essentially earns the trust of stakeholders who help the learning institution innovate in line with 21st-century education. This is extremely challenging since it requires the school administration to predict what is needed and to discover future opportunities [1]. Accordingly, the Philippine educational system has resorted to numerous innovations in the education structure, thus giving schools a more independent status of operations, guided by self-regulation in compliance with the Department of Education (DepEd) standard of excellence.

Continuous Improvement (CI) is a methodology that continually assesses, analyzes, and acts on the improvement of key processes, focusing on both customer needs and desired performance; it fuels DepEd's commitment to building a culture of continuous learning and growth. The concept is patterned after Kaizen (kai - change, zen - well), a Japanese way of thinking and management. In the West, CI has been translated as unending, continuous improvement [2].

Along with the school context, the identified priority improvement areas linked to the school measures are the basis for the targets. As a method, CI provides a scientific, systematic, practical, and evidence-based approach to arriving at project selections for schools. Solutions are implemented to make sure that the gains are sustainable and integrated within the learning environment. The method also enables schools to respond better to changing times and to mobilize the vision of shaping a culture with a direct and relevant impact on learners. Currently, DepEd is investing much in research to promote innovation related to accessible and responsive basic education. To date, one significant study has shown that CI is an effective and structured problem-solving tool, at a modelling level, for understanding and addressing perennial problems in school performance [3].

On the other hand, School-Based Management (SBM) was instituted as a strategy to improve education by transferring significant decision-making authority from DepEd Division Offices to individual schools, since every school administrator must attend to the numerous complexities of school operations. Thus, School Principals, teachers, students, and parents are given greater control over the education process, along with responsibility for decisions about the budget, personnel, and the curriculum. Through the involvement of teachers, parents, and other community members in these key decisions, SBM can create more effective learning environments for children [4]. In Region XII, popularly known as SOX, all accredited SBM Level 3 schools exert their best efforts to undertake initiatives in response to identified needs for continuous improvement.

As there is a dearth of studies on the Continuous Improvement Program at present, the researcher desired to supplement the existing literature. While CI has been claimed as a new approach to enhancing competitive excellence in manufacturing industries, there is a lack of evidence on the crucial part the program plays in a school setting, notably among the SBM secondary schools that piloted the CI initiatives at the regional scope. Owing to the novelty of the program, there is neither a standard nor an existing DepEd evaluation tool for the purpose. Hence, crafting and utilizing an evidence-based evaluation tool would likely yield accurate, rational, and objective assessment results. It would provide equity to the SBM schools, CI teams, and stakeholders who are working relentlessly for the effective discharge and sustainability of CI initiatives.

It is on this premise that the researcher, who is a CI advocate and practitioner himself, was prompted to undertake the study.

1.1. Objectives of the Study

This study attempted to develop a valid and credible evaluation tool to assess the implementation of the Continuous Improvement (CI) Program of SBM Level 3 secondary schools. It sought to satisfy these specific objectives:

1. Establish the validity of the evidence-based evaluation tool for the Continuous Improvement (CI) Program;

2. Describe briefly the CI programs of the schools;

3. Evaluate the CI programs as perceived by the implementers on the dimensions of strategic management, operational management, change adoption, stakeholder analysis, and CI sustainability;

4. Evaluate the CI programs as assessed by the researcher on the same dimensions;

5. Compare the ratings on the implementation of the CI programs using the implementers' self-survey questionnaires and the researcher's evidence-based evaluation tool; and

6. Identify the best practices of SBM schools in the implementation of CI programs.

2. Methodology

Convergent parallel mixed methods design merges quantitative and qualitative data and builds on the results to explain them in more detail through qualitative research, providing a comprehensive analysis of the research problem [5]. Results of the self-survey questionnaires answered by the CI teams and of the evidence-based evaluation tool used by the researcher probed and explained the contradictions or incongruent findings in the various dimensions of the program. Moreover, interview sessions were undertaken to highlight the best practices and challenges of the program implementers.

Participants of the study were 6 CI Masters in the Division of Sultan Kudarat, who acted as expert validators of the researcher-developed tool. Also, 30 respondents were taken through complete enumeration of the 6 CI teams, with 5 members from each of the accredited SBM Level 3 secondary schools in the SOX region.

The instrument for data collection had three (3) sets, namely: two (2) structured questionnaires and an evidence-based evaluation tool for CI program implementation. The first set of questionnaires has two (2) parts and was used to gather information from the target respondents of the SBM schools. Part I consists of the socio-demographic profile of the respondents, while Part II covers the CI program profile and the self-survey questionnaires on the program dimensions of strategic management, operational management, change adoption, stakeholder analysis, and CI sustainability. The second set deals with the experts' validation of the evidence-based evaluation tool for the CI Program in terms of content, relevance, and acceptability.

The third set was the validated evidence-based evaluation tool that the researcher used to objectively assess the programs. The contents of the instrument were modified from earlier studies [1, 3] on the CI program that underscored the 5 dimensions. To ensure its reliability, the tool was pilot-tested in 3 prominent schools in Esperanza District II, Division of Sultan Kudarat, that had implemented CI programs in the past. A Cronbach's alpha of 0.931 was computed, signifying a high internal consistency of the instrument. In addition, the qualitative data necessary to uncover the best practices of schools in implementing and sustaining the development programs were gathered through interviews with the CI implementers.
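For readers who wish to reproduce the reliability check, the sketch below shows how Cronbach's alpha can be computed in Python from an item-response matrix. The pilot data here are hypothetical stand-ins generated for illustration; the study's actual pilot responses (which yielded an alpha of 0.931) are not reproduced.

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
        k = scores.shape[1]
        item_variances = scores.var(axis=0, ddof=1)      # variance of each item
        total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
        return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

    # Hypothetical pilot test: 15 respondents rating 10 items on a 5-point scale.
    rng = np.random.default_rng(0)
    tendency = rng.integers(3, 6, size=(15, 1))          # each respondent's general leaning
    pilot = np.clip(tendency + rng.integers(-1, 2, size=(15, 10)), 1, 5)
    print(f"alpha = {cronbach_alpha(pilot):.3f}")        # high when items co-vary

By convention, an alpha of 0.70 or above is read as acceptable internal consistency, so the reported 0.931 indicates a highly consistent instrument.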

In initiating the data collection, the researcher sought the approval of the Department of Education Region XII (SOX) and of the 6 Schools Division Superintendents. When the official consent was secured, the researcher visited the respondent schools. He requested a schedule from the School Principals for the administration of the questionnaires, the actual evaluation of the CI program, and the interview with the implementers.

The gathered data were subjected to statistical treatment and tests. In particular, the mean was used in describing the validity of the developed tool for the Continuous Improvement (CI) Program and in determining the extent of implementation of its 5 dimensions. On the other hand, a paired t-test was applied to compare the ratings of the CI teams' self-survey and the researcher's assessment using the evidence-based tool. The qualitative responses on the best practices were grouped according to themes.
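As a sketch of the comparison step, a paired t-test can be run in Python with scipy.stats.ttest_rel. The ratings below are hypothetical placeholders for the six paired program ratings; the study's actual figures appear in Table 5.

    import numpy as np
    from scipy import stats

    # Hypothetical paired ratings for the 6 CI programs on one dimension;
    # each position pairs a CI team self-survey mean with the researcher's
    # evidence-based mean for the same program.
    team_self_survey = np.array([3.9, 3.7, 4.0, 3.8, 3.9, 3.8])
    researcher_eval  = np.array([4.5, 4.4, 4.6, 4.4, 4.5, 4.5])

    t_stat, p_value = stats.ttest_rel(researcher_eval, team_self_survey)
    mean_diff = (researcher_eval - team_self_survey).mean()
    print(f"mean difference = {mean_diff:.2f}, t = {t_stat:.3f}, p = {p_value:.4f}")
    # A p-value below 0.05 indicates a statistically significant difference
    # between the paired ratings.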

3. Results and Discussion

3.1. Validity of the Evidence-Based Evaluation Tool

Table 1 gives an indication of the validity ratings of the developed evidence-based evaluation tool for CI Program on its content, relevance, and acceptability as assessed by the program experts in the Department of Education.

  • Table 1. Mean Ratings and Verbal Description on the Validity of the Developed Evidence-based Evaluation Tool for CI Program as Validated by Experts

The results of the validation process suggest that the CI program experts were unanimous in their judgment across the parameters of content, relevance, and acceptability. As shown, the instrument appeared to be highly valid on these criteria. Implicitly, the tool is expected to measure what it intends to measure and predict what it intends to predict [6]. It also means that the developed instrument will correctly determine the level of implementation of any CI program under evaluation.

In research, results that are not valid are meaningless to the study. When an instrument does not measure what it is intended to measure, its results cannot be used to answer the research question, and responding correctly to the problem statement is the main objective of any study. Such results also cannot be used to generalize the findings, and the inquiry becomes a useless exercise. It has been stressed that validity, as one of the principles of scientific research, should not be compromised if the many sources of error are to be eliminated [7].

3.2. Continuous Improvement Program of SBM Secondary Schools

In the SOX Region, the DepEd has nine (9) School Divisions, of which only 6 were able to recommend the most competitive SBM secondary schools that had successfully initiated and implemented a distinct Continuous Improvement Program (CIP). These development initiatives are the schools' creative answers to a problem or an essential need. Each project carries a unique name that is expressive of its nature and objective. By design, each CIP is managed by a select team of teachers. Table 2 presents the details of the assessed CI programs.

3.3. Implementers’ Perception on the Implementation of Continuous Improvement Programs in SBM Level 3 Secondary Schools

Table 3 summarizes how the CI team members perceived the implementation of the Continuous Improvement programs along the five dimensions, namely: strategic management, operational management, change adoption, stakeholder analysis, and CI sustainability. Being the implementers themselves, with direct participation in their respective school projects, they likely enjoy full awareness, appreciation, and understanding of its implementation.

  • Table 3. Mean Ratings and Verbal Description on the Implementation of CI Programs of SBM Secondary Schools as Perceived by the CI Team

The summarized data indicate that the 6 CI programs under evaluation are very well implemented. The description is generally factual despite the lower rating on the operational management dimension. It can be noted that the respondents perceived a somewhat weaker performance of the secondary schools in the CI management process at the operational level. This dimension mainly involves identifying the needs, solutions, and actions necessary to solve problems, as well as examining the extent to which continuous improvement is part of daily work methods and how employees are rewarded for their contributions.

This finding rests solely on the perspectives of the CI teams. While their assessments are largely subjective, they cannot simply be discounted, because the teams have practical context and first-hand knowledge of the actual status of their programs. Hence, their observations on how the programs were implemented are noteworthy and likely dependable.

3.4. Implementation of Continuous Improvement Program using the Evidence-Based Evaluation Tool

To validate the self-survey results of the CI teams, the researcher carried out a parallel assessment of the program implementation on the same dimensions using the evidence-based evaluation tool. It is important to note that this approach was anchored on the availability, presentation, organization, and purpose of the required evidence. Table 4 presents the assessment results.

  • Table 4. Mean Ratings and Verbal Description on the Implementation of CI Programs of SBM Secondary Schools in SOX as Evaluated Using the Evidence-Based Tool

As shown, the condensed data suggest that all surveyed CI programs in SOX are very well implemented. The adjectival ratings are also consistent across the 5 dimensions of strategic management, operational management, change adoption, stakeholder analysis, and CI sustainability. This indicates that the studied schools have been remarkably implementing methods and activities to find solutions to areas of concern that have a direct impact on student learning, through the application of school-based management principles.

In particular, the practice of strategic management in the 6 CI programs is very evident, as all verifiable documents during the evaluation were available, presented, organized, and functional. The schools have existing documents on the organizational structure of the CI Project Team, the communication process on CI review requirements, the financial support of the administration, the supportive leadership of the School Principal, the integration of CI in the School Improvement Plan (SIP) and Annual Investment Plan (AIP), and the Principal's initiatives to adapt the CI process to classroom management.

Regarding operational management, the schools have intact documents and proofs of CI initiatives such as utilizing social media platforms for information dissemination, the periodic conduct of INSET for the faculty, school recognition to celebrate creativity and innovation, reviews and adjustments of CI goals, CI updates posted on school bulletin boards, financial resources for expedient support from internal and external stakeholders, and ICT resources used to enhance the CI processes.

On the dimension of change adoption, the researcher found wide-ranging documents showing CI initiatives for the improvement of learning achievement, such as modelling of personal change by teachers; an ongoing, evolving, and adapting working environment for constructive change; in-service training, seminars, or workshops that promote risk-taking, creativity, and innovation as part of the CI culture; and the promotion of success stories in the implementation of these initiatives.

Also, the schools presented very substantial documents showing CI initiatives for visible community partnership, for instance, conducting consultations to draw support and willingness to abide, maintaining the school's reputation as an engaging and productive place, creating proper collaboration, quality partnerships, and diversity of stakeholder involvement and commitment to the project, and fitting together related participatory approaches through involvement in dialogues in the planning and implementation of these initiatives.

Moreover, CI sustainability is very evident because the schools were found to have almost all documents showing CI initiatives for sustaining the program, namely: development and documentation of a CI standard process for intensive monitoring; evaluation of data before and after the CI program to measure improvements; a formulated long-range development plan with timely monitoring sessions; continued systematic evaluation of the effectiveness of the CI project, highlighting the citations or recognitions received as a positive impact of CI; notes updated through the interface of relevant research findings; and engagement as resource persons, lecturers, or consultants for CI adoption in other schools. However, some schools admitted to gaps in the records verifying improvement in academic performance on the National Achievement Test (NAT).

The foregoing findings are deemed conclusive because the assessment was primarily based on the presented documents and other material evidence that the researcher actually viewed, examined, and validated. His judgement is dependable, as he fully understood the objectives of the study and has extensive knowledge and appreciation of the complexities of the Continuous Improvement program.

3.5. Comparison of the Self-Survey and Evidence-based Evaluation Ratings

Table 5 presents the comparison, through paired t-test analysis, of the mean ratings between the perceptions of the CI teams and the actual evaluation of the researcher on the 5 dimensions and on the program as a whole.

  • Table 5. Analysis between the CI Team Self-Survey and the Researcher's Evaluation of the Program Implementation

As indicated, the difference of means in the operational management of the Continuous Improvement programs is statistically significant: the researcher's evaluation (M=4.49) is higher than the CI teams' perceptions (M=3.85). In the other dimensions, the ratings are statistically the same.

Overall, the researcher's evaluation (M=4.52) is higher than the CI team self-survey (M=4.21). The differing results between the CI teams and the researcher are predictable. Understandably, the CI teams, as the implementers, were subjective in their evaluation owing to their ownership of the school development initiatives. Meanwhile, the researcher tends to be independent and objective, as his evaluation is largely anchored on observable evidence. Under the "evidence-based practice" principle, the better the evidence, the more confidence one can have in the conclusions [8]. One expert also noted that our capacity to observe and detect causal relationships is built into us. Evaluators often rely on their own judgment as second best, but realistically, all program evaluations entail a substantial number of judgment calls drawn from valid, reliable, and appropriate comparisons of the available data [9]. Program evaluation inevitably involves human interaction, where explanations are rarely simple and interpretations can be inconclusive. Clearly, then, systematically gathered evidence is a key part of any good program evaluation, but evaluators need to be prepared for and accept the responsibility of exercising professional judgment at all times.

3.6. Generated Themes for the Best Practices of Schools on Continuous Improvement Program Implementation

In a qualitative study, identifying the themes, recurring ideas or language, and patterns of belief that link the participants is the most challenging part of data analysis, but one that effectively integrates the whole pursuit [10]. Theme analysis involves detecting how one participant's expressions fit into a chosen theme or category, while another's might indicate a divergence from it.

By definition, best practices are routine strategies, processes, or procedures proven to achieve the best results [11]. These are innovative solutions to school problems that the implementers have introduced. Besides, these habitual actions also encourage new ideas and insights to surface from the teachers and staff. The outstanding practices have the potential to be replicated within the school and even in other sectors. Schools can learn from successful organizations by adopting their best practices.

In this exploratory study, analysis of the gathered responses yielded 11 themes of emergent best practices in CI program implementation. These include acceptance of and willingness for personal change, crafting unique intervention tools, displaying administrative support, drawing out the support of stakeholders, employing research for academic forums, flexible schedule adjustments, retraining colleagues in CI processes, safekeeping of organized and functional documents, seeking expert or specialized teachers to execute the intervention, strengthening monitoring and evaluation, and undertaking open communication.
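To illustrate how such themes can be tallied once interview excerpts have been coded, the short sketch below counts theme labels with Python's collections.Counter. The labels are hypothetical stand-ins and do not reproduce the study's coded transcripts.

    from collections import Counter

    # Hypothetical theme labels assigned by the analyst to interview excerpts.
    coded_excerpts = [
        "administrative support", "open communication", "personal change",
        "stakeholder support", "administrative support", "monitoring and evaluation",
        "open communication", "personal change", "administrative support",
    ]

    # Frequency of each emergent theme, most common first.
    for theme, count in Counter(coded_excerpts).most_common():
        print(f"{theme}: {count}")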

Table 6 presents the best practices of each secondary school in implementing the CI program.

4. Conclusions and Recommendations

The developed evidence-based evaluation tool for the schools' Continuous Improvement programs is highly valid; thus, it is practically accurate, objective, and usable. The perceptions of the implementers showed that the CI programs in SBM Level 3 secondary schools are very well implemented. Likewise, the evidence examined during the researcher's evaluation indicated that the programs are very well implemented. Statistical tests revealed that the implementers' self-survey and the researcher's evaluation using the evidence-based tool are comparable in the strategic management, change adoption, stakeholder analysis, and CI sustainability dimensions. Nonetheless, the researcher's overall program rating is reasonably higher than the CI team self-survey result. Indeed, the researcher's assessment tends to be more independent, comprehensive, and factual, establishing a credible verification process for checking the way CI programs are implemented. Despite the obvious odds met in program implementation, the respondent schools demonstrated best practices to sustain their development initiatives.

With the foregoing findings and conclusions, it is recommended that DepEd officials consider the evidence-based evaluation tool for adoption, as it is confirmed valid. Should a review be necessary, the Department may form a group of experts to study the tool and introduce pertinent enhancements. In the absence of a standard instrument to assess Continuous Improvement initiatives, the researcher's developed tool is highly commendable. With a valid tool in use, an objective and credible evaluation of one of DepEd's vital programs can be instituted.

Regarding the operational management of the CI programs, it is suggested that school administrators formulate a functional operational mechanism that utilizes key supporting processes toward building a culture of CI champion schools. It is also proposed that a CIP committee, composed of trained, competent, and recognized implementers, be constituted to conduct an annual or biannual assessment using the new tool. Moreover, the granting of an award or appropriate incentive to the "Best CIP Implementer" can be adopted. This will pave the way toward the institutionalization of CIP evaluation using the standard tool.

In light of the best practices concerning program implementation, the relevant themes would provide a proper framework and planning ingredients for DepEd officials and other SBM schools to draw up appropriate plans or actions. Also, these tried practices are cues to stimulate others to initiate and implement effective CI programs. Future researchers are similarly encouraged to deepen the study of the context, with a design aimed at understanding current practices relative to the development framework of the CI program as a basic requisite for SBM level accreditation. Lastly, this study may be replicated in other regions of the country to test the consistency of the current results, and likewise in elementary schools implementing their CI programs, utilizing the evidence-based evaluation tool.

Acknowledgements

The authors wish to extend their profound gratitude to the officials of the Department of Education in the SOX Region, particularly the Regional Director, Division Superintendents, and concerned administrators who gave permission for the study to be carried out in select School-Based Management (SBM) Level 3 secondary schools.

References

[1]  Madrigal, J. (2012). Assessing Sustainability of the Continuous Improvement through the Identification of Enabling and Inhibiting Factors (Doctoral dissertation). Virginia Polytechnic Institute and State University.

[2]  Singh, J., & Singh, H. (2015). Continuous improvement philosophy - literature review and directions. Benchmarking: An International Journal, 22(1), 75-119.

[3]  Martinez, R., & Yap, G. M. (2017). Continuous improvement innovation in Philippine education: A reflective approach. Retrieved from https://search.proquest.com/docview/1967758559?accountid=173015 [Accessed September 12, 2019].

[4]  Department of Education (2012). Revised School-Based Management. DO No. 83, s. 2012 (Implementing Guidelines on the Revised SBM Framework, Assessment Process, and Tool).

[5]  Creswell, J. W. (2014). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches (4th ed.). USA: Sage Publications, Inc.

[6]  Online Evaluation Resource Library (n.d.). Quality Criteria for Instruments. https://oerl.sri.com/instruments/instrcrit.html.

[7]  National Business Research Institute (n.d.). What is validity and why is it important for survey results? Retrieved from https://www.nbrii.com/faqs/data-analysis/validity-important/ [Accessed July 10, 2020].

[8]  CFCA (2013, November). Evidence-based practice and service-based evaluation. https://aifs.gov.au/cfca/publications/evidence-based-practice-and-service-based-evaluation [Accessed December 4, 2019].

[9]  McDavid, J. C., Huse, I., Hawthorn, L. R., & Ingleson, L. R. (2012). Program Evaluation and Performance Measurement. USA: Sage Publications, Inc.

[10]  De Vos, A. S. (2005). Qualitative data analysis and interpretation. In De Vos, A. S. (Ed.), Strydom, H., Fouché, C. B., & Delport, C. S. L., Research at Grassroots: For the Social Sciences and Human Service Professions (3rd ed.). Pretoria: Van Schaik Publishers.

[11]  CIToolKit (2020). Best Practices. https://citoolkit.com/articles/best-practices/ [Accessed February 21, 2020].

Published with license by Science and Education Publishing, Copyright © 2020 Melvin A. Garcia and Ernie C. Cerado

This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
