Research Article
Open Access | Peer-reviewed

Instructional Leaders and Understanding Data: The Status and Prevalence of Research and Statistics Courses in a Midwestern State’s Educator Preparation Programs

Dr. Erasmus Chirume
American Journal of Educational Research. 2018, 6(7), 997-1004. DOI: 10.12691/education-6-7-16
Published online: July 18, 2018

Abstract

The study investigated the extent to which courses in statistics and research (other than in STEM majors) were prevalent in the educator preparation programs (EPPs) for preservice teachers. A chi-square goodness of fit test was conducted on the counts of these courses, which were objectively identified through a content analysis of the EPPs' course descriptions. The result, χ²(2), p = .003 (p < .05), led to the rejection of the hypothesis that courses in statistics and in research were as prevalent as assessment courses. For instructors' decisions to be robust, they should be based on multi-sourced data.

1. Introduction

Instructors ought to know how to use data and evidence to inform decisions associated with learning, especially as such decisions often become standard practice [1] for practitioners in the educator preparation programs (EPPs) and in local school districts. Instructional leaders are the educators who spend time in the teaching and learning spaces of institutions of higher education and K-12 schools, planning and implementing decisions inside and outside of the classroom to improve the academics and quality of life of all students [2,3]. This charge occurs against the backdrop of the national average yearly dropout rate of 16- to 24-year-olds, which has steadily remained at approximately 10% from 2000 to 2015 [4]. However, the dropout rates for blacks and Hispanics have been higher than the national average. Still, many U.S. students are falling through the cracks of the education system. The scientific community today believes that data are the best illuminator [5] of school organizational dynamics. These include, inter alia, the tracking of students' performance outcomes, ethical and professional practices, policy inputs, processes, and outputs, as well as the challenges, concerns, possibilities, and choices from an array of competing solutions available to the EPPs and local school districts [6,7] to improve student learning and their quality of life.

At a time when calls for accountability have intensified in the field of education, there are more than 1,300 large and small, public and private colleges and universities providing EPPs across the 50 U.S. states [8]. Approximately 3.92% (N = 52) of these programs are located in the midwestern state selected for this study. For the faculty in EPPs who strive to maintain accreditation status, and for those in local school districts called upon to respond to accountability measures [9], research methods and statistical analysis courses would seem to be an important part of the content of teacher preparation programs. These courses support the development of teachers' measurement skills in the areas of assessment and evaluation [10].

2. Literature Review

Proficiency in evaluation and assessment enables instructional leaders to ensure that the subject courses and school programs being offered do benefit student learning, in EPPs and especially in local districts or at the site level [9,11,12,13]. The successful teaching, supervising, and guiding of students in K-12 schools depend, among other activities, on formative and summative assessments [14]. Competent performance of these activities requires that teachers understand the statistics of probability on the one hand and the use of qualitative data on the other, along with the correct application of the concomitant constructs: validity and reliability for the former, and authenticity and trustworthiness, or acceptability, for the latter [15,16,17,18]. Educators have always been expected to take assessment courses as part of their teacher preparation. Assessment courses help educators develop assessment literacy [19]. With this pedagogical knowledge, preservice teachers step into schools where, as educators, they are increasingly required to make programmatic and programming decisions about the utility, effectiveness, and consequences of school programs. More often than not, the confidence of educators in the EPPs and in school districts is challenged, and the educators are confounded by the data, whereas studies in statistics and research would have provided appropriate competencies to cope with these tasks [16,20].

In the business world, decisions about audience targeting techniques and strategies for choosing which 'tactics to use and when to use them' derive from programmatic decision-making [21], where the role of information technology (IT) support is growing exponentially in both quantity and quality. In institutions of learning, by contrast, educators are dealing with the performance of human beings, where the culture of evidence may also require research and statistical analysis techniques [18]. They need this capacity to make sense of complex data and to achieve effective, informed decisions affecting the more than 50% of minority students from depressed communities who are falling through the cracks of the education system annually [2]. As expectations increase for educators to address data competently both in EPPs and in local school districts, the eagerness also intensifies to determine whether teacher preparation programs are adjusting sufficiently to provide appropriate courses that prepare preservice teachers to become data literate [9].

2.1. Conditions in the EPPs & K-12 School Districts Regarding Data Use

Literature from Deans for Impact [22] reveals an existing condition of patchwork data, internally developed substandard data collection instruments, and few common data sources, resulting in ineffective assessment systems that are variably prevalent in some of the EPPs around the country. These factors indicate a lack of data literacy in some departments at institutions of higher learning. Some EPPs experience ineffective cooperation and collaboration within and among departments for data reflection and the utilization of information to improve teacher preparation programming. Deans for Impact charges that several EPPs are inadequately preparing preservice teachers for the realities of the classroom. An official with Deans for Impact suggests that there is a need to 'shift from a culture of compliance to a culture of inquiry that is focused on program improvement' [23]. That shift can be accelerated by providing research and statistics education to preservice teachers who will serve as instructional leaders to K-12 students of diverse backgrounds exhibiting discrepant learning achievement [3]. The observations about the conditions in some of the EPPs are corroborated by the experiences of some teachers in the field.

From the extant literature, some instructional leaders report that much of the data made available to them seem not to provide the types of insight these educators need to make critical instructional decisions [23]. Under the existing conditions, instructional leaders sometimes feel buried in data, as the field of education is awash with accountability data, a situation that can be described as data rich but information poor. Improvement research calls for data, not for purposes of ranking individuals or organizations, 'but for learning about how instructional practices and organizational processes actually work' [24], and for developing the values, beliefs, and capacity for making data-informed decisions. According to Marsh, Kerr, Ikemoto, Darilek, and Barney [25], teachers wondered whether test scores reliably measured students' knowledge, whether students took tests seriously, or whether tests were aligned with standards. Instructional leaders desire to understand the types and quality of data and how those data could be collected and analyzed in ways useful to decision making and instructional improvement. Such doubts affect the confidence of educators to accept and support the reported data [25].

In their study, Supovitz and Klein [26] were surprised by the limited technical capacity of the faculty even in schools that had been identified as innovative data users. Only approximately 19% of the educators felt that they possessed the skills to manipulate data to answer questions of interest to them. Capacity issues related not only to the technology and the use of computer software to run analyses but also to the competence of educators to formulate research questions, isolate relevant data outputs, interpret results, and effectively develop and use classroom assessments [25]. This is why, in recent years, public concern over the quality of teacher education has pushed accreditation agencies and policymakers to hold EPPs accountable for the effectiveness of their completers [27]. The Council for the Accreditation of Educator Preparation (CAEP) requires EPPs' self-studies to provide impact data for their completers working with K-12 students.

In light of the seemingly widespread call for educators to develop distinctive capabilities in handling data, it is instructive to determine what educator preparation programs are doing with respect to teaching research and statistics courses as a way of equipping preservice teachers to be proficient data users. The scarcity of studies that have investigated the inclusion of research and statistics methods courses [28,29] in teacher preparation programs [30] justifies the need for this study.

3. Statement of the Research Problem

CAEP expects educator preparation providers to create a culture of evidence to inform their work of producing effective teachers who, among other things, will make data-informed decisions to improve K-12 education. A culture of evidence is built in schools on an infrastructure that supports data collection, analysis, and monitoring as a collective faculty effort, with inclusive participation and feedback from appropriate stakeholders regarding teaching and learning. Therefore, in schools, data are continuously generated from the quantitative and qualitative outcome measures used to assess and evaluate programs, teaching, and learning [10]. Effective EPP completers are expected to apply content and pedagogical knowledge, as reflected in outcome assessments, in response to the standards of the National Board for Professional Teaching Standards (NBPTS), the Specialized Professional Associations (SPA), the Interstate Teacher Assessment and Support Consortium (InTASC), and other related bodies.

However, a substantial barrier to implementing data-informed decision-making is the deficiency of expertise among educators in the area of data analysis [20,31,32]. The findings of this study can be used by the EPPs to pursue possible remedial steps to strengthen the preservice curriculum so that completers eventually actuate instructional leaders' commitments to reproduce a culture of evidence in K-12 school districts. The findings will be of interest to educators at all levels, parents and students, the US Department of Education, professional development providers, and officials of the CAEP, among other agencies.

3.1. Research Question

How frequently are statistics and research courses, compared to assessment courses, taught in the midwestern educator preparation programs?

3.2. Hypothesis

The educator preparation programs in the state of interest to the study provide statistics, research, and assessment courses in equal proportion.

3.3. Research Sub-Questions

1. What is the prevalence of the statistics, research, and assessment courses in the EPPs located in the state covered in the study?

2. What would be the lowest acceptable level of pedagogical content knowledge to make use of data skills?

3. What implications does the state of affairs regarding the extent of courses with statistics, research, and assessment content have for the continuous improvement of K-12 student achievement?

4. Conceptual Framework

It is reasonable to attempt to understand what courses in research and statistics are offered in the EPPs as a baseline for understanding measurement in educational assessment and evaluation. On this knowledge base, a culture of evidence in which preservice teachers are immersed as part of their preparation to induct the same into their K-12 teaching and learning environments can be built within a conceptual framework of data literacy.

In their book titled Data Literacy for Educators, Mandinach and Gummer [19] developed a conceptual framework, data literacy for teachers (DLFT), which represents a dynamic process of action instead of torpid knowledge formation. The authors argue that DLFT describes how EPPs, in the context of K-12 schools, can support the development of teacher educators' and teachers' dispositions and habits of mind, along with data literacy skills, to use student data responsibly. As a conceptual framework, the DLFT draws on multiple sources of teacher knowledge about how students learn and who they are, the standards of the Interstate Teacher Assessment and Support Consortium (InTASC), the standards of the Specialized Professional Association (SPA), and the standards of the CAEP. Also included as sources are knowledge of the curriculum, as the total formal and informal student experience of schooling, and pedagogical content knowledge as it relates to specialized knowledge of instruction, subject content, and discipline-specific knowledge.

The dynamic nature of the framework implies that DLFT is an eminent tool in educators' paraphernalia for informing the continuous improvement of instruction by increasing teachers' choices regarding ways to support their students using all kinds of data: from assessments, from research, from licensure examination requirements, and from supports for students, teachers, and schools. This means that such data are not used only for the accountability of teachers to school authorities, or by policymakers to rank students and schools and to fire teachers and close schools over low student scores, a use that creates a culture of fear and suspicion among educators and teachers. That manner of data use has served to undermine the effort to advance data literacy as well as assessment literacy across the field of teaching and learning, whereas the former, improvement-oriented use of data reinforces the consolidation and expansion of data literacy across the field. In their scholarship, Mandinach and Gummer advise users of data not to conflate data literacy with assessment literacy. The two are interconnected but distinct constructs.

Mandinach and Gummer distinguish data literacy from assessment literacy. They describe assessment literacy as the knowledge needed to select, develop, and analyze assessments. While these assessment activities are essential for supporting data literacy, the practice of DLFT involves drawing on data sources beyond assessment data. DLFT emphasizes that the data educators use ought to be derived from a variety of sources, including motivation, attitude, health indicators, and behavior, rather than from learner performance data alone.

The goal of the data literacy movement is to raise awareness about the difference between data literacy and assessment literacy. Data literacy comes with a deeper culture of evidence and knowledge of data sources, types, validation, testing, analysis of assumptions, interpretation, and the use of information, as the conceptual framework attempts to reflect. As a conceptual framework, DLFT is a five-stage data inquiry cycle that details the knowledge, skills, and dispositions educators are expected to have regarding data literacy. Figure 1 illustrates this five-stage cycle of the DLFT.

Figure 1 illustrates the iterative, dynamic process of DLFT that allows for making meaning out of data and for translating that meaning into instructional action. Mandinach and Gummer explain that each stage of the cycle represents the dispositions, knowledge, and skills that educators require to carry out this iterative process. For instance, to use data, a teacher should understand not only its multiple sources but also the statistical and psychometric properties of the data. To extract information from data, an educator should understand how to identify patterns and test the underlying theoretical assumptions of the data under consideration (such as the assumption of a normal distribution, the assumption of independent observations, and the homogeneity of covariance assumption) in order to make informed interpretations. To transform information from data into instructional action, educators ought to possess both curricular knowledge, as in discipline-specific knowledge, and pedagogical content knowledge of how students learn particular subject matter. In this age of insight, data literacy will allow educators to manage and use large data sets meaningfully [21].
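To make the assumption-testing step above concrete, the following minimal Python sketch (not part of the article's own analysis) checks one such assumption, the normality of a score distribution, using SciPy's Shapiro-Wilk test. The scores and variable names are hypothetical placeholders.

```python
# Illustrative check of one assumption named above (normality of a data
# distribution), using hypothetical student scores; not the article's analysis.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)                    # fixed seed, for reproducibility only
scores = rng.normal(loc=75, scale=10, size=30)    # placeholder classroom scores

stat, p_value = shapiro(scores)
print(f"Shapiro-Wilk W = {stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Evidence against normality; a nonparametric procedure may be preferable.")
else:
    print("No evidence against normality at the .05 level.")
```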

4.1. Research Design

This study adopts a quantitative content analysis research design that includes a chi-square statistical test to determine whether the expected frequency matches the observed frequency regarding the prevalence of courses in statistics and research compared with courses in assessment in the EPPs, at an a priori alpha level of .05. The counts of statistics and research courses were based on the objective and systematic quantification of manifest aspects [33], in the form of words, phrases, and terms identifiable as part of the planned curriculum of the sampled educator preparation programs. Shannon and Hsieh [34] observe that isolating and identifying certain words or phrases in a text, for the purpose of quantifying particular objective aspects of that text, is an attempt to explore the presence and frequency of use of such contextual content. Potter and Levine-Donnerstein [35] referred to this form of analysis as 'manifest content analysis.' Manifest analysis emphasizes the procedures of quantitative analysis, which may not account for the qualitative character of the available data [36]. Research and statistics courses do provide a veritable infrastructure for creating a culture of evidence in the assessment and evaluation measurements of teaching and learning in K-12 education [10]. Quantitative content analysts usually set their coding scheme early in the research process and tend to maintain it, modifying it only slightly, if at all, during the data collection period [37]. For this study, data collection strictly followed predetermined categories guided by the texts of course descriptions, following the coding scheme shown in Table 1.
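As a rough illustration of the manifest coding step described above, the sketch below matches course descriptions against keyword lists for the three categories. The keyword lists and catalog entries are hypothetical placeholders, not the study's actual coding guide.

```python
# Minimal sketch of manifest content coding: course descriptions are matched
# against predefined keyword lists. All keywords and descriptions below are
# hypothetical examples, not the study's coding guide.
from collections import Counter

CODING_SCHEME = {
    "statistics": ["statistic", "probability", "quantitative analysis"],
    "research":   ["research method", "educational research", "inquiry"],
    "assessment": ["assessment", "evaluation", "measurement"],
}

def code_course(description: str) -> set:
    """Return the set of categories whose keywords appear in a description."""
    text = description.lower()
    return {category for category, terms in CODING_SCHEME.items()
            if any(term in text for term in terms)}

def tally(descriptions) -> Counter:
    """Count how many course descriptions fall into each category."""
    counts = Counter()
    for description in descriptions:
        counts.update(code_course(description))
    return counts

sample_catalog = [
    "EDU 301: Classroom assessment and evaluation of learning",
    "EDU 410: Introduction to educational research methods",
    "EDU 215: Statistics and probability for teachers",
]
print(tally(sample_catalog))
# Counter({'assessment': 1, 'research': 1, 'statistics': 1})
```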

4.2. Limitations of the Study

This study utilizes quantitative procedures, which focus only on quantifiable data. Some researchers, such as Roller and Lavrakas [38], consider a research design that blends quantitative and qualitative analyses a way of creating a more robust analytical metric that does not miss any credible data [39]. Nonetheless, this study critically addresses matters of validity and reliability to provide credible findings, as detailed below.

4.3. Validity and Reliability

Mueller [40] writes that evidence for validity comes in small portions, with none of those pieces providing adequate proof of the validity of a particular study. The CAEP Accreditation Handbook [10] argues that "validity is not a property of data"; rather, it refers to the pertinence of inferences from data and the credibility with which interpretations are made about the findings of the measurements conducted (p. 193). Consequently, researchers generally regard validity as adequate when there is a strong chain of clear evidence that meaningfully links the research questions, data, and findings. According to Best and Kahn [41], validation begins with providing a credible mechanism that meets the claim of measuring particular data. Second, what is being measured must be the right construct in order to generate the quality of data required to answer the research question at hand. Ultimately, therefore, validity is accounted for in terms of how data can be applied as evidence to support the conclusions drawn in a study [42]. Although validity coefficients can be computed, validity is a unitary, qualitative product that can be reported by documenting the research procedures a particular study follows. This study details all the steps followed in conducting and reporting its findings.

On the other hand, reliability in the context of this study conveys the message that data generated for this study are reasonably complete to meet the intended purposes of accounting for the prevalence of statistics and research courses in comparison with assessment courses offered in the EPPs located in a midwestern state. Additionally, the coding of the constructs is derived from the extant literature and the existing texts of course descriptions that are used by schools in the state under study. The coding scheme that was followed in preparing for the data collection is provided in Table 1.

5. The Current Study

5.1. Data Collection

Data collection began by pulling from the internet the early and middle childhood program course descriptions of 44 of the 52 (private and public) initial educator preparation programs in the midwestern state of interest to this study. After the extant literature was consulted, three categories were predetermined; Table 1 shows these categories and the course titles, words, and phrases extracted as the manifestly quantifiable attributes reflecting the statistics, assessment, and research courses taught in the EPPs located in the midwestern state.

The three categories were then used as variables in the quantification and demonstration of the prevalence of statistics, research and assessment courses in the teacher preparation programs. Table 2 provides the coding scheme used for generating the data for the current study.

5.2. Findings

This study sought to determine the prevalence of courses in statistics and research, compared with assessment courses, in teacher preparation programs in both public and private schools. Specifically, is the observed frequency of statistics, research, and assessment courses equal to the expected distribution [43]? Earlier in the study, it was noted that statistics and research content would seem important for supporting the professional skills of teachers in assessing teaching and student learning [9,19]. Therefore, this study hypothesized that the initial educator preparation programs in the state in question each provide a minimum of one course in assessment, one course in statistics, and one course in research.

SPSS 11.5 for Windows (IBM Corporation, New York, US) was used to compute a chi-square goodness of fit test, which established a significant deviation from the hypothesized equality of proportions in the prevalence of courses in statistics and research compared with assessment courses. Based on the result, χ²(2), p = .003 (p < .05), the hypothesis that statistics courses and research courses were as prevalent as the assessment courses in teacher preparation programs was rejected. This finding is consistent with a number of studies that have called for the study of statistics and research in order to increase data literacy. Scheaffer [44] argues that there is a huge need for statistical competencies in modern workplaces such as schools, courthouses, and many other governmental and non-governmental agencies in order to enhance fairness, justice, productivity, and efficiency in the workplace.
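For readers who want to see the shape of such a test outside SPSS, the following Python sketch runs a chi-square goodness-of-fit test against equal expected proportions using SciPy. The counts are taken from the descriptive findings reported in the next paragraph (44 EPPs with an assessment course, 11 with a statistics course, 10 with a research course); the article does not publish the exact frequency table entered into SPSS, so this sketch is illustrative and will not reproduce the reported p = .003.

```python
# Illustrative chi-square goodness-of-fit test against equal expected proportions.
# The observed counts mirror the descriptive findings (44 assessment, 11 statistics,
# 10 research); this is not the exact table analyzed in SPSS, so the result will
# differ from the article's reported p = .003.
from scipy.stats import chisquare

observed = [44, 11, 10]                 # assessment, statistics, research
expected = [sum(observed) / 3] * 3      # H0: the three course types are equally prevalent

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
df = len(observed) - 1
print(f"chi-square({df}) = {stat:.2f}, p = {p_value:.4f}")

alpha = 0.05
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```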

Overall, it appears that there are three important takeaways from the analyses of the data in this study. First, the state under study has 52 educator preparation programs. From this number, a random sample of 44 EPPs was extracted through a simple random sampling method. To provide an equal probability for each of the EPPs to be chosen, the names of the 52 EPPs were placed in a hat, which was shaken to mix the names before randomly drawing 44 names. The descriptive analysis of data showed that each of these EPPs offered at least one assessment course. Only 25% (N=11) of the EPPs offered at least one course in statistics and only approximately 23% (N=10) of the EPPs offered one course in research to preservice teachers whose major was not in the STEM field.
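A simple random draw of 44 programs from the 52 in the state, as described above, could be reproduced programmatically along the following lines; the program names here are hypothetical placeholders for the actual EPPs.

```python
# Sketch of the simple random sampling step described above: draw 44 of the
# 52 EPPs so that each has an equal chance of selection. Names are placeholders.
import random

all_epps = [f"EPP_{i:02d}" for i in range(1, 53)]   # 52 programs in the state
random.seed(2018)                                    # seed fixed only for reproducibility
sampled = random.sample(all_epps, k=44)
print(len(sampled), sampled[:5])
```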

Second, regarding what would seem to be the lowest acceptable level of pedagogical content knowledge, the data from the 11 schools that offer a statistics course and the 10 schools that offer a research course appear to hint at a pattern that the rest of the EPPs could follow. As a starting point, each EPP would do well to offer at least one statistics course and at least one research course to enhance the proficiency of educators in handling data with increased confidence.

Third, the implications of the current level of inclusion of statistics and research courses cannot be minimized. If statistics and research courses are not offered to the same extent as assessment courses, low confidence levels and limited proficiency in data use will continue to undermine the collegiate and professoriate mission to improve student academic performance.

6. Discussion

The disparity in the commitment of EPPs to offering statistics and research courses versus courses in assessment calls for a critical examination of the implications for the improvement of EPPs and K-12 student achievement. Ineffective self-study reports to the CAEP, a significant number of which have been cited for areas for improvement (AFIs), are indeed a critical source of public concern. Chepko [45] observed that a good number of EPPs needed more development in the program 'use of data to make their case that each of the five standards is met.' Summing up this observation, Chepko remarks, 'We think there is a data literacy gap' [22]. Similarly, Means et al. [31] determined that 52% of the teachers in an NETT survey reported that data interpretation items, which involved drawing inferences from data patterns and which fundamentally concern variability [18,46], proved difficult to handle. These pieces of evidence point to the fact that confidence in the use of data among faculty in the EPPs and local school districts is unlikely to improve if the teaching of statistics and research does not improve.

Mandinach and Gummer [19] note the fallacy that arises from conflating assessment literacy and data literacy. Assessment literacy is a critical component of data literacy, but assessment literacy alone is insufficient. Data literacy is constitutive of the discovery sciences, which greatly need statistical competencies [18,46] and research. Research is the practice of science. In this practice, data are collected to test new ideas or disprove old ones. Science makes a contribution when a previously unknown fact (a discovery) sheds light on or explains some problem [39], or overturns a previously accepted idea. In this way, science serves as an element of the culture of data literacy. For this reason, the culture of data literacy [47], beyond assessment literacy, enlists competitive, high-stakes functional knowledge and skill in the areas of probability theory and statistics to understand variability in data interpretation [18]. Interpreting the variability of data values in empirical inquiry generally means separating chance variability from systematic variation. These are competencies learned from parametric and nonparametric statistics. Comparing models with empirical data and dealing with variability are integral parts of learning statistical practice [16]. For all these reasons, data literacy is a distinct and deeper form of literacy that should not be conflated with assessment literacy.

Mertler [48] reminds us that the concept of using assessment information to make decisions about instructional improvement has always been part of the practice of teaching, though the form and content of the profession have changed over the years. From the 1900s, as education began to be formalized across the United States, the sources of assessment information included teaching philosophy, personal experience, and intuition. These constituted what are known as the old tools of the profession, in keeping with the cultural ecology of the time. In that cultural ecology, the profession of teaching and learning was conceived as the practice of an 'art.' In recent years, the old tools, which fall into the category of 'gut instinct,' no longer seem sufficient for a profession now conceived of and perceived as partly the practice of an art and partly the practice of a science, and the two forms are not necessarily mutually exclusive. At the same time, assessment has devolved into accountability requirements. Beginning with the No Child Left Behind (NCLB) policy, which was adjusted into Race to the Top (RTTT), based on the Common Core State Standards (CCSS), standardized tests have been administered as the main vehicle of assessment. Now, most states rate the effectiveness of their school districts on approximately 25-35 performance indicators, which largely draw on student performance data from standardized tests [48]. Standardized tests constitute the new tools of the profession. In recent history, negative consequences for teachers and schools have also been tied to standardized scores through accountability policies. Schools that are publicly labeled as failing are either closed or lose funding, while teachers who are publicly labeled as failing are fired, especially in non-union environments, cultivating a culture of fear among faculty.

These trends have, in part, served to conflate data literacy and assessment literacy across the field [19] as an unintended consequence, yet data literacy, as already noted, is distinct from assessment literacy. In this spirit, instead of looking at data only as a means of compliance, the professional world, which includes educators, views data as a key multi-sourced tool, an eminent illuminator, to inform continuous improvement in the service of human needs across the sectors of modern society [19].

At the dawn of the age of ideation and insight [49], the new tools of assessment based solely on standardized tests have evidently become insufficient. Rather than depending on standardized tests as the sole source of data to improve instruction, the DLFT framework envisions the use of the concept of big data in the service of education. The DLFT framework proposes drawing data from multiple sources, including health indicators, attitudinal scales, motivational factors, behavioral preferences, and support structures. The framework encourages the use of an assortment of tools to analyze and interpret data in an iterative inquiry process of identifying issues, collecting data, drawing information, making informed decisions, and evaluating outcomes intended to advance the wellbeing of students and schools. In this context, offering statistics and research courses in the EPPs in order to cultivate data literacy is not a choice but a compelling priority in building the appropriate competencies and pedagogically distinctive capabilities of preservice teachers.

Within the DLFT framework, the use of big data has a multiplier effect in providing options and choices for generating solutions that leaders and stakeholders, including those in education, can apply to the problems facing the profession. This means that data literacy will not only allow instructional leaders to use big data meaningfully but, through the use of more data, will also enable them to see new possibilities for resolving challenges and better choices for informed decisions; more data will enable instructional leaders to see differently in the age of discovery science. For that reason, instructional leaders will, in an authentic and measured manner, move all learners beyond mere opinion to critical thinking [50]. In academic discourse, critical thinking is essentially what is meant by student improvement. The Critical Thinking Foundation asserts that as young people learn to think more critically, they become proficient at scientific, historical, and mathematical thinking. They develop "skills, abilities, and values critical to success in everyday life" [51]. Alwehaibi [52] describes critical thinking as a judicious, reflective thought process accompanied by in-depth analysis and accuracy, resulting in astute judgement about the merit of a theory, object, or decision.

Depth and accuracy are functions of measurement and manifest in quantitative terms, which may serve to substantiate or operationalize qualitative variables [16]. In the classroom, critical thinking also involves creative thinking. According to Carmichael and Farrell [53], creative thinking involves the analysis, evaluation, and synthesis of facts and ideas. Possessing the capacity to exercise in-depth judgment and reflection, logically and creatively, is an effective demonstration of operating in the realm of complex ideas and big statistical data [54]. These capabilities exemplify the critical thinkers whom instructional leaders attempt to raise in their classrooms every day for the world of work and productivity.

7. Conclusions

Taken together, three conclusions can reasonably be drawn from this empirical study. The first is that, while nearly all EPPs located in the midwestern state offer at least one course in assessment, only about a quarter or fewer of those EPPs currently offer courses in both statistics and research, even though the extant literature suggests that data literacy is not synonymous with assessment literacy. It would therefore be advisable to avoid conflating data literacy with assessment literacy. A wide range of stakeholder institutions can begin to improve the data literacy of the teaching workforce by ensuring that statistics and research courses are offered. Second, informed decisions to improve instruction are dynamic, robust, and multi-sourced, drawing on evidence beyond measures of standardized testing. Third, in spite of the challenges, in the age of ideation and insight the DLFT framework appears to provide a credible space for instructional leaders to begin developing a culture of using big data and evidence to inform decisions for the continuous improvement of educator preparation programming and the strengthening of local school districts' curricula. The proper and informed use of big data will help instructional leaders amplify opportunities for learning while diminishing the negative forces that constrain student improvement in schools and productivity in the wider society.

8. Recommendations for Further Research

Data literacy can surely contribute to turning educational institutions around from being data rich but information poor to becoming information rich in order to improve decision making. The Every Student Succeeds Act, the new reauthorization of the federal program designed to support the education of disadvantaged students, requires that states and districts use evidence-based interventions to support school improvement [55]. However, the challenge is determining how the DLFT can be institutionalized among a community of learners facing a tight bundle of competing needs with respect to scheduling new courses in the EPPs. A further investigation, preferably one using a research design that combines quantitative and qualitative techniques, is recommended to resolve, inter alia, this challenge.

Acknowledgements

I would like to thank Dr. Monique Cherry-McDaniel for the inspiration and encouragement she provides in the areas of academic leadership and scholarship, and my son, Charles Chirume, for his companionship and support throughout the time I worked on this project.

References

[1] Thomas, K., & Huffman, D. (2016). Navigating the challenges of helping teachers use data to inform educational decisions. Administrative Issues Journal, 1(2), 94-102.
[2] Cook, L. (2015). U.S. education: Still separate and unequal. US News & World Report. Retrieved from https://www.usnews.com/news.
[3] Noguera, P. (2015). Race, equity, and education: Sixty years from Brown. New York, NY: ASCD.
[4] U.S. Department of Education, National Center for Education Statistics. (2017). The condition of education 2017 (NCES 2017-144): Status dropout rates. Washington, DC: NCES.
[5] Mandinach, E. B., & Gummer, E. S. (2016a). Data and educator preparation programs: Data for programmatic continuous improvement and data literacy for teachers. Keynote presentation at the Council for the Accreditation of Educator Preparation Conference, Washington Hilton Hotel, Washington, DC.
[6] Allen, M., Coble, C., & Crowe, E. (2014). Building an evidence-based system for teacher preparation. Washington, DC: Teacher Preparation Analytics.
[7] Council of Chief State School Officers (CCSSO). (2016). The 2016 legislative conference workbook. Washington, DC: CCSSO.
[8] Wilson, F., & Ferrini-Mundy, J. (2001). Teacher preparation research: Current knowledge, gaps and recommendations. Washington, DC: Center for the Study of Teaching and Policy.
[9] Shepperson, T. L. (2013). Prevalence of evaluation method courses in education leader doctoral preparation. International Journal of Educational Leadership Preparation, 8(1), 1-14.
[10] Council for the Accreditation of Educator Preparation (CAEP). (2016). CAEP accreditation handbook. Washington, DC: CAEP.
[11] Darling-Hammond, L., Meyerson, D., LaPointe, M., & Orr, M. T. (2007). Preparing principals for a changing world: Lessons from effective school leadership programs. Palo Alto, CA: Stanford Educational Leadership Institute.
[12] Lauer, P. A. (2006). What principals need to know about education research. Principal, 85(5), 12-17.
[13] Pont, B., Nusche, D., & Moorman, H. (2008). Improving school leadership: Policy and practice (Vol. 1). Paris: OECD.
[14] Kizlik, B. (2017). Measurement, assessment, and evaluation in education. Retrieved from https://www.adprima.com/measurement.htm.
[15] Garfield, J., & Ben-Zvi, D. (2008). Developing students' statistical reasoning: Connecting research and teaching practice. Berlin: Springer.
[16] Garfield, J. (2011). Statistical literacy, reasoning, and thinking. In M. Lovric (Ed.), International encyclopedia of statistical science (pp. 1439-1442). Berlin: Springer.
[17] Dyer, K. (2014). Data literacy: What it is and how it differs from assessment literacy. Retrieved from https://www.nwea.org/.
[18] Roth, W.-M. (2014). On understanding variability in data: A study of graph interpretation in an advanced experimental biology laboratory. Educational Studies in Mathematics, 86(3), 359-376.
[19] Mandinach, E. B., & Gummer, E. S. (2016b). Data literacy for educators: Making it count in teacher preparation and practice. New York, NY: Teachers College Press.
[20] Doerr, H. M. (2000). How can I find a pattern in this random data? The convergence of multiplicative and probabilistic reasoning. Journal of Mathematical Behavior, 18(4), 431-454.
[21] King, G. (2011). Ensuring the data-rich future of the social sciences. Science, 331(6018), 719-721.
[22] Deans for Impact. (2016). From chaos to coherence: A policy agenda for accessing and using outcomes data in educator preparation. Austin, TX: Deans for Impact.
[23] Datnow, A., & Hubbard, L. (2016). Teacher capacity for and beliefs about data-driven decision making: A literature review of international research. Journal of Educational Change, 17(1), 7-28.
[24] Bryk, A. S. (2015). Learning to improve: How America's schools can get better at getting better. Cambridge, MA: Harvard Education Press.
[25] Marsh, J. A., Kerr, K. A., Ikemoto, G. S., Darilek, H., & Barney, H. (2006). Strategies to promote data use for instructional improvement: Actions, outcomes, and lessons from three urban districts. American Journal of Education, 112(4), 496-520.
[26] Supovitz, J. A., & Klein, V. (2003). Mapping a course for improved student learning: How innovative schools systematically use student performance data to guide improvement. Philadelphia, PA: Consortium for Policy Research in Education.
[27] Bastian, K. C., Henry, G. T., Pan, Y., & Lys, D. (2016). Teacher candidate performance assessments: Local scoring and implications for teacher preparation program improvement. Teaching and Teacher Education, 59, 1-12.
[28] Bustamante, R. M., & Combs, J. P. (2011). Research courses in educational leadership programs: Relevance in an era of accountability. International Journal of Education Policy and Leadership, 6(3), 1-11.
[29] Huck, S. W. (2008). Reading statistics and research. Boston, MA: Pearson Education.
[30] Shepperson, T., & Fierro, L. A. (2010). Teaching evaluation: Cross-disciplinary research on graduate training in public health and educational administration. In Proceedings of the International Conference on the Social Sciences, 2010. Honolulu, HI.
[31] Means, B., Padilla, C., DeBarger, A., & Bakia, M. (2009). Implementing data-informed decision making in schools: Teacher access, supports and use. Menlo Park, CA: SRI International.
[32] Friel, S. N., Curcio, F. R., & Bright, G. W. (2001). Making sense of graphs: Critical factors influencing comprehension and instructional implications. Journal for Research in Mathematics Education, 32(2), 124-158.
[33] Mann, S. (2010). A critical review of qualitative interviews in applied linguistics. Applied Linguistics, 32(1), 6-24.
[34] Shannon, S. E., & Hsieh, H. (2005). Three approaches to qualitative content analysis. Retrieved from www.researchdesignreview.com.
[35] Potter, W. J., & Levine-Donnerstein, D. (1999). Rethinking validity and reliability in content analysis. Journal of Applied Communication Research, 27(3), 258-284.
[36] Roller, M. R. (2016). Social constructionism & qualitative research design. Retrieved from https://www.rollerresearch.com/MRR.
[37] Krippendorff, K. (2013). Content analysis: An introduction to its methodology (3rd ed.). Thousand Oaks, CA: Sage.
[38] Roller, M. R., & Lavrakas, P. J. (2015). Applied qualitative research design: A total quality framework approach. New York, NY: The Guilford Press.
[39] Kangai, C. (2009). Social research methods in higher education: A critical analysis of methodological issues and emerging trends at the Zimbabwe Open University. Retrieved from https://cdn.intechopen.com/.
[40] Mueller, D. J. (1986). Measuring social attitudes: A handbook for researchers and practitioners. New York, NY: Teachers College Press.
[41] Best, J. W., & Kahn, J. V. (2006). Research in education (10th ed.). Boston, MA: Pearson Education.
[42] Gall, M. D., Gall, J. P., & Borg, W. R. (2003). Educational research: An introduction. Boston, MA: Pearson Education.
[43] Brase, C. H., & Brase, C. P. (2012). Understandable statistics: Concepts and methods (10th ed.). Boston, MA: Cengage Learning.
[44] Scheaffer, R. L. (2011). Statistics education. In M. Lovric (Ed.), International encyclopedia of statistical science (pp. 1482-1484). Berlin: Springer.
[45] Chepko, S. (2016). Teacher-prep accreditation group seeks to regain traction. Retrieved from https://www.edweek.org/ew/articles.
[46] English, L. D. (2012). Data modelling with first-grade students. Educational Studies in Mathematics, 81(1), 15-30.
[47] Peck, C. A., & McDonald, M. A. (2014). What is a culture of evidence? How do you get one? ... Should you want one? Teachers College Record, 116(3).
[48] Mertler, C. A. (2014). The data-driven classroom: How do I use student data to improve my instruction? Retrieved from www.ascd.org.
[49] Dorow, P. F., Dávila, G., Varvakis, G., & Vallejos, R. V. (2015). Generation of ideas, ideation and idea management. NAVUS-Revista de Gestão e Tecnologia, 5(2), 51-59.
[50] Jones, R. C. (2013). The instructor's challenge: Moving students beyond opinions to critical thinking. Madison, WI: Faculty Focus, Magna Publications.
[51] Sandercook, I. (2015). Strategies for providing effective and efficient instructor feedback. Retrieved from https://teachonline.asu.edu/author/irma/.
[52] Alwehaibi, H. (2012). Novel program to promote critical thinking among higher education students: Empirical study from Saudi Arabia. Asian Social Science, 8(11), 193-204.
[53] Carmichael, E., & Farrell, H. (2012). Evaluation of the effectiveness of online resources in developing student critical thinking: Review of literature and case study of a critical thinking online site. Journal of University Teaching and Learning Practice, 9(1), 1-17.
[54] Garfield, J., & Ben-Zvi, D. (2007). How students learn statistics revisited: A current review of research on teaching and learning statistics. International Statistical Review, 75(3), 372-396.
[55] Dynarski, M. (2015). Using research to improve education under the Every Student Succeeds Act. Retrieved from https://www.brookings.edu.
 

Published with license by Science and Education Publishing, Copyright © 2018 Dr. Erasmus Chirume

This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/.
