
Creation and Initial Validation of the Physical Educator Efficacy Scale for Teaching Lifetime Physical Activities

Kason M. O’Neil

Department of Sport, Exercise, Recreation & Kinesiology, East Tennessee State University, PO Box 70671, Johnson City, TN, 37614, USA

Abstract

The purpose of this study was to develop an instrument that measures physical educators’ self-efficacy perceptions toward teaching lifetime physical activities. The Physical Educator Efficacy Scale for Teaching Lifetime Physical Activities (PEES-LPA) was validated through expert review, pilot procedures, and exploratory factor analysis (EFA). EFA revealed a six-factor model that accounted for 67.8% of the total observed score variance (PAF extraction/Varimax rotation). Additionally, results demonstrated: (a) factors showing simple structure that aligns with related literature, (b) high factor loadings (>.40) with no double loadings, (c) efficacy items relating to Net/Wall activities and Target activities loading together, and (d) very high internal consistency for both the full model (.95) and each individual factor (.92-.95). The PEES-LPA appears to demonstrate acceptable reliability and validity, though further analysis is needed for items that may influence multicollinearity and normality.

Cite this article:

  • Kason M. O’Neil. Creation and Initial Validation of the Physical Educator Efficacy Scale for Teaching Lifetime Physical Activities. Journal of Physical Activity Research. Vol. 2, No. 1, 2017, pp 7-14. https://pubs.sciepub.com/jpar/2/1/2

1. Introduction

The prevalence of childhood obesity and physical inactivity has brought about numerous federal initiatives to educate children and adolescents about making physical activity a lifelong commitment. Programs such as the First Lady’s Let’s Move, the Society of Health and Physical Educators’ (SHAPE America) Let’s Move Active Schools initiative, the United States Department of Health and Human Services’ (USDHHS) Healthy People 2020, and the National Physical Activity Plan all revolve around the promotion of physical activities that one can participate in throughout one’s entire life, known as lifetime physical activities. A lifetime physical activity is one that has the possibility of lifelong participation and requires a minimal number of participants and minimal structure, organization, and equipment [2, 10, 18, 43].

One common denominator among all of the above-mentioned national initiatives is the identification of school-based physical education as a vital component of the promotion of lifetime physical activities. School physical education has been found to be a vital medium for promoting lifetime physical activities because it offers classroom opportunities where students can regularly be physically active and creates experiences where students can participate in physical activities that transfer outside the school setting as they transition into adulthood [32, 44]. In addition, SHAPE America’s (2014) physical education national standards and grade-level outcomes state that “the goal of physical education is to develop physically literate individuals who have the knowledge, skills, and confidence to enjoy a lifetime of healthful physical activity” (p. 11). Given the emphasis placed on schools for the promotion of lifetime physical activities, and the premise that lifelong physical activity is a catalyst for prolonged wellness, it becomes essential to examine how confident physical educators are in developing and implementing curriculum highlighting lifetime physical activities.

2. Defining Lifetime Physical Activities

Researchers [18] have suggested that children who find physical activity a positive experience from an early age are much more likely to sustain physical activity participation into adulthood. As a result, strong importance has been placed on physical education curriculum, specifically in high school, focusing on the promotion of physical activities that align with lifelong participation, as opposed to activities that involve team sports and striking and fielding games [2, 3, 18]. Compared to team or invasion games, lifetime physical activities are seen as having better carry-over value as students transition into adulthood [18, 44].

Though physical education is an excellent outlet for students to participate in lifetime physical activities, operationally defining specific lifetime physical activities remains somewhat uncertain within the field of physical education. Ross et al. [43] define lifetime physical activities as any physical activity readily carried over into adulthood because it can be accomplished by one or two people and requires little structure, organization, and equipment. Though numerous activities appear to fit this definition, a 2013 publication by the American Alliance for Health, Physical Education, Recreation and Dance (now SHAPE America) helped further clarify which activities constitute lifetime physical activities. The operational definition of lifetime physical activities in this study aligns with AAHPERD’s [2] recommendation and relevant literature [10, 18, 43]. Based on those recommendations, lifetime physical activities are any activity that falls within the following seven categories:

1. Outdoor Pursuits (e.g., hiking, backpacking, mountain biking).

2. Fitness Activities (e.g., running, cycling/biking, yoga, weight/resistance training).

3. Dance & Rhythmic Activities (e.g., modern, line, social and square).

4. Aquatics (e.g., swimming, diving, water aerobics).

5. Individual Performance Activities (e.g., gymnastics, track and field, self-defense).

6. Net/Wall Games (e.g., tennis, pickleball, badminton).

7. Target Games (e.g., golf, archery, bowling).

[2, 10, 18, 43].

Prior to the AAHPERD [2] publication, there had been little consensus on the operational definition of lifetime physical activity and the specific activities that classify as such. With the continual push toward physical education curriculum emphasizing lifetime physical activities, the operational definition of what constitutes appropriate physical activities for lifelong participation should continue to become clearer.

Self-efficacy theory. Bandura [7, 8] has stated that there are two major resources a person must have to be successful in performing a task: skill/knowledge and self-efficacy. Self-efficacy is defined as a set of beliefs, or expectations, about how competent a person feels in their ability to perform a particular task with a desired outcome [8, 9]. Self-efficacy beliefs play a key role in human functioning because they not only affect behavior directly, but also impact factors such as goals, aspirations, outcome expectations, and perceptions of impediments in the social environment [9]. By this reasoning, an individual’s behavior can often be better predicted by the beliefs they hold about their capabilities than by what they are actually capable of accomplishing [40].

In an academic context, self-efficacy research has gained increasing attention over the past 20 years, most specifically in research aimed at measuring motivation and self-regulation [40, 51]. Teachers’ personal efficacy perceptions have been found to strongly influence their instructional decisions as well as their orientation toward the educational process [6]. Teacher self-efficacy perceptions have also been linked to an assortment of variables in the teaching and learning process, including student achievement [4, 39, 42], increased use of various teaching modalities [1, 23, 48], greater persistence with struggling students [20], less criticism of student errors [4, 20], greater classroom-based decision making [26, 53], and an overall greater enthusiasm toward teaching [1, 24].

Although a great deal of research has been conducted on teacher efficacy [8], few researchers have specifically examined, and validated instruments for measuring, efficacy specific to teaching physical education [27, 32]. This is especially concerning given that the obesity epidemic places increased emphasis on physical educators as the gateway to students developing lifetime fitness [38], and given further demands for teaching accountability and the re-tooling of curricular strategies in physical education [17].

Due to the task- and situation-specific defining characteristics of self-efficacy, Bandura [8, 9] has affirmed that there can be no one all-purpose measure of perceived self-efficacy. Self-efficacy measurements must be specifically aligned to activity domains and must assess the multidimensional ways in which self-efficacy beliefs operate within the selected activity, thus linking factors that determine quality of functioning in the domain [9]. “The ‘one-measure-fits-all’ approach usually has a limited explanatory and predictive value because most of the items in an all-purpose test may have little or no relevance to the domain of functioning” ([9], p. 307). If future research is to continue to develop an understanding of the impact physical education teacher efficacy has on the learning process, content-specific instruments should be created and validated specific to physical education pedagogy.

As a result, this study was conducted in response to the need for a psychometrically sound instrument measuring physical educators’ self-efficacy perceptions toward instruction of lifetime physical activities.

3. Methods

The purpose of this study was to develop an instrument that measures self-efficacy perceptions of physical educators toward teaching lifetime physical activities. This instrument, the Physical Educator Efficacy Scale for Teaching Lifetime Physical Activities (PEES-LPA), was developed through expert review and numerous pilot procedures based on Bandura’s self-efficacy theory [5, 6]. Following IRB approval, the study was conducted in four phases: (a) Phase I, item generation; (b) Phase II, pre-pilot review; (c) Phase III, validation study; and (d) Phase IV, assessment of reliability and construct validity [45]. Two major delimiting factors also shaped the development of this instrument: (a) all references to best practices in physical education programming were a direct result of AAHPERD recommendations [2], and (b) only current in-service physical education teachers with an affiliation with SHAPE America/AAHPERD were included in the study.

Phase I: Item generation. To align the PEES-LPA properly with self-efficacy theory, a self-efficacy instrument needs to address two major facets: task and situation specificity [9]. Task specificity in the PEES-LPA was addressed through teaching behaviors and guidelines presented by NASPE’s [33] National Initial Physical Education Teacher Education Standards, and situation specificity was addressed through the use of selected lifetime physical activity categories commonly taught in secondary physical education.

Teaching task constraints. A deductive approach was used to develop the task constraints that influence self-efficacy perceptions toward instruction of lifetime physical activities; these were extracted from (a) the National Initial Standards for Physical Education Teacher Education [33, 36] and (b) the Physical Education Teacher Evaluation Tool [34]. The ‘Standards’ and the ‘Evaluation Tool’ [33, 34] address teaching task constraints in six areas: scientific and theoretical knowledge, skill- and fitness-based competence, planning and implementing, instructional delivery, impact on student learning, and professionalism. Commonalities were extracted from the two source documents by the principal investigator (PI) to establish teaching task constraints specific to teaching physical education. The PI later used feedback provided during expert review to refine the delimited teaching task constraints.

Lifetime physical activity content. Using the AAHPERD [2] recommendations as the platform for this portion of the study, the classification of a lifetime physical activity was delimited for this study by the following criteria:

1. Lifetime physical activities represent the categories of outdoor pursuits, selected individual performance activities, aquatics, net/wall sports, & target games [2].

2. Invasion games and fielding/striking games were excluded because they require team participation and are not appropriate for lifelong participation [2].

3. To address the most widespread lifetime physical activities taught in the United States, activities were eliminated based on their geographic specificity (e.g., ice skating, surfing, bouldering/climbing).

4. Net/Wall games were expanded to represent: badminton, table tennis, tennis, racquetball/squash, and pickleball [2].

5. Target games were expanded to represent: archery, croquet, golf, horseshoes, bocce, bowling, and disc golf [2].

Format. The pre-pilot questionnaire consisted of three sections: (a) demographic information; (b) items measuring perceptions of personal ability, using a 6-unit (0-5) scale with the following qualitative label descriptors: no experience, novice, advanced beginner, competent, proficient, and expert [31]; and (c) items measuring efficacy toward instruction of lifetime physical activities, using an 11-unit (0-10) response scale with the following qualitative descriptors: 0 = no confidence, 5 = moderate confidence, and 10 = complete confidence [9].
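For clarity, the two response scales can be written out as simple lookup tables. The snippet below is an illustrative encoding only; the variable names are hypothetical and do not come from the original instrument files.

```python
# Illustrative encoding of the two PEES-LPA response scales (names hypothetical).
PERSONAL_ABILITY_SCALE = {
    0: "no experience",
    1: "novice",
    2: "advanced beginner",
    3: "competent",
    4: "proficient",
    5: "expert",
}

# 11-point efficacy scale; only the anchor points carry qualitative labels.
EFFICACY_ANCHORS = {
    0: "no confidence",
    5: "moderate confidence",
    10: "complete confidence",
}
```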

Phase II: Pre-pilot review. The first draft of the instrument was reviewed by a group of physical education pedagogy professors at doctoral-granting universities across the United States (n = 6). Experts were asked to evaluate the items and provide feedback on how well they represented the task constraints a physical educator may face when teaching a specific lifetime physical activity. Experts were also asked to review the instrument for readability, clarity, conciseness, and overall layout. The PI used both the qualitative and quantitative expert feedback to contribute to face and content validity [16].

Phase III: Validation study. The final version of the PEES-LPA consisted of 68 items: 5 demographic items, 21 items on perception of personal skill ability (i.e., 3 items for each of the 7 lifetime physical activity categories), and 42 items on self-efficacy perceptions toward instruction (see Table 1). Participants were recruited either through a solicitation email distributed via participating state AAHPERD listservs or through in-person recruitment at a national SHAPE America convention. The PI contacted the Executive Directors of all 50 state-level AAHPERD organizations to ask for their willingness and permission to distribute an email with an embedded PEES-LPA link to their listservs of members. In addition, the PI solicited, in person, secondary physical educators who agreed to participate in the study at a national SHAPE America conference. The final sample consisted of 182 in-service secondary physical education teachers.

Table 1. Abbreviated survey items from the Physical Educators Efficacy Scale Towards the Instruction of Lifetime Physical Activities (PEES-LPA)

Phase IV: Assessment of Reliability and Construct Validity. Because of the remaining uncertainty as to the number and nature of the factors underlying the PEES-LPA items, exploratory factor analysis (EFA) was conducted on the items pertaining to physical education teachers’ (N = 182) confidence to instruct lifetime physical activities. The validation steps for the full instrument (PEES-LPA) utilized quantitative methodology to determine how many items to retain, the factor structure of the latent variables, and internal consistency reliability.

The number of factors to be retained was based on fulfilling a variety of considerations, including: (a) a minimal factor loading of .40 for item retention; (b) parallel analysis; (c) Kaiser’s eigenvalue rule (eigenvalues greater than 1) [28]; (d) examination of the scree plot; (e) the amount of variance accounted for by the retained solution (> 50%) [19]; (f) no more than 5% of the items loading onto more than one factor; and (g) good internal consistency reliability and interpretability of the results [19].
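For readers wishing to reproduce these retention checks outside SPSS, the sketch below (Python) illustrates the Kaiser eigenvalue rule and a basic parallel analysis. It assumes the 63 efficacy items sit in a hypothetical pandas DataFrame named `items`; it is an illustrative sketch, not the analysis code used in this study.

```python
import numpy as np
import pandas as pd

def kaiser_and_parallel(items: pd.DataFrame, n_sim: int = 100, seed: int = 0):
    """Compare observed eigenvalues to the Kaiser rule (> 1) and to the mean
    eigenvalues of random data with the same shape (parallel analysis)."""
    rng = np.random.default_rng(seed)
    n, p = items.shape

    # Eigenvalues of the observed correlation matrix, sorted descending.
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(items.values, rowvar=False))[::-1]

    # Eigenvalues of correlation matrices computed from random normal data.
    sim_eigs = np.empty((n_sim, p))
    for i in range(n_sim):
        sim = rng.standard_normal((n, p))
        sim_eigs[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    mean_sim = sim_eigs.mean(axis=0)

    kaiser_keep = int((obs_eigs > 1.0).sum())         # eigenvalue > 1 rule
    parallel_keep = int((obs_eigs > mean_sim).sum())  # parallel analysis
    return obs_eigs, kaiser_keep, parallel_keep
```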

4. Results

After removal of participants who submitted incomplete surveys or who self-identified as elementary physical education teachers, the final sample for this study comprised 182 individuals (n=117, 64.3%, female; n=65, 35.7%, male). Participants represented 24 states in the United States and one province in Canada, with the largest numbers of participants from Virginia (n=102), New York (n=12), Nevada (n=10), and South Carolina (n=9). Of these teachers, 41.2% (n=75) taught high school physical education, 44.5% (n=81) taught middle school, and 14.3% (n=26) taught a combination of both high school and middle school. Participants had an average of 15.2 years of experience teaching physical education (SD=10.8, range=0 to 43); 36.8% (n=67) held a bachelor’s degree, 62.1% (n=113) held a master’s degree, and 1.1% (n=2) held a doctorate.

A Missing Value Analysis (MVA) of the survey responses showed that 24 of the 182 participants failed to respond to at least one survey item. A follow-up Little’s MCAR test [29] resulted in a chi-square of 1471.99 (df=1479, p=.48), which indicates that the data were missing completely at random, with no identifiable pattern to the missing data [29]. Because the missing data showed no identifiable pattern, and to avoid ad-hoc missing-data procedures that reduce sample size, Expectation-Maximization (EM) was used to impute the missing data [29, 41].
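Little’s MCAR test and EM imputation were run in SPSS for this study. As an illustrative stand-in, the sketch below uses scikit-learn’s IterativeImputer, a related model-based approach rather than the EM routine actually used; `items` is again a hypothetical DataFrame of the survey responses.

```python
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def impute_missing(items: pd.DataFrame) -> pd.DataFrame:
    """Model-based imputation of item-level missingness.

    IterativeImputer regresses each item on the others, which is analogous
    (though not identical) to the EM imputation used in the original analysis."""
    imputer = IterativeImputer(max_iter=25, random_state=0)
    filled = imputer.fit_transform(items)
    return pd.DataFrame(filled, columns=items.columns, index=items.index)
```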

Of the 63 total survey items, 18 variables contained univariate outliers. Shapiro-Wilk tests of the 18 variables with univariate outliers revealed statistically significant departures from normality (p’s < .05/63). Logarithmic (log10) transformations were applied to the 18 variables, resulting in more normal distributions (skewness and kurtosis < 1.0). In addition, follow-up univariate outlier analyses on the transformed variables failed to reveal any potential outliers (all z’s < 3.29). Given the robustness of exploratory factor analysis to violations of normality, especially when no observed outliers influence that normality, and given the acceptable skewness and kurtosis values (< 1.0), the remaining variables were left unaltered [21, 47].
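A minimal sketch of this screening sequence (outlier flagging at |z| > 3.29, Shapiro-Wilk testing with a Bonferroni-adjusted alpha, and a log10 re-expression) is shown below. The exact transformation used in the original analysis, for example whether negatively skewed items were reflected before the log was taken, is not specified, so the snippet is illustrative only.

```python
import numpy as np
import pandas as pd
from scipy import stats

def screen_item(x: pd.Series, alpha: float = 0.05 / 63):
    """Flag univariate outliers (|z| > 3.29), test normality (Shapiro-Wilk with
    a Bonferroni-adjusted alpha), and return a log10 re-expression of the item."""
    z = (x - x.mean()) / x.std(ddof=1)
    has_outliers = bool((z.abs() > 3.29).any())

    _, p_value = stats.shapiro(x.dropna())
    non_normal = p_value < alpha

    # Shift by 1 so zero responses remain defined under log10; whether the
    # original analysis also reflected negatively skewed items is not stated.
    transformed = np.log10(x + 1)
    return has_outliers, non_normal, transformed
```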

A principal axis factor (PAF) extraction was conducted, and both varimax and oblimin rotations were considered in an attempt to uncover simple structure. Prior to performing the PAF, the dataset was screened to ensure the accuracy of the data and to verify its suitability for factor analysis. The Kaiser-Meyer-Olkin (KMO) test for sampling adequacy (KMO = .890) met the .60 minimal standard [19], and Bartlett’s test of sphericity rejected the null hypothesis that the correlation matrix is an identity matrix, also meeting the minimal standard (χ2 = 15413.4, df = 1953, p < .001). Additionally, variable correlations were examined for extreme correlations and for enough correlations to warrant factor analysis (r > 0.3) [19]. A visual inspection of the correlation matrix revealed few low-end extreme correlations, with the majority of correlations meeting the minimal benchmark for factor analysis (r > 0.3). A visual inspection of the high-end extremes showed that the Identify and Present variables, as well as the Feedback and Assess variables, had extreme correlations (> .85) that could introduce multicollinearity. Due to the exploratory nature of the research design, as well as initial adherence to construct validity, these items were left in the sample.
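These suitability checks can be approximated with the open-source factor_analyzer package, as sketched below under the assumption that `items` holds the 63 efficacy items; the original analysis was run in SPSS, so values would not match exactly.

```python
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def check_factorability(items: pd.DataFrame) -> dict:
    """KMO sampling adequacy, Bartlett's sphericity, and correlation screening."""
    chi_square, p_value = calculate_bartlett_sphericity(items)
    kmo_per_item, kmo_total = calculate_kmo(items)

    corr = items.corr()
    off_diag = corr.values[np.triu_indices_from(corr.values, k=1)]
    share_above_30 = float((np.abs(off_diag) > 0.30).mean())
    n_extreme = int((np.abs(off_diag) > 0.85).sum())  # possible multicollinearity

    return {
        "kmo_total": kmo_total,              # benchmark >= .60
        "bartlett_chi_square": chi_square,
        "bartlett_p": p_value,               # want p < .05
        "share_r_above_0.30": share_above_30,
        "n_pairs_above_0.85": n_extreme,
    }
```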

Principal axis factor extraction was performed using SPSS 22.0 for Mac on the 63 self-efficacy items in the PEES-LPA. As a preliminary step, principal component extraction revealed the presence of 10 factors with eigenvalues greater than 1.0, which accounted for 81.7% of the total observed score variance in the unrotated model. Examination of the resulting structure matrix failed to reveal a clear pattern of simple structure across the 10 factors. A visual inspection of the scree plot revealed a clear point of inflection after the sixth factor, supporting the retention of only six factors.

A follow-up examination using varimax rotation indicated that the six-factor model accounted for 67.9% of the cumulative score variance, with appreciable amounts of variance for each of the six factors: 15.5%, 11.8%, 10.9%, 10.5%, 9.7%, and 9.2%. An examination of the structure matrix of the six-factor solution revealed simple structure when loadings ≥ 0.40 were considered (see Table 2). Because simple structure was found with the varimax (orthogonal) rotation, this model was selected for results interpretation.
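A comparable extraction and rotation can be sketched with factor_analyzer, as below; note that its method='principal' option only approximates SPSS’s principal axis factoring, so this is an illustration rather than a reproduction of the reported solution.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

def extract_six_factors(items: pd.DataFrame, threshold: float = 0.40) -> pd.DataFrame:
    """Extract six factors with varimax rotation and report only loadings at or
    above the retention threshold, to make simple structure easier to read."""
    fa = FactorAnalyzer(n_factors=6, rotation="varimax", method="principal")
    fa.fit(items)

    loadings = pd.DataFrame(
        fa.loadings_,
        index=items.columns,
        columns=[f"Factor{i + 1}" for i in range(6)],
    )
    # Suppress loadings below the threshold (left as NaN).
    return loadings.where(loadings.abs() >= threshold)
```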

The final six-factor model revealed two survey item categories that loaded together (Target Activities and Net/Wall Activities), with the remaining activity categories loading independently of one another. The breakdown of item factor loadings was:

• Factor 1-Items specific to Target Sports & Net/Wall Activities (N= 18)

• Factor 2-Items specific to Dance & Rhythmic Activities (N=9)

• Factor 3- Items specific to Aquatic Activities (N=9)

• Factor 4- Items specific to Outdoor Pursuits (N=9)

• Factor 5- Items specific to Fitness Activities (N=9)

• Factor 6- Items specific to Individual Sport Activities (N=9).

Table 2. Varimax Rotated Factor Matrix: Six-Factor PAF

Cronbach’s alpha was used to measure the internal consistency of the 63-item self-efficacy instrument. Reliability for the full instrument was Cronbach’s α = .95. Internal consistency was also measured for each of the six factors (subscales), which likewise produced high alpha scores overall (all factors > .92).
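Cronbach’s alpha is straightforward to compute directly from the item responses; a minimal sketch follows, with `items` and the subscale column lists hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical usage: full 63-item scale plus one activity-area subscale.
# alpha_full = cronbach_alpha(items)
# alpha_dance = cronbach_alpha(items[dance_item_columns])
```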

5. Discussion

The purpose of the present study was to provide initial validation for, and explore the factor structure of, the Physical Educator Efficacy Scale for Teaching Lifetime Physical Activities (PEES-LPA). The PEES-LPA was the first documented attempt to construct a scale specific to secondary physical educators’ self-efficacy perceptions toward the instruction of lifetime physical activities. The PEES-LPA was constructed and validated using the recommendations and guidelines provided in the instrument development literature, and was assessed using pilot and validation procedures to help provide evidence for face, content, and construct validity [9, 16]. No criterion-related validation procedures were conducted in testing the PEES-LPA because of the exploratory nature of the research design and because no existing instrument is presently close enough in nature to support predictive or concurrent validity [16, 25, 30].

Content validity and face validity evidence for the PEES-LPA came from conducting an evaluative expert review process. One notable finding from the expert review process was that when experts were given specific quantitative parameters (i.e., a 1-5 agree/disagree scale) to frame their evaluation, they were generally in agreement on the content presented in the PEES-LPA items. However, an evaluation of the open-ended responses from each expert showed wide variation in how they felt the PEES-LPA items could be improved to align more closely with the selected construct, thus demonstrating some degree of disagreement. As a result of this disagreement, a follow-up technique known as cognitive testing, in which potential respondents are asked to evaluate the questions empirically, would have helped clarify the discrepancy [9, 16].

Evidence of construct validity in the validation of the PEES-LPA came from the review processes in the pilot procedures, as well as the statistical examination of the factor structure using exploratory factor analysis.

The final sample for this study consisted of 182 participants, with 63 variables used in the factor analysis, for a final item-to-participant ratio of 1:2.9. In many research circles this sample size would fall below recommended limits [13, 19, 50]. What these recommendations fail to take into account is the nature of the data and the complex dynamics of factor analysis [15]. “In general, the stronger the data, the smaller the sample can be for an accurate analysis. Strong data in factor analysis means uniformly high communalities without crossloadings, plus several variables loading strongly on each factor” ([15], p. 4). Guadagnoli and Velicer [22] further supported this point by illustrating that when communalities are quite high (> .60) and correlation coefficients are > .80, smaller sample sizes are acceptable.

An evaluation of the factor loadings and communalities extracted from the EFA makes it clear that simple structure was evident from the early stages of the process. Furthermore, the final model, with six factors retained, showed (1) no cross-loadings and (2) very high factor loadings (most loadings > .50) for each factor. Though a larger sample size was sought prior to participant recruitment, the factor structure, correlations, and factor loadings demonstrate that 182 participants is acceptable for this EFA.

Upon initial examination of univariate outliers in the dataset, 18 outliers were found (z’s > 3.29). Multivariate procedures are particularly sensitive to univariate outliers, so variable transformations were conducted. An examination of the histograms and skewness statistics revealed that all 18 variables showed moderate negative skew, indicating that participants, overall, rated their self-efficacy levels quite high for those variables. Tabachnick et al. [49] discussed that data transformation is a common practice and should be seen more as data re-expression than transformation. It should be noted that these transformations do affect the ability of further research to descriptively interpret the statistical findings; thus, confirmatory evaluation is recommended.

In addition, the Shapiro-Wilk test for normality for all 63 self-efficacy items did show significant departure from normality (p’s < .05/63). Many multivariate researchers consider this a major red flag that would significantly hinder proceeding with further analyses [19, 25]. Given the robustness of exploratory factor analysis to violations of normality, especially when no observed outliers influence that normality, and given the acceptable skewness and kurtosis values (< 1.0), the variables were left unaltered [21, 47].

The final six-factor model accounted for 68.7% of total observed score variance. The factor structure revealed all of the activity areas grouping individually, except the Target activities and the Net/Wall activities, which factored together. Five of the final six factors were named based on activity-area grouping (i.e., all fitness activity items loaded together, thus naming the factors self-efficacy to teach: (a) fitness activities, (b) outdoor pursuits, (c) individual performance activities, (d) dance/rhythmic activities, and (e) aquatic activities). Because the Net/Wall and Target activity areas loaded together on the same factor (reducing seven activity areas down to six), the new factor representing the grouping of the two needed to be renamed. After further examination of the literature supporting lifetime physical activities in physical education [2, 11, 14, 18, 33, 34, 35], the PI chose to rename this factor ‘Hand/Eye Activities’, with potential future research addressing the reduction of these Hand/Eye variables.

Reliability estimates for the full six-factor model revealed a high level of internal consistency, Cronbach’s alpha = .95 [37]. In addition, each of the six factors (activity groupings) was analyzed for item-total correlations and Cronbach’s alpha if deleted. Results for all six factors were also quite high (alpha range .924-.955). Historically, alpha scores greater than .70 are considered acceptable for scale development, with scores between .85 and .95 deemed excellent [16, 25, 37].

Though all of the reliability scores, including that of the full model, fall within the excellent category [25, 37], these results should be interpreted cautiously. Clark and Watson [12] note that internal consistency can be too high, signaling overly redundant items that measure content that is far too specific. Additionally, some researchers [12, 19, 25, 37] have noted that internal consistency is highly sensitive to the number of variables in an instrument.

Inter-item correlations were also evaluated to test the correlations among the factors within the PEES-LPA [25]. Inter-item correlations at the high extreme (> .85) suggest that a set of items is not contributing anything unique and is therefore redundant [25]. Additionally, variables with a large number of correlations below .30 should be considered for removal.

Results from the inter-item correlations showed the following:

• The Assess and Feedback variables appeared to have a very high correlation

• The Present and Identify variables appeared to highly correlate

• The Frequency of Participation variables did not have strong correlations with many variables

The results from these correlations suggest that future confirmatory evaluation may look to combine variables, as well as to remove the Frequency variable.
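A minimal sketch of this correlation screening, flagging redundant pairs (r > .85) and items whose correlations mostly fall below .30, is given below; item names are hypothetical and the thresholds follow the criteria described above.

```python
import numpy as np
import pandas as pd

def screen_correlations(items: pd.DataFrame, high: float = 0.85, low: float = 0.30):
    """Return item pairs that look redundant and items that correlate weakly overall."""
    corr = items.corr()
    upper = np.triu(np.ones(corr.shape, dtype=bool), k=1)

    # Pairs whose correlation exceeds the redundancy threshold (e.g., Assess/Feedback).
    redundant_pairs = [
        (corr.index[i], corr.columns[j], corr.iat[i, j])
        for i, j in zip(*np.where(upper & (corr.values > high)))
    ]

    # Items where most correlations with the other items fall below the .30 benchmark.
    frac_low = (corr.abs() < low).sum(axis=1) / (len(corr) - 1)
    weak_items = frac_low[frac_low > 0.5].index.tolist()

    return redundant_pairs, weak_items
```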

6. Limitations and Implications for Future Research

Though the psychometric results of this study provided support for self-efficacy theory as the framework for measuring perceptions toward instruction of lifetime physical activities in physical education, instrument validation is an ongoing process. The most prominent limitation of this study concerns the generalizability of its findings, specifically due to the delimiting AAHPERD framework and the sampling methods used.

For example, some participants were solicited to participate in the study through a listserv connected to membership in a professional organization (SHAPE America). The results might be biased by this selection factor, as teachers who choose to participate in these organizations and activities may be more motivated to maintain and improve their teaching skills and more up to date with current issues in the field, and therefore may not be representative of a more global population. In addition, physical education teachers may respond to the survey in ways they feel are more socially acceptable rather than indicating their true beliefs, an inherent limitation of self-report measures. Finally, since physical educators’ self-efficacy beliefs were explored with regard to instruction of lifetime physical activities overall, it cannot be assumed that these findings will generalize to overall instruction in physical education or to the specific activities themselves.

It is suggested that future research and validation procedures for the PEES-LPA address: (a) testing the full instrument using confirmatory factor analysis (CFA) to assess the unidimensionality of the construct; (b) replicating the study with a larger, more diverse sample that is not dependent on a third party (state AAHPERD organizations) to help distribute the survey, ideally obtaining a larger sample size; (c) focusing replication efforts on the reduction of factor variables that represent possible multicollinearity (e.g., reducing items from Net/Wall and Target, and possibly re-evaluating the Individual Performance activity variables due to the moderate factor correlations they had with three other factor groupings); (d) potentially reducing items that show stronger inter-item correlation extremes (e.g., the Feedback and Assess items); and (e) continuing to investigate the accuracy of self-reported teacher behavior with regard to instruction of lifetime physical activities (as well as other sport-specific critical skill elements).

7. Conclusion

The growing body of evidence supporting the promotion of lifetime physical activities through school-based physical education makes such programming more vital than ever. Given the value placed on these physical education programs, it becomes important to examine how confident physical educators are in delivering such skills and competencies.

The current study offers preliminary support for the validity and reliability of the PEES-LPA. The major findings from this study demonstrate that: (a) the resulting factors exhibited simple structure that aligns with the literature supporting the classification of lifetime physical activities [2]; (b) the factors were composed of items that logically relate and showed high levels of internal consistency; and (c) the PEES-LPA appears to be an appropriate instrument for measuring self-efficacy perceptions of physical educators, though item reduction (while still upholding high internal consistency) may be a logical approach for future confirmatory research.

References

[1]  Allinder, R. M. (1994). The relationship between efficacy and the instructional practices of special education teachers and consultants. Teacher Education and Special Education, 17, 86-95.
[2]  American Alliance for Health, Physical Education, Recreation, & Dance (AAHPERD). (2013). National standards & grade level outcomes in K-12 physical education. Reston, VA: Author.
[3]  American College of Sports Medicine. (2011). ACSM’s complete guide to fitness & health. Champaign, IL: Human Kinetics.
[4]  Ashton, P., & Webb, R. B. (1986). Making a difference: Teachers’ sense of efficacy and student achievement. New York: Longman.
[5]  Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavior change. Psychological Review, 84, 191-215.
[6]  Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37, 122-147.
[7]  Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.
[8]  Bandura, A. (1997). Self-efficacy: The exercise of control. New York: Freeman.
[9]  Bandura, A. (2006). Guide for constructing self-efficacy scales. In F. Pajares & T. Urdan (Eds.), Self-efficacy beliefs of adolescents (Vol. 5, pp. 307-337). Greenwich, CT: Information Age Publishing.
[10]  Caspersen, C. J., Powell, K. E., & Christenson, G. M. (1985). Physical activity, exercise, and physical fitness: Definitions and distinctions for health-related research. Public Health Reports, 100(2), 126-131.
[11]  Centers for Disease Control and Prevention-CDC. (1997). Guidelines for school and community programs to promote lifelong physical activity among young people. Morbidity & Mortality Weekly Report, 46, 1-36.
[12]  Clark, L. A., & Watson, D. (1995). Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7 (3), 309.
[13]  Comrey, A. L. (1988). Factor-analytic methods of scale development in personality and clinical psychology. Journal of Consulting and Clinical Psychology, 56(5), 754.
[14]  Corbin, C. B. (2002). Physical activity for everyone: What every physical educator should know about promoting lifelong physical activity. Journal of Teaching in Physical Education, 21(2), 128-44.
[15]  Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research, & Evaluation, 10, 1-9
[16]  DeVellis, R. F. (2012). Scale development: Theory and applications (3rd). London: Sage Publications.
[17]  Edginton, C. R., Kirkpatrick, B., Schupbach, R., Phillips, C., & Chen, M. C. P. (2011). A dynamic pedagogy of physical education teacher preparation: Linking practice with theory. Asian Journal of Physical Education and Recreation 16(2), 7-25.
[18]  Fairclough, S., Stratton, G., & Baldwin, G. (2002). The contribution of secondary school physical education to lifetime physical activity. European Physical Education Review, 8(1), 69-84.
[19]  Field, A. (2009). Discovering statistics using SPSS. London: Sage publications.
[20]  Gibson, S., & Dembo, M. (1984). Teacher efficacy: A construct validation. Journal of Educational Psychology, 76, 569-582.
[21]  Gorsuch, R. L. (1983). Factor analysis (2nd ed.). Hillsdale, New Jersey: Erlbaum.
[22]  Guadagnoli, E., & Velicer, W. F. (1988). Relation of sample size to the stability of component patterns. Psychological Bulletin, 103(2), 265.
[23]  Guskey, T. R. (1988). Teacher efficacy, self-concept, and attitudes toward the implementation of instructional innovation. Teaching and Teacher Education, 4, 63-69.
[24]  Hall, B., Burley, W., Villeme, M., & Brockmeier, L. (1992). An attempt to explicate teacher efficacy beliefs among first year teachers. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.
[25]  Hinkin, T. (1995). A review of scale development practices in the study of organizations. Journal of Management, 21(5), 967-988.
[26]  Hoy, W. K., & Woolfolk, A. E. (1993). Teachers’ sense of efficacy and the organizational health of schools. The Elementary School Journal, 93, 356-372.
[27]  Humphries, C. A., Hebert, E., Daigle, K., & Martin, J. (2012). Development of a physical education teaching efficacy scale. Measurement in Physical Education and Exercise Science, 16(4), 284-299.
[28]  Kaiser, H. F. (1960). The application of electronic computers to factor analysis. Educational and Psychological Measurement, 20, 141-151.
[29]  Little, R. J. (1988). A test of missing completely at random for multivariate data with missing values. Journal of the American Statistical Association, 83(404), 1198-1202.
[30]  Litwin, M. S. (1995). How to measure survey reliability and validity (Vol. 7). New York: Sage.
[31]  Malinen, O. P., Savolainen, H., & Xu, J. (2013). Dimensions of teacher self-efficacy for inclusive practices among mainland Chinese pre-service teachers. Journal of International Special Needs Education, 16(2), 82-93.
[32]  Martin, J. J., & Kulinna, P. H. (2003). The development of a physical education teachers’ physical activity self-efficacy instrument. Journal of Teaching Physical Education, 22, 219-232.
[33]  National Association for Sport and Physical Education (NASPE). (2008). National standards & guidelines for physical education teacher education, (3rd ed.). Reston, VA: author.
[34]  National Association for Sport and Physical Education. (2007). Physical education teacher evaluation tool, (3rd ed.). Reston, VA: author.
[35]  National Association for Sport and Physical Education. (2009). Moving into the future: National standards for physical education, (3rd ed.). Reston, VA: author.
[36]  National Council for Accreditation of Teacher Education (NCATE). (2008). Professional standards for the accreditation of schools, colleges, and departments of education. Washington, DC: author.
[37]  Nunnally, J. (1978). Psychometric theory. McGraw-Hill: New York.
[38]  Pan, Y. (2012). The development of a teachers’ self-efficacy instrument for high school physical education teacher. World Academy of Science, Engineering, and Technology, 66, 1152-1157.
[39]  Pajares, F. (1996). Self-efficacy beliefs in academic settings. Review of Educational Research, 66, 533-578.
[40]  Pajares. F. (2002). Overview of social cognitive theory and of self-efficacy. Retrieved May 05, 2013 from https://www.uky.edu/~eushe2/Pajares/eff.html.
[41]  Peng, C. Y. J., Harwell, M., Liou, S. M., & Ehman, L. H. (2006). Advances in missing data methods and implications for educational research. Real Data Analysis, 31-78.
[42]  Ross, J. A. (1992). Teacher efficacy and the effect of coaching on student achievement. Canadian Journal of Education, 17(1), 51-65.
[43]  Ross, J. G., Dotson, C.O., Gilbert, G. G. and Katz, S. J. (1985). What are kids doing in school physical education? Journal of Physical Education, Recreation and Dance, 56(1): 73-76.
[44]  Sallis, J. F., Simons-Morton, B. G., Stone, E. J., & Corbin, C. B. (1992). Determinants of physical activity and interventions in youth. Medicine & Science in Sports & Exercise. 248-257.
[45]  Scrabis-Fletcher, K., & Silverman, S. (2010). Perception of competence in middle school physical education: Instrument development and validation. Research Quarterly for Exercise and Sport, 81(1), 52-61.
[46]  Society of Health and Physical Educators (SHAPE America). (2014). National standards & grade level outcomes in K-12 physical education. Reston, VA: Author.
[47]  Stevens, J. P. (2009). Applied multivariate statistics for the social sciences. New York, N.Y.: Taylor & Francis.
[48]  Stein, M. K., & Wang, M. C. (1988). Teacher development and school improvement: The process of teacher change. Teaching and Teacher Education, 4, 171-187.
[49]  Tabachnick, B. G., Fidell, L. S., & Osterlind, S. J. (2001). Using multivariate statistics. New York: Pearson.
[50]  Tinsley, H. E., & Tinsley, D. J. (1987). Uses of factor analysis in counseling psychology research. Journal of Counseling Psychology, 34(4), 414.
[51]  Tschannen-Moran, M., Woolfolk Hoy, A. & Hoy, W. K. (1998). Teacher efficacy: Its meaning and measure. Review of Educational Research, 68, 202-248.
[52]  Wallhead, T. L., & Buckworth, J. (2004). The role of physical education in the promotion of youth physical activity. Quest, 56(3), 285-301.
[53]  Ward, R. A. (2005). Impact of mentoring on teacher efficacy. Academic Exchange Quarterly, Winter, 148-154.