Entry-Level Biology Courses for Majors and Non-Majors: Performance and Assessment

Mamta Singh1, Sandra West2

1Lamar University, Beaumont, TX

2Texas State University, San Marcos, TX

Abstract

The purpose of the study was to assess students' performance in entry-level biology courses. The instruments used were pre- and post-tests of content knowledge, addressing two research questions: (1) Did students' scores improve from pre- to post-tests, and were there differences between cohort one and cohort two on the content knowledge tests in Functional and Organismal Biology? (2) Did students correctly answer more questions at the three higher levels of Bloom's taxonomy from pre- to post-tests, and were there differences between cohorts on the content knowledge tests in Functional and Organismal Biology? The results indicated that students' scores on the content knowledge tests increased from pre- to post-tests and that the difference between cohort one and cohort two was statistically significant (p < 0.05) in both Functional and Organismal Biology. Furthermore, students were able to answer higher order thinking skills questions on the post-test, and the pre- to post-test score differences were statistically significant (p < 0.05).


1. Introduction

Over the past half century, poor student performance in science and the high percentage of students leaving science programs have been great concerns to science educators, professionals, and policy makers in the United States. In the late 1990s, 50% of both minority and non-minority students entering college to major in natural science fields changed their majors within two years of taking their first college science class (Seymour & Hewitt, 1997). Furthermore, dropout and failure rates are higher in college courses such as biology, chemistry, and physics than in non-science courses (Gainen, 1995). Additionally, low retention of science majors in the fields of biology, chemistry, and physics was reported by Wood and Gentile (2003). As a result, the science education community has been trying to understand and address these problems through various evaluation approaches. For any higher education institution, continuing evaluation of its programs is critical and should be part of its education improvement research efforts, because institutional success depends on student retention and graduation rates. Program and course evaluation is essential given that the educational system is undergoing rapid reform (e.g., the implementation of College Readiness Standards) to meet educational demands, particularly in the science fields. Hence, to better assess students' success in science programs, evaluation of science programs and courses is highly recommended. "Among the educational reform efforts in the United States, national initiatives often target K-12 education to encourage all students to participate more actively in science education (National Research Council [NRC], 1996). Although the rationale behind these efforts is to enhance K-12 students' interest in science, these efforts are not as visible at the college level" (Kahveci, Southerland, & Gilmer, 2006, p. 34). Kulik, Kulik, and Schwalb (1983) and Pascarella and Terenzini (1991) reported that programs designed to improve students' academic skills have a substantial effect on college success. Poor academic performance in college often forces students to discontinue their academic venture, a problem that has been the focus of several studies (American Council on Education, 1996; Sadler & Tai, 2000; Selingo, 2003; Tai, Sadler, & Loehr, 2005). This problem is particularly acute in entry-level courses.

2. Research Problem and Questions

The undergraduate biology program at the study site begins with two courses: BIO 1430 (Functional Biology) and BIO 1431 (Organismal Biology). These are the first required biology courses for all students who wish to pursue biology and other science-related degrees. Students who do not succeed in these courses often do not pursue further science courses: some withdraw (W) in the middle of the semester, some make a D or an F grade, and some drop out of school entirely. Both courses include lecture and laboratory components and are taught in both the fall and spring semesters, each with two lecture sections. The Department of Biology designed and implemented a new program in fall 2000 entitled "The Biology 2000 Curriculum." The implementation of this new program was part of national college education reform efforts. The traditional curriculum reflected an era when biology was thought to comprise two or perhaps three subjects, namely Zoology, Botany, and Microbiology. The reform favored teaching, at the entry level, the fundamentals of functional biology (molecular and cellular biology and physiology) and organismal biology (Mendelian and population genetics, evolution, and ecology). The assumption was that students would grasp higher-level concepts more quickly and with better comprehension. These majors' courses were designed to provide rigorous training in basic principles in sufficient detail that students could build on this knowledge in advanced courses. The rationale behind this change was to achieve certain goals: (1) improving biology majors' and non-majors' understanding of life science; (2) increasing students' enthusiasm for science; (3) improving student retention in biology; and (4) increasing the number of students choosing to become biology majors. The Biology 2000 Curriculum replaced the two traditional entry-level courses, Botany and Zoology, with Functional Biology and Organismal Biology, which can be taken in either order. A brief description of the entry-level biology courses follows:

BIOL 1430: Provides the science major with a strong foundation in cellular and molecular biology and physiology. Topics include biological chemistry, metabolism, the molecular bases of cellular functions and genetics, molecular and human physiology, and the immune system.

BIOL 1431: Provides the science major with a strong foundation in organismal biology, Mendelian and population genetics, evolution, and ecology. Topics include taxonomy, patterns of diversity, ecosystems, human biology, behavior, reproductive biology, and comparative physiology.

High achievement by students in entry-level courses is in the Biology Department's best interest. These courses are gateways for first-year students in their venture toward majoring in biology. A major rationale for revising the entry-level biology courses was the assumption that high performance in these courses would have a positive impact on performance in upper-division biology courses. Finally, it was predicted that improved performance in the upper-division courses should lead to achievement of the departmental goals stated above.

A few studies have been conducted to assess student performance and knowledge skills (Goodman, Koster, & Redinius, 2005; Libarkin & Anderson, 2005). These studies addressed student success at the undergraduate level using pre- and post-tests, but they did not examine performance at the course-specific level. Furthermore, no studies were found that specifically address students' biology content knowledge skills (basic and higher order thinking skills) or that identify factors associated with student success in entry-level college science courses. Research on students' performance in entry-level science courses therefore requires additional study. These gate-keeping courses require detailed evaluation if an institution's goal is to increase students' performance and success in them, which is, in fact, a stepping stone for increasing the number of graduates in STEM majors. The present study therefore focuses on students' performance in the entry-level biology courses. It was conducted in the Department of Biology at a large research institution in the southern United States and examined college students' performance in the two entry-level biology courses. Specifically, the study addressed the following research questions:

1. Did students' scores improve from pre- to post-tests, and were there differences between cohorts on the content knowledge test in:

a. Biology 1430 (Functional Biology)?

b. Biology 1431 (Organismal Biology)?

2. Did students correctly answer more questions at the three higher levels of Bloom's taxonomy (application, analysis, and synthesis) from pre- to post-tests, and were there differences between cohorts on the content knowledge test in:

a. Biology 1430 (Functional Biology)?

b. Biology 1431 (Organismal Biology)?

3. Materials and Methods

The study used two samples to address students' performance on the entry-level biology content knowledge assessments. For clarity, the two samples are shown in Table 1.

3.1. Participants

The sample comprised students who were enrolled in one of the two entry-level biology courses (Functional Biology or Organismal Biology) in fall 2007 or spring 2008. Participation in this study was voluntary; however, five points of extra credit on the final lab were awarded to the biology students who volunteered to take the content knowledge pre- and post-tests. The sample included students enrolled in the fall 2007 semester (cohort one) and the spring 2008 semester (cohort two) in the two entry-level biology courses. There were 16 and 18 laboratory sections in Functional and Organismal Biology, respectively, across both semesters, with an average class size of approximately 18 students per laboratory section. Content knowledge assessment data from cohort one and cohort two were used to address research questions one and two. The total numbers of student participants in Functional Biology and Organismal Biology in each cohort are given in Table 2 and Table 3.

Table 2. Student Participants in the Fall 2007 Entry-Level Biology Courses

Table 3. Student Participants in the Spring 2008 Entry-Level Biology Courses

3.2. Procedures

This section briefly explains the data collection procedures and the instruments used, namely the content knowledge assessments for Functional and Organismal Biology. Students included in this study were enrolled in the entry-level courses. To address the research questions above, two sets of data were collected. The first set was collected from the student participants described in sample one. The testing instruments were administered during the laboratory sections of the entry-level biology courses in the first week of the fall and spring semesters. Student participants were given a scan-form answer sheet, the test questions, and a demographic survey with directions for its completion; the directions explained the study's purpose. The researcher read the directions aloud and explained to the participants what needed to be filled in on the scan form and the survey. Forty-five minutes were allocated for participants to complete all three materials, which were collected upon completion. The scan forms were scanned, and the resulting raw data files were converted to Excel files and later exported to SPSS. The demographic survey information was entered manually by the researcher into an Excel file and later exported to SPSS. The second set of data was collected from the student participants described in sample two; the pre- and post-test scores were entered manually by the researcher into Excel files and later exported to SPSS.
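The scoring pipeline just described (scan forms converted to Excel, then exported to SPSS) can be made concrete with a short script. The sketch below is illustrative only and is not part of the original study; the file name, column layout, and three-item answer key are hypothetical stand-ins, and pandas plays the role that Excel and SPSS played in the study.

    # Minimal scoring sketch (hypothetical file and column names).
    import pandas as pd

    # Hypothetical answer key: item id -> correct choice; the real tests
    # had 40 (Functional) and 50 (Organismal) multiple-choice items.
    KEY = {"q1": "B", "q2": "D", "q3": "A"}

    raw = pd.read_csv("scan_form_export.csv")  # one row per student, one column per item
    scored = pd.DataFrame({"student_id": raw["student_id"]})
    for item, correct in KEY.items():
        scored[item] = (raw[item] == correct).astype(int)  # 1 = correct, 0 = incorrect
    scored["total"] = scored[list(KEY)].sum(axis=1)

    # The study stored intermediate files in Excel before moving to SPSS;
    # writing Excel from pandas requires the openpyxl package.
    scored.to_excel("content_knowledge_scored.xlsx", index=False)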

Table 4. Assessment Questions for Organismal Biology Based on Concept Area vs. Cognitive Level as Described by Bloom’s Taxonomy of Higher Order Thinking Skills

Table 5. Assessment Questions for Functional Biology Based on Concept Area vs. Cognitive Level as Described by Bloom’s Taxonomy of Higher Order Thinking Skills

3.3. The Instruments

The instruments were the Functional and Organismal Biology content knowledge assessments developed by Hand (2002). During the development of these instruments, the biology content areas to be assessed were identified and cross-validated by course instructors as reflective of the curricula of the two courses. The initial versions contained 41 and 51 multiple-choice questions for Functional and Organismal Biology, respectively. Content knowledge areas were assessed using Bloom's taxonomy, and the course-specific content areas measured on the assessments are shown in Table 4 and Table 5. The content areas assessed in Organismal Biology were behavior, diversity, ecology, evolution, organismal biology, and scientific reasoning; those assessed in Functional Biology were animal cells; metabolism, energy, and enzymes; plants and photosynthesis; animal physiology; and genetics. The biology content questions came from validated test banks (Schrock, 1997; Educational Testing Service, 1994) covering the material addressed in the laboratory and lecture sections of both courses. Pre-validated questions were selected in order to avoid the validity issues involved in creating "internally designed" achievement tests (Hand, 2002). The final Functional Biology test contained 40 multiple-choice questions, each with four or five possible answers; similarly, the final Organismal Biology test contained 50 multiple-choice questions. Higher order thinking skills were assessed with items written at the three levels of application, analysis, and synthesis as defined by Bloom's taxonomy (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956). The instruments were field tested for the purposes of this study in cohort one in order to analyze content validity and identify aberrant responses. After the field test, only minor changes were incorporated, such as renumbering answer choices, deleting one question as suggested by a current faculty member, and adding two student success questions; otherwise the instruments retained all of the original design elements of the field-tested version (the content knowledge test questions are summarized in Table 4 and Table 5).
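Because the analysis splits each test into basic (knowledge, comprehension) and higher order (application, analysis, synthesis) item sets, the scoring sketch above can be extended with a mapping from items to Bloom levels. The mapping below is hypothetical; the actual classification of items is the one summarized in Table 4 and Table 5.

    # Continuation of the scoring sketch: derive the two subscale scores.
    BASIC = {"knowledge", "comprehension"}
    HIGHER = {"application", "analysis", "synthesis"}

    # Hypothetical item -> Bloom level mapping (the real one is in Tables 4-5).
    BLOOM_LEVEL = {"q1": "knowledge", "q2": "application", "q3": "synthesis"}

    basic_items = [q for q, lvl in BLOOM_LEVEL.items() if lvl in BASIC]
    higher_items = [q for q, lvl in BLOOM_LEVEL.items() if lvl in HIGHER]

    scored["basic"] = scored[basic_items].sum(axis=1)          # basic thinking skills score
    scored["higher_order"] = scored[higher_items].sum(axis=1)  # higher order thinking skills score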

3.4. Research Design

This study used a non-experimental exploratory research design, in which control of independent variables is not possible (Kerlinger & Lee, 2000). This design is common in educational research, where it is seldom possible to have full control over variables or to randomly select subjects (Sadler & Tai, 2000). Because this research did not attempt to establish cause-effect relationships, a non-experimental design was appropriate.

Data Analysis: A repeated-measures analysis of variance (ANOVA) was used to determine whether there were mean differences in the content knowledge assessment scores, with one within-group comparison (pre- and post-tests) and one between-groups comparison (cohorts one and two). The within-group comparison consisted of the content knowledge pre- to post-assessment. For the within-group comparison, the total assessment scores were analyzed first. Because there was a mean difference between the total pre- and post-test scores, each content knowledge assessment was then split into two parts: items measuring the basic thinking skills of knowledge and comprehension (Bloom et al., 1956) and items measuring the higher order thinking skills of application, analysis, and synthesis (Bloom et al., 1956). The between-groups comparison consisted of cohorts one and two. The dependent variables were the Functional Biology and Organismal Biology content knowledge assessment scores, categorized into three levels: total, basic thinking skills, and higher order thinking skills.

Before running the analyses, all assumptions of repeated-measures ANOVA (normality, independence, and homoscedasticity) were checked. The assumption of normality was tested using skewness and kurtosis values for each of the three levels; skewness between -1 and +1 and kurtosis between -2 and +2 indicate that the assumption of univariate normality is met. None of the levels exceeded these guidelines, indicating that the dependent variables were univariate normally distributed. The assumption of independence refers to the homogeneity of covariance matrices and was assessed using Box's M test; a non-significant Box's M indicates that the observed covariance matrices of the dependent variables are equal across groups (here, cohorts one and two). Box's M was not statistically significant for either Functional or Organismal Biology, indicating that this assumption was met. Levene's test of equality of error variances tested the null hypothesis that the error variances of the dependent variables (pre- and post-test scores) were equal between cohorts; this test was not statistically significant, indicating homogeneity of error variances. Because none of the assumptions of ANOVA were violated, the analyses were continued.
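The original analyses were run in SPSS. For readers who want to reproduce the general pipeline, the sketch below shows one way to run the same kind of mixed (one between-groups by one within-group) repeated-measures ANOVA and assumption checks in Python, assuming the scores have been reshaped to long format; the file and column names are hypothetical, and the pingouin and scipy packages are stand-ins for SPSS.

    # Mixed ANOVA sketch: within = pre/post, between = cohort (hypothetical data file).
    import pandas as pd
    import pingouin as pg
    from scipy import stats

    long_df = pd.read_csv("scores_long.csv")  # columns: student_id, cohort, time, total

    # Normality screen via skewness and kurtosis, mirroring the +/-1 and +/-2 guidelines.
    print("skewness:", stats.skew(long_df["total"]))
    print("kurtosis:", stats.kurtosis(long_df["total"]))

    # Levene's test of equal error variances between cohorts (shown here for the
    # pre-test; pingouin's box_m function covers Box's M on wide-format data).
    pre = long_df[long_df["time"] == "pre"]
    print(stats.levene(pre.loc[pre["cohort"] == 1, "total"],
                       pre.loc[pre["cohort"] == 2, "total"]))

    # Repeated-measures (mixed) ANOVA; the np2 column is partial eta squared.
    aov = pg.mixed_anova(dv="total", within="time", between="cohort",
                         subject="student_id", data=long_df)
    print(aov)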

Internal Validity: Internal validity means that observed differences on the dependent variable are directly related to the independent variable and not to some other unintended variable (Fraenkel & Wallen, 2003), such as a poorly constructed instrument. In the present study, one possible internal validity threat was changing the content knowledge assessment questions from one semester to the next; changes were therefore made only to improve reliability and were kept to a minimum. An implementation-related threat was the change of instructors from semester one to semester two; this was a Biology Department decision and was beyond the researcher's control. Another threat was a possible negative attitude of students toward taking the content knowledge tests; to minimize it, students were awarded an incentive of five extra points on their lab grade for participating in the pre- and post-tests.

Data Screening: Educational research does not take place in a controlled environment where all variables and outcomes are fully predictable. While working on the data analysis for this study, the researcher made several decisions in order to pursue the research, and these decisions constrained the conclusions that could be drawn from the results. First, students who did not complete all items on the demographic survey were eliminated from the analyses, and only students who participated in both the pre- and post-tests were included. All entered data were examined and checked for accuracy. The data were entered into SPSS (v. 15), and several crosstabs were run for coding verification. In addition, data screening was conducted by running a series of frequency distributions, and a thorough review of the collected data was conducted to check for incomplete information.
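The screening decisions above (listwise deletion of incomplete surveys, keeping only students with both test occasions, and crosstab and frequency checks) follow a standard pattern. A minimal sketch, again with hypothetical file and column names:

    # Data screening sketch (hypothetical merged survey + test file).
    import pandas as pd

    df = pd.read_csv("merged_records.csv")

    # Keep only students who completed every demographic item...
    df = df.dropna(subset=["gender", "ethnicity", "hs_gpa"])
    # ...and who took both the pre-test and the post-test.
    df = df.dropna(subset=["pre_total", "post_total"])

    # Coding verification, analogous to the SPSS crosstab and frequency runs.
    print(pd.crosstab(df["cohort"], df["course"]))
    print(df["gender"].value_counts(dropna=False))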

4. Results

First, two repeated-measures ANOVAs using the content knowledge assessment total scores as the dependent variable, with one within-group comparison (pre- to post-test) and one between-groups comparison (cohort one and cohort two), were conducted for the two courses, Functional and Organismal Biology. Not surprisingly, students scored higher on the content knowledge post-test than on the pre-test, and the difference was statistically significant for both courses (p < 0.05); there were also statistically significant differences in mean scores between cohorts (p < 0.05) (Table 6 and Table 7). An examination of the means (Table 8) shows that in Functional Biology cohort two scored higher than cohort one, while the opposite held in Organismal Biology, with cohort one scoring higher than cohort two. Effect sizes were 0.016 for both courses; this effect size is very small, so the cohort difference was not practically significant. The interaction between cohort and test was non-significant, meaning that both cohorts improved from pre-test to post-test at the same rate on the content knowledge test.

Table 6. Repeated Measures ANOVA Results with One within Group Comparison (pre- to post-test) and One Between Group Comparison (Cohort One and Cohort Two) Using Content Knowledge Assessment Total Score as the Dependent Variable (Functional Biology)

Table 7. Repeated Measures ANOVA Results with One within Group Comparison (pre- to post-test) and One Between Group Comparison (Cohort One and Cohort Two) Using Content Knowledge Assessment Total Score as the Dependent Variable (Organismal Biology)

Table 8. Content Knowledge Assessment Total Mean Scores and SD Pre-test and Post-test by Cohort

Table 9. Repeated Measures ANOVA Results with One within Group Comparison (pre- to post-test) and One Between Group Comparison (Cohort One and Cohort Two) Using Content Knowledge Assessment Basic and Higher Order Thinking Skills Scores as the Dependent Variables (Functional Biology)

Table 10. Repeated Measures ANOVA Results with One within Group Comparison (pre- to post-test) and One Between Group Comparison (Cohort One and Cohort Two) Using Content Knowledge Assessment Basic and Higher Order Thinking Skills Scores as the Dependent Variables (Organismal Biology)

Table 11. Content Knowledge Assessment Basic Thinking Skills Mean Scores and SD for Pre-test and Post-test by Cohort

Table 12. Content Knowledge Assessment Higher Order Thinking Skills Mean Scores and SD for Pre-test and Post-test by Cohort

Because the within-group comparison of the total score was statistically significant, four additional repeated-measures ANOVAs were performed using the content knowledge assessment items written to measure basic thinking skills and higher order thinking skills as dependent variables for both courses. The statistical interaction was non-significant at all levels except the higher order thinking skills level in Functional Biology, which suggests that in Functional Biology the rate at which scores improved from pre-test to post-test at the higher order thinking skills level was not the same for the two cohorts. Reliability (Cronbach's alpha) for the basic and higher order thinking skills questions was 0.61 and 0.54, respectively, for Functional Biology and 0.80 and 0.78 for Organismal Biology. The remaining results of the pre- and post-tests administered during the fall and spring semesters in the Functional and Organismal Biology courses are shown in Table 9-Table 12.
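The reliability values reported here are Cronbach's alpha, which can be computed directly from the students-by-items matrix of 0/1 scores as alpha = k/(k-1) x (1 - sum of item variances / variance of total scores), where k is the number of items. A minimal sketch (pingouin's cronbach_alpha function gives the same statistic):

    # Cronbach's alpha from a students-by-items matrix of 0/1 scores.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: rows = students, columns = dichotomously scored (0/1) items."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)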

These analyses address research questions one and two as stated above. The results indicated that students' scores on the content knowledge tests increased from pre- to post-tests. In addition, students were able to answer higher order thinking skills questions on the post-test.

5. Discussion

One would expect post-test scores to be higher in both entry-level biology courses, since students will eventually learn the given content over the course of the semester. As expected, Hand (2002) found a significant increase from pre- to post-tests at both the basic and higher levels in both entry-level courses. The present study, conducted five years later with a completely different student population and in a different year of the curriculum implementation effort, showed that the measured change did not depend on the particular sample involved: it likewise found a significant increase from pre- to post-tests at both the basic and higher levels in both entry-level courses. Similar findings were reported by Lord and Rauscher (1991). There is an urgent need to focus on curriculum development, curriculum reform, improving students' achievement, opportunities to learn, and changing pedagogical practices (Webb, 1997a, 1997b). Furthermore, Clune and Webb (1997) suggest that once a curriculum is designed and implemented, the curriculum, students' performance, and their evaluation need to be documented. When "The Biology 2000 Curriculum" was developed at the institution, one of the objectives was to increase the number of students choosing to become biology majors. It has been eight years since the new curriculum was implemented. Following its implementation, Hand (2002) conducted assessments of biology content knowledge in 2000-2002, and the present study used the same content knowledge instruments as part of a follow-up assessment of "The Biology 2000 Curriculum." The rationale behind administering the content knowledge assessments was to measure students' pre-college readiness, their basic biology content knowledge and higher order thinking skills in the entry-level biology courses, and how their learning evolved from the beginning to the end of the semester. The investigators used a 12-item multiple-choice biology assessment along with a demographic survey to assess the difference in scores; the results indicated a significant difference in pre- to post-test scores, which may be due to the influence of high school biology background. Cronbach's alpha measures the reliability of an instrument, and 0.70 is the generally accepted value in social science research. For Functional Biology, the reliability for both the basic and higher order thinking skills items was lower than this level, indicating that the Functional Biology instrument was not a reliable measure of students' cognitive gain. For Organismal Biology, however, the reliability was greater than 0.70 for both the basic and higher order thinking skills questions. Additionally, students' performance on the content knowledge assessments may have been influenced by demographic factors (gender and ethnicity) and pre-collegiate academic factors such as high school GPA and completion of a high school AP chemistry course. The present findings apply to entry-level science courses as they are currently taught in college, whether as prerequisites for STEM or non-STEM majors, as required curriculum, or as courses for interested non-science majors.

6. Conclusions

Freshman biology majors struggle to complete entry-level biology courses successfully. Since the post-test results indicated that students can perform well by the end of the semester, early screening of biology or STEM majors is recommended so that additional academic supports can be provided at the beginning of the semester, such as advising students to form study groups, encouraging them to attend Supplemental Instruction (SI) sessions, and directing them to the Student Learning and Assessment Center (SLAC), so that they do not fall behind.

References

[1]  American Council on Education. (1996). Remedial education: An undergraduate student profile. Washington, DC: Author.

[2]  Beyer, L. E., & Liston, D. P. (1996). Curriculum in conflict: Social visions, educational agendas, and progressive school reform. New York: Teachers College Press.

[3]  Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. New York: McKay.

[4]  Clune, W. H., & Webb, N. L. (1997). An introduction to the papers and think piece themes. In W. H. Clune, S. B. Millar, S. A. Raizen, N. L. Webb, D. C. Bowcock, E. D. Britton, R. L. Gunter, & R. Mesquita, Research on systemic reform: What have we learned? What do we need to know? (Workshop Report No. 4, Vol. 1, pp. 6-12). Madison: University of Wisconsin-Madison, National Institute for Science Education.

[5]  Ewell, P. T. (2002). Grading student learning: You have to start somewhere. Retrieved May 1, 2009 from http://measuringup.highereducation.org/2002/articles/peterewell.htm.

[6]  Fraenkel, J. R., & Wallen, N. E. (2003). How to design and evaluate research in education. New York: McGraw-Hill.

[7]  Gainen, J. (1995). Barriers to success in quantitative gatekeeper courses. In J. Gainen & E. W. Willemsen (Eds.), Fostering student success in quantitative gateway courses (New Directions for Teaching and Learning, No. 61, pp. 5-14). San Francisco: Jossey-Bass.

[8]  Goodman, B. E., Koster, K. L., & Redinius, P. L. (2005). Comparing biology majors from large lecture classes with TA-facilitated laboratories to those from small lecture classes with faculty-facilitated laboratories. Advances in Physiology Education, 29, 112-117.

[9]  Gungor, A., Eryilmaz, A., & Fakioglu, T. (2007). The relationship of freshmen's physics achievement and their related affective characteristics. Journal of Research in Science Teaching, 44, 1036-1056.

[10]  Hand, J. E. (2002). Assessment of course specific content area knowledge, writing skills, and higher order thinking skills of students participating in an entry-level biology majors' course. Unpublished master's thesis, Texas State University-San Marcos, San Marcos, TX.

[11]  Kahveci, A., Southerland, S. A., & Gilmer, P. J. (2006). Retaining undergraduate women in science, mathematics, and engineering. Journal of College Science Teaching, 36, 34-38.

[12]  Kerlinger, F. N., & Lee, H. B. (2000). Foundations of behavioral research. Orlando, FL: Harcourt College Publishers.

[13]  Kulik, C.-L., Kulik, J., & Schwalb, B. (1983). College programs for high-risk and disadvantaged students: A meta-analysis of findings. Review of Educational Research, 53, 397-414.

[14]  Lord, T. R., & Rauscher, C. (1991). A sampling of basic life science literacy in a college population. The American Biology Teacher, 53, 419-424.

[15]  Libarkin, J. C., & Anderson, S. W. (2005). Assessment of learning in entry-level geoscience courses: Results from the Geoscience Concept Inventory. Journal of Geoscience Education, 53, 394.

[16]  National Research Council (NRC). (1996). National science education standards. Washington, DC: National Academy Press.

[17]  Pascarella, E. T., & Terenzini, P. T. (1991). Predicting freshman persistence and voluntary dropout decisions from a theoretical model. The Journal of Higher Education, 51, 60-75.

[18]  Robbins, S. B., Lauver, K., Le, H., Davis, D., Langley, R., & Carlstrom, A. (2004). Do psychosocial and study skills factors predict college outcomes? A meta-analysis. Psychological Bulletin, 130, 261-303.

[19]  Sadler, P. M., & Tai, R. H. (2000). Success in introductory college physics: The role of high school preparation. Science Education, 85, 111-136.

[20]  Schrock, J. R. (Ed.). (1997). Instructor's manual and test item file for Sylvia S. Mader, Inquiry into life (non-majors' text). Dubuque, IA: Wm. C. Brown Publishers.

[21]  Educational Testing Service. (1994). AP biology: Free-response scoring guide with multiple-choice section. New York: The College Board.

[22]  Selingo, J. (2003). What Americans think about higher education. The Chronicle of Higher Education, pp. A10-A17.

[23]  Seymour, E., & Hewitt, N. M. (1997). Talking about leaving: Why undergraduates leave the sciences. Boulder, CO: Westview Press.

[24]  Tai, R. H., Sadler, P. M., & Loehr, J. F. (2005). Factors influencing success in introductory college chemistry. Journal of Research in Science Teaching, 42, 987-1012.

[25]  Webb, N. L. (1997a). Criteria for alignment of expectations and assessments in mathematics and science education (Research Monograph No. 6). Madison: University of Wisconsin-Madison, National Institute for Science Education.

[26]  Webb, N. L. (1997b). Determining alignment of expectations and assessments in mathematics and science education (NISE Brief Vol. 1, No. 2). Madison: University of Wisconsin-Madison, National Institute for Science Education.

[27]  Wood, W. B., & Gentile, J. M. (2003). Teaching in a research context. Science, 302, 1510.
 