Research Article
Open Access Peer-reviewed

Perceived AI – Tools Proficiency and Skill among Higher Education Science Major Students

Rene K. Abejaron Jr
American Journal of Educational Research. 2025, 13(9), 438-444. DOI: 10.12691/education-13-9-4
Received September 01, 2025; Revised October 01, 2025; Accepted October 08, 2025

Abstract

This study examined undergraduate science majors’ perceptions of AI tools (including ChatGPT), their own AI proficiency, and instructors’ AI proficiency at St. Rita’s College of Balingasag during the 2024–2025 academic year. Data were collected using a 30-item Synthetic Index of Use of Artificial Intelligence Tools (pilot Cronbach’s α = .96–.98) and summarized with descriptive statistics. Students perceived AI tools as effective for comprehension (M = 3.19), problem solving (M = 3.16), and productivity (M = 3.06), but less effective for creativity (M = 2.80) and broader educational enhancement (M = 2.87). Self-rated proficiency was proficient for prompt formulation (M = 3.03) and basic problem solving (M = 2.87), but developing for advanced features (≈ M = 2.68) and higher-order tasks such as evaluating AI outputs (M = 2.74) and applying AI feedback (M = 2.80). Students also perceived instructors’ integration of AI tools as limited. These findings indicate practical benefits alongside gaps in creativity, advanced student skills, and faculty implementation. Recommendations include scaffolded, discipline-specific AI training, sustained faculty development, and clear ethical use policies.

1. Introduction

Over the past three years, universities have rapidly adopted Artificial Intelligence (AI) tools, driven by improvements in model performance and growing expectations for digital fluency in higher education (e.g., intelligent tutoring systems, writing assistants) 1, 2. Students commonly report benefits such as faster information retrieval, scaffolded feedback, and support for problem solving, while educators raise concerns about academic integrity, output accuracy, and appropriate pedagogical integration 3. Among consumer-facing tools, conversational agents such as ChatGPT have drawn particular attention for their accessibility and productivity gains, as well as for challenges related to hallucinations and misuse 3, 4, 5. Despite the emerging literature on AI in higher education, there remains a specific gap in empirical knowledge about science-major undergraduates’ self-reported proficiency with AI tools and their perceptions of instructors’ AI use 5. The present descriptive study therefore focuses on five empirically distinct dimensions—(a) perceived effectiveness of AI tools, (b) perceived effectiveness of ChatGPT, (c) student proficiency with AI tools, (d) instructor proficiency, and (e) advanced student AI skills—among science majors at a single private Philippine college.

Effectiveness of AI Tools

The effectiveness of AI tools in higher education demonstrates their potential to enhance personalized learning and improve academic performance. AI-based analytics tools have been shown to increase student grades by an average of 15.03% 4. These tools support personalized learning, improve assessment processes, and enhance feedback mechanisms 5. AI technologies also promote collaborative learning environments by providing peer-learning opportunities and enhancing learner-content interaction 6. Students value AI tools for simplifying complex content, improving writing skills, and supporting personalized learning. However, challenges persist, including limited accessibility, concerns about data privacy, and potential risks to independent thinking. To maximize the benefits of AI in higher education, institutions must invest in infrastructure, provide ongoing professional development, and ensure thoughtful integration of AI tools into curricula 5.

Effectiveness of ChatGPT

Studies report that ChatGPT can boost interactive participation, inform teaching strategies, and improve learning outcomes 7. It has also been applied to new assessment approaches, classroom support, research assistance, and even streamlining administrative tasks 8. At the same time, educators and researchers raise real concerns: risks to academic integrity, data-privacy questions, possible over-reliance on the tool, and the chance of passing along inaccurate information 8. Critics also point out limitations, noting that ChatGPT lacks emotional understanding and may reduce opportunities for social interaction in learning environments 9.

To get the most out of it, institutions should use ChatGPT alongside other teaching methods, set clear guidelines for its use, and train staff and students in best practices. In short, while ChatGPT presents significant opportunities for enhancing learning and decision-making in higher education, its limitations and ethical implications require careful attention.

Student Proficiency in AI Tools

Students report that AI tools improve comprehension, stimulate creativity, and increase productivity 10. Students describe a variety of uses—many note clear productivity benefits, yet some raise concerns about academic integrity and over-reliance 11. In particular, intelligent tutoring systems appear to enhance engagement, motivation, and problem-solving skills 12. Students also report that AI tools streamline their research workflows—automating literature searches, summarizing key findings, and checking for errors—which frees up much-needed cognitive capacity for higher-order analysis and problem solving 13. Moreover, increased AI self-efficacy leads students to explore advanced features—such as crafting precise prompts—which in turn fosters autonomous learning strategies and deeper engagement with course material.

Teacher Proficiency in AI Tools

Recent studies highlight the importance of teachers' proficiency in using artificial intelligence (AI) in education. While teachers generally have positive attitudes towards AI tools despite limited knowledge 14, their self-efficacy in using AI technologies correlates with higher levels of professional satisfaction and happiness 15. However, teachers' AI-TPACK competencies are below average, although their digital proficiency is above average 16. To effectively engage with the AI era, educators must possess professional, pedagogical, personal, and social competencies, as well as proficiency in digital technology for instructional purposes 17. Factors such as academic level, gender, age, and teaching experience can influence teachers' AI proficiency and perceptions. To address these challenges, it is crucial to provide support and training programs to enhance teachers' digital competencies and AI-related skills.

Advanced Student Skills in AI

The rapid advancement of AI is reshaping the skills required for students to succeed in modern society and the workplace. While AI can perform many tasks traditionally done by humans, it also creates new demands for complementary human skills. Universities need to adapt their curricula to bridge the gap between academic training and industry requirements in the AI sector 18. Key skills that students should develop include editing, AI literacy, critical thinking, creativity, communication, and collaboration. Additionally, there is a growing emphasis on soft skills, as AI technologies are altering expectations for students' social competencies. To address these challenges, educational institutions should focus on fostering human-centric skills that enhance AI capabilities, implement new pedagogical approaches, and utilize technologies that promote human-AI collaboration.

The study assessed perceived AI-tools proficiency and skill among higher-education science majors by examining five dimensions: perceived effectiveness of AI tools, perceived effectiveness of ChatGPT, student proficiency with AI tools, instructor proficiency, and advanced student AI skills. The study describes how undergraduate science majors perceive their own AI competencies and the pedagogical use of AI by instructors; it is guided by the research questions listed below.

1. How do science-major students perceive the overall effectiveness of AI tools in enhancing comprehension, creativity, and productivity within science coursework?

2. How do science-major students perceive their proficiency in using AI tools in terms of ease of use, mastery of features, and frequency of use?

3. How do science-major students perceive ChatGPT’s utility for scientific communication and for improving the quality of academic writing?

4. How do science-major students perceive their advanced skills in using AI tools for tasks such as editing self-written text, evaluating AI-generated content, and interpreting AI-based feedback?

5. How do science-major students perceive their instructors’ skill level in integrating AI tools into instructional delivery?

By centering students’ self-reported perceptions, this study addresses a critical gap in the literature: how science majors evaluate both their own and their instructors’ proficiency in using AI tools for academic purposes 8. Rather than drawing causal conclusions, the study aims to offer insights into whether and how the perceived benefits and challenges of AI use—such as issues related to academic integrity, critical thinking, and equitable access 8, 9—are experienced within science education contexts.

2. Methodology

2.1. Research Design

The study followed a descriptive research design, which allows researchers to study and describe the distribution of variables without regard to causal or other hypotheses 19. Here, the design was used to measure the perceived proficiency and skills of higher education science-major students.

2.2. Setting and Participants

The study was conducted at St. Rita’s College of Balingasag, Inc., a private higher education institution in Balingasag, Misamis Oriental, Philippines. The participants were science-major students enrolled in higher education during the 2024–2025 academic year.

2.3. Research Instrument

The survey instrument of the study was adopted from the Synthetic Index of Use of Artificial Intelligence Tools (SIUAIT) of Grájeda et al. (2024). The SIUAIT used in this study comprises 30 items organized into five theoretically motivated domains: Effectiveness of AI Tools; Effectiveness of ChatGPT; Student Proficiency; Teacher Proficiency; and Advanced Student AI Skills. The instrument was administered with a 4-point agreement scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Agree, 4 = Strongly Agree. Item scores therefore represent respondents’ perceived agreement with statements concerning usefulness, self-perceived competence, or observed instructor skill level. The instrument’s reliability is underscored by Cronbach’s alpha values ranging from 0.96 to 0.98 during the pilot phase, indicative of strong internal consistency. The instrument assessed perceived AI proficiency among higher education science-major students currently enrolled at St. Rita’s College of Balingasag.
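As context for the reliability figures above, Cronbach’s alpha follows from a short computation. The sketch below is purely illustrative: the function name is ours, the toy responses are invented, and the study’s actual data are not reproduced here.

```python
# Illustrative only: Cronbach's alpha for a set of Likert items, using the
# standard formula alpha = (k/(k-1)) * (1 - sum(item variances)/var(totals)).
# The toy responses below are hypothetical, not the study's data.

def cronbach_alpha(items):
    """items: list of k per-item response lists, each of length n."""
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(sample_var(item) for item in items)
    totals = [sum(item[j] for item in items) for j in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / sample_var(totals))

# Three items answered identically by four respondents yield alpha = 1.0;
# noisier response patterns yield lower values.
parallel = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
```

Values near the study’s reported 0.96–0.98 would indicate that the 30 items behave as a highly internally consistent scale.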

2.4. Scoring Procedure

Items were coded on a 1–4 scale; negatively worded items were reverse-coded (new = 5 − old). Domain scores were calculated as the mean of their constituent items, and the overall composite score as the mean of all 30 items.

For interpretation, mean scores were mapped to construct-specific bands:

Effectiveness domains: 1.00–1.75 = Very Ineffective; 1.76–2.50 = Ineffective; 2.51–3.25 = Effective; 3.26–4.00 = Very Effective.

Skills/proficiency domains: 1.00–1.75 = Novice; 1.76–2.50 = Developing; 2.51–3.25 = Proficient; 3.26–4.00 = Advanced.

Agreement domains: 1.00–1.75 = Strongly Disagree; 1.76–2.50 = Disagree; 2.51–3.25 = Agree; 3.26–4.00 = Strongly Agree.
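The reverse-coding and banding rules above can be sketched in a few lines. This is an illustrative reimplementation under the stated 1–4 scale and the skills/proficiency bands, not the study’s actual analysis code; the function names are ours.

```python
# Sketch of the scoring procedure described above (1-4 scale, skills bands).
# Function names and example values are illustrative, not from the study.

def reverse_code(score):
    """Reverse-code a negatively worded item: new = 5 - old on a 1-4 scale."""
    return 5 - score

def domain_mean(item_scores):
    """Domain score = mean of its constituent item scores."""
    return sum(item_scores) / len(item_scores)

# Skills/proficiency interpretation bands, as defined in the text.
SKILL_BANDS = [(1.75, "Novice"), (2.50, "Developing"),
               (3.25, "Proficient"), (4.00, "Advanced")]

def band(mean_score, bands=SKILL_BANDS):
    """Map a 1.00-4.00 mean score onto its interpretation label."""
    for upper, label in bands:
        if mean_score <= upper:
            return label
    raise ValueError("mean score outside the 1.00-4.00 range")
```

For example, a domain mean of 2.9 falls in the 2.51–3.25 interval and is labeled Proficient, while a negatively worded response of 1 reverse-codes to 4; the Effectiveness and Agreement domains would use the same mechanism with their own labels.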

2.5. Data Gathering Procedure

Following institutional ethical clearance, eligible students were invited via the course LMS to complete an online survey. Each invitation included the study purpose, confidentiality assurances, and a secure Google Forms link. Respondents reviewed and acknowledged a Confidentiality Statement before proceeding. Section B collected demographic information (age, gender, year level); Section C contained 30 SIUAIT items across five domains (I–V). Data collection was open for a two-day period and two reminders were issued at ~12-hour intervals. Responses were recorded in Google Forms, exported to a secure file, and anonymized prior to analysis.

2.6. Statistical Analysis

Section B’s demographic items were summarized using frequency counts and percentages to document the distribution of respondents by year level and prior AI experience. Section C’s 30 SIUAIT statements were described by reporting the proportion of students selecting each scale point.

Item-level means were computed for each statement (e.g., “The AI tools used in this subject increased my productivity” and “The AI tools used in this subject were useful for the evaluation of my knowledge”) to quantify the average level of agreement. Dimension-level scores were then derived by averaging the means of the items within each domain—namely:

a) Effectiveness of AI Tools

b) Effectiveness of ChatGPT

c) Student Proficiency

d) Teacher Proficiency

e) Advanced Student AI Skills
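The descriptive summaries in this section—frequency counts, percentages, item means, and dimension means—reduce to a few lines of arithmetic. The sketch below uses hypothetical responses and our own function names; it is not the study’s analysis script.

```python
# Illustrative sketch of the descriptive statistics described above.
# Responses are hypothetical; the study's data are not reproduced here.
from collections import Counter

def frequencies(responses):
    """Count and percentage of respondents choosing each scale point (1-4)."""
    counts = Counter(responses)
    n = len(responses)
    return {p: (counts.get(p, 0), 100.0 * counts.get(p, 0) / n)
            for p in (1, 2, 3, 4)}

def item_mean(responses):
    """Item-level mean: average agreement with one statement."""
    return sum(responses) / len(responses)

def dimension_mean(item_means):
    """Dimension-level score: average of its constituent items' means."""
    return sum(item_means) / len(item_means)
```

With five hypothetical answers [3, 3, 4, 2, 3], for instance, 60% of respondents chose “Agree” and the item mean is 3.0; averaging several such item means yields a dimension score.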

3. Results and Discussion

3.1. How Do Science-Major Students Perceive the Overall Effectiveness of AI Tools in Enhancing Their Comprehension, Creativity, and Productivity within Science Coursework?

Students perceived AI tools as effective for comprehension (M = 3.19), consistent with studies showing that interactive AI simulations improve biology concept retention by 18% compared with textbooks 10. AI-guided modules have also been developed to foster AI literacy from kindergarten through university, addressing core topics such as problem solving and data structures 20. For problem-solving efficiency (M = 3.16), students rated AI tools as effective, aligning with evidence that AI support reduces chemistry calculation errors by 22%, thereby freeing cognitive resources for deeper analysis 21. Productivity gains were likewise rated effective (M = 3.06), echoing findings that AI can enhance learning experiences, facilitate complex visualizations, and enable personalized instruction in biology education 22. However, creativity received a lower mean rating (M = 2.80), suggesting that students perceive AI as less effective in supporting open-ended or divergent thinking. Similarly, educational enhancement (M = 2.87) and willingness to recommend AI (M = 2.96) were rated closer to the mid-range, indicating more cautious perceptions of AI’s broader innovative and holistic benefits. Overall, while students recognized AI as effective in comprehension, problem solving, and productivity, the findings suggest that integrating open-ended, AI-enhanced projects (e.g., student-designed experiments supported by AI hypothesis generation) may strengthen perceptions of creativity and educational innovation.

3.2. How Do Science-Major Students Perceive Their Proficiency in Using AI Tools—in Terms of Ease of Use, Mastery of Features, and Frequency of Use?

Science-major undergraduates reported themselves as proficient in basic AI tasks, particularly in prompt formulation (M = 3.03) and problem solving (M = 2.87), consistent with research showing that ease of use and applied knowledge support performance and confidence. In contrast, ratings were lower for advanced features (M = 2.68), indicating that students remain at the developing stage for complex skills such as prompt refinement. Although overall AI use was rated near the proficient band (M = 2.93), students do not yet view AI as a routine component of coursework. These results highlight a “comfort-to-competence” gap: students are comfortable with foundational tasks but need structured, discipline-specific opportunities to progress. Scaffolded integration—through progressively challenging activities such as data-analysis labs, API applications, and prompt optimization—may foster habitual use and support their transition from basic familiarity to advanced proficiency.

3.3. What Are the Perceptions of Science-Major Students on ChatGPT’s Utility in Scientific Communication and Improving Academic Writing Quality?

Science-major students generally perceived AI tools—particularly ChatGPT—as effective and easy to use. They reported few difficulties in formulating questions (M = 2.83, Effective) and rated AI tools as straightforward for prompt creation (M = 3.03, Effective), aligning with research that links intuitive interfaces to greater efficiency in academic tasks 11. AI was also rated as effective for problem solving (M = 2.87), consistent with evidence that AI tutoring systems reduce barriers to complex concepts through guided, adaptive feedback 6.

Students indicated moderate frequency of use (M = 2.93, Proficient band), a level associated with increased self-efficacy and independent learning 12. However, perceived mastery of career-related AI tools (M = 2.67, Developing) and ChatGPT specifically (M = 2.70, Developing) suggests that while students are comfortable with routine applications, they are still building proficiency in advanced, career-focused contexts. This aligns with studies showing that AI literacy fosters deeper academic engagement but underscores the need for further skill-building opportunities 13.

3.4. How Do Science‐Major Students Perceive their Advanced Skills in Using AI Tools for Tasks Such As Editing Self‐Written Text, Evaluating AI‐Generated Content, and Interpreting AI‐Based Feedback?

Students rated themselves as proficient in using AI tools to improve their own writing (M = 3.09, Proficient). AI-assisted revision features—grammar checking, style suggestions, and structural reorganization—help learners iterate more efficiently, reducing cognitive load and enabling greater focus on scientific argumentation rather than lower-order mechanics 23.

For evaluating the correctness and relevance of AI-generated responses, students rated their ability at the developing level (M = 2.74, Developing). Regularly critiquing AI outputs cultivates sharper metacognitive skills—questioning assumptions, cross-checking with primary literature, and detecting subtle inaccuracies—which is crucial if AI is to be used as an augmentation rather than a substitute for disciplinary expertise 24.

Similarly, interpreting and acting upon AI feedback was rated in the developing range (M = 2.80, Developing). The pedagogical value of AI lies not only in generating automated hints but in students’ ability to internalize and apply corrective suggestions to novel problems. In scientific contexts, interpreting AI feedback—whether for balancing chemical equations or refining experimental designs—reinforces conceptual understanding and models best practices in scientific reasoning.

3.5. How do Science-Major Students Perceive their Instructors' Skill Level in Applying AI Tools for Instructional Delivery?

Science-major students reported varying levels of proficiency in advanced AI competencies. They rated themselves as proficient in improving a text they wrote (M = 3.09) and in expanding or synthesizing complex material (M = 3.12). By contrast, their ability to evaluate the correctness and relevance of AI-generated content (M = 2.74) and to interpret and act upon AI-based feedback (M = 2.80) fell within the developing range.

Students described AI-assisted editing tools as valuable for refining clarity, cohesion, and discipline-specific style, allowing them to focus on higher-order scientific reasoning rather than sentence-level corrections 25. Routinely critiquing AI outputs was also associated with stronger metacognitive strategies, as students cross-referenced suggestions with primary literature and interrogated logical consistency—an approach that positions them as active validators rather than passive recipients of AI assistance 21. Finally, interpreting step-by-step AI feedback on experimental designs or problem-solving processes reinforced conceptual understanding and modeled the iterative, evidence-based reasoning central to scientific inquiry 25.

Overall, these findings suggest that while students feel proficient in basic and text-focused applications of AI, their more advanced evaluative and feedback-integration skills remain at a developing level, highlighting a partial proficiency gap.

4. Educational Implications

Overall, the finding that students demonstrate comfort with basic AI tasks but limited depth, while instructors underuse AI pedagogically, carries several concrete educational implications. Curriculum evaluators should intentionally scaffold AI competencies across program years, moving students from simple prompt use to discipline-specific, advanced tasks that require evaluation and model literacy 11. AI offers opportunities for improved educational effectiveness through intelligent tutoring systems, chatbots, and adaptive learning platforms 26. It can equip graduates with new skills for future careers and revolutionize assessment methods 27. However, challenges include ethical concerns, data privacy issues, and potential over-reliance on technology. The digital divide and institutional readiness are also significant concerns. To maximize benefits and minimize risks, higher education institutions need to integrate AI more extensively into their programs while considering ethical implications. Balancing technological advancements with human values is crucial to ensure equitable and inclusive education 11. Responsible AI adoption is essential for creating an inclusive and sustainable educational environment.

5. Conclusion

This study examined higher-education science majors’ perceptions of AI tools’ effectiveness and the proficiency of both students and instructors. Students generally viewed AI as useful for comprehension, problem-solving, and productivity, and reported moderate ease with basic tasks such as prompt formulation and higher-order text work. However, perceptions were weaker regarding AI’s role in fostering creativity, mastery of advanced features, and consistent classroom integration, particularly by instructors. Overall, AI is recognized as a valuable pedagogical aid but remains underutilized as a disciplined, creative, and equitable element of teaching and learning.

As shown in related online distance learning research, where motivation and mental well-being emerged as the most sensitive factors influencing academic performance during the pandemic, the success of AI integration likewise depends not only on technical proficiency but also on addressing learner-centered factors and ensuring supportive teaching environments. Given these findings, the study recommends embedding scaffolded AI tasks across science curricula, offering regular faculty development on pedagogical applications of AI, and designing assignments that require advanced AI functions with documented validation steps. Institutions should also ensure equitable access to AI resources, establish clear discipline-specific policies and ethical guidelines for AI use, and conduct further multi-site, mixed-method research to identify strategies that move students from basic competence to advanced, creative, and evaluative use of AI. Caution is warranted in interpretation due to the study’s single-site, self-report, and descriptive design.

Limitations

This study has several limitations. The instrument employed a 4-point agreement scale (Strongly Disagree–Strongly Agree), which effectively captured students’ perceptions of effectiveness and proficiency but did not directly measure actual skill. Thus, ratings of “proficiency” reflect self-reported competence rather than demonstrated performance. In addition, the interpretation of scores relied on construct-specific bands (e.g., Effective, Proficient, Agree). While this approach increases clarity, mapping perceptions onto skill categories may blur distinctions between comfort, confidence, and competence; findings should therefore be understood as perceptions rather than validated indicators of expertise. The exclusive use of self-report measures is another limitation, as responses are vulnerable to social desirability bias and subjective confidence levels, which may lead students to over- or underestimate their actual AI abilities. Furthermore, the study was conducted at a single private college in the Philippines with science majors, limiting generalizability to other institutions, disciplines, or cultural contexts. Finally, while the findings were contextualized with relevant literature, longitudinal and experimental designs would be necessary to establish how proficiency and perceptions of effectiveness evolve over time and to test whether structured interventions can close the observed “comfort-to-competence” gap.

References

[1] Freeman, J. (2025b, July 28). Student Generative AI Survey 2025. HEPI. https://www.hepi.ac.uk/2025/02/26/student-generative-ai-survey-2025/.

[2] Salamin, A. D., Russo, D., & Rueger, D. (2023). ChatGPT, an excellent liar: How conversational agent hallucinations impact learning and teaching. In Proceedings of the 7th International Conference on Teaching, Learning and Education.

[3] Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-Navarro, D. E., Thierry-Aguilera, R., & Gerardou, F. S. (2023). Challenges and opportunities of generative AI for higher education as explained by ChatGPT. Education Sciences, 13(9), 856.

[4] Alifah, N., & Hidayat, A. R. (2025). Effectiveness of artificial intelligence-based learning analytics tool in supporting personalized learning in higher education. Jurnal Pendidikan Progresif, 15(1), 74-84.

[5] Matere, A. (2024). Effectiveness of artificial intelligence tools in teaching and learning in higher education institutions in Kenya. Journal of the Kenya National Commission for UNESCO, 5(1).

[6] Msambwa, M. M., Wen, Z., & Daniel, K. (2025). The impact of AI on the personal and collaborative learning environments in higher education. European Journal of Education, 60(1), e12909.

[7] Al-Moghrabi, K. G., & Al-Ghonmein, A. M. (2024). The role of chat generative pre-trained transformer in facilitating decision-making and the e-learning process in higher education. Bulletin of Electrical Engineering and Informatics, 13(3), 2058-2066.

[8] Sok, S., & Heng, K. (2024). Opportunities, challenges, and strategies for using ChatGPT in higher education: A literature review. Journal of Digital Educational Technology, 4(1), ep2401.

[9] Chukwuere, J. E. (2024). The use of ChatGPT in higher education: The advantages and disadvantages. arXiv preprint arXiv:2403.19245.

[10] Grájeda, A., Burgos, J., Córdova, P., & Sanjinés, A. (2024). Assessing student-perceived impact of using artificial intelligence tools: Construction of a synthetic index of application in higher education. Cogent Education, 11(1), 2287917.

[11] Zhou, X., Zhang, J., & Chan, C. (2024). Unveiling students' experiences and perceptions of artificial intelligence usage in higher education. Journal of University Teaching and Learning Practice, 21(6), 126-145.

[12] Sain, Z. H., Lawal, U. S., Thelma, C. C., & Aziz, A. L. (2024). Exploring the role of artificial intelligence in enhancing student motivation and cognitive development in higher education. TechComp Innovations: Journal of Computer Science and Technology, 1(2), 59-67.

[13] Bećirović, S., Polz, E., & Tinkel, I. (2025). Exploring students’ AI literacy and its effects on their AI output quality, self-efficacy, and academic performance. Smart Learning Environments, 12(1), 29.

[14] Fakhar, H., Lamrabet, M., Echantoufi, N., El Khattabi, K., & Ajana, L. (2024). Artificial intelligence from teachers’ perspectives and understanding: Moroccan study. International Journal of Information and Education Technology, 14(6), 856-864.

[15] Fidan, M. Teaching profession in the age of artificial intelligence: The happy teacher and self-efficacy. European Journal of Managerial Research (EUJMR), 9(16), 23-41.

[16] Hava, K., & Babayiğit, Ö. (2025). Exploring the relationship between teachers’ competencies in AI-TPACK and digital proficiency. Education and Information Technologies, 30(3), 3491-3508.

[17] Muttaqin, I. (2022). Necessary to increase teacher competency in facing the artificial intelligence era. Al-Hayat: Journal of Islamic Education, 6(2), 549-559.

[18] Jaiswal, K., Kuzminykh, I., & Modgil, S. (2025). Understanding the skills gap between higher education and industry in the UK in artificial intelligence sector. Industry and Higher Education, 39(2), 234-246.

[19] Aggarwal, R., & Ranganathan, P. (2019). Study designs: Part 2–descriptive studies. Perspectives in Clinical Research, 10(1), 34-36.

[20] Kandlhofer, M., Steinbauer, G., Hirschmugl-Gaisch, S., & Huber, P. (2016, October). Artificial intelligence and computer science in education: From kindergarten to university. In 2016 IEEE Frontiers in Education Conference (FIE) (pp. 1-9). IEEE.

[21] Nguyen, A., Hong, Y., Dang, B., & Huang, X. (2024). Human-AI collaboration patterns in AI-assisted academic writing. Studies in Higher Education, 49(5), 847-864.

[22] Rahioui, F., Jouti, M. A. T., & El Ghzaoui, M. (2024). Exploring complex biological processes through artificial intelligence. Journal of Educators Online, 21(2), n2.

[23] Pedro, F., Subosa, M., Rivas, A., & Valverde, P. (2019). Artificial intelligence in education: Challenges and opportunities for sustainable development.

[24] Corbeil, J. R., & Corbeil, M. E. (2025). Teaching and Learning in the Age of Generative AI.

[25] Tan, L. Y., Hu, S., Yeo, D. J., & Cheong, K. H. (2025). Artificial intelligence-enabled adaptive learning platforms: A review. Computers and Education: Artificial Intelligence, 100429.

[26] Ryzheva, N., Nefodov, D., Romanyuk, S., Marynchenko, H., & Kudla, M. (2024). Artificial intelligence in higher education: Opportunities and challenges. Amazonia Investiga, 13(73), 284-296.

[27] Zouhaier, S. (2023). The impact of artificial intelligence on higher education: An empirical study. European Journal of Educational Sciences, 10(1), 17-33.

Published with license by Science and Education Publishing, Copyright © 2025 Rene K. Abejaron Jr

This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article:

Rene K. Abejaron Jr. Perceived AI – Tools Proficiency and Skill among Higher Education Science Major Students. American Journal of Educational Research. Vol. 13, No. 9, 2025, pp. 438-444. https://pubs.sciepub.com/education/13/9/4
[1]  Freeman, J. (2025, July 28). Student Generative AI Survey 2025. HEPI. https://www.hepi.ac.uk/2025/02/26/student-generative-ai-survey-2025/.
[2]  Salamin, A. D., Russo, D., & Rueger, D. (2023). ChatGPT, an excellent liar: How conversational agent hallucinations impact learning and teaching. In Proceedings of the 7th International Conference on Teaching, Learning and Education.
[3]  Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-Navarro, D. E., Thierry-Aguilera, R., & Gerardou, F. S. (2023). Challenges and opportunities of generative AI for higher education as explained by ChatGPT. Education Sciences, 13(9), 856.
[4]  Alifah, N., & Hidayat, A. R. (2025). Effectiveness of artificial intelligence-based learning analytics tool in supporting personalized learning in higher education. Jurnal Pendidikan Progresif, 15(1), 74-84.
[5]  Matere, A. (2024). Effectiveness of artificial intelligence tools in teaching and learning in higher education institutions in Kenya. Journal of the Kenya National Commission for UNESCO, 5(1).
[6]  Msambwa, M. M., Wen, Z., & Daniel, K. (2025). The impact of AI on the personal and collaborative learning environments in higher education. European Journal of Education, 60(1), e12909.
[7]  Al-Moghrabi, K. G., & Al-Ghonmein, A. M. (2024). The role of chat generative pre-trained transformer in facilitating decision-making and the e-learning process in higher education. Bulletin of Electrical Engineering and Informatics, 13(3), 2058-2066.
[8]  Sok, S., & Heng, K. (2024). Opportunities, challenges, and strategies for using ChatGPT in higher education: A literature review. Journal of Digital Educational Technology, 4(1), ep2401.
[9]  Chukwuere, J. E. (2024). The use of ChatGPT in higher education: The advantages and disadvantages. arXiv preprint arXiv:2403.19245.
[10]  Grájeda, A., Burgos, J., Córdova, P., & Sanjinés, A. (2024). Assessing student-perceived impact of using artificial intelligence tools: Construction of a synthetic index of application in higher education. Cogent Education, 11(1), 2287917.
[11]  Zhou, X., Zhang, J., & Chan, C. (2024). Unveiling students' experiences and perceptions of artificial intelligence usage in higher education. Journal of University Teaching and Learning Practice, 21(6), 126-145.
[12]  Sain, Z. H., Lawal, U. S., Thelma, C. C., & Aziz, A. L. (2024). Exploring the role of artificial intelligence in enhancing student motivation and cognitive development in higher education. TechComp Innovations: Journal of Computer Science and Technology, 1(2), 59-67.
[13]  Bećirović, S., Polz, E., & Tinkel, I. (2025). Exploring students' AI literacy and its effects on their AI output quality, self-efficacy, and academic performance. Smart Learning Environments, 12(1), 29.
[14]  Fakhar, H., Lamrabet, M., Echantoufi, N., El Khattabi, K., & Ajana, L. (2024). Artificial intelligence from teachers' perspectives and understanding: Moroccan study. International Journal of Information and Education Technology, 14(6), 856-864.
[15]  Fidan, M. Teaching profession in the age of artificial intelligence: The happy teacher and self-efficacy. European Journal of Managerial Research (EUJMR), 9(16), 23-41.
[16]  Hava, K., & Babayiğit, Ö. (2025). Exploring the relationship between teachers' competencies in AI-TPACK and digital proficiency. Education and Information Technologies, 30(3), 3491-3508.
[17]  Muttaqin, I. (2022). Necessary to increase teacher competency in facing the artificial intelligence era. Al-Hayat: Journal of Islamic Education, 6(2), 549-559.
[18]  Jaiswal, K., Kuzminykh, I., & Modgil, S. (2025). Understanding the skills gap between higher education and industry in the UK in artificial intelligence sector. Industry and Higher Education, 39(2), 234-246.
[19]  Aggarwal, R., & Ranganathan, P. (2019). Study designs: Part 2 – Descriptive studies. Perspectives in Clinical Research, 10(1), 34-36.
[20]  Kandlhofer, M., Steinbauer, G., Hirschmugl-Gaisch, S., & Huber, P. (2016, October). Artificial intelligence and computer science in education: From kindergarten to university. In 2016 IEEE Frontiers in Education Conference (FIE) (pp. 1-9). IEEE.
[21]  Nguyen, A., Hong, Y., Dang, B., & Huang, X. (2024). Human-AI collaboration patterns in AI-assisted academic writing. Studies in Higher Education, 49(5), 847-864.
[22]  Rahioui, F., Jouti, M. A. T., & El Ghzaoui, M. (2024). Exploring complex biological processes through artificial intelligence. Journal of Educators Online, 21(2), n2.
[23]  Pedro, F., Subosa, M., Rivas, A., & Valverde, P. (2019). Artificial intelligence in education: Challenges and opportunities for sustainable development.
[24]  Corbeil, J. R., & Corbeil, M. E. (2025). Teaching and learning in the age of generative AI.
[25]  Tan, L. Y., Hu, S., Yeo, D. J., & Cheong, K. H. (2025). Artificial intelligence-enabled adaptive learning platforms: A review. Computers and Education: Artificial Intelligence, 100429.
[26]  Ryzheva, N., Nefodov, D., Romanyuk, S., Marynchenko, H., & Kudla, M. (2024). Artificial intelligence in higher education: Opportunities and challenges. Amazonia Investiga, 13(73), 284-296.
[27]  Zouhaier, S. (2023). The impact of artificial intelligence on higher education: An empirical study. European Journal of Educational Sciences, 10(1), 17-33.