This study explores the perceptions and concerns of teachers, administrators, and students regarding the integration of Artificial Intelligence (AI) in educational settings. The objective is to examine how these key stakeholders view AI’s potential, its impact on education, and the challenges associated with its adoption. A qualitative methodology was employed, utilizing semi-structured interviews with IT teachers, school administrators, and students to gather in-depth insights into their experiences and expectations of AI. The results indicate a general enthusiasm for AI’s ability to enhance personalized learning and improve administrative efficiency. However, teachers expressed concerns about the potential reduction of human interaction and job displacement, while administrators highlighted the need for clear policies, training, and resources to ensure effective integration. Students were more receptive to AI but raised issues regarding data privacy and over-reliance on technology. The findings underscore the need for educational institutions to develop comprehensive plans and ethical guidelines for AI adoption. It is concluded that while AI holds significant potential, its successful integration into education requires careful planning and stakeholder collaboration. Future research should focus on longitudinal studies to assess the long-term impact of AI on educational outcomes, as well as the development of teacher training programs and ethical frameworks to ensure AI is used responsibly and equitably in the classroom.
The integration of Artificial Intelligence (AI) in education has become an increasingly important topic of discussion as educational institutions around the world seek to harness the potential of these technologies to enhance teaching and learning. However, many educational systems have been alarmed by the rapid and sometimes unchecked adoption of AI tools and platforms, often implemented without sufficient planning, institutional policies, or strategies for ensuring their effective use. As AI technologies proliferate across the educational landscape, institutions find themselves caught between the excitement surrounding the benefits of AI and the uncertainties and risks associated with its implementation [1, 2, 3].
In the absence of clear, well-defined guidelines and programs, the use of AI in education has raised several concerns that educational leaders, administrators, and educators must address [3]. These include the ethical implications of AI, the potential for exacerbating existing inequalities in access to technology, and the possibility of AI tools inadvertently replacing the human touch in teaching. Without proper oversight, the unchecked integration of AI could lead to a fragmented educational experience, where the benefits of technology are not equitably distributed and where the complexity of technological integration is underestimated.
This study is motivated by the growing alarm within educational institutions regarding the hasty adoption of AI without accompanying plans, policies, and frameworks. The rapid rise of AI-based educational tools, such as adaptive learning platforms, automated grading systems, and AI-driven tutoring systems, has led to an environment where technology is being integrated into classrooms without comprehensive understanding or consensus among key stakeholders—teachers, administrators, and students. These groups are often left navigating the complexities of AI adoption without clear guidance, leading to feelings of uncertainty, resistance, and even fear [4].
Educational institutions in developing and underfunded regions, such as the rural Philippines, are especially vulnerable to these concerns. Administrators may feel pressured to adopt AI technologies in response to global trends, yet lack the resources, training, and infrastructure necessary to support effective integration. As one administrator put it, "While AI could transform our classrooms, we are not prepared to handle the scale of change it brings. We need policies, training, and clear frameworks before diving in." This sentiment reflects a broader issue faced by educational institutions globally: the rush to embrace technological advances without addressing the foundational requirements needed for a smooth and responsible integration.
The urgency of developing structured plans, programs, and policies for AI adoption in education cannot be overstated. Socio-Technical Systems Theory (STS) offers a useful lens for understanding the need for balance between the technological advancements of AI and the social systems (teachers, students, administrators) that interact with them. This theory highlights that the introduction of new technologies in educational settings should not solely focus on the capabilities of the technology itself, but also on how it fits within the social context of the institution. This includes consideration of faculty readiness, ethical concerns, data privacy, equity of access, and the potential consequences of over-reliance on AI in teaching and learning processes.
Without adequate planning and policy frameworks, the unregulated use of AI could undermine efforts to ensure equitable access to quality education. Eubanks [5] and Zuboff [6] have highlighted the risks of data exploitation, algorithmic biases, and the unequal distribution of technological resources. Educational institutions must proactively address these issues by formulating policies that protect both students and educators while maximizing the potential of AI. Without such measures, AI becomes a tool that exacerbates rather than alleviates the existing challenges in education.
This study addresses the gap in the literature on the unregulated use of AI in education by examining how AI technologies are perceived and utilized by key stakeholders within educational institutions. It aims to explore how teachers, administrators, and students understand the role of AI, as well as their concerns regarding its integration. The findings will contribute to the growing body of research on AI in education and will provide valuable insights into how educational institutions can better plan and implement AI-driven tools and policies that align with their educational objectives, while safeguarding the interests of all stakeholders involved.
By focusing on the stakeholders' perceptions and experiences, this study seeks to identify both the promises and pitfalls of AI adoption in educational settings. It aims to guide policymakers, administrators, and educators in crafting thoughtful, effective AI integration strategies that ensure technology serves as a tool for enhancing, rather than disrupting, the educational experience. Furthermore, it emphasizes the need for long-term planning, comprehensive training, and the establishment of clear policies to mitigate risks and ensure that AI technologies are used ethically, equitably, and effectively within educational institutions.
Artificial Intelligence has emerged as a transformative force in the education sector, offering innovative tools and solutions that enhance learning experiences, streamline administrative tasks, and promote inclusivity. The advancements in AI have reshaped how educators teach, students learn, and institutions manage educational processes.
Personalized Learning Experiences: One of the most significant contributions of AI in education is its ability to personalize learning experiences. Unlike traditional classroom settings where a one-size-fits-all approach dominates, AI adapts learning content to suit the unique needs of individual students. Adaptive learning platforms like Carnegie Learning's Cognitive Tutor and Coursera leverage AI algorithms to analyze students’ performance data, identify their strengths and weaknesses, and adjust content delivery accordingly [7]. For instance, these platforms can slow down the pace for students struggling with a topic or provide advanced material for those excelling. This tailored approach fosters better engagement, reduces learning gaps, and allows students to progress at their own pace. Moreover, AI tools provide detailed feedback, enabling students to understand their mistakes and improve over time.
Virtual Assistants and Intelligent Teaching Agents: AI-powered virtual assistants, such as chatbots and conversational agents, are increasingly being deployed to support students and educators. These virtual assistants handle routine tasks, answer frequently asked questions, and guide students through complex assignments. For example, IBM Watson Assistant and Replika are used in educational settings to provide on-demand tutoring and emotional support [4]. Beyond assisting students, these intelligent agents also benefit teachers by automating administrative tasks like scheduling and monitoring progress. This allows educators to focus on instructional activities and provide personalized attention to students who need it most. Furthermore, virtual assistants are capable of multilingual communication, breaking down language barriers in education and reaching diverse groups of learners.
Automated Assessment Systems: The grading and assessment process has been revolutionized by AI, which offers accurate, efficient, and unbiased evaluation tools. Automated assessment systems employ Natural Language Processing (NLP) and machine learning to evaluate assignments, quizzes, and even essays. Turnitin and GradeScope are notable examples of such platforms that help educators save time and ensure consistency in grading [8]. In addition to traditional assessments, AI-powered tools can evaluate creative responses, analyze patterns in answers, and provide constructive feedback. This approach reduces human error and bias while allowing educators to focus on curriculum development and student engagement. Automated assessments also offer immediate feedback to students, enabling them to address their mistakes promptly.
Experiential Learning Through AI-Integrated Virtual Reality and Augmented Reality: AI, when combined with Augmented Reality (AR) and Virtual Reality (VR), creates immersive and interactive learning environments. These technologies provide experiential learning opportunities, making complex and abstract concepts easier to grasp. For instance, Google Expeditions and Microsoft HoloLens offer AI-integrated VR simulations that allow students to virtually explore historical sites, perform scientific experiments, or visualize mathematical models [9]. Such applications bridge the gap between theoretical knowledge and practical application. In Science, Technology, Engineering, and Mathematics (STEM) education, these tools are particularly valuable as they help students visualize intricate structures, such as the human anatomy or engineering designs, in a highly engaging manner.
Enhancing Inclusivity in Education: AI technologies are playing a critical role in fostering inclusivity within education. Tools like speech-to-text and text-to-speech systems have been developed to support students with disabilities, including those with hearing impairments or visual impairments. Applications such as Microsoft Seeing AI and Google Lens enable visually impaired students to access textual and visual information by providing real-time descriptions [10]. Artificial intelligence also supports inclusive classrooms by identifying students with learning disabilities early and tailoring educational interventions to their needs. Additionally, AI-enabled translation tools facilitate multilingual education, helping students from diverse linguistic backgrounds to access quality education.
Data-Driven Educational Management: AI is increasingly used to enhance the efficiency of educational administration. Predictive analytics and data-driven insights provided by AI help institutions manage resources effectively and improve student outcomes. For example, platforms like Blackboard and Edmodo use AI to track student performance, identify at-risk students, and suggest timely interventions [11]. Artificial intelligence systems also streamline administrative workflows, such as scheduling classes, managing enrollments, and organizing resources. These advancements allow institutions to reduce operational burdens and focus more on delivering quality education.
Addressing Challenges and Preparing for the Future: Despite its promising applications, the integration of AI in education faces several challenges. Privacy concerns regarding the collection and use of student data are significant. Ensuring transparency and fairness in AI algorithms is another critical issue, as biases embedded in AI systems can inadvertently disadvantage certain groups of students. Moreover, the high cost of implementing AI solutions can be a barrier for underfunded schools and institutions. However, researchers and developers are actively working to address these challenges. The future of AI in education lies in its ability to complement traditional teaching methods, not replace them. By combining AI-driven tools with human empathy and creativity, educators can create an enriched learning environment that benefits all stakeholders.
The applications of AI in education are transforming the way knowledge is imparted and acquired. From personalized learning and automated assessments to inclusive classrooms and efficient management, AI has become a powerful tool for modern education systems [2, 3, 4]. While challenges persist, the advancements in AI since 2020 underscore its potential to revolutionize education, making it more accessible, engaging, and effective. The journey ahead will likely see AI as an indispensable ally in shaping the future of education.
This study adopts a narratological approach to explore the integration of AI in education, focusing on the "dreams and wishes" of stakeholders and how these shape and are shaped by the emerging narratives about AI. Narratology provides a structured framework for analyzing the stories told by educators, students, and policymakers, emphasizing their structure, themes, and implicit meanings. By examining these narratives, this study seeks to uncover recurring patterns and cultural or emotional significances tied to AI's adoption in education.
To achieve this, a qualitative narratological analysis was chosen as the research design. Data were collected through multiple methods to capture diverse perspectives and contexts. Semi-structured interviews were conducted with educators, students, administrators, and policymakers, encouraging participants to share personal stories about their experiences, aspirations, and concerns regarding AI. Document analysis was also employed, focusing on policies, institutional reports, and AI integration plans to understand how institutions frame AI's role in education. Additionally, case studies from AI-integrated schools and universities served as supplementary narrative sources. Focus groups were organized to gather shared narratives among students and educators, while observational notes from AI-integrated classrooms or pilot programs provided contextual insights to enrich the data.
Data analysis followed a detailed narratological framework. First, the structure of each narrative was examined, focusing on the beginning (context of AI introduction), middle (ongoing experiences), and envisioned endings (aspirations or predictions). Recurring themes, such as empowerment, ethical concerns, and fears of replacement, were identified through thematic analysis, with special attention to symbolic language and metaphors that revealed deeper attitudes toward AI. Narratives were further categorized by perspective (e.g., educator, student, policymaker) to highlight differences in storytelling and meaning-making. Temporal analysis was also conducted to examine how past experiences, present realities, and future aspirations are interconnected in the evolving discourse on AI in education.
Ethical considerations were prioritized throughout the research process. Ethical approval was secured, and informed consent was obtained from all participants. To protect participant identities, all narratives were anonymized, and data were securely stored to maintain confidentiality.
This narratological approach was chosen because it goes beyond quantitative assessments to capture the rich, subjective stories that reveal stakeholders' values, beliefs, and emotions. By focusing on these narratives, the study provides a deeper understanding of the "dreams and wishes" that drive and contextualize AI's "dawn" in education, offering insights into both its current and potential future roles, particularly in the study's locale, the rural Philippines.
The results of this study, when analyzed through the lens of Socio-Technical Systems Theory (STS), reflect the complexities and multi-dimensionality of integrating AI into educational settings. Socio-Technical Systems Theory emphasizes the interaction between social systems (e.g., teachers, students, administrators) and technical systems (e.g., AI tools, digital platforms), suggesting that the successful adoption of technology in education requires aligning both aspects. In this study, the integration of AI in education is analyzed through four critical themes: Empowerment through AI, Ethical and Equity Concerns, Fear of Replacement, and Evolving Expectations. Below, these themes are discussed in detail with an emphasis on how AI reshapes educational practices and stakeholder roles.
Empowerment through AI. AI's integration in education is widely perceived as an empowering force, capable of automating routine tasks, providing personalized learning experiences, and augmenting pedagogical methods. This finding underscores the dual role of technology—improving efficiency while enhancing educational quality. As highlighted by Zhang and Aslan [12], AI's potential to alleviate repetitive administrative burdens and personalize learning has significant implications for both teachers and students.
Teachers, for instance, reported that AI tools helped them shift their focus from routine tasks like grading or managing assessments to more high-level educational activities, such as fostering critical thinking and guiding student creativity. One teacher noted: "AI tools have revolutionized how I teach programming. Platforms like adaptive coding simulators help my students practice independently while I focus on more advanced concepts." This echoes the work of Zhang and Aslan [12] and Chen et al. [13], who argue that AI can enhance the efficiency of teachers by handling routine tasks, thus allowing them to focus more on fostering student creativity and critical thinking.
Students, too, experienced significant empowerment. For example, a student shared: "AI-assisted tutorials have been a lifesaver. When I do not understand a concept, I can replay lessons or get step-by-step examples tailored to my learning pace." This resonates with Fryer et al. [14], who suggest that AI-powered platforms offer personalized learning paths, thus improving engagement and retention. As AI provides students with on-demand resources, it offers a degree of autonomy over their learning, a powerful motivator that fosters independent exploration and enhances academic performance.
Further, administrators reported positive feedback on AI's capacity for predicting student struggles, which allows for timely interventions. As one administrator explained: "Our school uses AI to predict which students might struggle with specific subjects. This helps us intervene early and provide targeted support." This predictive ability aligns with Hagendorff [3], who found that AI-driven predictive analytics could significantly improve early intervention strategies, thus contributing to better student outcomes.
However, while AI shows great potential for empowerment, the integration of these technologies must be strategically aligned with the educational goals of the institution. The findings underscore that AI must be implemented as a complementary tool that enhances human interaction rather than replacing it, thus ensuring that the social system (teachers, students) benefits from the capabilities of the technological system (AI tools). In the STS framework, this balance between technology and human actors is crucial to achieving meaningful educational outcomes.
Ethical and Equity Concerns. Despite the positive outcomes associated with AI, the study revealed deep ethical and equity concerns among all participants. These concerns reflect the STS perspective that the technical system (AI) must align with social values, such as equity, privacy, and fairness, to be effective and accepted in an educational context. The study’s findings echo the concerns raised by Eubanks [5], Gordon [15], and Zuboff [6], who warned that AI systems, if not properly designed and regulated, could exacerbate existing inequities in education.
One of the most common concerns among teachers was the security and privacy of student data. One teacher expressed: "I worry about the data collected by AI platforms. Do they prioritize student privacy, or are we trading learning for surveillance?" This concern reflects the broader societal debate about the increasing datafication of education. Krutka et al. [2] and Krutka et al. [16] emphasized that many AI-driven educational platforms collect vast amounts of personal data, which raises questions about consent, data ownership, and surveillance. In the context of education, it is essential that AI systems be transparent in their data practices and that educators and students have clear, informed consent about how their data is used.
Equity was another key concern, particularly with respect to access to AI tools. An administrator remarked: "AI implementation is fantastic, but the digital divide is evident. Some students do not have the necessary devices or internet access to benefit equally." This finding highlights a significant challenge in education today: the digital divide. Hwang et al. [17] argued that AI’s potential to democratize education is compromised if all students do not have equal access to the technology. For AI to contribute to educational equity, institutions must ensure that students from all socio-economic backgrounds can access the necessary tools and resources. Without addressing these disparities, AI integration may inadvertently reinforce existing inequalities in education.
The social implications of AI in education also reflect the broader STS principle of balancing technical advancements with social needs. Kabudi et al. [18] suggested that for AI to be ethically integrated into education, its design and implementation must be inclusive and socially responsible. Educational policies should address concerns such as digital accessibility and data privacy to ensure that AI serves to reduce, rather than widen, educational inequalities.
Fear of Replacement. The study revealed a prevalent fear among teachers and students that AI might replace human educators, a fear that aligns with Kose's [19] and Brynjolfsson and McAfee's [20] discussions of technological unemployment. Teachers expressed concerns about the potential for AI to disrupt their roles. One teacher stated: "AI is great for repetitive tasks, but I am concerned it might replace hands-on teaching in IT subjects, where mentorship is key." This concern mirrors Ferdig et al. [1], who cautioned that AI is well-suited for automating routine processes but lacks the human touch required for critical educational tasks, such as mentoring, emotional support, and fostering complex problem-solving skills.
While students expressed appreciation for AI tools, they were quick to acknowledge the irreplaceable role of teachers. As one student remarked: "AI tutorials are helpful, but I still prefer learning from a teacher. Machines cannot replace the way a teacher explains complex topics or answers my questions." Higgins et al. [21] argued that AI can supplement educational experiences but cannot fully replicate the nuanced, interactive nature of human teaching. In this regard, AI should be seen as an assistant to educators, augmenting their capabilities rather than replacing them.
This theme touches on the tension in STS between the potential of technological systems to enhance human capabilities and the fear that these systems may lead to the displacement of human workers. Westerman [22] suggests that, to address such fears, AI should be framed as a tool that works in harmony with human educators, not as a replacement. Effective AI integration into educational systems requires reassurances to educators that their professional roles remain central, even as technology enhances their teaching practices.
Evolving Expectations. Despite the fear of replacement, the study revealed an evolving perception among stakeholders, who began to see AI as an educational partner rather than a competitor. Holmes et al. [4] suggest that, over time, teachers and students are likely to develop a collaborative relationship with AI, wherein technology supports, rather than disrupts, traditional educational practices. One teacher expressed this sentiment: "AI should be used as an assistant, not a substitute. It can handle administrative tasks, but the essence of teaching must remain human."
Administrators also emphasized the importance of professional development to ensure that teachers are equipped to work alongside AI. As one administrator said: "Our goal is to ensure that AI is a bridge, not a barrier. We are investing in training for teachers so they can adapt and thrive alongside these technologies." This aligns with Popenici and Kerr [23], who argue that AI's role should be to enhance the educator's role by automating routine tasks and offering personalized support to students, leaving teachers to focus on more complex and human-centered aspects of teaching.
This theme reflects the ongoing shift in educational expectations, where AI is increasingly viewed as a tool that complements and enhances the work of educators. As Dede [24] points out, the integration of AI into education should focus on how technology can support human teaching and learning rather than overshadow it.
This study highlights both the enthusiasm and concerns surrounding AI integration in education. While teachers, administrators, and students acknowledge AI’s potential to enhance learning and efficiency, concerns about its unregulated adoption persist. Teachers worry about losing autonomy and human interaction, while administrators stress the need for clear policies, training, and strategies for successful implementation. Students are open to AI but raise issues regarding data privacy and over-reliance on technology. The findings emphasize the need for educational institutions to develop comprehensive policies and frameworks to ensure AI enhances learning without undermining the human aspects of education.
Future research could expand on this study by exploring the long-term impacts of AI integration in education through longitudinal studies. Such research would provide valuable insights into how AI adoption evolves over time and its sustained effects on teaching practices, student outcomes, and institutional development. A comprehensive investigation into how AI’s role differs across educational levels—such as primary, secondary, and higher education—could further deepen understanding of its unique challenges and opportunities at each stage.
Another important direction for future research is the development and evaluation of teacher training programs tailored to AI integration. Studies could assess the effectiveness of professional development initiatives, identifying best practices and strategies for adequately preparing educators to use AI tools in their classrooms. Additionally, with growing concerns about data privacy, algorithmic bias, and equity, research focusing on the ethical implications of AI in education is crucial. Future work could examine ways to design AI systems that ensure privacy protection, reduce bias, and promote equitable access to educational opportunities, particularly in underserved communities.
Exploring AI’s impact on student engagement and learning outcomes could also be a key area for future study. By investigating how AI tools personalize learning experiences, future research could evaluate their effectiveness in fostering student motivation, participation, and academic success. Finally, research could focus on creating comprehensive policy frameworks to guide AI adoption in education. This would involve examining the role of governmental bodies, educational institutions, and the private sector in developing regulations that ensure AI is used ethically, responsibly, and effectively in the educational context.
[1] Ferdig, R. E., Baumgartner, E., Hartshorne, R., Kaplan-Rakowski, R., & Mouza, C. (2020). Teaching, Technology, and Teacher Education during the COVID-19 Pandemic: Stories from the Field. Association for the Advancement of Computing in Education (AACE). Retrieved November 25, 2024.
[2] Krutka, D. G., Heath, M. K., & Willet, K. B. S. (2019). Foregrounding Technoethics: Toward Critical Perspectives in Technology and Teacher Education. Journal of Technology and Teacher Education, 27(4), 555-574.
[3] Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds & Machines, 30, 99-120.
[4] Holmes, W., Bialik, M., & Fadel, C. (2021). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. OECD Publishing.
[5] Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: Picador, St Martin’s Press.
[6] Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: Public Affairs.
[7] Li, X., Ma, Q., & Huang, J. (2021). Adaptive learning platforms and their role in education. Computers & Education, 163, 104097.
[8] Zhai, X., Cui, L., & Wang, X. (2022). Automated assessment in the AI era. Assessment & Evaluation in Higher Education, 47(3), 331-345.
[9] Tang, W., Li, H., & Fu, Z. (2021). Immersive learning through AI-integrated virtual reality. Educational Technology Research and Development, 69(5), 1247-1264.
[10] Patel, S., Kumar, A., & Singh, R. (2020). AI for inclusive education: A case study. International Journal of Special Education, 35(3), 10-23.
[11] Baker, R. S., & Siemens, G. (2021). Educational data mining and learning analytics: Exploring the past, present, and future. Journal of Educational Data Science, 2(1), 1-13.
[12] Zhang, K., & Aslan, A. B. (2021). AI technologies for education: Recent research & future directions. Computers and Education: Artificial Intelligence, 2, 100025.
[13] Chen, X., Xie, H., Zou, D., & Hwang, G. (2020). Application and theory gaps during the rise of AI. Computers and Education: Artificial Intelligence, 1, 100002.
[14] Fryer, L., Ainley, M., Thompson, A., Gibson, A., & Sherlock, Z. (2017). Stimulating and sustaining interest in a language course: An experimental comparison of Chatbot and Human task partners. Computers in Human Behavior, 75, 461-468.
[15] Gordon, F. (2019). Virginia Eubanks (2018) Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: Picador, St Martin’s Press. Law, Technology and Humans, 1(1).
[16] Krutka, D. G., Heath, M. K., & Mason, L. E. (2020). Technology won’t save us: A call for technoskepticism in social studies. Contemporary Issues in Technology and Teacher Education, 20(1), 108-120.
[17] Hwang, G., Sung, H., Chang, S., & Huang, X. (2020). A fuzzy expert system-based adaptive learning approach to improving students’ learning performances by considering affective and cognitive factors. Computers and Education: Artificial Intelligence, 1, 100003.
[18] Kabudi, T., Pappas, I., & Olsen, D. (2021). AI-enabled adaptive learning systems: A systematic mapping of the literature. Computers and Education: Artificial Intelligence, 2, 100017.
[19] Kose, U. (2016). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (E. Brynjolfsson & A. McAfee). Journal of Multidisciplinary Developments, 1(1), 7-8.
[20] Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York, NY: W. W. Norton & Company.
[21] Higgins, S., Xiao, Z., & Katsipataki, M. (2012). The Impact of Digital Technology on Learning: A Summary for the Education Endowment Foundation. Education Endowment Foundation.
[22] Westerman, G. (2018). Why digital transformation needs a heart. In The Digital Future of Management. MIT Sloan Management Review.
[23] Popenici, S. A. D., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12, Article 22.
[24] Dede, T., Kankal, M., Vosoughi, A. R., & Grzywinski, M. (2019). Artificial intelligence applications in civil engineering. Advances in Civil Engineering, 8384523, 1-3.
Published with license by Science and Education Publishing, Copyright © 2025 Lanie O. Corpuz, Erwin N. Lardizabal, Abigail G. Torno, Von P. Gabayan Jr., Pranay Pandey and Romiro G. Bautista
This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
[1] Ferdig, R. E., Baumgartner, E., Hartshorne, R., Kaplan-Rakowski, R., & Mouza, C. (2020). Teaching, technology, and teacher education during the COVID-19 pandemic: Stories from the field. Association for the Advancement of Computing in Education (AACE). Retrieved November 25, 2024.
[2] Krutka, D. G., Heath, M. K., & Willet, K. B. S. (2019). Foregrounding technoethics: Toward critical perspectives in technology and teacher education. Journal of Technology and Teacher Education, 27(4), 555-574.
[3] Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds & Machines, 30, 99-120.
[4] Holmes, W., Bialik, M., & Fadel, C. (2021). Artificial intelligence in education: Promises and implications for teaching and learning. OECD Publishing.
[5] Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. New York: Picador, St. Martin's Press.
[6] Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: PublicAffairs.
[7] Li, X., Ma, Q., & Huang, J. (2021). Adaptive learning platforms and their role in education. Computers & Education, 163, 104097.
[8] Zhai, X., Cui, L., & Wang, X. (2022). Automated assessment in the AI era. Assessment & Evaluation in Higher Education, 47(3), 331-345.
[9] Tang, W., Li, H., & Fu, Z. (2021). Immersive learning through AI-integrated virtual reality. Educational Technology Research and Development, 69(5), 1247-1264.
[10] Patel, S., Kumar, A., & Singh, R. (2020). AI for inclusive education: A case study. International Journal of Special Education, 35(3), 10-23.
[11] Baker, R. S., & Siemens, G. (2021). Educational data mining and learning analytics: Exploring the past, present, and future. Journal of Educational Data Science, 2(1), 1-13.
[12] Zhang, K., & Aslan, A. B. (2021). AI technologies for education: Recent research & future directions. Computers and Education: Artificial Intelligence, 2, 100025.
[13] Chen, X., Xie, H., Zou, D., & Hwang, G. (2020). Application and theory gaps during the rise of AI. Computers and Education: Artificial Intelligence, 1, 100002.
[14] Fryer, L., Ainley, M., Thompson, A., Gibson, A., & Sherlock, Z. (2017). Stimulating and sustaining interest in a language course: An experimental comparison of chatbot and human task partners. Computers in Human Behavior, 75, 461-468.
[15] Gordon, F. (2019). Review of Virginia Eubanks (2018), Automating inequality: How high-tech tools profile, police, and punish the poor (New York: Picador, St. Martin's Press). Law, Technology and Humans, 1(1).
[16] Krutka, D. G., Heath, M. K., & Mason, L. E. (2020). Technology won't save us: A call for technoskepticism in social studies. Contemporary Issues in Technology and Teacher Education, 20(1), 108-120.
[17] Hwang, G., Sung, H., Chang, S., & Huang, X. (2020). A fuzzy expert system-based adaptive learning approach to improving students' learning performances by considering affective and cognitive factors. Computers and Education: Artificial Intelligence, 1, 100003.
[18] Kabudi, T., Pappas, I., & Olsen, D. (2021). AI-enabled adaptive learning systems: A systematic mapping of the literature. Computers and Education: Artificial Intelligence, 2, 100017.
[19] Kose, U. (2016). Review of The second machine age: Work, progress, and prosperity in a time of brilliant technologies (E. Brynjolfsson & A. McAfee). Journal of Multidisciplinary Developments, 1(1), 7-8.
[20] Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. New York, NY: W. W. Norton & Company.
[21] Higgins, S., Xiao, Z., & Katsipataki, M. (2012). The impact of digital technology on learning: A summary for the Education Endowment Foundation. Education Endowment Foundation.
[22] Westerman, G. (2018). Why digital transformation needs a heart. In The digital future of management. MIT Sloan Management Review.
[23] Popenici, S. A. D., & Kerr, S. (2017). Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12, Article 22.
[24] Dede, T., Kankal, M., Vosoughi, A. R., & Grzywinski, M. (2019). Artificial intelligence applications in civil engineering. Advances in Civil Engineering, 2019, Article 8384523, 1-3.