The rapid expansion of generative artificial intelligence (AI), particularly large language models (LLMs), is profoundly transforming higher education by enabling the on-demand production of instructional content, learning activities, feedback, and assessment tools. However, recent research indicates that these uses remain largely opportunistic and insufficiently embedded within systematic instructional design processes. This weak integration may compromise constructive alignment between intended learning outcomes, learning activities, and assessment methods, as well as undermine academic integrity. This article offers a structured synthesis of the contributions, uses, and effectiveness conditions of personalized generative AI—defined as an AI system configured through stable pedagogical instructions, institutional constraints, and disciplinary frameworks—designed to support active learning approaches in higher education. The study is based on a qualitative documentary analysis of recent international scientific publications (2023–2025), complemented by the examination of a university teaching resource used as an empirical case of implementation. The findings highlight three major contributions. First, personalized generative AI can support the entire instructional design cycle, from needs analysis to assessment design, by strengthening pedagogical coherence and constructive alignment. Second, the dominant uses identified primarily concern the design of active learning strategies (flipped classroom, case-based learning, structured debates, collaborative projects) and the enhancement of formative and summative assessment practices, in line with empirical evidence demonstrating the positive impact of active learning on student performance and success.
Third, the effectiveness of these uses depends on key conditions: the development of AI literacy and prompt engineering skills among educators and students; the redesign of assessment systems to ensure robustness against automation; and the establishment of ethical and institutional governance grounded in recognized risk management frameworks and international guidelines for AI in education.
The digital transformation of higher education has accelerated considerably in recent years, profoundly reshaping teaching, learning, and assessment practices. Initially focused on the digitization of instructional materials and content delivery, this transformation now affects the core of instructional design, influencing curriculum structuring, pedagogical interaction patterns, and certification modalities. The emergence of generative artificial intelligence (AI) marks a new stage in this evolution. For the first time, digital systems are capable of producing structured academic texts, instructional scenarios, learning activities, detailed feedback, and complex assessment tools, thereby blurring the boundary between technical assistance and cognitive production [1].
In higher education, these technologies are both promising and a source of significant concern. Several studies highlight their pedagogical potential, including support for explanation and reformulation, differentiated instruction, assistance in designing learning activities, academic writing support, and enhanced tutoring and formative feedback [2, 3]. However, the literature also underscores substantial risks, such as factual inaccuracies, algorithmic bias, opacity of model functioning, learners’ cognitive dependency, and threats to academic integrity when textual production becomes easily automated [1, 4].
These tensions explain the diversity of institutional responses observed internationally, ranging from prohibition to regulated tolerance and proactive integration. Recent systematic reviews converge toward a common conclusion: the central issue is no longer whether generative AI should be used in universities, but rather how it can be integrated in ways that are pedagogically meaningful, ethically responsible, and institutionally sustainable [5].
1.2. A Challenge of Constructive Alignment and Academic Integrity
Despite growing interest in generative AI, its integration into higher education often remains fragmented and instrumental. In many cases, AI is used as a general-purpose productivity tool—for example, to generate course outlines, summaries, multiple-choice questions, or assignment instructions—without explicit alignment with intended learning outcomes, instructional activities, and assessment strategies.
However, research in university pedagogy emphasizes that instructional effectiveness depends on coherence among these three dimensions, commonly referred to as constructive alignment [6]. When learning outcomes, teaching activities, and assessment methods are not coherently aligned, learning quality and assessment validity may be compromised.
Introducing generative AI without such a coherent framework may lead to two major pitfalls. First, it may produce pedagogically appealing activities that are poorly aligned with targeted competencies. Second, it may reinforce assessment formats centered on knowledge recall or standardized textual production—formats particularly vulnerable to automation. Recent syntheses identify assessment as the principal vulnerability of pedagogical systems in the face of generative AI, effectively subjecting evaluation practices to a “stress test” [4, 5].
In this context, academic integrity can no longer be framed solely as a matter of detection and sanction. It becomes a pedagogical and institutional issue requiring the redesign of assessment tasks, explicit clarification of authorized AI uses, and the development of a structured pedagogy of academic ethics [1].
1.3. Active Learning as a Privileged Framework for AI Integration
Active learning approaches provide a particularly relevant framework for analyzing the pedagogical integration of generative AI. Numerous empirical studies have demonstrated that teaching methods engaging students in analysis, problem-solving, discussion, or collaborative production foster deeper learning and improved academic performance, especially in science and engineering disciplines [7].
From a theoretical perspective, the ICAP framework distinguishes four levels of cognitive engagement—passive, active, constructive, and interactive—and posits that the most effective learning outcomes are associated with constructive and interactive engagement [8]. Active learning methods typically aim to reach these higher levels of cognitive involvement.
Generative AI can support such methods by facilitating the design of diverse pedagogical activities, including problem-based scenarios, case studies, structured debates, and collaborative projects. It can also enhance pedagogical regulation through feedback and guided self-assessment. However, it may produce the opposite effect if it encourages superficial activity without meaningful cognitive engagement or reduces the productive struggle essential to deep learning [2]. These considerations reinforce the need for a didactically grounded integration of AI centered on cognitive engagement quality rather than simple task automation.
1.4. Personalized Generative AI as an Instructional Design Assistant
To address these limitations, this study introduces the concept of personalized generative AI. Unlike generic and opportunistic uses, personalized generative AI refers to an AI system configured through explicit pedagogical instructions, institutional constraints, and clearly defined disciplinary frameworks. It incorporates predefined alignment rules, targeted cognitive levels, quality criteria for outputs, and structured assessment formats.
This personalization transforms AI into an instructional design assistant capable of strengthening rather than weakening pedagogical coherence. Research on AI literacy and prompt engineering demonstrates that the quality and relevance of generated outputs depend heavily on human capacities for framing, monitoring, and validating results [9]. Consequently, AI does not replace pedagogical expertise; instead, it increases its importance by reinforcing the instructor’s role as designer, regulator, and guarantor of pedagogical meaning.
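In practice, the "stable pedagogical instructions" that distinguish personalized from generic use can be operationalized as a fixed system prompt assembled from an explicit course profile. The sketch below is purely illustrative: every field name (`intended_outcomes`, `target_icap_level`, and so on) is a hypothetical configuration category, not a prescribed schema, and the composed prompt would be sent alongside each request to whatever LLM interface an institution uses.

```python
# Illustrative sketch: assembling a stable "system prompt" that frames a generic
# LLM as a personalized instructional-design assistant. All field names and
# example values are hypothetical; adapt them to an institution's own framing.

ALIGNMENT_RULES = [
    "Every activity must map to at least one intended learning outcome (ILO).",
    "Every assessment criterion must reference the activity it evaluates.",
]

course_profile = {
    "discipline": "Introductory Microeconomics",
    "intended_outcomes": [
        "Analyze market equilibria using supply and demand models",
        "Justify policy recommendations with explicit economic reasoning",
    ],
    "target_icap_level": "constructive",  # passive < active < constructive < interactive
    "institutional_constraints": [
        "Cite only sources available in the university library",
        "Flag any generated factual claim as 'to be verified by the instructor'",
    ],
}

def build_system_prompt(profile: dict) -> str:
    """Compose the stable instruction block reused with every request."""
    lines = [f"You are an instructional-design assistant for {profile['discipline']}."]
    lines.append("Intended learning outcomes:")
    lines += [f"- {o}" for o in profile["intended_outcomes"]]
    lines.append(f"Target cognitive engagement (ICAP): {profile['target_icap_level']}.")
    lines.append("Alignment rules:")
    lines += [f"- {r}" for r in ALIGNMENT_RULES]
    lines.append("Institutional constraints:")
    lines += [f"- {c}" for c in profile["institutional_constraints"]]
    return "\n".join(lines)

print(build_system_prompt(course_profile))
```

Because the profile is stored once and reused, every generated activity or rubric is produced under the same alignment rules, which is precisely what separates this configuration from an isolated, ad hoc prompt.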
1.5. Ethical Governance and Institutional Conditions for Effectiveness
The effective integration of personalized generative AI requires explicit ethical and institutional governance. International recommendations emphasize the necessity of clear educational policies grounded in transparency, data protection, equity, and the development of human competencies [1].
Operational frameworks such as the NIST Artificial Intelligence Risk Management Framework propose structured approaches to AI risk management organized around governance, mapping, measurement, and impact management [10]. Furthermore, recent regulatory developments, including the European Union’s AI Act, contribute to defining international standards that are likely to influence university policies beyond the European context [11].
1.6. Objectives and Research Questions
Building on these considerations, this article pursues three main objectives:
1. To analyze the contributions of personalized generative AI to university instructional design;
2. To identify structured pedagogical uses supporting active learning;
3. To determine the pedagogical, institutional, and ethical conditions ensuring effective and responsible integration.
The study is guided by the following research questions:
• How does personalized generative AI support active learning methods in higher education?
• What structured pedagogical uses emerge from recent scientific literature?
• What conditions enable the effective and responsible integration of these technologies?
This study adopts a qualitative research design based on documentary analysis within an analytical and interpretive perspective. This methodological choice is justified by the emerging and rapidly evolving nature of the research object—namely, the pedagogical integration of personalized generative AI in higher education. Recent scientific production on this topic is heterogeneous, fragmented, and expanding quickly [5].
Documentary analysis enables the synthesis, comparison, and conceptual modeling of theoretical, empirical, and normative contributions without aiming to establish experimental causal relationships. The objective is therefore not to directly measure the impact of generative AI on academic performance, but rather to identify recurring patterns, dominant trends, and transferability conditions associated with pedagogical uses described in recent literature. This approach is consistent with methodological frameworks recommending structured literature analysis for emerging research domains [12].
2.2. Research Procedure
The research process unfolded in four complementary stages:
1. Identification and selection of relevant scientific sources addressing generative AI in higher education, active learning methods, instructional design, and academic integrity;
2. Thematic analysis of selected publications to extract contributions, uses, limitations, and contextual conditions;
3. Alignment of findings with established instructional design principles, particularly constructive alignment [6];
4. Examination of a university pedagogical document used as an illustrative empirical artifact demonstrating practical operationalization.
This procedure enables conceptual triangulation across theoretical frameworks, empirical syntheses, and documented pedagogical practices, thereby strengthening interpretive validity [13].
2.3. Documentary Corpus
The analyzed corpus covers the period 2023–2025, corresponding to the widespread diffusion of generative AI systems in higher education following the emergence of large language models.
The corpus includes four categories of documents:
1. Peer-reviewed scientific articles and systematic reviews addressing:
- generative AI and ChatGPT in higher education [2, 3, 5];
- pedagogical and cognitive impacts [2];
- academic integrity and assessment challenges [4].
2. Foundational theoretical works in university pedagogy:
- constructive alignment [6];
- active learning effectiveness [7];
- the ICAP cognitive engagement framework [8].
3. International institutional reports and governance frameworks:
- UNESCO guidelines for generative AI in education [1];
- the NIST AI Risk Management Framework [10];
- regulatory developments such as the European AI Act [11].
4. A university pedagogical presentation entitled Active Learning Methods and AI – Integrating Personalized Generative AI into Instructional Design, analyzed as a pedagogical artifact illustrating practical implementation.
Inclusion criteria were:
• Peer-reviewed publications or recognized institutional reports;
• Explicit relevance to higher education;
• Direct focus on generative AI, instructional design, assessment, or active learning;
• Publication between 2023 and 2025 for AI-related sources.
Publications lacking pedagogical relevance or empirical/theoretical grounding were excluded.
2.4. Analytical Framework
The analysis was structured around five analytical axes derived from the research objectives:
1. Constructive Alignment Integration: Examination of whether generative AI uses are embedded within coherent alignment among intended learning outcomes, learning activities, and assessment methods [6].
2. Degree of AI Personalization: Distinction between:
- generic uses (isolated prompts),
- semi-structured uses (partial pedagogical framing),
- highly personalized systems (stable pedagogical instructions and institutional constraints).
3. Support for Active Learning Methods: Analysis of AI-supported activities such as flipped classrooms, case-based learning, project-based learning, structured debates, and problem-based learning, interpreted through the ICAP framework [8].
4. Transformation of the Instructor’s Role: Identification of shifts from content producer to instructional designer, regulator, and guarantor of meaning [2].
5. Ethical and Governance Considerations: Examination of institutional policies, academic integrity safeguards, bias management, and risk governance mechanisms [1, 10].
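Axis 2 above distinguishes three degrees of personalization. One way to make that coding decision reproducible is a simple rule: an observed use counts as highly personalized only when both stable pedagogical instructions and institutional constraints are documented, and as semi-structured when any pedagogical framing is present. The sketch below is a hypothetical coding aid, not a validated instrument, and the feature labels are invented for illustration.

```python
# Hypothetical coding rule for axis 2 (degree of AI personalization).
# Feature labels are illustrative coding categories, not the study's codebook.

def personalization_degree(features: set) -> str:
    """Classify a documented AI use by the framing elements it exhibits."""
    stable = {"stable_pedagogical_instructions", "institutional_constraints"}
    partial = {"pedagogical_framing"}
    if stable <= features:                 # both stability markers present
        return "highly personalized"
    if features & (partial | stable):      # any pedagogical framing at all
        return "semi-structured"
    return "generic"                       # e.g. an isolated, ad hoc prompt

print(personalization_degree({"isolated_prompt"}))              # generic
print(personalization_degree({"pedagogical_framing"}))          # semi-structured
print(personalization_degree({"stable_pedagogical_instructions",
                              "institutional_constraints"}))    # highly personalized
```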
2.5. Data Analysis Procedure
Selected documents were subjected to qualitative thematic analysis inspired by reflexive thematic analysis procedures [14]. The process included:
• Repeated in-depth reading of documents;
• Initial coding according to the analytical axes;
• Grouping codes into recurrent themes;
• Hierarchical organization of themes based on theoretical relevance and recurrence;
• Integration with established instructional design frameworks.
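The coding, grouping, and hierarchical-organization steps above can be sketched computationally: code occurrences are tallied per analytical axis and themes are ranked by recurrence. The document identifiers, axis labels, and codes below are invented examples standing in for the study's actual codebook.

```python
# Illustrative sketch of the coding-and-grouping steps: tally initial codes per
# analytical axis and rank the resulting themes by recurrence.
# All identifiers and code labels are invented examples.

from collections import Counter

# (document_id, analytical_axis, code) triples produced during initial coding
coded_segments = [
    ("doc01", "alignment", "outcomes-activities link"),
    ("doc02", "alignment", "outcomes-activities link"),
    ("doc02", "assessment", "rubric generation"),
    ("doc03", "active_learning", "case-based scenario"),
    ("doc03", "alignment", "outcomes-activities link"),
]

# Grouping codes into recurrent themes (axis, code) and counting recurrence
theme_counts = Counter((axis, code) for _, axis, code in coded_segments)

# Hierarchical organization: most recurrent themes first
for (axis, code), n in theme_counts.most_common():
    print(f"{axis:15s} {code:30s} recurrence={n}")
```

A tally like this supports, but does not replace, the interpretive judgment of theoretical relevance described in the final two steps.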
The objective was to produce a theoretically grounded interpretive synthesis rather than a descriptive inventory.
2.6. Methodological Rigor and Limitations
Methodological rigor was supported by:
• Transparent source selection criteria;
• Triangulation across theoretical, empirical, and institutional documents;
• Explicit theoretical anchoring of analytical categories;
• Coherence between research problem, method, and objectives [13].
However, the documentary nature of the study does not permit causal inference regarding the impact of personalized generative AI on measurable learning outcomes. This limitation is acknowledged and addressed in the discussion section.
This section presents the findings derived from the qualitative documentary analysis and from the examination of the university pedagogical artifact. The results are organized according to the analytical axes defined in the methodology: (1) contributions across the instructional design cycle; (2) pedagogical functions and limitations; (3) dominant uses in support of active learning; (4) alignment with the ICAP framework; and (5) effectiveness conditions.
3.1. Contributions of Personalized Generative AI Across the Instructional Design Cycle
The analysis indicates that personalized generative AI can support all major phases of instructional design when embedded within a structured pedagogical framework grounded in constructive alignment [6].
The results show that AI contributions are maximized when instructors provide contextualized disciplinary constraints and explicitly define cognitive levels. Without such framing, outputs risk remaining generic.
3.2. Pedagogical Functions and Associated Limitations
The analysis reveals differentiated added values and corresponding risks depending on the pedagogical function performed.
AI thus appears as a pedagogical amplifier: it strengthens instructional coherence when properly framed but may reproduce superficial structures when cognitive depth is not specified.
3.3. Dominant Pedagogical Uses Supporting Active Learning
The literature analysis highlights recurring structured uses aligned with active learning principles [7].
The findings suggest that generative AI increases diversity and accessibility of active learning activities but does not automatically guarantee high-level cognitive engagement.
3.4. Alignment with the ICAP Framework
Mapping AI-supported activities onto the ICAP model [8] reveals differentiated engagement levels.
These results confirm that AI effectiveness depends on explicit pedagogical constraints ensuring justification, argumentation, and traceability of learning processes.
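The role of explicit constraints can be illustrated with a small heuristic: an AI-generated activity is mapped to a higher ICAP level only when its design requires students to generate reasoning or to co-construct with peers. The rules below are illustrative heuristics written for this sketch, not the ICAP authors' operationalization, and the constraint labels are hypothetical.

```python
# Hedged sketch: infer the likely ICAP engagement level of an AI-generated
# activity from the constraints it imposes. Heuristic and illustrative only.

ICAP_ORDER = ["passive", "active", "constructive", "interactive"]

def likely_icap_level(constraints: set) -> str:
    level = "active"  # a generated task at least asks students to do something
    if {"justification", "argumentation"} & constraints:
        level = "constructive"  # students must generate reasoning beyond the material
    if "peer_dialogue" in constraints:
        level = "interactive"   # reasoning is co-constructed with peers
    return level

print(likely_icap_level(set()))                               # active
print(likely_icap_level({"argumentation"}))                   # constructive
print(likely_icap_level({"peer_dialogue", "justification"}))  # interactive
```

The point of the heuristic mirrors the finding above: without justification or dialogue constraints, the activity never rises above the active level.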
3.5. Functional Model of Personalized Generative AI
A three-layer functional model emerges from the synthesis.
This model positions AI not as an autonomous actor but as a system embedded within structured human regulation.
3.6. Conditions for Effectiveness
Finally, four key effectiveness conditions were identified.
The results converge toward a central finding: personalized generative AI strengthens instructional coherence only when embedded within structured pedagogical design, explicit ethical governance, and sustained instructor engagement.
AI literacy and prompt engineering skills directly condition the pedagogical quality of generated outputs. Assessment transformation—toward process-oriented evaluation, justification, and metacognitive reflection—preserves validity and academic integrity. Clear ethical governance, grounded in transparent policies and risk management frameworks, reinforces institutional responsibility. Finally, the centrality of the instructor as designer and regulator remains essential to prevent uncritical or blind delegation to AI systems.
This discussion interprets the findings in light of recent scientific literature and situates the study’s contribution within ongoing debates on generative AI in higher education. It is structured around four major analytical dimensions: (1) personalized generative AI as an instructional design assistant; (2) its relationship with active learning and constructive alignment; (3) implications for assessment and academic integrity; and (4) ethical, institutional, and professional transformations.
4.1. Personalized Generative AI as an Instructional Design Assistant
The findings confirm and extend recent analyses suggesting that generative AI, when pedagogically structured, moves beyond its role as a productivity tool and becomes an instructional design assistant [2, 5]. Unlike opportunistic uses centered on rapid content production, personalized generative AI operates as a mediating system embedded within alignment rules, cognitive objectives, and disciplinary frameworks.
This transformation is theoretically significant. Within the constructive alignment paradigm [6], instructional coherence depends on the articulation of intended learning outcomes, teaching activities, and assessment methods. The results indicate that personalized AI can reinforce this coherence by assisting instructors in clarifying learning outcomes, diversifying learning tasks, and generating assessment tools aligned with explicit criteria.
However, this reinforcement does not imply automation of pedagogical judgment. On the contrary, the deeper AI is integrated into instructional design, the more central the instructor’s role becomes. The instructor shifts from being primarily a content transmitter to acting as a designer, regulator, and epistemic guarantor. This observation aligns with broader analyses emphasizing that generative AI amplifies human expertise rather than replacing it [2].
The study therefore contributes a conceptual clarification: AI integration quality depends less on technological sophistication than on pedagogical structuration.
4.2. Active Learning, Cognitive Engagement, and the Risk of Superficial Innovation
A major contribution of this study concerns the relationship between generative AI and active learning. Empirical research has demonstrated that active learning methods significantly improve academic performance compared to traditional lecturing [7]. The results show that personalized generative AI can facilitate the implementation of such methods by reducing design workload and increasing activity diversification.
However, mapping AI-supported activities onto the ICAP framework [8] reveals an important nuance. While AI can generate activities labeled as “active,” cognitive engagement is not guaranteed. The distinction between active, constructive, and interactive engagement remains crucial. Without explicit constraints requiring justification, explanation, or argumentation, AI-generated activities may remain at the active level, characterized by surface processing.
This finding echoes longstanding critiques in educational technology research: technological innovation does not automatically produce pedagogical innovation. Personalized generative AI functions as a pedagogical amplifier. When guided by explicit cognitive targets, it strengthens constructive and interactive engagement. When poorly framed, it may reproduce low-level engagement patterns.
Thus, the pedagogical value of AI depends fundamentally on instructional intentionality.
4.3. Assessment Redesign and Academic Integrity in the AI Era
Assessment emerges as a pivotal dimension in the integration of generative AI. The findings reinforce arguments that generative AI exposes structural vulnerabilities in traditional assessment formats, particularly those based on standardized written production [4, 5].
Rather than interpreting AI primarily as a tool facilitating misconduct, the results suggest that AI operates as a stress test revealing pre-existing weaknesses in evaluation design. Assessment tasks focused on recall or generic essay production are particularly vulnerable to automation.
However, the study also identifies constructive contributions of personalized generative AI to assessment redesign. AI can assist instructors in articulating explicit rubrics, generating scenario-based tasks, and diversifying formative assessment. These uses become educationally robust when combined with process-oriented evaluation strategies emphasizing justification, oral defense, metacognitive reflection, and traceability of learning processes.
This shift aligns with international recommendations advocating for explicit academic integrity education and transparent AI usage policies [1]. Academic integrity thus becomes a pedagogical design issue rather than solely a disciplinary matter.
4.4. Ethical Governance and Institutional Transformation
Beyond classroom practices, effective AI integration requires institutional governance frameworks. The findings converge with international recommendations emphasizing transparency, accountability, and risk management in AI deployment within education [1, 10].
The NIST AI Risk Management Framework [10] proposes structured governance mechanisms that universities can adapt to ensure responsible implementation. Additionally, emerging regulatory frameworks such as the European AI Act signal the increasing institutionalization of AI accountability standards [11]. Even outside the European regulatory domain, these developments influence global academic expectations.
At the professional level, the study highlights the growing importance of AI literacy and prompt engineering competencies [9]. Faculty development must therefore extend beyond technical training to encompass didactic structuration, ethical reflection, and epistemological vigilance.
Personalized generative AI integration consequently reshapes the professional identity of higher education instructors. It reinforces their role as designers of learning environments and guarantors of epistemic rigor in technologically mediated contexts.
4.5. Theoretical Contribution and Study Limitations
This study contributes to the literature by proposing an integrated interpretive model positioning personalized generative AI as an instructional design assistant explicitly embedded within constructive alignment and active learning frameworks. It advances the debate beyond binary narratives of technological enthusiasm or alarmism by emphasizing conditional effectiveness.
Nevertheless, the study remains documentary in nature. It does not empirically measure learning gains associated with personalized AI integration. Future research should include experimental and quasi-experimental designs assessing impacts on cognitive engagement, performance, and long-term knowledge retention.
Despite this limitation, the study provides a theoretically grounded synthesis capable of guiding institutional policy and empirical research design.
This study set out to analyze the contributions, uses, and effectiveness conditions of personalized generative AI in university instructional design, particularly in support of active learning methods. Through a structured documentary analysis complemented by the examination of a pedagogical artifact, the findings provide a theoretically grounded synthesis of a rapidly evolving field.
First, the results demonstrate that generative AI realizes its educational potential most fully when integrated upstream within instructional design processes rather than used downstream as a content production shortcut. When configured through explicit pedagogical instructions, disciplinary frameworks, and institutional constraints, personalized generative AI strengthens constructive alignment by supporting the clarification of learning outcomes, the structuring of pedagogical sequences, and the development of coherent assessment tools.
Second, the study confirms that generative AI can act as a facilitator of active learning by reducing design workload and increasing the diversity of learning situations. However, its pedagogical value is conditional. AI-generated activities do not automatically foster high-level cognitive engagement. Without explicit instructional framing, activities may remain at a superficial level of engagement. Thus, AI functions as a pedagogical amplifier: it enhances well-structured instructional intentions but may reproduce low-engagement patterns when design principles are weakly articulated.
Third, assessment emerges as the most sensitive domain in the era of generative AI. Rather than being merely a threat to academic integrity, AI reveals structural fragilities in traditional evaluation formats. The findings support a shift toward process-oriented, justificatory, and metacognitive assessment models that preserve validity in technologically mediated environments.
Finally, effective integration of personalized generative AI depends on institutional governance, ethical regulation, and faculty capability development. AI literacy, prompt engineering competencies, and reflective pedagogical expertise become central professional requirements. Far from marginalizing instructors, generative AI redefines and reinforces their role as designers, regulators, and guarantors of epistemic rigor.
In conclusion, the key issue for higher education institutions is not whether generative AI should be integrated, but how it can be embedded within coherent pedagogical frameworks that preserve the core mission of universities: cultivating critical thinking, intellectual autonomy, and responsible knowledge production in a technologically transformed academic landscape.
[1] UNESCO, Guidance for generative AI in education and research, UNESCO Publishing, Paris, 2023.
[2] Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., et al., “ChatGPT for good? On opportunities and challenges of large language models for education,” Learning and Individual Differences, 103, 102274, 2023.
[3] Memarian, B., and Doleck, T., “ChatGPT in education: Methods, potentials, and limitations,” Computers & Education: Artificial Intelligence, 4, 100094, 2023.
[4] Bittle, K., and El-Gayar, O., “Generative AI and academic integrity in higher education: A systematic review and research agenda,” Information, 16(4), 296, 2025.
[5] Dos, I., “A systematic review of research on ChatGPT in higher education (January 2023–March 2025),” The European Educational Researcher, Advance online publication, 2025.
[6] Biggs, J., “Constructive alignment in university teaching,” HERDSA Review of Higher Education, 1, 5–22, 2014.
[7] Freeman, S., Eddy, S.L., McDonough, M., Smith, M.K., Okoroafor, N., Jordt, H., and Wenderoth, M.P., “Active learning increases student performance in science, engineering, and mathematics,” Proceedings of the National Academy of Sciences, 111(23), 8410–8415, 2014.
[8] Chi, M.T.H., and Wylie, R., “The ICAP framework: Linking cognitive engagement to active learning outcomes,” Educational Psychologist, 49(4), 219–243, 2014.
[9] Knoth, N., Tolzin, A., Janson, A., and Leimeister, J.M., “AI literacy and its implications for prompt engineering strategies in education,” Computers & Education: Artificial Intelligence, 6, 100225, 2024.
[10] National Institute of Standards and Technology (NIST), Artificial Intelligence Risk Management Framework (AI RMF 1.0), U.S. Department of Commerce, 2023.
[11] European Union, Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act), Official Journal of the European Union, 2024.
[12] Tranfield, D., Denyer, D., and Smart, P., “Towards a methodology for developing evidence-informed management knowledge by means of systematic review,” British Journal of Management, 14(3), 207–222, 2003.
[13] Creswell, J.W., and Poth, C.N., Qualitative Inquiry and Research Design: Choosing Among Five Approaches, 4th ed., SAGE Publications, Thousand Oaks, 2018.
[14] Braun, V., and Clarke, V., “Using thematic analysis in psychology,” Qualitative Research in Psychology, 3(2), 77–101, 2006.
Published with license by Science and Education Publishing, Copyright © 2026 Mulwani Makelele Basile, Sukadi Mangwa Christelle and Nzuzi Mavungu Gaël
This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit
http://creativecommons.org/licenses/by/4.0/