
TYPE Systematic Review
PUBLISHED 25 April 2025
DOI 10.3389/feduc.2025.1522841

A systematic review of the early impact of artificial intelligence on higher education curriculum, instruction, and assessment

Jingjing Liang*, Jason M. Stephens and Gavin T. L. Brown

Faculty of Arts and Education, The University of Auckland, Auckland, New Zealand

OPEN ACCESS

EDITED BY
Xinyue Ren, Old Dominion University, United States

REVIEWED BY
Lina Montuori, Universitat Politècnica de València, Spain
Dennis Arias-Chávez, Continental University, Peru

*CORRESPONDENCE
Jingjing Liang
[email protected]

RECEIVED 05 November 2024
ACCEPTED 31 March 2025
PUBLISHED 25 April 2025

CITATION
Liang J, Stephens JM and Brown GTL (2025) A systematic review of the early impact of artificial intelligence on higher education curriculum, instruction, and assessment. Front. Educ. 10:1522841. doi: 10.3389/feduc.2025.1522841

COPYRIGHT
© 2025 Liang, Stephens and Brown. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Introduction: The emergence of generative artificial intelligence (AI) presents many opportunities and challenges to teaching and learning in higher education. However, compared to student- or administration-facing AI, little attention has been given to the impact of AI on faculty's perspectives or their curriculum, instruction, and assessment (CIA) practices.

Methods: To address this gap, we conducted a systematic review of articles published within the first nine months following the release of ChatGPT. After screening following PRISMA statement guidelines, our review yielded 33 studies that met the inclusion criteria.

Results: Most of these studies (n = 17) were conducted in Asia, and simulation and modeling were the most frequently used methods (n = 15). Thematic analysis of the studies resulted in four themes about the impact of AI on the CIA triad: (a) generation of new material, (b) reduction of staff workload, (c) automation/optimization of evaluation, and (d) challenges for CIA.

Discussion: Overall, this review informs the promising contribution of AI to higher education CIA practices as well as the potential challenges and problems it introduces. Implications for future research and practices are proposed.

KEYWORDS
artificial intelligence, large language models, curriculum, instruction, assessment, systematic review

Introduction
Large language models (LLMs) aim to simulate the natural language processing
capabilities of human beings (Cascella et al., 2023), particularly understanding, translating,
and generating texts or other content. The introduction of LLMs, such as ChatGPT
and other generative artificial intelligence (AI), has created interesting possibilities and
challenges for all educational systems. For instance, while AI can provide opportunities for
instructors to personalize learning and provide students with more immediate feedback
(Fauzi et al., 2023), it can raise concerns about academic integrity and the propagation of
biased or inaccurate information. Tensions over the legitimacy of AI in higher education
have placed significant pressure on academics and students. Much of the extant research
on AI has focused on students (e.g., Chan and Hu, 2023; Crompton and Burke, 2023)
or administrators (e.g., Nagy and Molontay, 2024; Teng et al., 2023). However, how
academics, in their role as educators, perceive, use, and adapt to AI tools is still under-
researched, particularly when many academics have reported insufficient AI literacy
(Alexander et al., 2023).
Given that AI tools are increasingly being used in higher education with a strong
potential to transform higher education teaching, learning, and assessment, it is important

Frontiers in Education 01 frontiersin.org


Liang et al. 10.3389/feduc.2025.1522841

to systematically synthesize early empirical evidence regarding AI's impact, identify trends and patterns in the literature, and further inform AI policy, research, and practices. Therefore, this study aims to fill the gap through a systematic review driven by the overarching question: How has AI affected the teaching, curriculum design, or assessment practices of academics in higher education (HE)? Specifically, this systematic review aimed to explore what the first wave of research following the release of ChatGPT in November 2022 had focused on and found with respect to the impact of AI tools in HE. In particular, we wanted to understand how AI technologies were affecting curriculum, instruction, and assessment processes to identify pros and cons that might inform promising pathways as well as potential challenges and problems. To complement those insights, we also wanted to identify where this early research was being conducted, what methods were used by researchers, and which aspects of AI were of concern. We hope this contextual information helps readers better understand the applicability of results to their own jurisdictions or situations. By doing so, we provide an overview of how the field is handling these new technologies to change or adapt academics' work in terms of curriculum, instruction, and assessment.

The higher education curriculum-instruction-assessment (CIA) triad

All educational systems must make decisions concerning what they teach (i.e., curriculum), how they teach it (i.e., instruction), and how they evaluate student learning (i.e., assessment). Normally, curriculum decisions (e.g., what to teach and the order in which to teach it) lead to instructional decisions (e.g., how the material is to be introduced, and which methods might best help students learn it), and culminate in assessment and evaluation decisions (e.g., how many assessments of what type and when those assessments will take place). Thus, curriculum, instruction, and assessment comprise the essential triad of all educational practices (Pellegrino, 2006). Higher education systems give academics considerable autonomy over these decisions based on their higher research degrees and contribution to research outputs within their disciplines. While professional certifying bodies have some control over what must be covered, universities give academics responsibility for deciding how to organize, teach, and assess learning in their courses.

The CIA triad has been demonstrated to be highly related to the quality of specific programs and the college students they prepare for the future (Merchant et al., 2014; Sadler, 2016). However, HE settings are likely to shift considerably in the AI era: the curriculum might not just reflect the logic of specific disciplines but also include AI-related content; instructional practices may need to adapt to the co-existence of AI teachers; and assessment practices might include students' understanding and competencies related to AI use. In this light, understanding the benefits that AI brings to HE curriculum, instruction, and assessment could help academics make full use of the technology to reduce workloads (Holmes et al., 2023; Pereira et al., 2023) and improve productivity. Meanwhile, noticing some threats can remind academics to be prepared for negative impacts on college students' engagement and learning.

Method

A systematic review of the literature was carried out by the first author in three databases: Scopus, Web of Science (WoS), and EBSCOhost. These databases are major research databases, varying in coverage, content, disciplines, and languages (Stahlschmidt and Stephen, 2020). They can complement each other and provide us with high-quality and relevant literature. To establish trustworthiness, the research team made agreements on search terms and initial inclusion and exclusion criteria before the first author identified the literature. To answer the research question, search terms were trialed iteratively to retrieve relevant literature on how AI has influenced curriculum, instruction, and assessment in higher education (HE). Synonyms for "AI" (e.g., ChatGPT), "teaching" (e.g., instruction), "curriculum" (e.g., planning), or "assessment" (e.g., evaluation) were searched within the title, abstract, keywords, or anywhere in the record. Search terms were then finalized and used identically in each database: ("artificial intelligence" OR "generative artificial intelligence" OR "generative AI" OR "Gen-AI" OR "ChatGPT" OR "GPT*") AND (("higher education") AND ("teaching" OR "assessment" OR "evaluation" OR "feedback" OR "curriculum" OR "instruction*" OR "lesson" OR "planning" OR "delivery" OR "implementation")). A total of 2,810 articles were identified.

Filters were set only to include peer-reviewed journal articles published in English from December 2022 to the end of the search in August 2023. The first 9 months of literature could capture the critical early phase, when educators and researchers started to publish their responses to newly released AI tools, such as ChatGPT. Filtering only to include peer-reviewed journal articles helped ensure the quality of literature in the search phases. The time frame was chosen to return the earliest possible exploration of the impact of AI, immediately following the release of a demo of ChatGPT on 30 November 2022.

TABLE 1 Inclusion and exclusion criteria.

Inclusion criteria:
1. Articles present an analysis of empirical data, written in English and published in peer-reviewed journal articles.
2. Articles about how AI influences any one or more of three aspects of HE curriculum, instruction, and assessment (e.g., curriculum design, instructional planning, delivery, assessment, evaluation).

Exclusion criteria:
1. Articles about HE curriculum, instruction, and assessment but not related to how AI impacts them.
2. Articles about broad perspectives on AI (e.g., benefits, weaknesses, preparation) rather than its impact on HE curriculum, instruction, and assessment.
3. Articles about the impact of AI on non-HE curriculum, instruction, or assessment (e.g., school contexts).

Moreover, articles in this review were limited to empirical articles on AI's impact on HE curriculum, instruction, and assessment (see Table 1). To be included, articles had to report a relationship between AI and any one or more of three aspects of HE curriculum, instruction, or assessment. Articles regarding the


FIGURE 1
PRISMA flowchart of the literature search process.

impact of AI on curriculum, instruction, and assessment in non-HE contexts were excluded.

Search process

After removing duplications, 279 records were obtained for screening following the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines (see Figure 1; Moher et al., 2009). PRISMA guidelines provide a structured framework for searching, identifying, and selecting articles, as well as extracting, analyzing, and synthesizing data to address specific research questions. These guidelines help researchers ensure the quality of the review, minimize bias, and maintain transparency and replicability (Moher et al., 2009).

Specifically, the screening process involved title and abstract screening and full-text screening. The titles and abstracts of these records were assessed using the agreed inclusion and exclusion criteria (see Table 1), resulting in the exclusion of 206 records. These records were excluded because their titles and abstracts showed that (a) they did not investigate how AI affected HE curriculum, instruction, and assessment (n = 135), (b) they lacked empirical evidence (n = 63), or (c) they did not focus on university contexts (n = 8).

The remaining 73 records were downloaded for full-text screening. The articles were read and evaluated against the inclusion and exclusion criteria, and those that did not meet the inclusion criteria were removed. Specifically, studies that introduced AI or HE curriculum, instruction, and assessment but did not actually explore the relationship between them were excluded (n = 32). Other articles were removed because they (a) did not have empirical evidence (n = 4), (b) were in a non-HE context (n = 1), (c) were not available as full text (n = 2), or (d) were not in English (n = 1). Consequently, a total of 33 articles were included for review.

Whenever either author was unsure if a specific article should be included, the content of that article was discussed against the research question and focus of this review. These discussions resulted in refining the inclusion and exclusion criteria and a consensus on the included articles.

Data extraction and analysis

Due to the exploratory nature of this research, an inductive thematic analysis (Braun and Clarke, 2006) was conducted to identify key patterns of the impact of AI on HE curriculum, instruction, and assessment. The first author read the 33 articles thoroughly and extracted key information from each paper, including citations, context, sample size, data collection method, measurement, and the impact on HE curriculum, instruction, and assessment. With an eye to finding answers to the research question, meaningful segments, such as "AI tools allow educators to/provide students with..." and "the challenge is," were used to identify descriptive codes regarding how AI influences HE curriculum, instruction, and assessment.
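As a quick arithmetic check, the screening counts reported above can be reconciled in a few lines of Python. This is an illustrative sketch; the dictionary labels are our shorthand for the exclusion reasons, not terms taken from the PRISMA statement.

```python
# Reconcile the screening counts reported in the Method section.
identified = 2810          # records retrieved from Scopus, WoS, and EBSCOhost
after_deduplication = 279  # records entering title/abstract screening

# Title/abstract screening exclusions
title_abstract_excluded = {
    "no AI-CIA relationship": 135,
    "no empirical evidence": 63,
    "not a university context": 8,
}
full_text_screened = after_deduplication - sum(title_abstract_excluded.values())
assert full_text_screened == 73  # records downloaded for full-text screening

# Full-text screening exclusions
full_text_excluded = {
    "relationship not actually explored": 32,
    "no empirical evidence": 4,
    "non-HE context": 1,
    "full text unavailable": 2,
    "not in English": 1,
}
included = full_text_screened - sum(full_text_excluded.values())
print(included)  # 33 articles included for review
```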


Twenty-five initial descriptive codes (e.g., improve teaching effectiveness, challenge the role of educators, assess teaching effect) were captured. Then, the similarities and differences between each code were iteratively compared to identify high-level categories. For instance, codes such as "challenge instructors' AI teaching competencies," "ethical consideration," and "lack of support in AI teaching" were integrated into a category named "challenge existing teaching." Based on the raw data, research questions, and conceptual framework, similar categories were further reviewed and merged into four key themes. Articles could be assigned to more than one theme when multiple themes were present. Please see Appendix A for complete details of themes, categories, and codes.

During the data extraction and analysis stage, the first author coded the key information from each study to address the research questions. The other authors critically read and reviewed the coding results, final synthesis, and interpretation of the themes. Any uncertainty about internal homogeneity and external heterogeneity (Patton, 2003) among codes, categories, and potential themes was discussed at regular meetings.

Results

Nature of studies

Table 2 shows the characteristics of the regions where the 33 studies were conducted, as well as the methods utilized to explore the impact of AI on HE curriculum, instruction, and assessment. Details of which papers are in each category are provided in Appendix B. Sixteen countries around the world have contributed to this field. Asia, predominantly China, accounted for 17 of the 33 studies. As Table 2 shows, the balance was distributed widely across the world.

TABLE 2 Study characteristics: number of publications by region, methods, and foci.

The region where the study was conducted
  Asia (i.e., Mainland China, Hong Kong, India): 17
  Europe: 8
  North America: 5
  Latin America (i.e., Brazil, not specified): 3
  Middle East (i.e., Oman, Turkey): 2
  Australia: 2
Methods
  Modeling/simulation: 15
  Experiment: 7
  Interview: 7
  Survey: 6
  Others (e.g., discussion, workshop, open-ended questions, observation): 6
  Case study: 3
  Mixed methods: 2
Foci
  Technology: 16
  Human experience: 10
  Use of AI in class: 7
Education dimension
  Curriculum: 9
  Instruction: 21
  Assessment: 17

Note: The number of included studies is more than 33 because some were conducted in cross-national contexts, used multiple research methods, and/or focused on multiple dimensions.

Regarding research methods, 15 of the studies used modeling or simulation methods to design, implement, and test the accuracy and effect of AI tools. For instance, Shi (2023) designed a teaching mode based on the neural network model to provide students with personalized resources and assignments in moral education. This intelligent mode was then tested by simulating different teaching scenarios, and its accuracy and practical effect were confirmed. Each of the following methods was used in six or seven studies: (a) experimental designs comparing an AI intervention group with a control group, (b) surveys, or (c) interviews. For instance, Farazouli et al. (2024) conducted blinded Turing test experiments by inviting instructors to examine AI-generated texts and student-written texts, and interviewed instructors for their perceptions of the quality of assessed texts and whether they were worried that AI had written the text. A small number of studies used one of a set of diverse methods (e.g., case study, workshop, observation, discussions, etc.).

Three distinct foci of AI were examined. The most common focus in 16 studies was the technological dimensions of AI, such as designing and modeling an AI tool for HE curriculum, instruction, and assessment and testing the accuracy of this tool itself. Computer science and engineering researchers tended to focus on these technological aspects. The human dimension of AI experience was the focus of 10 studies and seen mostly in social science research. These articles examined how university teachers perceived the impact of AI on their curriculum, instruction, and assessment. Just seven studies highlighted how AI supported curriculum, instruction, and assessment.

The focus of AI in higher education was classified according to the CIA triad. As shown in Figure 2, 22 of the studies addressed just one of the three aspects, with most being in instruction and assessment. Just 11 studies attempted an integration between two or more of the three aspects. Of the 33 studies, taking into account all overlapping categories, 21 (64%) papers had something to do with instruction, about half had something to do with assessment (17, 52%), and about a quarter focused on curriculum (9, 27%).

Thematic analysis

Based on thematic analysis of the articles (their purposes and findings), four key themes were identified: (a) generation of new material, (b) reduction of staff workload, (c) automation/optimization of evaluation, and (d) challenges for curriculum, instruction, and assessment.
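The overlap percentages reported above can be reproduced from the education-dimension counts in Table 2 (a minimal sketch; the variable names are ours, and studies may appear in more than one category):

```python
# Reproduce the CIA-coverage percentages from the education-dimension
# counts in Table 2 (categories overlap, so they sum to more than 33).
total_studies = 33
coverage = {"instruction": 21, "assessment": 17, "curriculum": 9}

for dimension, n in coverage.items():
    print(f"{dimension}: {n} ({round(n / total_studies * 100)}%)")
# instruction: 21 (64%), assessment: 17 (52%), curriculum: 9 (27%)
```

The single-aspect and multi-aspect counts also reconcile: 22 studies addressing one aspect plus 11 integrating two or more gives the 33 included studies.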


FIGURE 2
AI's impact on the CIA triad in HE: a Venn diagram of the number of published articles.

While we analytically identify specific aspects, it should be remembered that even when curriculum, instruction, or assessment is mentioned separately, many of those studies have connections with one or more of the other topics. For example, reference to curriculum is usually related to how instruction could be done, while reference to assessment is linked with how AI resources can be used for instruction or curriculum, and so on.

Generation of new material

Ten studies described the ample new material AI provides for curriculum preparation and instruction implementation. Attributes mentioned include providing various resources and generating new teaching content, building an immersive learning environment, and improving or replacing existing teaching modes with a new teaching approach (Al-Shanfari et al., 2023; Chen et al., 2023; Guo, 2023; Pisica et al., 2023; Pretorius, 2023; Shi, 2023; Wang, 2023; Yang, 2023; Li and Zhang, 2023; Zhu, 2023).

Generate new curriculum content

Two studies examined how academics perceived the influences of AI on specific subject-related curricula and teaching, one in data science and one in English translation (Chen et al., 2023; Wang, 2023). Both studies conducted focus group interviews, and revealed that AI, at curriculum levels, could provide instructors and students with new, rich, and personalized materials, contributing to curriculum design and development and the facilitation of course preparation. According to Pisica et al. (2023), 18 academics from Romanian universities reported the benefits of AI in curriculum, which included generating new content for existing courses and developing new curricula or disciplines.

Provide an immersive learning environment

AI technology, such as the smart classroom, enables the simulation of the atmosphere of a "real" classroom, practicum, or internship, in which students could better understand and practice what they had learned (Wang, 2023; Zhang et al., 2023). For instance, Wang (2023) stated that AI could make teaching content visualizable; that is, students could practice key communication competencies in a virtual community of practice, which improves teaching efficiency. Additionally, Zhang et al. (2023) designed and experimented with an intelligent classroom for English language and literature courses in China, and found that this AI tool provided the experimental group with a good learning environment and enhanced students' language proficiency.

Offer a new teaching mode

A large body of research has designed and implemented AI tools (e.g., speech recognition, ChatGPT) in HE teaching, providing new teaching modes with good accuracy and effectiveness (Al-Shanfari et al., 2023; Chen et al., 2023; Guo, 2023; Pisica et al., 2023; Pretorius, 2023; Shi, 2023; Yang, 2023; Zhu, 2023; Li and Zhang, 2023). Guo's (2023) study, conducted in the Chinese context, showed that a newly designed speech recognition method, based on a recurrent neural network algorithm, had a better accuracy rate and faster convergence, could replace the previous method, and effectively addressed issues of the low speech recognition rate caused by noisy environments. In addition, two studies in multimedia teaching or moral education (Shi, 2023; Yang, 2023) conducted simulation experiments, suggesting that the new AI-powered teaching mode stimulated students' multiple senses, improved learning and teaching efficiency, and appeared to be much more effective than traditional teaching modes, which to some extent hindered students' originality and interest in learning. The simulation results also suggested that the AI-powered teaching mode had the potential to be implemented in real classrooms.

Reduction of staff workload

Ten studies have demonstrated that AI could support staff in curriculum, instruction, and assessment by reducing their logistical workloads, especially in terms of labor related to curriculum design, interactions with students, delivering personalized instruction, and preparing adapted or personalized assignments (e.g., Holmes et al., 2023; Pereira et al., 2023; Sajja et al., 2023; Devi and Rroy, 2023).

Work as a curriculum assistant

AI could work as a virtual curriculum assistant that helps address students' time-consuming and repetitive questions about the curriculum (e.g., content, time, deadlines), reduce instructors' logistical workloads, and give them more time to improve teaching quality and support students' development (Sajja et al., 2023). For example, Sajja et al. (2023) used the syllabus and other teaching materials to design a curriculum-oriented intelligent assistant and found that this virtual TA effectively provided accurate course information and improved students' course engagement.

Additionally, AI has been demonstrated to help instructors reflect on curriculum and content difficulty. One study investigated using an AI toolkit to collect students' assessment data and further support teachers' reflections on curriculum design (Phillips et al., 2023).
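Phillips et al. (2023) estimated reading demand with skip-gram word embeddings; as a rough caricature of the same alignment check, a much simpler readability proxy can stand in. This is an illustrative sketch only: the proxy formula, function names, and example texts are our own, not the authors'.

```python
# Illustrative sketch: flag assessment passages whose reading demand clearly
# exceeds that of the instructional texts. A crude proxy (average sentence
# and word length) stands in for the skip-gram embedding model actually
# used by Phillips et al. (2023).

def reading_demand(text):
    """Crude readability proxy: longer sentences and words -> higher demand."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    words = text.split()
    avg_sentence_len = len(words) / len(sentences)
    avg_word_len = sum(len(w) for w in words) / len(words)
    return avg_sentence_len + 5 * avg_word_len

def assessment_aligned(instruction_text, assessment_text, tolerance=1.05):
    """True when the assessment does not demand clearly harder reading."""
    return reading_demand(assessment_text) <= tolerance * reading_demand(instruction_text)

lecture = "The water cycle moves water between land and sky. Rain falls and rivers flow."
exam = "Precipitation subsequently replenishes subterranean aquifers notwithstanding evapotranspiration."
print(assessment_aligned(lecture, lecture))  # True: same material
print(assessment_aligned(lecture, exam))     # False: exam demands harder reading
```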


The study evaluated the reading demand (using skip-gram word embedding) of passages in assessments (e.g., exams) against the demand of texts and lectures used to support instruction, on the assumption that reading in an assessment should not be harder than that used in instruction. The AI tool predicted the difficulty of course materials, including recorded lectures and assessment materials, in a similar way to lecturers' self-reported material difficulty. Not only would this tool ensure the alignment of assessment reading materials with course reading materials, but it would also provide valid evidence for the assessment materials.

Personalized instruction

Applying AI technologies can facilitate analyzing students' learning procedures, performance, and needs, providing instructors with timely feedback, and assisting them in delivering adaptive instruction. Consequently, teaching and learning effects were somewhat improved (Al-Shanfari et al., 2023; Firat, 2023; Kohnke et al., 2023; Li L. et al., 2023; Li Q. et al., 2023; Pisica et al., 2023; Wang, 2023; Li and Wu, 2023). By implementing embedded glasses in real classrooms, Li L. et al. (2023) showed that this device helped instructors recognize and process students' real-time images and emotions and keep abreast of their learning status, and this information further provided timely feedback to instructors to change their teaching strategies. Therefore, compared to the control group, the teaching effect of the experimental group increased by 9.44%, and students reported more satisfaction with teaching. Similarly, a new piano teaching mode powered by a vocal music singing learning system has been demonstrated to be relatively successful: it not only made piano teaching more personalized and intelligent, and increased teaching efficacy by 7.31% compared to the traditional teaching mode, but also motivated students to engage more in piano practice time and classroom participation (Li Q. et al., 2023).

Prepare personalized assignments

A new assessment method driven by AI tools could help instructors prepare personalized assignments. Pereira et al. (2023) described how an emerging recommender system generated equivalent questions for assignments and exams, to enhance the variation of assignments and support instructors in preparing individualized assignments and minimizing plagiarism. They also indicated that this recommender system was confirmed to be accurate after instructors evaluated the equivalence (e.g., interchangeability, topic, and coding effort) of AI-created questions to the questions instructors had provided.

Automation/optimization of evaluation

Many scholars have investigated the potential of using AI in HE assessment and evaluation.

Assess students' learning process and outcomes

AI is found to accurately assess students' learning process and outcomes, and further determine teaching effect (Novais et al., 2023; Saad and Tounkara, 2023; Wang et al., 2025; Zhu et al., 2023). For instance, Archibald et al. (2023) showed that an AI-enabled discussion platform accurately calculated students' curiosity scores to present their engagement in discussion, further reducing teachers' assessment workload and facilitating their intervention based on the quality of posts written by students. A new assessment method driven by AI tools (i.e., a backward propagation neural network) could automatically evaluate teaching, learning, and grading in an experiential online course in agriculture (Kumar et al., 2023).

Using experiments with small samples, Zhu et al. (2023) developed in China an AI tool to predict students' performance based on their classroom behavior and previous performance. They suggested that this tool could be used to adjust instructors' teaching strategies and improve teaching quality. Similarly, Tang et al. (2023) discussed how a designed intelligent evaluation system could better recognize voices, faces, postures, and teaching skills in microteaching skill training, accurately assess preservice teachers' teaching performance, and provide accurate guidance. Moreover, Saad and Tounkara (2023) used students' information in distance learning, including class participation frequency and quality, absence rate, contribution to online group work, and utilization of learning resources, to establish a preference model for instructors that could quickly recognize students at risk of dropping out and leader students who could help their peers. They found that this model correctly assigned 85% of students to the correct clusters (i.e., at risk or leader), and assisted instructors in making correct decisions.

Besides evaluating students' cognitive-related outcomes, researchers have also used AI to assess students' non-cognitive outcomes (e.g., emotions, attitudes, and values). For instance, Novais et al. (2023) designed an evaluation fuzzy expert system and employed it to build profiles of students' soft skills (e.g., communication and innovation skills, management skills, and social skills). AI-generated scores were compared with real scores, providing reliable feedback to instructors and students.

Assess teaching effect

Wang et al. (2025) combined human-computer interaction and a deep learning algorithm to design an intelligent evaluation system for innovation and entrepreneurship. The system could detect students' attitudes and behaviors and assess teachers' teaching preparation, language expression, content mastery, and teaching design. The operability of this system was further supported by assessing the teaching quality and effect of two classes, and the AI results showed that both classes' teaching quality scored almost 7 out of 10, suggesting a need to improve.

Challenges for CIA

Besides the above advantages, some challenges brought by AI to HE curricula, instruction, and assessment are described in six studies.

Challenge existing curricula

AI is found to bring many challenges to curriculum developers and existing curricula, especially in deciding what content is more valuable, how to integrate AI into the current curriculum, and how to prepare students with digital literacy. In order to address these questions, Lopezosa et al. (2023) interviewed 32 journalism faculty members from Spain and Latin America about how they perceived this new technology; however, no consensus on whether to integrate AI into the curriculum was identified. Although most

Frontiers in Education 06 frontiersin.org


Liang et al. 10.3389/feduc.2025.1522841

faculties embraced AI technology and suggested establishing AI as a standalone subject, some stated that challenges, limitations, and uncertainty about AI in education should be thoroughly researched before incorporating it into the curriculum. Some individuals suggested a compromise idea of integrating AI into communication subjects as a preliminary step (Lopezosa et al., 2023).

Challenge existing instruction

There are some concerns about using AI in HE instruction, including challenging teachers' AI teaching competencies, ethical considerations, and lack of teaching support. Chan (2023) indicated that AI may cause overdependence on technology and weaken social connections between teachers and students. In this light, Firat (2023) indicated that implementing AI may require educators to change their role from being instructors to guides or facilitators. Furthermore, based on interviews with 12 university teachers in Hong Kong, Kohnke et al. (2023) found that AI challenged participants' teaching competencies about teaching students how to judge AI-generated text critically, use AI tools ethically, and foster digital citizenship.

Ethical concerns in instruction include incorrect or fabricated information, accessibility, and algorithm biases (Firat, 2023). According to a teaching reflection of an educator from Monash University, Pretorius (2023) taught postgraduate students how to use generative AI effectively by giving them examples of communicating with generative AI to brainstorm and design research questions. Consequently, her course achieved good teaching feedback. However, Pretorius realized that incorrect or biased information produced by ChatGPT, as well as unequal access to AI caused by distinct socioeconomic status, required educators to shift their ability to prepare students with AI literacy for using AI professionally and ethically. Firat (2023) also mentioned over-reliance on AI, data privacy, and unequal access to AI tools as challenges.

Another concern centers on inadequate technical support and training in integrating AI into teaching. For instance, Al-Shanfari et al. (2023) utilized a mixed-method study to understand how aware, prepared, and challenged instructors were in integrating intelligent tutoring systems (ITS) in Omani universities. They found that most participants considered ITS effective in providing customized instruction; however, the lack of support and guidance in using ITS brought the instructors substantial challenges. As one participant said, "Teaching approaches at my university are not supporting the use of ITS" (p. 956). Similarly, Chen et al. (2023) interviewed 16 faculty members in data science and revealed that inconsistent definitions of data science, inadequate team support, and lack of collaboration platforms were major challenges.

Challenge existing assessment methods and strategies

While there are various opportunities for HE assessment, several challenges exist and need to be addressed. The most frequently mentioned challenge is that AI has been proven to pass many examinations and assignments. Consequently, some students may use it to cheat or plagiarize. For instance, Chan (2023) stated that new concerns in HE assessment have emerged, as most students and teachers are worried that some students use AI tools to cheat and plagiarize, and teachers could not identify such dishonesty correctly. Similarly, Kohnke et al. (2023) found that AI challenged the current assessment system, as instructors were worried that AI tools are so convenient that students can easily cheat rather than work independently.

Moreover, it is hard for humans or AI detectors to identify AI-generated texts or assignments, which in turn challenges existing assessment practices and strategies. A case study conducted in an Australian Master's program for Geographic Systems and Science found that ChatGPT, acting as a fictional student, effectively completed most assignments (e.g., coding; Stutz et al., 2023). Although AI detectors identified it, lecturers did not recognize that AI had generated the answers and gave a grade of "satisfactory." Stutz et al. (2023) also discussed the challenge ChatGPT poses to traditional evaluation methods and called on researchers and practitioners to rethink learning objectives, content, and assessment approaches. Assessments relying on oral exams or video conferences were suggested as alternatives that were resistant to AI dishonesty. In a similar study, both AI-generated and student-written texts were assessed by AI detectors and six English as a Second Language (ESL) lecturers from Cyprus (Alexander et al., 2023). It was found that AI detectors worked more effectively in identifying AI-generated texts than humans, and AI, to some extent, challenged lecturers' previous evaluation criteria and strategies. Lecturers seemed to conduct deficit assessment strategies and considered that AI-generated texts were characterized as having fewer grammar errors and more accurate expressions. Therefore, the authors recommended improving instructors' digital literacy and rethinking assessment policies and practices in the AI era. Similar findings were shown in Sweden, where Farazouli et al. (2024) conducted a Turing test among 24 university teachers in humanities and social sciences. They found that teachers tended to be critical about students' texts, underestimated students' performance, and doubted that some student texts had been finished by GPT. These concerns negatively influenced the trust relationship between teachers and students.

Discussion

This study examined how AI influences HE curriculum, instruction, and assessment by reviewing 33 recent articles. We summarize the review within a SWOT analysis (Gurl, 2017) framework to provide a structured account of the strengths, weaknesses, opportunities, and threats of AI in terms of higher education curriculum, instruction, and assessment.

Benefits of AI in higher education

The analysis of 33 recent studies provides empirical evidence as to the geographical distribution of research, research methods, research foci, and the impact of AI on the CIA triad in higher education. Our results showed that most research was conducted in Asia, Europe, or North America. Consistent with findings indicating a rapid trend in Chinese research on AI in higher education (Crompton and Burke, 2023), China accounted for most studies in this review. One possible reason is that AI has been considered a priority in the Chinese government's agenda (State Council of PRC, 2017) and is thus highly emphasized in education.
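The detector studies reviewed above (Stutz et al., 2023; Alexander et al., 2023; Farazouli et al., 2024) all reduce a judgment about a text to a binary authorship call. As a purely illustrative sketch of the kind of surface statistic such comparisons involve — none of the reviewed studies publish their detection algorithms, and the function names, the sentence-length "burstiness" proxy, and the threshold below are assumptions of ours rather than any published method — a toy heuristic might look like:

```python
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence length in words.

    A crude proxy: human prose tends to vary sentence length more than
    generated prose. NOT a reliable detector, only an illustration."""
    # Normalize terminal punctuation, then split into sentences.
    for mark in ("?", "!"):
        text = text.replace(mark, ".")
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

def flag_as_possibly_ai(text: str, threshold: float = 3.0) -> bool:
    """Flag unusually uniform text; the threshold here is arbitrary."""
    return burstiness(text) < threshold

uniform = ("The cat sat on the mat. The dog lay on the rug. "
           "The bird sang in the tree.")
varied = ("Stop. The committee, after much deliberation and no small "
          "amount of argument, finally agreed on a compromise. "
          "Everyone left happy.")
print(flag_as_possibly_ai(uniform))  # True: very uniform sentence lengths
print(flag_as_possibly_ai(varied))   # False: lengths vary widely
```

Commercial detectors rely on far richer model-based signals (e.g., token probabilities), yet, as the reviewed studies show, even those misclassify texts in both directions, which is precisely why the authors above call for rethinking assessment design rather than relying on detection.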


This review also indicated that simulation and modeling were the most frequently used methods to assess the potential impact of AI in the HE context (e.g., Phillips et al., 2023; Saad and Tounkara, 2023; Sajja et al., 2023; Shi, 2023). This finding might be related to research foci, as more attention has been given to testing the effectiveness of AI tools rather than to academics' perceptions and practices of AI tools in the real world.

Several benefits were identified in this review, such as generating new material, reducing staff workload, and evaluating automatically or optimally (e.g., Kumar et al., 2023; Pretorius, 2023; Shi, 2023). This review first reveals that AI can create new courses and resources, promote curriculum development, address time-consuming workloads concerning curriculum (e.g., questions about syllabi, time, and deadlines), and evaluate the material difficulty and quality (Chen et al., 2023; Lopezosa et al., 2023; Pisica et al., 2023; Wang, 2023). These findings reinforce earlier findings that the implementation of AI (e.g., ChatGPT) could contribute to generating a lesson plan and course objectives (Kiryakova and Angelova, 2023; Rahman and Watanobe, 2023) and to assessing general resources and textbooks (Koć-Januchta et al., 2022). AI has also been found to provide an immersive learning environment and a new teaching mode, where instructors facilitate students to conduct "trial-error" strategies and practice specific competencies in simulated scenes (e.g., Wang, 2023; Zhang et al., 2023). Meanwhile, AI, as virtual teachers, could take up logistical workloads (e.g., reinforce students' mastery of key concepts) and provide instructors time and energy to conduct personalized instruction and satisfy students' distinct needs (Al-Shanfari et al., 2023; Firat, 2023; Kohnke et al., 2023). These findings are in line with previous studies: AI, in most cases, worked well in sharing instructors' tutoring tasks, providing students with immediate and unique feedback, and reducing instructors' workload (Chou et al., 2011; Zawacki-Richter et al., 2019). Additionally, AI seems to benefit assessments by generating personalized assignments (Pereira et al., 2023), effectively assessing and predicting students' academic achievement (Wang et al., 2025) and non-cognitive outcomes (e.g., soft skills, Novais et al., 2023), identifying disadvantaged students (Saad and Tounkara, 2023), and assessing teaching effectiveness (Wang et al., 2025). This review finds evidence that AI-empowered assessment can effectively assess students' learning and teachers' teaching (Hooda et al., 2022; Zawacki-Richter et al., 2019).

Thus, AI has been found to bring benefits to HE curriculum, instruction, and assessment, including generating new materials, alleviating faculty workloads, and automating or optimizing assessment, in alignment with progressive literature (Chou et al., 2011; Rahman and Watanobe, 2023). These findings pave the way for future studies to ascertain the generalizability of the early promising results and the identification of conditions in which the early benefits actually occur. The benefits identified here suggest directions in which HE policy could go, provided appropriate infrastructure and training are given to academics.

Weaknesses in the research

This early research, however, is potentially problematic because of its narrowness. Specifically, research conducted in many regions, especially developing countries, is poorly represented. The currently available research has been conducted largely in Western, Educated, Industrialized, Rich, and Democratic (WEIRD; Henrich et al., 2010) societies. This means that there is a bias in what we can know since participants from other regions of the world are excluded. To the degree that cultural, historical, and developmental factors impinge upon the practice of higher education, more work with such populations is needed. Such research would enhance our understanding of how academics perceive the threats and opportunities of AI.

Another gap in the literature is the absence of research into the real world of higher education classroom pedagogical activities, course development, and assessment design. Comparatively, few studies have focused on the human experience of using AI, especially in classrooms (e.g., Al-Shanfari et al., 2023; Archibald et al., 2023; Farazouli et al., 2024). Related to this is the lack of cross-disciplinary collaborative research between computer scientists and social scientists. If AI tools are meant to make a difference to classroom teaching, learning, and evaluation, researchers from different backgrounds will need to collaboratively explore how AI technology could be used in educational practice.

Based on this review, future research will need to explore the following questions:

• How does AI influence the teaching, curriculum design, or assessment practices of academics in higher education in the Global South contexts? How does it differ from research conducted in the Global North? How can AI tools, policies, and practice become more culture-sensitive based on this comparison?
• What are the best practices of academics in teaching students to use AI ethically and responsibly?

Opportunities of AI in higher education

The presence of AI seems to create opportunities for academics in terms of revisions to existing courses and freeing up time to focus on improving existing curriculum, instruction, and assessment quality. These opportunities point to the development of interdisciplinary courses with the help of AI, especially in terms of course content and assessment design. One way to implement interdisciplinary approaches would be to integrate ethical considerations of using or relying on AI in philosophy or research methods courses. Another way is to use AI to bridge the intersections of different disciplines (e.g., Arts-Arts disciplines, Science-Science disciplines, and Arts-Sciences disciplines). An example in the Science-Science disciplinary intersection could be using AI to predict how air pollution (environmental science) affects health outcomes (healthcare).

Given the benefits AI brings to academics' instruction by providing an immersive learning environment and a new teaching mode, it may be feasible to establish a collaborative teaching system, where virtual teachers (i.e., AI) share intensive and repetitious teaching workloads (e.g., immediate feedback, knowledge reinforcement), and where human teachers pay attention to students' personal, emotional, and developmental needs and conduct one-to-one adaptive instruction. For instance, AI


teachers could automatically grade and constantly offer targeted practice for students, which would provide adaptive support to teachers. Consequently, developing AI-empowered student and teacher assessment models could be important research and practice directions.

Additionally, we suppose that student-facing AI assessment models can be implemented in three steps. Before the classroom, AI can be used to diagnose students' knowledge bases and help instructors better understand students' learning preferences, motivations, and needs. During the classroom, AI techniques (e.g., speech recognition, facial recognition) can be combined to collect students' facial expressions, emotions, gestures, classroom dialogue, and so on, and promptly analyze their learning engagement, behaviors, strategies, and difficulties. This information can inform instructors about students in need, possible changes in teaching strategies, and early advice on where to intervene. After the classroom, AI, working as a teaching assistant, could provide students with targeted assignments, facilitate individualized learning, and predict future performance based on current performance. Similarly, instructors' information (e.g., preparing lessons and teaching) could be collected into a digital profile for each instructor, informing assessments of their teaching performance, abilities, and professional development needs. It could inform faculty professional development programs.

Nevertheless, caution is still needed when embracing AI-generated assessment results, as some indicators (e.g., instructors' professional ethics) cannot be assessed effectively or, depending on the programming, could even be overlooked. Therefore, combining AI-generated and human-based assessments is necessary, respecting human beings' values and educational principles. The challenge of students' unsanctioned use of AI within assessment processes will require higher education to find valid ways of implementing or managing AI.

Threats AI brings to higher education

Indeed, an important threat AI brings to education is the requirement that all teaching and learning has to happen in an ICT environment, which could be seen as antithetical to the human in the human experience of learning (Brown, 2020). While AI seems to be able to do many things, it is simply programming and thus not human.

The literature reported here makes clear substantial challenges to curriculum, instruction, and assessment. Despite the importance of curriculum, this review found less research into AI's integration into HE curriculum than on the two other aspects of the CIA triad. In terms of existing curricula, there is considerable debate as to what students need to be taught about or with AI and how it could be integrated (Lopezosa et al., 2023). AI creates the possibility that skill with large language models (e.g., to analyze data, to compose communication) is what students might need in the future. Considerable enthusiasm exists for the integration of AI skills with other graduate attributes such as the 4C skills (i.e., communication, collaboration, critical thinking, and creativity). This is an extension of the long-standing arguments advanced by technologists that the best way to prepare future citizens and workers is to ensure they develop generic competencies rather than disciplinary specific knowledge and ability (Chickering and Ehrmann, 1996; Cuban, 2001). Consequently, faculty members need to consider the intersection of disciplinary structure and AI affordances and constraints in terms of integrating contemporary capabilities with long-standing traditions of knowledge.

The threat of AI applies also to instructors' roles and their teaching abilities. Most academics have little understanding of how AI tools are designed and what large language models can do. Thus, few have thought constructively about how to integrate AI into their teaching. The question is how AI tools, with their capacity to translate text, analyze it, and compose fluent but potentially meaningless text, can or should be integrated into diverse fields such as engineering, medicine, studio art, laboratory science, and so on. Application within the humanities may be much more feasible with the current capacities of GenAI, but still academics have to learn how AI can be an adjunct to teaching rather than potentially a substitute for the instructor's knowledge and skill. Enthusiasm of technologists for using machines to replace the labor of humans (Brown, 2020) is clearly a threat to the human-in-the-loop. This is all the more important because currently AI cannot identify fabrication or error in the text that it assembles.

The most important challenge centers around assessment and evaluation of learning. With the free access students have to powerful AI language models, it is difficult to ensure that the work submitted by students is their own genuine intellectual contribution. The fear and possibility of non-detectable academic dishonesty will require substantial efforts to ensure the integrity and social warrant (Brown, 2022) of course grades and academic qualifications. A possible response to generative AI capabilities is to impose invigilated in-person examinations without access to digital resources and without bring-your-own-devices. Another way to ensure the integrity of evaluation is to require students to participate in an oral examination of their learning; a solution that will have a large impact on workloads, efficiency, validity of sampling, and accuracy of scoring. It is clear generative AIs will force academics to rethink the purpose of assessment (e.g., student-centered or knowledge-based learning), the content and format of what is assessed, the design of assessments (e.g., process evaluation, outcome evaluation, or value-added evaluation), and the formative use of assessed performances.

Given the interactive and integrated nature of curriculum, instruction, and assessment processes, there simply is little research on AI's impact on their intersection. Indeed, only three papers attempted to address all three legs of the CIA triad. Future research will need to examine the integration of AI impact, rather than studying each aspect of the triad in isolation.

Limitations

Although this review explored three major education databases to minimize selection bias, the recent articles were published in English rather than in other languages, such as Chinese and Spanish. Therefore, the generalizability of these findings needs to be taken with caution for use in non-English contexts. Considering


that Asia accounted for a large number of studies and that a growing number of studies were conducted in South America and the Middle East, multi-lingual or culture-responsive studies should be conducted in the future. More importantly, this review was limited to the first 9 months following the release of ChatGPT on 30 November 2022; hence, it is very much a preliminary exploration of how AI has impacted higher education. In light of how quickly AI systems are being developed and changed, new research is being published constantly. Hence, the findings presented in this review have probably been superseded already.

Conclusion

This review contributes to a better understanding of the benefits and threats of AI that recent research has identified in the higher education context. It also identifies challenging opportunities for higher education institutions and faculty members. This paper offers a first step toward understanding the impact of AI on the CIA triad in higher education. While the future remains uncertain, several of the trends found in the study are likely to continue for some time to come. In particular, it seems very likely that China will continue to lead the way in research outputs and that studies using simulations/modeling are likely to remain the most common method, perhaps because they are relatively easy to conduct. It is also likely that the challenges associated with meaningful integration of AI into curriculum, instruction, and assessment will remain difficult for years to come.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

JL: Conceptualization, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing. JS: Conceptualization, Writing – review & editing. GB: Supervision, Writing – review & editing, Funding acquisition.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. The authors would like to acknowledge financial assistance from the University of Auckland Open Access Support Fund.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers, at the time of submission. This had no impact on the peer review process and the final decision.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2025.1522841/full#supplementary-material

References

Alexander, K., Savvidou, C., and Alexander, C. (2023). Who wrote this essay? Detecting AI-generated writing in second language education in higher education. Teach. Engl. Technol. 23, 25–43. doi: 10.56297/BUKA4060/XHLD5365

Al-Shanfari, L., Abdullah, S., Fstnassi, T., and Al-Kharusi, S. (2023). Instructors' perceptions of intelligent tutoring systems and their implications for studying computer programming in Omani higher education institutions. Int. J. Membr. Sci. Technol. 10, 947–967. doi: 10.15379/ijmst.v10i2.1395

Archibald, A., Hudson, C., Heap, T., Thompson, R. R., Lin, L., DeMeritt, J., et al. (2023). A validation of AI-enabled discussion platform metrics and relationships to student efforts. TechTrends 67, 285–293. doi: 10.1007/s11528-022-00825-7

Braun, V., and Clarke, V. (2006). Using thematic analysis in psychology. Qual. Res. Psychol. 3, 77–101. doi: 10.1191/1478088706qp063oa

Brown, G. T. L. (2020). Schooling beyond Covid-19: an unevenly distributed future. Front. Educ. 5:82. doi: 10.3389/feduc.2020.00082

Brown, G. T. L. (2022). The past, present and future of educational assessment: a transdisciplinary perspective. Front. Educ. 7:1060633. doi: 10.3389/feduc.2022.1060633

Cascella, M., Montomoli, J., Bellini, V., and Bignami, E. (2023). Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios. J. Med. Syst. 47, 1–5. doi: 10.1007/s10916-023-01925-4

∗Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. Int. J. Educ. Technol. Higher Educ. 20. doi: 10.1186/s41239-023-00408-3

Chan, C. K. Y., and Hu, W. (2023). Students' voices on generative AI: perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. Higher Educ. 20:43. doi: 10.1186/s41239-023-00411-8

∗Chen, H., Wang, Y., Li, Y., Lee, Y., Petri, A., and Cha, T. (2023). Computer science and non-computer science faculty members' perception on teaching data science via an experiential learning platform. Educ. Inf. Technol. 28, 4093–4108. doi: 10.1007/s10639-022-11326-8

Chickering, A. W., and Ehrmann, S. C. (1996). Implementing the seven principles: technology as lever. AAHE Bull. 49, 3–6.

Chou, C.-Y., Huang, B.-H., and Lin, C.-J. (2011). Complementary machine intelligence and human intelligence in virtual teaching assistant for tutoring program tracing. Comput. Educ. 57, 2303–2312. doi: 10.1016/j.compedu.2011.06.005

Crompton, H., and Burke, D. (2023). Artificial intelligence in higher education: the state of the field. Int. J. Educ. Technol. Higher Educ. 20:22. doi: 10.1186/s41239-023-00392-8

Cuban, L. (2001). Oversold and Underused: Computers in the Classroom. Cambridge, MA: Harvard University Press.

Devi, D., and Rroy, A. D. (2023). Role of artificial intelligence (AI) in sustainable education of higher education institutions in Guwahati City: teacher's perception. Int. Manage. Rev. 111–116.

Farazouli, A., Cerratto-Pargman, T., Bolander-Laksov, K., and McGrath, C. (2024). Hello GPT! Goodbye home examination? An exploratory study of AI chatbots impact on university teachers' assessment practices. Assess. Eval. Higher Educ. 49, 363–375. doi: 10.1080/02602938.2023.2241676

Fauzi, F., Tuhuteru, L., Sampe, F., Ausat, A., and Hatta, H. (2023). Analysing the role of ChatGPT in improving student productivity in higher education. J. Educ. 5, 14886–14891. doi: 10.31004/joe.v5i4.2563

Firat, M. (2023). What ChatGPT means for universities: perceptions of scholars and students. J. Appl. Learn. Teach. 6, 57–63. doi: 10.37074/jalt.2023.6.1.22

Guo, J. (2023). Innovative application of sensor combined with speech recognition technology in college English education in the context of artificial intelligence. J. Sens. 2023:9281914. doi: 10.1155/2023/9281914

Gurl, E. (2017). SWOT Analysis: A Theoretical Review.

Henrich, J., Heine, S. J., and Norenzayan, A. (2010). The weirdest people in the world? Behav. Brain Sci. 33, 61–135. doi: 10.1017/S0140525X0999152X

Holmes, W., Iniesto, F., Anastopoulou, S., and Boticario, J. G. (2023). Stakeholder perspectives on the ethics of AI in distance-based higher education. Int. Rev. Res. Open Distance Learn. 24, 96–117. doi: 10.19173/irrodl.v24i2.6089

Hooda, M., Rana, C., Dahiya, O., Rizwan, A., and Hossain, M. S. (2022). Artificial intelligence for assessment and feedback to enhance student success in higher education. Math. Probl. Eng. 2022, 1–19. doi: 10.1155/2022/5215722

Kiryakova, G., and Angelova, N. (2023). ChatGPT—a challenging tool for the university professors in their teaching practice. Educ. Sci. 13:1056. doi: 10.3390/educsci13101056

Koć-Januchta, M. M., Schönborn, K. J., Roehrig, C., Chaudhri, V. K., Tibell, L. A., and Heller, H. C. (2022). "Connecting concepts helps put main ideas together": cognitive load and usability in learning biology with an AI-enriched textbook. Int. J. Educ. Technol. Higher Educ. 19:11. doi: 10.1186/s41239-021-00317-3

Kohnke, L., Moorhouse, B. L., and Zou, D. (2023). Exploring generative artificial intelligence preparedness among university language instructors: a case study. Comput. Educ. Artif. Intell. 5:100156. doi: 10.1016/j.caeai.2023.100156

Kumar, M. G. V., Veena, N., Cepová, L., Raja, M. A. M., Balaram, A., and Elangovan, M. (2023). Evaluation of the quality of practical teaching of agricultural higher vocational courses based on BP neural network. Appl. Sci. 13:1180. doi: 10.3390/app13021180

Li, F., and Zhang, X. (2023). Artificial intelligence facial recognition and voice anomaly detection in the application of English MOOC teaching system. Soft Comput. 27, 6855–6867. doi: 10.1007/s00500-023-08119-7

Li, L., Chen, C. P., Wang, L., Liang, K., and Bao, W. (2023). Exploring artificial intelligence in smart education: real-time classroom behavior analysis with embedded devices. Sustainability 15:7940. doi: 10.3390/su15107940

Li, Q., Liu, H., and Zhao, X. (2023). IoT networks-aided perception vocal music singing learning system and piano teaching with edge computing. Mob. Inf. Syst. 2023, 1–9. doi: 10.1155/2023/2074890

∗Novais, A. S. d., Matelli, J. A., and Silva, M. B. (2023). Fuzzy soft skills assessment through active learning sessions. Int. J. Artif. Intell. Educ. 34, 416–451. doi: 10.1007/s40593-023-00332-7

Patton, M. Q. (2003). Qualitative evaluation checklist. Eval. Checkl. Proj. 21, 1–13.

Pellegrino, J. W. (2006). Rethinking and Redesigning Curriculum, Instruction and Assessment: What Contemporary Research and Theory Suggests. Chicago, IL: Commission on the Skills of the American Workforce, 1–15.

∗Pereira, F. D., Rodrigues, L., Henklain, M. H. O., Freitas, H., Oliveira, D. F., Cristea, A. I., et al. (2023). Toward human–AI collaboration: a recommender system to support CS1 instructors to select problems for assignments and exams. IEEE Trans. Learn. Technol. 16, 457–472. doi: 10.1109/TLT.2022.3224121

∗Phillips, T. M., Saleh, A., and Ozogul, G. (2023). An AI toolkit to support teacher reflection. Int. J. Artif. Intell. Educ. 33, 635–658. doi: 10.1007/s40593-022-00295-1

∗Pisica, A. I., Edu, T., Zaharia, R. M., and Zaharia, R. (2023). Implementing artificial intelligence in higher education: pros and cons from the perspectives of academics. Societies 13:118. doi: 10.3390/soc13050118

∗Pretorius, L. (2023). Fostering AI literacy: a teaching practice reflection. J. Acad. Lang. Learn. 17, T1–T8. Available online at: https://journal.aall.org.au/index.php/jall/article/view/891

Rahman, M. M., and Watanobe, Y. (2023). ChatGPT for education and research: opportunities, threats, and strategies. Appl. Sci. 13:5783. doi: 10.3390/app13095783

∗Saad, I., and Tounkara, T. (2023). Artificial intelligence-based group decision making to improve knowledge transfer: the case of distance learning in higher education. J. Decis. Syst. 1–16. doi: 10.1080/12460125.2022.2161734

Sadler, D. R. (2016). Three in-course assessment reforms to improve higher education learning outcomes. Assess. Eval. Higher Educ. 41, 1081–1099. doi: 10.1080/02602938.2015.1064858

∗Sajja, R., Sermet, Y., Cwiertny, D., and Demir, I. (2023). Platform-independent and curriculum-oriented intelligent assistant for higher education. Int. J. Educ. Technol. Higher Educ. 20:42. doi: 10.1186/s41239-023-00412-7

∗Shi, X. (2023). Exploring an innovative moral education cultivation model in higher education through neural network perspective: a preliminary study. Appl. Artif. Intell. 37:2214767. doi: 10.1080/08839514.2023.2214767

Stahlschmidt, S., and Stephen, D. (2020). Comparison of Web of Science, Scopus and Dimensions databases. KB Forschungspoolprojekt 2020:37.

State Council of PRC (2017). Notice of the State Council on Issuing the Development Plan for the New Generation of Artificial Intelligence. In Chinese. Available online at: https://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm (accessed March 1, 2025).

∗Stutz, P., Elixhauser, M., Grubinger-Preiner, J., Linner, V., Reibersdorfer-Adelsberger, E., Traun, C., et al. (2023). Ch(e)atGPT? An anecdotal approach addressing the impact of ChatGPT on teaching and learning GIScience. GI_Forum 11, 140–147. doi: 10.1553/giscience2023_01_s140

∗Tang, J., Zhang, P., and Zhang, J. (2023). Design and implementation of intelligent evaluation system based on pattern recognition for microteaching skills training. Int. J. Innov. Comput. Inf. Control 19, 153–162. doi: 10.24507/ijicic.19.01.153

Teng, Y., Zhang, J., and Sun, T. (2023). Data-driven decision-making model based on artificial intelligence in higher education system of colleges and universities. Expert Syst. 40:e12820. doi: 10.1111/exsy.12820

∗Wang, D., Han, L., Cong, L., Zhu, H., and Liu, Y. (2025). Practical evaluation of human-computer interaction and artificial intelligence deep learning algorithm in innovation and entrepreneurship teaching evaluation. Int. J. Hum. Comput. Interact. 41, 1742–1750. doi: 10.1080/10447318.2023.2199632

∗Wang, Y. (2023). Artificial intelligence technologies in college English translation teaching. J. Psycholinguistic Res. 52, 1525–1544. doi: 10.1007/s10936-023-09960-5

∗Yang, X. (2023). Higher education multimedia teaching system based on the
Li, Y., and Wu, F. (2023). Design and application research of embedded voice

artificial intelligence model and its improvement. Mob. Inf. Syst. 2023:8215434.
teaching system based on cloud computing. Wireless Commun. Mob. Comput. 2023, doi: 10.1155/2023/8215434
1–10. doi: 10.1155/2023/7873715
Zawacki-Richter, O., Marín, V. I., Bond, M., and Gouverneur, F. (2019).

Lopezosa, C., Codina, L., Pont-Sorribes, C., and Vállez, M. (2023). Use of generative
Systematic review of research on artificial intelligence applications in higher
artificial intelligence in the training of journalists: challenges, uses and training
education–where are the educators?. Int. J. Educ. Technol. Higher Educ. 16, 1–27.
proposal. El Profes. Inf. 32, 1–12. doi: 10.3145/epi.2023.jul.08
doi: 10.1186/s41239-019-0171-0
Merchant, Z., Goetz, E. T., Cifuentes, L., Keeney-Kennicutt, W., and Davis,
T. J. (2014). Effectiveness of virtual reality-based instruction on students’ learning

Zhang, X., Sun, J., and Deng, Y. (2023). Design and application of intelligent
outcomes in K-12 and higher education: a meta-analysis. Comput. Educ. 70, 29–40. classroom for English language and literature based on artificial intelligence
doi: 10.1016/j.compedu.2013.07.033 technology. Appl. Artif. Intell. 37. doi: 10.1080/08839514.2023.2216051
Moher, D., Liberati, A., Tetzlaff, J., and Altman, D. G. (2009). Preferred reporting ∗
Zhu, K. (2023). Application of multimedia service based on
items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. artificial intelligence and real-time communication in higher education.
6:e1000097. doi: 10.1371/journal.pmed.1000097 Comput. Aided Design Appl. 20, 116–131. doi: 10.14733/cadaps.2023.S12.
Nagy, M., and Molontay, R. (2024). Interpretable dropout prediction: towards 116-131
XAI-based personalized intervention. Int. J. Artif. Intell. Educ. 34, 274–300. ∗
Zhu, L., Liu, G., Lv, S., Chen, D., Chen, Z., and Li, X. (2023). An intelligent
boosting and decision-tree-regression-based score prediction (BDTR-SP) method in
the reform of tertiary education teaching. Information 14:317. doi: 10.3390/info140

Included in the systematic review. 60317

Frontiers in Education 11 frontiersin.org
