Jingjing Liang*, Jason M. Stephens and Gavin T. L. Brown
Faculty of Arts and Education, The University of Auckland, Auckland, New Zealand

REVIEWED BY
Lina Montuori, Universitat Politècnica de València, Spain
Dennis Arias-Chávez, Continental University, Peru

*CORRESPONDENCE
Jingjing Liang
[email protected]
Introduction
Large language models (LLMs) aim to simulate the natural language processing
capabilities of human beings (Cascella et al., 2023), particularly understanding, translating,
and generating texts or other content. The introduction of LLMs, such as ChatGPT
and other generative artificial intelligence (AI), has created interesting possibilities and
challenges for all educational systems. For instance, while AI can provide opportunities for
instructors to personalize learning and provide students with more immediate feedback
(Fauzi et al., 2023), it can raise concerns about academic integrity and the propagation of
biased or inaccurate information. Tensions over the legitimacy of AI in higher education
have placed significant pressure on academics and students. Much of the extant research
on AI has focused on students (e.g., Chan and Hu, 2023; Crompton and Burke, 2023)
or administrators (e.g., Nagy and Molontay, 2024; Teng et al., 2023). However, how
academics, in their role as educators, perceive, use, and adapt to AI tools is still under-researched, particularly when many academics have reported insufficient AI literacy
(Alexander et al., 2023).
Given that AI tools are increasingly being used in higher education, with a strong potential to transform teaching, learning, and assessment, it is important to systematically synthesize early empirical evidence regarding AI's impact, identify trends and patterns in the literature, and further inform AI policy, research, and practices. Therefore, this study aims to fill the gap through a systematic review driven by the overarching question: How has AI affected the teaching, curriculum design, or assessment practices of academics in higher education (HE)? Specifically, this systematic review aimed to explore what the first wave of research following the release of ChatGPT in November 2022 had focused on and found with respect to the impact of AI tools in HE. In particular, we wanted to understand how AI technologies were affecting curriculum, instruction, and assessment processes in order to identify pros and cons that might inform promising pathways as well as potential challenges and problems. To complement those insights, we also wanted to identify where this early research was being conducted, what methods were used by researchers, and which aspects of AI were of concern. We hope this contextual information helps readers better understand the applicability of results to their own jurisdictions or situations. By doing so, we provide an overview of how the field is handling these new technologies to change or adapt academics' work in terms of curriculum, instruction, and assessment.

The higher education curriculum-instruction-assessment (CIA) triad

All educational systems must make decisions concerning what they teach (i.e., curriculum), how they teach it (i.e., instruction), and how they evaluate student learning (i.e., assessment). Normally, curriculum decisions (e.g., what to teach and the order in which to teach it) lead to instructional decisions (e.g., how the material is to be introduced, and which methods might best help students learn it), and culminate in assessment and evaluation decisions (e.g., how many assessments of what type and when those assessments will take place). Thus, curriculum, instruction, and assessment comprise the essential triad of all educational practices (Pellegrino, 2006). Higher education systems give academics considerable autonomy over these decisions based on their higher research degrees and contribution to research outputs within their disciplines. While professional certifying bodies have some control over what must be covered, universities give academics responsibility for deciding how to organize, teach, and assess learning in their courses.

The CIA triad has been demonstrated to be highly related to the quality of specific programs and the college students they prepare for the future (Merchant et al., 2014; Sadler, 2016). However, HE settings are likely to shift considerably in the AI era: the curriculum might not just reflect the logic of specific disciplines but also include AI-related content; instructional practices may need to adapt to the co-existence of AI teachers; and assessment practices might include students' understanding of, and competencies related to, AI use. In this light, understanding the benefits that AI brings to HE curriculum, instruction, and assessment could help academics make full use of the technology to reduce workloads (Holmes et al., 2023; Pereira et al., 2023) and improve productivity. Meanwhile, noticing some threats can remind academics to be prepared for negative impacts on college students' engagement and learning.

Method

A systematic review of the literature was carried out by the first author in three databases: Scopus, Web of Science (WoS), and EBSCOhost. These are major research databases that vary in content coverage, disciplines, and languages (Stahlschmidt and Stephen, 2020); they complement each other and together provide high-quality, relevant literature. To establish trustworthiness, the research team agreed on search terms and initial inclusion and exclusion criteria before the first author identified the literature. To answer the research question, search terms were trialed iteratively to retrieve relevant literature on how AI has influenced curriculum, instruction, and assessment in HE. Synonyms for "AI" (e.g., ChatGPT), "teaching" (e.g., instruction), "curriculum" (e.g., planning), and "assessment" (e.g., evaluation) were searched within the title, abstract, keywords, or anywhere in the record. Search terms were then finalized and used identically in each database: ("artificial intelligence" OR "generative artificial intelligence" OR "generative AI" OR "Gen-AI" OR "ChatGPT" OR "GPT*") AND (("higher education") AND ("teaching" OR "assessment" OR "evaluation" OR "feedback" OR "curriculum" OR "instruction*" OR "lesson" OR "planning" OR "delivery" OR "implementation")). A total of 2,810 articles were identified.

Filters were set to include only peer-reviewed journal articles published in English from December 2022 to the end of the search in August 2023. These first 9 months of literature capture the critical early phase in which educators and researchers started to publish their responses to newly released AI tools, such as ChatGPT. Restricting the search to peer-reviewed journal articles helped ensure the quality of the literature in the search phases, and the time frame was chosen to return the earliest possible exploration of the impact of AI, immediately following the release of a demo of ChatGPT on 30 November 2022.

Moreover, articles in this review were limited to empirical articles on AI's impact on HE curriculum, instruction, and assessment (see Table 1). To be included, articles had to report a relationship between AI and any one or more of the three aspects of HE curriculum, instruction, or assessment. Articles regarding the impact of AI on curriculum, instruction, and assessment in non-HE contexts were excluded.

TABLE 1 Inclusion and exclusion criteria.

Inclusion criteria:
1. Articles present an analysis of empirical data, written in English and published in peer-reviewed journal articles.
2. Articles about how AI influences any one or more of three aspects of HE curriculum, instruction, and assessment (e.g., curriculum design, instructional planning, delivery, assessment, evaluation).

Exclusion criteria:
1. Articles about HE curriculum, instruction, and assessment but not related to how AI impacts them.
2. Articles about broad perspectives on AI (e.g., benefits, weaknesses, preparation) rather than its impact on HE curriculum, instruction, and assessment.
3. Articles about the impact of AI on non-HE curriculum, instruction, or assessment (e.g., school contexts).

FIGURE 1 PRISMA flowchart of the literature search process.

Search process

After removing duplications, 279 records were obtained for screening following the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines (see Figure 1; Moher et al., 2009). PRISMA guidelines provide a structured framework for searching, identifying, and selecting articles, as well as extracting, analyzing, and synthesizing data to address specific research questions. These guidelines help researchers ensure the quality of the review, minimize bias, and maintain transparency and replicability (Moher et al., 2009).

Specifically, the screening process involved title-and-abstract screening and full-text screening. The titles and abstracts of these records were assessed using the agreed inclusion and exclusion criteria (see Table 1), resulting in the exclusion of 206 records. These records were excluded because their titles and abstracts showed that (a) they did not investigate how AI affected HE curriculum, instruction, and assessment (n = 135), (b) they lacked empirical evidence (n = 63), or (c) they did not focus on university contexts (n = 8).

The remaining 73 records were downloaded for full-text screening. The articles were read and evaluated against the inclusion and exclusion criteria, and those that did not meet the inclusion criteria were removed. Specifically, studies that introduced AI or HE curriculum, instruction, and assessment but did not actually explore the relationship between them were excluded (n = 32). Other articles were removed because they (a) did not have empirical evidence (n = 4), (b) were in a non-HE context (n = 1), (c) were not available as full text (n = 2), or (d) were not in English (n = 1). Consequently, a total of 33 articles were included for review.

During the screening stage, whenever either author was unsure whether a specific article should be included, the content of that article was discussed against the research question and focus of this review. These discussions resulted in refining the inclusion and exclusion criteria and a consensus on the included articles.

Data extraction and analysis

Due to the exploratory nature of this research, an inductive thematic analysis (Braun and Clarke, 2006) was conducted to identify key patterns of the impact of AI on HE curriculum, instruction, and assessment. The first author read the 33 articles thoroughly and extracted key information from each paper, including citations, context, sample size, data collection method, measurement, and the impact on HE curriculum, instruction, and assessment. With an eye to finding answers to the research question, meaningful segments, such as "AI tools allow educators to/provide students with. . ." and "the challenge is," were used to identify descriptive codes regarding how AI influences HE curriculum, instruction, and assessment.
[...] discussed at regular meetings.

Nature of studies

Table 2 shows the characteristics of the regions where the 33 studies were conducted, as well as the methods utilized to explore the impact of AI on HE curriculum, instruction, and assessment. Details of which papers are in each category are provided in Appendix B. There are 16 countries around the world contributing [...]

TABLE 2 (recovered fragment).
Case study: 3; Others (e.g., discussion, workshop, open-ended questions, observation): 6.
Foci: Technology: 16; Human experience: 10; Use of AI in class: 7.
Education dimension: Curriculum: 9; [...]
[...] support teachers' reflections on curriculum design (Phillips et al., 2023). The study evaluated the reading demand (using skip-gram word embedding) of passages in assessments (e.g., exams) against the demand of texts and lectures used to support instruction, on the assumption that reading in an assessment should not be harder than the reading used in instruction. The AI tool predicted the difficulty of course materials, including recorded lectures and assessment materials, in a similar way to lecturers' self-reported material difficulty. Not only would this tool ensure the alignment of assessment reading materials with course reading materials, but it would also provide validity evidence for the assessment materials.

Personalized instruction

Applying AI technologies can facilitate analyzing students' learning processes, performance, and needs, provide instructors with timely feedback, and assist them in delivering adaptive instruction. Consequently, teaching and learning effects were somewhat improved (Al-Shanfari et al., 2023; Firat, 2023; Kohnke et al., 2023; Li L. et al., 2023; Li Q. et al., 2023; Pisica et al., 2023; Wang, 2023; Li and Wu, 2023). By implementing embedded glasses in real classrooms, Li L. et al. (2023) showed that this device helped instructors recognize and process students' real-time images and emotions and keep abreast of their learning status; this information further provided timely feedback that instructors used to change their teaching strategies. Compared to the control group, the teaching effect of the experimental group increased by 9.44%, and students reported more satisfaction with teaching. Similarly, a new piano teaching mode powered by a vocal music singing learning system has been demonstrated to be relatively successful: it not only made piano teaching more personalized and intelligent, increasing teaching efficacy by 7.31% compared to the traditional teaching mode, but also motivated students to spend more time on piano practice and to participate more in class (Li Q. et al., 2023).

Prepare personalized assignments

A new assessment method driven by AI tools could help instructors prepare personalized assignments. Pereira et al. (2023) described how an emerging recommender system generated equivalent questions for assignments and exams to enhance the variation of assignments and support instructors in preparing individualized assignments and minimizing plagiarism. They also indicated that this recommender system was confirmed to be accurate after instructors evaluated the equivalence (e.g., interchangeability, topic, and coding effort) of AI-created questions to the questions instructors had provided.

Automation/optimization of evaluation

Many scholars have investigated the potential of using AI in HE assessment and evaluation.

Assess students' learning process and outcomes

AI has been found to accurately assess students' learning processes and outcomes, and further to determine teaching effect (Novais et al., 2023; Saad and Tounkara, 2023; Wang et al., 2025; Zhu et al., 2023). For instance, Archibald et al. (2023) showed that an AI-enabled discussion platform accurately calculated students' curiosity scores to represent their engagement in discussion, further reducing teachers' assessment workload and facilitating their intervention based on the quality of posts written by students. A new assessment method driven by AI tools (i.e., a backward propagation neural network) could automatically evaluate teaching, learning, and grading in an experiential online course in agriculture (Kumar et al., 2023).

Using experiments with small samples, Zhu et al. (2023) developed an AI tool in China to predict students' performance based on their classroom behavior and previous performance. They suggested that this tool could be used to adjust instructors' teaching strategies and improve teaching quality. Similarly, Tang et al. (2023) discussed how a purpose-designed intelligent evaluation system could better recognize voices, faces, postures, and teaching skills in microteaching skill training, accurately assess preservice teachers' teaching performance, and provide accurate guidance. Moreover, Saad and Tounkara (2023) used students' information in distance learning, including class participation frequency and quality, absence rate, contribution to online group work, and utilization of learning resources, to establish a preference model for instructors that could quickly recognize students at risk of dropping out and leader students who could help their peers. They found that this model correctly assigned 85% of students to the correct clusters (i.e., at risk or leader) and assisted instructors in making correct decisions.

Besides evaluating students' cognitive outcomes, researchers have also used AI to assess students' non-cognitive outcomes (e.g., emotions, attitudes, and values). For instance, Novais et al. (2023) designed an evaluation fuzzy expert system and employed it to build profiles of students' soft skills (e.g., communication and innovation skills, management skills, and social skills). AI-generated scores were compared with real scores, providing reliable feedback to instructors and students.

Assess teaching effect

Wang et al. (2025) combined human-computer interaction and a deep learning algorithm to design an intelligent evaluation system for innovation and entrepreneurship. The system could detect students' attitudes and behaviors and assess teachers' teaching preparation, language expression, content mastery, and teaching design. The operability of this system was further supported by assessing the teaching quality and effect of two classes; the AI results showed that both classes' teaching quality scored almost 7 out of 10, suggesting a need for improvement.

Challenges for CIA

Besides the above advantages, some challenges brought by AI to HE curricula, instruction, and assessment are described in six studies.

Challenge existing curricula

AI is found to bring many challenges to curriculum developers and existing curricula, especially in deciding what content is more valuable, how to integrate AI into the current curriculum, and how to prepare students with digital literacy. To address these questions, Lopezosa et al. (2023) interviewed 32 journalism faculty members from Spain and Latin America about how they perceived this new technology; however, no consensus on whether to integrate AI into the curriculum was identified. Although most
faculty members embraced AI technology and suggested establishing AI as a standalone subject, some stated that the challenges, limitations, and uncertainty of AI in education should be thoroughly researched before incorporating it into the curriculum. Some individuals suggested a compromise of integrating AI into communication subjects as a preliminary step (Lopezosa et al., 2023).

Challenge existing instruction

There are some concerns about using AI in HE instruction, including challenges to teachers' AI teaching competencies, ethical considerations, and a lack of teaching support. Chan (2023) indicated that AI may cause overdependence on technology and weaken social connections between teachers and students. In this light, Firat (2023) indicated that implementing AI may require educators to change their role from instructors to guides or facilitators. Furthermore, based on interviews with 12 university teachers in Hong Kong, Kohnke et al. (2023) found that AI challenged participants' teaching competencies in teaching students how to judge AI-generated text critically, use AI tools ethically, and foster digital citizenship.

Ethical concerns in instruction include incorrect or fabricated information, accessibility, and algorithmic biases (Firat, 2023). According to the teaching reflection of an educator from Monash University, Pretorius (2023) taught postgraduate students how to use generative AI effectively by giving them examples of communicating with generative AI to brainstorm and design research questions. Consequently, her course achieved good teaching feedback. However, Pretorius realized that incorrect or biased information produced by ChatGPT, as well as unequal access to AI caused by distinct socioeconomic statuses, required educators to shift toward preparing students with the AI literacy to use AI professionally and ethically. Firat (2023) also mentioned over-reliance on AI, data privacy, and unequal access to AI tools as challenges.

Another concern centers on inadequate technical support and training in integrating AI into teaching. For instance, Al-Shanfari et al. (2023) utilized a mixed-methods study to understand how aware, prepared, and challenged instructors were in integrating intelligent tutoring systems (ITS) in Omani universities. They found that most participants considered ITS effective in providing customized instruction; however, the lack of support and guidance in using ITS brought the instructors substantial challenges. As one participant said, "Teaching approaches at my university are not supporting the use of ITS" (p. 956). Similarly, Chen et al. (2023) interviewed 16 faculty members in data science and revealed that inconsistent definitions of data science, inadequate team support, and a lack of collaboration platforms were major challenges.

Challenge existing assessment methods and strategies

While there are various opportunities for HE assessment, several challenges exist and need to be addressed. The most frequently mentioned challenge is that AI has been proven to pass many examinations and assignments; consequently, some students may use it to cheat or plagiarize. For instance, Chan (2023) stated that new concerns in HE assessment have emerged, as most students and teachers are worried that some students use AI tools to cheat and plagiarize, and teachers cannot identify such dishonesty correctly. Similarly, Kohnke et al. (2023) found that AI challenged the current assessment system, as instructors were worried that AI tools are so convenient for students that it becomes easy to cheat rather than work independently.

Moreover, it is hard for humans or AI detectors to identify AI-generated texts or assignments, which in turn challenges existing assessment practices and strategies. A case study conducted in an Australian Master's program for Geographic Systems and Science found that ChatGPT, acting as a fictional student, effectively completed most assignments (e.g., coding; Stutz et al., 2023). Although AI detectors identified it, lecturers did not recognize that AI had generated the answers and gave a grade of "satisfactory." Stutz et al. (2023) also discussed the challenge ChatGPT poses to traditional evaluation methods and called on researchers and practitioners to rethink learning objectives, content, and assessment approaches. Assessments relying on oral exams or video conferences were suggested as alternatives that were resistant to AI dishonesty. In a similar study, both AI-generated and student-written texts were assessed by AI detectors and six English as a Second Language (ESL) lecturers from Cyprus (Alexander et al., 2023). It was found that AI detectors worked more effectively than humans in identifying AI-generated texts, and AI, to some extent, challenged lecturers' previous evaluation criteria and strategies. Lecturers seemed to adopt deficit assessment strategies, considering AI-generated texts to be characterized by fewer grammar errors and more accurate expressions. Therefore, the authors recommended improving instructors' digital literacy and rethinking assessment policies and practices in the AI era. Similar findings were shown in Sweden, where Farazouli et al. (2024) conducted a Turing test among 24 university teachers in the humanities and social sciences. They found that teachers tended to be critical of students' texts, underestimated students' performance, and suspected that some student texts had been produced by GPT. These concerns negatively influenced the trust relationship between teachers and students.

Discussion

This study examined how AI influences HE curriculum, instruction, and assessment by reviewing 33 recent articles. We summarize the review within a SWOT analysis framework (Gurl, 2017) to provide a structured account of the strengths, weaknesses, opportunities, and threats of AI in terms of higher education curriculum, instruction, and assessment.

Benefits of AI in higher education

The analysis of 33 recent studies provides empirical evidence as to the geographical distribution of research, research methods, research foci, and the impact of AI on the CIA triad in higher education. Our results showed that most research was conducted in Asia, Europe, or North America. Consistent with findings indicating rapid growth in Chinese research on AI in higher education (Crompton and Burke, 2023), China accounted for most studies in this review. One possible reason is that AI has been considered a priority in the Chinese government's agenda (State Council of PRC, 2017) and is thus highly emphasized in education.
This review also indicated that simulation and modeling were the most frequently used methods to assess the potential impact of AI in the HE context (e.g., Phillips et al., 2023; Saad and Tounkara, 2023; Sajja et al., 2023; Shi, 2023). This finding might be related to research foci, as more attention has been given to testing the effectiveness of AI tools than to academics' perceptions and practices of AI tools in the real world.

Several benefits were identified in this review, such as generating new material, reducing staff workload, and evaluating automatically or optimally (e.g., Kumar et al., 2023; Pretorius, 2023; Shi, 2023). This review first reveals that AI can create new courses and resources, promote curriculum development, address time-consuming workloads concerning curriculum (e.g., questions about syllabi, time, and deadlines), and evaluate material difficulty and quality (Chen et al., 2023; Lopezosa et al., 2023; Pisica et al., 2023; Wang, 2023). These findings reinforce earlier findings that the implementation of AI (e.g., ChatGPT) could contribute to generating lesson plans and course objectives (Kiryakova and Angelova, 2023; Rahman and Watanobe, 2023) and to assessing general resources and textbooks (Koć-Januchta et al., 2022). AI has also been found to provide an immersive learning environment and a new teaching mode, in which instructors facilitate students in conducting "trial-and-error" strategies and practicing specific competencies in simulated scenes (e.g., Wang, 2023; Zhang et al., 2023). Meanwhile, AI, acting as virtual teachers, could take up logistical workloads (e.g., reinforcing students' mastery of key concepts) and give instructors the time and energy to conduct personalized instruction and satisfy students' distinct needs (Al-Shanfari et al., 2023; Firat, 2023; Kohnke et al., 2023). These findings are in line with previous studies: AI, in most cases, worked well in sharing instructors' tutoring tasks, providing students with immediate and individualized feedback, and reducing instructors' workload (Chou et al., 2011; Zawacki-Richter et al., 2019). Additionally, AI seems to benefit assessment by generating personalized assignments (Pereira et al., 2023), effectively assessing and predicting students' academic achievement (Wang et al., 2025) and non-cognitive outcomes (e.g., soft skills; Novais et al., 2023), identifying disadvantaged students (Saad and Tounkara, 2023), and assessing teaching effectiveness (Wang et al., 2025). This review finds evidence that AI-empowered assessment can effectively assess students' learning and teachers' teaching (Hooda et al., 2022; Zawacki-Richter et al., 2019).

Thus, AI has been found to bring benefits to HE curriculum, instruction, and assessment, including generating new materials, alleviating faculty workloads, and automating or optimizing assessment, in alignment with previous literature (Chou et al., 2011; Rahman and Watanobe, 2023). These findings pave the way for future studies to ascertain the generalizability of the early promising results and to identify the conditions in which the early benefits actually occur. The benefits identified here suggest directions in which HE policy could go, provided appropriate infrastructure and training are given to academics.

Weaknesses in the research

This early research, however, is potentially problematic because of its narrowness. Specifically, research conducted in many regions, especially developing countries, is poorly represented. The currently available research has been conducted largely in Western, Educated, Industrialized, Rich, and Democratic (WEIRD; Henrich et al., 2010) societies. This means that there is a bias in what we can know, since participants from other regions of the world are excluded. To the degree that cultural, historical, and developmental factors impinge upon the practice of higher education, more work with such populations is needed. Such research would enhance our understanding of how academics perceive the threats and opportunities of AI.

Another gap in the literature is the absence of research into the real world of higher education classroom pedagogical activities, course development, and assessment design. Comparatively few studies have focused on the human experience of using AI, especially in classrooms (e.g., Al-Shanfari et al., 2023; Archibald et al., 2023; Farazouli et al., 2024). Related to this is the lack of cross-disciplinary collaborative research between computer scientists and social scientists. If AI tools are meant to make a difference to classroom teaching, learning, and evaluation, researchers from different backgrounds will need to collaboratively explore how AI technology could be used in educational practice.

Based on this review, future research will need to explore the following questions:

• How does AI influence the teaching, curriculum design, or assessment practices of academics in higher education in Global South contexts? How does this differ from research conducted in the Global North? How can AI tools, policies, and practices become more culture-sensitive based on this comparison?
• What are the best practices of academics in teaching students to use AI ethically and responsibly?

Opportunities of AI in higher education

The presence of AI seems to create opportunities for academics in terms of revising existing courses and freeing up time to focus on improving existing curriculum, instruction, and assessment quality. These opportunities point to the development of interdisciplinary courses with the help of AI, especially in terms of course content and assessment design. One way to implement interdisciplinary approaches would be to integrate ethical considerations of using or relying on AI into philosophy or research methods courses. Another way is to use AI to bridge the intersections of different disciplines (e.g., Arts-Arts, Science-Science, and Arts-Science intersections). An example in the Science-Science intersection could be using AI to predict how air pollution (environmental science) affects health outcomes (healthcare).

Given the benefits AI brings to academics' instruction by providing an immersive learning environment and a new teaching mode, it may be feasible to establish a collaborative teaching system in which virtual teachers (i.e., AI) share intensive and repetitious teaching workloads (e.g., immediate feedback, knowledge reinforcement), while human teachers pay attention to students' personal, emotional, and developmental needs and conduct one-to-one adaptive instruction. For instance, AI
teachers could automatically grade and constantly offer targeted practice for students, which would provide adaptive support to teachers. Consequently, developing AI-empowered student and teacher assessment models could be an important direction for research and practice.

Additionally, we suppose that student-facing AI assessment models can be implemented in three steps. Before class, AI can be used to diagnose students' knowledge bases and help instructors better understand students' learning preferences, motivations, and needs. During class, AI techniques (e.g., speech recognition, facial recognition) can be combined to collect students' facial expressions, emotions, gestures, classroom dialogue, and so on, and promptly analyze their learning engagement, behaviors, strategies, and difficulties. This information can alert instructors to students in need, suggest possible changes in teaching strategies, and provide early advice on where to intervene. After class, AI, working as a teaching assistant, could provide students with targeted assignments, facilitate individualized learning, and predict future performance based on current performance. Similarly, instructors' information (e.g., lesson preparation and teaching) could be collected into a digital profile for each instructor, informing assessments of their teaching performance, abilities, and professional development [...]

[...] workers is to ensure they develop generic competencies rather than discipline-specific knowledge and ability (Chickering and Ehrmann, 1996; Cuban, 2001). Consequently, faculty members need to consider the intersection of disciplinary structure and AI affordances and constraints in terms of integrating contemporary capabilities with long-standing traditions of knowledge.

The threat of AI also applies to instructors' roles and their teaching abilities. Most academics have little understanding of how AI tools are designed and what large language models can do. Thus, few have thought constructively about how to integrate AI into their teaching. The question is how AI tools, with their capacity to translate text, analyze it, and compose fluent but potentially meaningless text, can or should be integrated into diverse fields such as engineering, medicine, studio art, laboratory science, and so on. Application within the humanities may be much more feasible with the current capacities of GenAI, but academics still have to learn how AI can be an adjunct to teaching rather than a potential substitute for the instructor's knowledge and skill. The enthusiasm of technologists for using machines to replace the labor of humans (Brown, 2020) is clearly a threat to the human-in-the-loop. This is all the more important because currently AI cannot identify fabrication or error in the text that
needs. It could inform faculty professional development programs. it assembles.
Nevertheless, caution is still needed when embracing AI- The most important challenge centers around assessment and
generated assessment results, as some indicators (e.g., instructors’ evaluation of learning. With the free access students have to
professional ethics) cannot be assessed effectively or, depending on powerful AI language models, it is difficult to ensure that the
programming, or could even be overlooked. Therefore, combining work submitted by students is their own genuine intellectual
AI-generated and human-based assessments is necessary, contribution. The fear and possibility of non-detectable academic
respecting human beings’ values and educational principles. The dishonesty will require substantial efforts to ensure the integrity
challenge of students’ unsanctioned use of AI within assessment and social warrant (Brown, 2022) of course grades and academic
processes will require higher education to find valid ways of qualifications. A possible response to generative AI capabilities
implementing or managing AI. is to impose invigilated in-person examinations without access
to digital resources and without bring-your-own-devices. Another
way to ensure the integrity of evaluation is to require students
to participate in an oral examination of their learning; a solution
Threats AI brings to higher education that will have a large impact on workloads, efficiency, validity of
sampling, and accuracy of scoring. It is clear generative AIs will
Indeed, an important threat AI brings to education is the force academics to rethink the purpose of assessment (e.g., student-
requirement that all teaching and learning has to happen in an ICT centered or knowledge-based learning), the content and format of
environment, which could be seen as antithetical to the human in what is assessed, the design of assessments (e.g., process evaluation,
the human experience of learning (Brown, 2020). While AI seems outcome evaluation, or value-added evaluation), and the formative
to be able to do many things, it is simply programming and thus use of assessed performances.
not human. Given the interactive and integrated nature of curriculum,
The literature reported here makes clear substantial challenges instruction, and assessment processes, there simply is little research
to curriculum, instruction, and assessment. Despite the importance on AI’s impact on their intersection. Indeed, only three papers
of curriculum, this review found less research into AI’s integration attempted to address all three legs of the CIA triad. Future research
into HE curriculum than on the two other aspects of the CIA triad. will need to examine the integration of AI impact, rather than
In terms of existing curricula, there is considerable debate as to studying each aspect of the triad in isolation.
what students need to be taught about or with AI and how it could
be integrated (Lopezosa et al., 2023). AI creates the possibility
that skill with large language models (e.g., to analyze data, to Limitations
compose communication) is what students might need in the
future. Considerable enthusiasm exists for the integration of AI Although this review explored three major education databases
skills with other graduate attributes such as the 4C skills (i.e., to minimize selection bias, the recent articles were published
communication, collaboration, critical thinking, and creativity). in English rather than in other languages, such as Chinese and
This is an extension of the long-standing arguments advanced Spanish. Therefore, the generalizability of these findings needs to
by technologists that the best way to prepare future citizens and be taken with caution for use in non-English contexts. Considering
that Asia accounted for a large number of studies and that an editing. JS: Conceptualization, Writing – review & editing. GB:
emerging number of studies were conducted in South America and Supervision, Writing – review & editing, Funding acquisition.
the Middle East, multi-lingual or culture-responsive studies should
be conducted in the future. More importantly, this review was
limited to the first 9 months following the release of ChatGPT on Funding
30 November 2022; hence, it is very much a preliminary exploration
of how AI has impacted higher education. In light of how quickly The author(s) declare that financial support was received for the
AI systems are being developed and changed, new research is being research and/or publication of this article. The authors would like to
published constantly. Hence, the findings presented in this review acknowledge financial assistance from the University of Auckland
have probably been superseded already. Open Access Support Fund.
Supplementary material
Author contributions
The Supplementary Material for this article can be found
JL: Conceptualization, Formal analysis, Investigation, online at: https://www.frontiersin.org/articles/10.3389/feduc.2025.
Methodology, Writing – original draft, Writing – review & 1522841/full#supplementary-material
References

∗Alexander, K., Savvidou, C., and Alexander, C. (2023). Who wrote this essay? Detecting AI-generated writing in second language education in higher education. Teach. Engl. Technol. 23, 25–43. doi: 10.56297/BUKA4060/XHLD5365

∗Al-Shanfari, L., Abdullah, S., Fstnassi, T., and Al-Kharusi, S. (2023). Instructors' perceptions of intelligent tutoring systems and their implications for studying computer programming in Omani higher education institutions. Int. J. Membr. Sci. Technol. 10, 947–967. doi: 10.15379/ijmst.v10i2.1395

∗Archibald, A., Hudson, C., Heap, T., Thompson, R. R., Lin, L., DeMeritt, J., et al. (2023). A validation of AI-enabled discussion platform metrics and relationships to student efforts. TechTrends 67, 285–293. doi: 10.1007/s11528-022-00825-7

Braun, V., and Clarke, V. (2006). Using thematic analysis in psychology. Qual. Res. Psychol. 3, 77–101. doi: 10.1191/1478088706qp063oa

Brown, G. T. L. (2020). Schooling beyond Covid-19: an unevenly distributed future. Front. Educ. 5:82. doi: 10.3389/feduc.2020.00082

Brown, G. T. L. (2022). The past, present and future of educational assessment: a transdisciplinary perspective. Front. Educ. 7:1060633. doi: 10.3389/feduc.2022.1060633

Cascella, M., Montomoli, J., Bellini, V., and Bignami, E. (2023). Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios. J. Med. Syst. 47, 1–5. doi: 10.1007/s10916-023-01925-4

∗Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. Int. J. Educ. Technol. Higher Educ. 20. doi: 10.1186/s41239-023-00408-3

Chan, C. K. Y., and Hu, W. (2023). Students' voices on generative AI: perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. Higher Educ. 20:43. doi: 10.1186/s41239-023-00411-8

∗Chen, H., Wang, Y., Li, Y., Lee, Y., Petri, A., and Cha, T. (2023). Computer science and non-computer science faculty members' perception on teaching data science via an experiential learning platform. Educ. Inf. Technol. 28, 4093–4108. doi: 10.1007/s10639-022-11326-8