Usage and Knowledge of Online Tools and Generative AI
Bentley University
Lisette Gonzalez
Bentley University
Sophia Guerra
Bentley University
Natasha Boos
Bentley University
Research Article
Keywords: ChatGPT, AI, Generative AI, Online Tools, Higher Education, Students
DOI: https://doi.org/10.21203/rs.3.rs-4882673/v1
License: This work is licensed under a Creative Commons Attribution 4.0 International License.
Lisette Gonzalez
[email protected]
Sophia Guerra
[email protected]
Natasha Boos
[email protected]
∗ Corresponding Author
Abstract
Artificial Intelligence (AI) tools like ChatGPT are poised to transform student and educator workflows in higher education. However, there is little documentation on the range of tools students in higher education use, how they use them in coordination with other online tools for learning, and their expertise in using AI tools. We present a mixed-method analysis of a survey conducted at a doctoral-granting university in the United States investigating the adoption of AI tools in the context of other technologies. The findings include how students use GenAI tools alongside other online technologies, their perceptions of their expertise on the topic, and how they gain expertise in using AI for educational work.
Keywords: ChatGPT, AI, Generative AI, Online Tools, Higher Education, Students
1 Introduction and Literature
Artificial Intelligence (AI) researchers have developed applications for education for a long time, as evidenced by several journals, publications, and focused conferences on the topic of AI in Education, or AIEd. However, the introduction of ChatGPT and other Large Models (LMs) has changed the landscape for two reasons. One, Large Models, also known as Generative AI (GenAI), are broad tools that can perform several tasks across a huge spectrum of education-related activity, including writing assignments, grading assignments, analyzing data, etc., with non-trivial quality and incredible speed (Brown et al., 2020). This is in contrast to the prior state, in which AIEd research focused on narrow tasks such as automatic assessment of certain types of essays (Ghosh & Beigman Klebanov, 2024), intelligent tutoring systems for narrow subject materials (Mousavinasab et al., 2021), and conversational roleplay for language learning in specific scenarios (Divekar et al., 2022), among others. Two, these tools no longer live only in research labs with limited access for students and practitioners; rather, many are easily accessible for free to anyone across the world via a simple internet browser. (Duha, 2023) has noted that this is not the first time technology has transformed education and that we have seen at least two such transformations before, with Google and Wikipedia. However, when a new technology like this is released and becomes popular within the education community, it is likely to disrupt existing technologies and workflows. For example, when search engines like Google.com and online encyclopedias like Wikipedia first became popular, they augmented and, in some cases, replaced traditional information sources such as the library or subject-matter experts at universities: a student no longer had to go to the library or to an expert for every question but could instead look the answer up online (Brindley, 2006). This prompted libraries to pivot toward integrating new technology and to begin efforts to increase digital literacy.
We are at a similar crossroads again, with AI technology becoming a new source of information for university students. The education community must decide what changes to make in the curriculum, teaching, learning, technical resources, etc., provided to students (Črček & Patekar, 2023) to create a future generation of experts who can use artificial intelligence to their advantage. However, we do not know whether AI technology is being used by students in addition to, or in substitution for, the sources of information of the previous revolution, such as Google and Wikipedia. We also do not yet know what it means to have expertise in using AI tools like ChatGPT from a student's perspective. We hope to provide initial answers through this paper by discussing how students use AI technologies in relation to other online resources, thereby providing the groundwork to move the conversation forward in this next iteration of technology-enabled education.
1.1 Generative AI
AI technologies have been in the making since the 1950s (Toosi et al., 2021). The primary, simplified concept behind AI is that a model is first provided with some amount of task-specific numerical data. The AI model is then able to find the patterns that exist in the data. It can then apply the found patterns to new data to create predictions, outcomes, etc. Natural Language Processing (NLP), a subset of the AI research community, applies a similar concept, except that it treats natural language (e.g., English, Hindi) words as numbers and performs computations on them to identify patterns and generate new words. In 2017, the Transformer model showed promise in finding patterns in large amounts of text and generating new words based on those patterns in the context of language translation (Vaswani et al., 2017). OpenAI significantly scaled up the Transformer model to release the first iteration of the Generative Pre-trained Transformer (GPT) (Radford et al., 2018) and then a second, more capable GPT-2 (Radford et al., 2019). At a huge cost of upwards of 4 million US dollars (Leswing, 2023), they eventually released its next iteration, GPT-3 (Brown et al., 2020), which powered ChatGPT, the first landmark successful interface and Large Language Model (LLM) that was good at many language tasks and went far beyond what the initial version of GPT promised. Other technology companies followed suit. At the time of writing this paper, we now have a variety of LLMs that are similar to ChatGPT, e.g., Copilot [1], LLaMA [2], Gemini [3], Claude [4], etc., and their subsequent versions. Similar advances were made in graphics, image, and audio processing that allowed models like DALL-E 3 [5], Midjourney [6], and Stable Diffusion [7] to generate or edit images and audio based on text, or vice versa. Collectively, these image, sound, and language models are called large models (LMs).
Other software developers and AI startups used these Large Models (LMs) as a foundation for new products. Specific to education, scite.ai augmented LLMs with the ability to search academic papers, providing an interactive way to conduct research. Perplexity.ai infused LLMs with the ability to search the Internet, providing a new way to consume information. Elicit helps summarize research papers, while Writefull and Grammarly help edit text for better flow and grammar.
In addition, LLMs are not restricted to generating natural language; they are also able to generate programming languages. Subsequently, companies like OpenAI and Julius.ai built products that generate programming code at a user's request and run that code behind the scenes on data the user has uploaded. Essentially, these solutions allow the user to process and analyze data without knowing any programming language such as R or Python.
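To make this concrete, the following is a minimal sketch of the kind of Python code such a tool might generate and run behind the scenes when a user uploads a spreadsheet and asks, in plain English, for a summary of it. The file name, column name, and request are hypothetical and are not taken from any specific product.

    # Hypothetical example of analysis code an LLM-based assistant might
    # generate for the request "summarize my grades file" on an uploaded CSV.
    import csv
    from statistics import mean

    with open("grades.csv", newline="") as f:   # file uploaded by the user
        rows = list(csv.DictReader(f))

    scores = [float(r["score"]) for r in rows]  # "score" column is illustrative
    print(f"Number of rows: {len(scores)}")
    print(f"Average score: {mean(scores):.2f}")
    print(f"Highest: {max(scores):.1f}, lowest: {min(scores):.1f}")

The user never sees or edits this code; they only state the request in natural language and receive the resulting summary.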
[1] https://copilot.microsoft.com/
[2] https://llama.meta.com/
[3] https://gemini.google.com/
[4] https://claude.ai/new
[5] https://openai.com/index/dall-e-3/
[6] https://docs.midjourney.com/
[7] https://stability.ai/news/stable-diffusion-3
Although there is no research on how many AI tools exist today, at the time of writing this paper the authors are aware of several AI tools that can be used for educational purposes, such as writing essays, data analysis, and creating slides, and that are now also available to students and anyone else. In the discussion above, we provided a large but non-exhaustive set of AIEd use cases and products.
rejection and vigilance towards students using ChatGPT, and general panic over
academic dishonesty (Črček & Patekar, 2023).
(Kiryakova & Angelova, 2023) surveyed university professors at Trakia University and found concerns among faculty members that learners will unquestioningly accept the output from ChatGPT as correct, especially given ChatGPT's tendency to be incorrect (Islam & Islam, 2024; Rawte et al., 2023). However, the university professors also showed a positive outlook toward using ChatGPT in their teaching, e.g., to provoke interest in the subject, stimulate critical thinking, etc. (Kiryakova & Angelova, 2023). Further, (Firat, 2023) found in their qualitative analysis that the evolution of learning and educational systems and the changing role of educators were the most frequent codes in interviews with scholars and students from four countries. In a large-scale analysis of X (formerly Twitter), (Mamo et al., 2024) found that within the higher-education faculty community, while sentiment towards ChatGPT moved in a positive direction over time, negative sentiments such as anger, fear, disgust, and sadness were also found. Plagiarism, job security, bias, threats to writing, disruption, assessment threats, and disinformation were some of the factors contributing to the negative sentiments. Some of these factors are directly related to students using ChatGPT for university work.
While educators worry about the learning outcomes for their students given the use of AI, they themselves have not shied away from it, as AI has received tremendous attention from education researchers recently. (Crompton & Burke, 2023) conducted a PRISMA review of 138 articles in AIEd and found an almost three-fold increase in the number of articles published in 2021-2022 compared to previous years, with research on AI assistants being prominent. Furthermore, (Mahapatra, 2024) showed that, with some training, ChatGPT can give good feedback in writing classes and is especially useful for English as a Second Language (ESL) student writing.
Given the various positive and negative reactions from students and faculty, (Črček & Patekar, 2023) strongly encourage educators to embrace the technology, although with clear policies and guidelines on its use to optimize learning. However, doing so requires knowing how students are using AI. Although most previous studies have identified students' attitudes towards, and adoption of, ChatGPT, there is not enough literature on the other AI tools that students use and on how students use AI in the context of already existing technology. This information is crucial to get a snapshot of the online technology stack that students use to accomplish their goals at universities. Further, we also recognize that AI will be a tool in the future of work and that teaching students to be experts at using AI in their domain is likely to be a wave of education similar to digital literacy a few years ago. However, it is not clear what it means to be an expert in using AI. We bring these two perspectives together in this paper. We uniquely contribute to the existing body of literature by identifying how students use AI tools in the context of other online tools and what they consider expertise in the field to be.
We present a mixed-method analysis of a survey of 68 students at a doctoral-granting university in the United States. In the results, we show that ChatGPT is still the most popular AI tool. However, we report that AI tools have not fully replaced 'older' tools such as search engines, Wikipedia, etc. In fact, search engines are still slightly more popular than AI tools within the student community. In addition, specific to AI tools, we note various uses of AI technology reported by students, such as writing, brainstorming, and creative uses, which we detail in our report along with their implications. We discuss what students see as expertise while using AI in their field and how students gain that expertise.
2 Methodology
Table 1 Survey questions relevant to the study (response format in brackets)
How do you currently learn new topics on the Internet? (Select all that apply) [Multi-option with ability to fill in manually]
Which AI tools have you used in the past? [Multi-option with ability to fill in manually]
Please explain your expertise level and why you believe what you picked (e.g., if you have taken a course on prompting, read articles or tips/tricks related, feel like you always get the answer you want, etc.) [Fill in, large text box]
a [Tool] was automatically replaced with the name of each AI tool they submitted in the previous question
We distributed the survey via email to students at a doctoral-degree-granting university in the United States. The email invitation stated the eligibility and ineligibility criteria, mentioning that students currently enrolled in, or wanting to enroll in, classes with the project PI were not eligible, to avoid biased responses. In addition, participants were required to have some exposure to AI tools such as ChatGPT. Given that our survey also had a significant qualitative aspect, we set a target size of approximately 60 students with a relatively high compensation of a $5 gift card to encourage detailed qualitative responses. The survey was approved by the Institutional Review Board (IRB). Table 1 shows the list of questions relevant to the study. Responses to all except the last question were required to complete the survey. For the sake of brevity, we do not include sections and questions related to informed consent, eligibility criteria, logistics, and demographics, as they do not differ from the standards seen in other surveys.
3.1 Demographics
Table 2 presents a quick view of the age range, enrolled study level, gender, and
grade point average (GPA) of the 68 students who participated in the survey.
Table 3 shows the count of students across the majors, minors, and areas of subject-matter expertise they represented. Note that a student can typically major in more than one subject area alongside one or more minors. All data were self-reported by participants.
Table 2 Demographics of participants (Description, Number of Students)
Table 3 Majors and Minors (Number of Students)
Finance 26
Accounting 13
Management 9
Business Analytics 7
Computer Info. Systems 7
Data Analytics 7
Economics 7
Information Design and Corporate Communication 5
Marketing 4
Spanish 2
Business Administration 1
CFA 1
Film and Media Studies 1
Finance and Technology 1
Health studies 1
Human Factors in Information Design 1
International Affairs 1
Mathematical Sciences 1
Public Policy 1
Sociology 1
Sports Business Management 1
Undecided 1
Fig. 2 shows the tools students reported using while learning on the Internet. Google is the most used tool, followed closely by ChatGPT, YouTube, and Wikipedia.
Although Fig. 2 shows the frequency of each tool that students indicated they use, multiple tools can be used similarly and toward the same goal. For example, Bing and Google are both search engines and alternatives to each other. Therefore, we categorized these tools into types, as shown in Table 4. Readers might note that the table shows Search and Multimedia Learning separate from AI, even though search and multimedia tools use AI algorithms in their back-ends for recommendations and better search results. To clarify, from here on, when we say AI in this paper, we mean broad-application Generative AI tools like ChatGPT. Based on the categorization shown in Table 4, we counted the frequency of students who indicated using each type of tool after removing duplicates. This process allowed us to count unique mentions by tool type; for example, if one student mentioned that they used WSJ and NYT, we counted that as one mention of the News type. The popularity of the types of tools that students use is shown in Fig. 3, where Search, AI, and Multimedia Learning are the top three types of tools students indicated using while learning on the Internet.
However, we also wanted to capture the number of tools students use per tool type; for example, if a student said that they use WSJ and NYT, that counted as two tools for the News tool type. For each type of tool, on average, each student used close to one tool, as shown in Table 5.
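To illustrate the two counting procedures described above, the following short Python sketch, using hypothetical tool names and responses, computes both the unique mentions per tool type (one count per student per type) and the number of tools per type per student, from which the mean and standard deviation reported in Table 5 would be derived.

    # Illustrative sketch of the two counts described in the text (data are hypothetical).
    from statistics import mean, stdev

    TOOL_TYPE = {"Google": "Search", "Bing": "Search", "ChatGPT": "AI",
                 "YouTube": "Multimedia Learning", "WSJ": "News", "NYT": "News"}

    responses = [                      # each inner list is one student's tools
        ["Google", "ChatGPT", "WSJ", "NYT"],
        ["Google", "Bing", "YouTube"],
    ]

    # Count 1: unique mentions of each tool type (duplicate types within a student collapse to one).
    mentions = {}
    for tools in responses:
        for tool_type in {TOOL_TYPE[t] for t in tools}:
            mentions[tool_type] = mentions.get(tool_type, 0) + 1

    # Count 2: number of tools per type for each student, then mean and std. dev. per type.
    per_student = {}
    for tools in responses:
        counts = {}
        for t in tools:
            counts[TOOL_TYPE[t]] = counts.get(TOOL_TYPE[t], 0) + 1
        for tool_type, n in counts.items():
            per_student.setdefault(tool_type, []).append(n)

    print(mentions)  # e.g., {"Search": 2, "AI": 1, "News": 1, "Multimedia Learning": 1}
    for tool_type, counts in per_student.items():
        spread = stdev(counts) if len(counts) > 1 else 0.0
        print(tool_type, f"{mean(counts):.2f} +/- {spread:.2f}")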
Fig. 2 Popularity of Tools
Table 5 Number of tools used per tool type
Tool Type: Mean ± Std. dev.
Search: 1.09 ± 0.29
AI: 1.03 ± 0.19
Multimedia Learning: 1 ± 0
Encyclopedia: 1 ± 0
Guided learning: 1.11 ± 0.3
Social Media: 1.33 ± 0.6
News: N/A, as only one person reported using news articles
We based the descriptions of these tools on our own familiarity. However, the landscape of AI tools is rapidly changing, and the AI tools may differ from the descriptions we have established by the time of publication.
In addition, for each tool participants said they currently use, we asked how often they use it. Fig. 4 shows the responses for how often they use ChatGPT. Among the few people who use other AI tools, one person who uses Perplexity said they use it 2-3 times a week, three reported rare usage of other tools, and one person mentioned that they use Copilot/DALL-E as often as, and together with, ChatGPT.
Fig. 4 Frequency of use of ChatGPT
Fig. 5 Expertise with AI tools
One participant mentioned that they liked ChatGPT because “... it skips the part of me scrolling different sites and gathering information”, alluding to ChatGPT’s information-distilling capabilities. Several other participants also mentioned that they use ChatGPT to summarize information or “...get basic information on a topic”, or even to organize their thinking, e.g., “... get a summary from chatgpt to help me format the text ideas in my head”, “... help think through ideas”, “... getting a step by step to solving economics problems”. Some even went to the extent of saying that they use ChatGPT “...as a tutor”.
Writing Outline Some students also mentioned that they use ChatGPT to “... create outlines for papers I need to write.”
3.3.3 Brainstorming
Students mentioned that they used ChatGPT to brainstorm university-related work, such as to “... get ideas for essays”, “...suggestions for topics”, “...how to start writing about something”, “generate ideas...”, “brainstorming ideas...”, when they “... want some direction”, “...create outlines for papers”, “... to know the answer for questions which are interesting for me”, or even to “... help me organize thoughts”.
3.3.5 Entertainment
Some students mentioned that they used ChatGPT for “...games, entertainment”, “... writing songs”, or for “... hobbies where it helps me find new music and ideas”.
We presented students’ self-ratings of their expertise with AI tools (Fig. 5) in the analysis above. Here, we discuss their qualitative responses about why they self-rated their expertise as they did and about whether they had any thoughts on what it means to be an expert in AI. We thematically analyze those responses and present them in this section.
3.4.4 Conceptualization of Technology
We found two relationship metaphors in how people described their own expertise: one participant called it “... a student-teacher [relationship] where I have to teach the teacher to teach me in the most efficient way possible”, and another mentioned “... just talk to Chatgpt(prompt) like [it is] a child and specify the environment and the answer I am looking for.”
4 Discussion
4.1 Demographics
We conducted a survey in which 68 students participated. We know from their demographics that most of these participants were between 18 and 23 years old and enrolled in undergraduate and graduate courses at Bentley University. We had slightly more participation in the survey from people who identified as female than male, with no participation from any other gender. We know that most of these participants were in good academic standing, as almost everyone had a GPA above 3.0. Furthermore, most of the participants came from non-technical, business-related areas such as Finance and Accounting. These demographic facts are important, as the results might change if the participant demographics change. For example, we already know that technical students might use AI tools differently than non-technical users (Zamfirescu-Pereira et al., 2023). In addition, our inclusion criteria involved students who were at least familiar with AI tools. We acknowledge that the findings synthesized below might not apply in the same way to a broader population of university students who have never tried an AI tool. Therefore, the rest of the discussion is in the context of our participants’ background. Tables 2 and 3 show a detailed view of the demographic data.
4.3 AI Tool Usage
4.3.1 Popularity and Frequency of Use
For AI tools specifically, ChatGPT is better known than any other tool, with all participants mentioning that they have used it in the past. In general, the participants mainly indicated familiarity with multipurpose large language models. However, some participants also indicated familiarity with AI tools designed for narrower use cases, e.g., writing assistants and presentation assistance. An exhaustive list of the AI tools the participants indicated having used is found in Table 6.
(Sanasintani, 2023)’s survey found that higher-education students are not only familiar with AI concepts but prefer them over traditional educational methods, and believe that including AI in the curriculum is important, as it can improve the quality of learning in higher education. However, in the results section we reported that only 13 of 68 students selected an option other than ChatGPT. Of those 13, only 4 selected more than one additional option, with one person writing in four options in addition to ChatGPT. We notice that familiarity with tools beyond the most popular ChatGPT rests with only a small subset of the sample, indicating that AI familiarity and education are not evenly distributed and that knowledge of more than the most popular tool exists only within a very small sliver of the sample.
Further, we also note that while many tools were listed under “tools used in the past” (see Table 6), far fewer AI tools, and only multipurpose LLMs, were listed when participants were asked about current tool usage (see Fig. 2). This might indicate that narrow-use-case AI tools either did not meet students’ needs or that the multipurpose LLMs did a good enough job across such a wide range of tasks that narrow-use-case tools were not necessary. Especially in light of each tool having a price associated with long-term, high-volume use, we see a churn in our sample from narrow-use-case AI tools to ChatGPT.
When using AI tools specifically, in contrast to (Singh et al., 2023), who noted in their survey that students did not use ChatGPT for academic purposes frequently, we saw most of the participants use it daily or 2-3 times a week. We believe that the popularity and quality of the tool have increased in the student community since that research was published, leading to more adoption.
Participants reported using ChatGPT for writing tasks such as creating an outline, and ChatGPT can help students brainstorm interactively. In addition, students who do not speak English as their first language might find tools like ChatGPT a great help (Yan, 2023). The positive impact on language acquisition is also seen in our data, where students mentioned that ChatGPT helped expand their vocabulary and helped with the tone of their writing.
In addition to regular writing, we note that students use AI tools for writing software code and even executing the code to perform data analysis.
Learning Many participants mentioned ChatGPT’s ability to distill information into coherent text. They mention that this ability can break a topic down so it can be explained at an easier level than how it was first introduced to them. Some even say that ChatGPT can act as their tutor. However, we note that in the specific case of a math tutor, (Bastani et al., 2024) found that ChatGPT can actually harm education by acting as a “crutch”: while ChatGPT can make tasks easier for humans, when access to it was taken away from students, they performed worse than students who never had access to ChatGPT. (Crawford et al., 2024) also warn against substituting AI for humans (e.g., in the case of learning complex topics, brainstorming, etc.), as it can be detrimental to the student from a social perspective.
Other The open-ended responses revealed that students are using ChatGPT for various personal, academic, and professional purposes. Given the holistic nature of university life, students reported using ChatGPT for non-academic matters like mental health, relationship advice, date ideas, and entertainment. This highlights the diverse range of university activities in which ChatGPT is being utilized.
4.3.3 AI Expertise
How students gained expertise Our finding that most people gained expertise just by using ChatGPT points towards three things. One, the technology itself and the user experience are simple enough to use; this is similar to (Romero-Rodríguez et al., 2023), who found that user experience was the fundamental determinant of ChatGPT acceptance, and to the finding that ChatGPT has an easy-to-use interface (Turmuzi et al., 2024). Two, it goes against the view of some professors that the technology must be studied before being used (Kiryakova & Angelova, 2023). Three, the various course offerings from universities, MOOCs, etc. on Prompt Engineering or LLMs might not resonate with this demographic of students, who think they can gain expertise just by using the tools, unless a clear additional value can be demonstrated. In addition to usage, we also noted that a few students learned how to use ChatGPT through research or classes at universities. A few students also mentioned that they learned on social media.
We note that the difficulty in using ChatGPT is not the graphical user interface itself but rather crafting the right set of words and providing enough context to get the appropriate response, especially for advanced use cases. Furthermore, since ChatGPT can generate incorrect information, one must also have sufficient expertise in the domain to identify responses that may be wrong. Domain expertise remains crucial, as ChatGPT does not provide sources, and students using the tool must be able to identify good and bad information by themselves or with the help of other tools or experts. In (Ngo, 2023)’s study, students were aware of this issue; however, since not many students in our study mentioned the need for domain expertise, we note a possible deviation from that finding.
Metaphors for Learning
Metaphors and anthropomorphizations are often used to conceptualize new technology (Hurtienne & Blessing, 2007), especially one such as ChatGPT, which is sufficiently complex in its workings and can perform a variety of tasks at scale with non-trivial quality. We found two metaphors without specifically asking the students to identify them; they were mentioned in the context of expertise while using AI. In both metaphors, participants used anthropomorphizations, calling ChatGPT a ‘Child’ and a ‘Teacher’ that needs to be taught. In both cases, the participants positioned themselves as the human guide in the relationship with AI rather than the other way around.
7 Conclusion
Our research reveals that while AI tools like ChatGPT are rapidly gaining popularity among university students, they have not replaced traditional online learning tools such as search engines and multimedia platforms. On average, students use 3.64 tools to learn online, with search and AI tools like Google and ChatGPT being the top two. Students are actively integrating AI into their workflows, particularly for tasks such as writing and learning complex topics. In addition, some are also using it to meet their entertainment, mental health, social, and other personal needs on campus. We have documented various use cases and listed the technology tools, including several AI tools, that are now part of students’ learning toolkit.
The concept of expertise in AI remains multifaceted, with students attributing it to both technical knowledge and practical experience, and most students saying that they gained expertise just by using the tool rather than through any formal introduction or education. In addition, we find that knowledge of AI tools beyond the most popular ChatGPT may be concentrated within only a few students.
References
Albadarin, Y., Saqr, M., Pope, N., & Tukiainen, M. (2024). A systematic literature review of empirical research on ChatGPT in education. Discover Education, 3 (1). https://doi.org/10.1007/s44217-024-00138-2
Almaraz-López, C., Almaraz-Menéndez, F., & López-Esteban, C. (2023). Comparative Study of the Attitudes and Perceptions of University Students in Business Administration and Management and in Education toward Artificial Intelligence. Education Sciences, 13 (6). https://doi.org/10.3390/educsci13060609
Al-Zahrani, A. M., & Alasmari, T. M. (2024). Exploring the impact of artificial intelligence on higher education: The dynamics of ethical, social, and educational implications. Humanities and Social Sciences Communications, 11 (1), 1–12.
Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2024). Generative ai can harm learning. Available at SSRN 4895486.
Brindley, L. (2006). Re-defining the library. Library hi tech, 24 (4), 484–495.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020). Language models are few-shot learners. Advances in neural information processing systems, 33, 1877–1901.
Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20 (1). https://doi.org/10.1186/s41239-023-00411-8
Chauke, T. A., Mkhize, T. R., Methi, L., & Dlamini, N. (2024). Postgraduate Students’ Perceptions on the Benefits Associated with Artificial Intelligence Tools for Academic Success: The Use of the ChatGPT AI Tool. Journal of Curriculum Studies Research, 6 (1), 44–59. https://doi.org/10.46303/jcsr.2024.4
Crawford, J., Allen, K. A., Pani, B., & Cowling, M. (2024). When artificial intelligence substitutes humans in higher education: the cost of loneliness, student success, and retention. Studies in Higher Education, 49 (5), 883–897. https://doi.org/10.1080/03075079.2024.2326956
Črček, N., & Patekar, J. (2023). Writing with AI: University Students’ Use of ChatGPT. Journal of Language and Education, 9 (4), 128–138. https://doi.org/10.17323/jle.2023.17379
Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education:
the state of the field. International Journal of Educational Technology in
Higher Education, 20 (1). https://doi.org/10.1186/s41239-023-00392-8
Crompton, H., & Burke, D. (2024). The Educational Affordances and Challenges of ChatGPT: State of the Field. TechTrends, 68 (2), 380–392. https://doi.org/10.1007/s11528-024-00939-0
Divekar, R. R., Drozdal, J., Chabot, S., Zhou, Y., Su, H., Chen, Y., Zhu, H., Hendler, J. A., & Braasch, J. (2022). Foreign language acquisition via artificial intelligence and extended reality: Design and evaluation. Computer Assisted Language Learning, 35 (9), 2332–2360.
Duha, M. S. U. (2023). ChatGPT in Education: An Opportunity or a Challenge for the Future? TechTrends, 67 (3), 402–403. https://doi.org/10.1007/s11528-023-00844-y
Firat, M. (2023). What ChatGPT means for universities: Perceptions of scholars and students. Journal of Applied Learning and Teaching, 6 (1), 57–63. https://doi.org/10.37074/jalt.2023.6.1.22
Ghosh, D., & Beigman Klebanov, B. (2024, July). Automatic evaluation of argumentative writing by young students [US Patent 12,046,155].
Hurtienne, J., & Blessing, L. (2007). Metaphors as tools for intuitive interaction with technology. Metaphorik.de, 12 (2), 21–52.
Islam, I., & Islam, M. N. (2024). Exploring the opportunities and challenges of ChatGPT in academia. Discover Education, 3 (1). https://doi.org/10.1007/s44217-024-00114-w
Jakesch, M., Bhat, A., Buschek, D., Zalmanson, L., & Naaman, M. (2023). Co-writing with opinionated language models affects users’ views. Proceedings of the 2023 CHI conference on human factors in computing systems, 1–15.
Kiryakova, G., & Angelova, N. (2023). ChatGPT—A Challenging Tool for the University Professors in Their Teaching Practice. Education Sciences, 13 (1056), 1–19. https://doi.org/10.3390/educsci13101056
Leswing, K. (2023, April). Chatgpt and generative ai are booming, but the costs can be extraordinary. https://www.cnbc.com/2023/03/13/chatgpt-and-generative-ai-are-booming-but-at-a-very-expensive-price.html
Mahapatra, S. (2024). Impact of ChatGPT on ESL students’ academic writing skills: a mixed methods intervention study. Smart Learning Environments, 11 (1). https://doi.org/10.1186/s40561-024-00295-9
Mamo, Y., Crompton, H., Burke, D., & Nickel, C. (2024). Higher education
faculty perceptions of chatgpt and the influencing factors: A sentiment
analysis of x. TechTrends, 68 (3), 520–534.
Maziriri, E. T., Gapa, P., & Chuchu, T. (2020). Student perceptions towards
the use of youtube as an educational tool for learning and tutorials.
International Journal of Instruction, 13 (2), 119–138.
Melina, G., Panton, A. J., Pizzinelli, C., Rockall, E., & Tavares, M. M. (2024).
Gen-ai: Artificial intelligence and the future of work.
Mousavinasab, E., Zarifsanaiey, N., R. Niakan Kalhori, S., Rakhshan, M., Keikha, L., & Ghazi Saeedi, M. (2021). Intelligent tutoring systems: A systematic review of characteristics, applications, and evaluation methods. Interactive Learning Environments, 29 (1), 142–163.
Ngo, T. T. A. (2023). The perception by university students of the use of chatgpt in education. International Journal of Emerging Technologies in Learning (Online), 18 (17), 4.
Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al. (2018). Improving language understanding by generative pre-training.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.
(2019). Language models are unsupervised multitask learners. OpenAI
blog, 1 (8), 9.
Rawte, V., Chakraborty, S., Pathak, A., Sarkar, A., Tonmoy, S., Chadha, A., Sheth, A. P., & Das, A. (2023). The troubling emergence of hallucination in large language models–an extensive definition, quantification, and prescriptive remediations. arXiv preprint arXiv:2310.04988.
Romero-Rodríguez, J. M., Ramírez-Montoya, M. S., Buenestado-Fernández, M., & Lara-Lara, F. (2023). Use of ChatGPT at University as a Tool for Complex Thinking: Students’ Perceived Usefulness. Journal of New Approaches in Educational Research, 12 (2), 323–339. https://doi.org/10.7821/naer.2023.7.1458
Sallam, M., Elsayed, W., Al-Shorbagy, M., Barakat, M., Khatib, S. E., Ghach,
W., Alwan, N., Hallit, S., & Malaeb, D. (2024). ChatGPT Usage and
Attitudes are Driven by Perceptions of Usefulness, Ease of Use, Risks,
and Psycho-Social Impact: A Study among University Students in the
UAE, 1–17. https://www.researchsquare.com/article/rs-3905717/latest
Sanasintani, S. (2023). Revitalizing The Higher Education Curriculum Through An Artificial Intelligence Approach: An Overview. Journal of Social Science Utilizing Technology, 1 (4), 239–248. https://doi.org/10.55849/jssut.v1i4.670
Singh, H., Tayarani-Najaran, M.-H., & Yaqoob, M. (2023). Exploring computer science students’ perception of chatgpt in higher education: A descriptive and correlation study. Education Sciences, 13 (9), 924.
Toosi, A., Bottino, A. G., Saboury, B., Siegel, E., & Rahmim, A. (2021). A brief
history of ai: How to prevent another winter (a critical review). PET
clinics, 16 (4), 449–469.
Turmuzi, M., Suharta, I. G. P., Astawa, I. W. P., & Suparta, I. N. (2024).
Perceptions of primary school teacher education students to the use of
chatgpt to support learning in the digital era. International Journal of
Information and Education Technology, 14 (5).
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N.,
Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. Advances
in neural information processing systems, 30.
Yan, D. (2023). Impact of chatgpt on learners in a l2 writing practicum: An exploratory investigation. Education and Information Technologies, 28 (11), 13943–13967.
Zamfirescu-Pereira, J., Wong, R. Y., Hartmann, B., & Yang, Q. (2023). Why
johnny can’t prompt: How non-ai experts try (and fail) to design llm
prompts. Proceedings of the 2023 CHI Conference on Human Factors
in Computing Systems, 1–21.