Chapter 2
At Punta Princesa National High School, examining the impact of AI on student learning outcomes
can provide valuable insights into how these technologies are transforming education at a
new level. Classroom Management: AI tools can help in managing classroom behavior by
predicting potential issues and suggesting strategies to address them. This can create a more
conducive learning environment.
This review examines Artificial Intelligence (AI) in education by analyzing its impact on student learning outcomes.
Through a comprehensive literature review, the research synthesizes current findings on the
opportunities and challenges that AI presents. The study delves into AI's role in personalizing learning experiences and enhancing
engagement; challenges such as data privacy and algorithmic bias are also assessed. This research also identifies existing
gaps in the literature and suggests avenues for future inquiry, contributing to a deeper
understanding of how AI can optimize student success. AI-powered adaptive learning systems can adjust the difficulty
level of course materials to suit individual students' needs, motivating them to learn more.
Analysis of academic performance can also help students identify knowledge gaps.
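The difficulty-adjustment idea described above can be sketched in code. This is a purely illustrative example, not a description of any system in use at PPNHS; the function name, level range, and score thresholds are all assumptions:

```python
# Hypothetical sketch of an adaptive learning rule: raise the difficulty
# level after sustained success, lower it after repeated struggle.
# The 1-5 level range and the 0.85/0.50 thresholds are illustrative.

def adjust_difficulty(current_level: int, recent_scores: list) -> int:
    """current_level: 1 (easiest) to 5 (hardest);
    recent_scores: fractions correct (0.0-1.0) on recent exercises."""
    if not recent_scores:
        return current_level  # no evidence yet; keep the level unchanged
    avg = sum(recent_scores) / len(recent_scores)
    if avg >= 0.85:  # consistently mastering the material
        return min(current_level + 1, 5)
    if avg < 0.50:   # struggling; step back to rebuild confidence
        return max(current_level - 1, 1)
    return current_level  # in the productive middle range

print(adjust_difficulty(3, [0.90, 0.95, 0.88]))  # → 4 (promoted)
print(adjust_difficulty(3, [0.40, 0.45]))        # → 2 (eased)
```

A real adaptive system would use a richer learner model, but the core loop has this shape: estimate mastery from recent performance, then adjust the difficulty of the next material.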
The integration of Artificial Intelligence (AI) in education has sparked significant debate regarding its impact on student
learning outcomes. While AI has the potential to enhance educational experiences, several
challenges and concerns arise. Equity and Access: not all students have equal access to
technology and AI resources. Data Privacy and Security: the use of AI often involves
collecting and analyzing large amounts of student data, raising concerns about privacy and
the potential misuse of information. Skill Development: over-reliance on AI tools may hinder
the development of students' own skills.
Artificial Intelligence (AI) and robotics are likely to have a significant long-term
impact on higher education (HE). The scope of this impact is hard to grasp, partly because
the literature is siloed and partly because the meaning of the concepts themselves keeps changing. Design
fictions that vividly imagine future scenarios of AI or robotics in use offer a means both to
explain and query the technological possibilities. The paper describes the use of a wide-
ranging narrative literature review to develop eight such design fictions that capture the
range of potential use of AI and robots in learning, administration and research. They prompt
wider discussion by instantiating such issues as how they might enable the teaching of higher-order skills or change staff roles, as well as exploring the impact on human agency and the
nature of datafication. The potential of Artificial Intelligence (AI) and robots to reshape our
future has attracted vast interest among the public, government and academia in the last
few years. As in every other sector of life, higher education (HE) will be affected, perhaps in
a profound way (Bates et al., 2020; DeMartini and Benussi, 2017). HE will have to adapt to
educate people to operate in a new economy and potentially for a different way of life. AI and
robotics are also likely to change how education itself works, altering what learning is like,
the role of teachers and researchers, and how universities work as institutions. However, the
potential changes in HE are hard to grasp for a number of reasons. One reason is that impact
is, as Clay (2018) puts it, “wide and deep” yet the research literature discussing it is siloed.
AI and robotics for education are separate literatures, for example. AI for education, learning
analytics (LA) and educational data mining also remain somewhat separate fields.
Related technologies, such as text and data mining (TDM), are also usually discussed separately. Thus, if we wish to grasp
the potential impact of AI and robots on HE holistically we need to extend our vision across
the breadth of these diverse literatures. A further reason why the potential implications of AI
and robots for HE are hard to grasp is that, rather than a single technology,
something like AI is an idea or aspiration for how computers could participate in human
decision making. Faith in how to do this has shifted across different technologies over time;
as have concepts of learning (Roll and Wylie, 2016). Also, because AI and robotics are ideas
that have been pursued over many decades, there are some quite mature applications:
impacts have already happened. Equally, there are potential applications that are being
developed, and many more that are only just beginning to be imagined. So, confusingly from a temporal
perspective, uses of AI and robots in HE are past, present and future. Although the picture is hard to fully
grasp, it is important that a wider understanding and debate be achieved, because AI and
robotics pose a range of pedagogic, practical, ethical and social justice challenges.
Introducing AI and robots will not be a smooth process; it will bring its own challenges and ironies.
There is also a strong tradition in the educational literature of critical responses to
technology in HE. These typically focus on issues such as the potential of technology to
dehumanise the learning experience. They are often driven by fear of commercialisation or
datafication of HE. Thus the questions around the use of AI and robots are as much about
what we should do as what is possible (Selwyn, 2019a). Yet according to a recent literature
review most current research about AI in learning is from computer science and seems to
neglect both pedagogy and ethics (Zawacki-Richter et al., 2019). Research on AIEd has also
been recognised to have a WEIRD (western, educated, industrialized, rich and democratic)
bias for some time (Blanchard, 2015). One device to make the use of AI and robots more
graspable is fiction, with its ability to help us imagine alternative worlds. Science fiction has
long played a role in shaping the future (Dourish and Bell, 2014). Science fiction has had a fascination with AI and
robots, presumably because they enhance or replace defining human attributes: the mind.
AI is also increasingly present in educational settings, transforming traditional teaching and learning processes. This review explores the
integration of AI at Punta Princesa National High School (PPNHS). The
school's Learning Recovery and Continuity Plan highlights the integration of technology to
enhance teaching and learning. The integration of AI in education holds great potential, but
challenges related to data privacy and over-reliance on AI must be addressed to ensure the responsible
and effective use of these technologies in educational settings. This review provides a
particular focus on PPNHS. Future research should continue to explore the benefits and challenges of
AI in education. Artificial intelligence is transforming numerous industries and aspects of life, from healthcare and
finance to education and transportation, with its potential to automate tasks and enhance
efficiency. Artificial intelligence is opening up new possibilities for innovation and growth, but also raises
important questions about job displacement, bias, and the need for ethical guidelines. As
artificial intelligence continues to evolve, it is likely to have a profound impact on the future
of work, education, and society as a whole, requiring humans to adapt and develop new
skills.
Artificial intelligence (AI) systems offer effective support for online learning and
teaching, including personalizing learning for students, automating instructors' routine
tasks, and powering adaptive assessments. However, while the opportunities for AI are
promising, the impact of AI systems on the culture of, norms in, and expectations about
interactions between students and instructors are still elusive. In online learning, learner–
instructor interaction (inter alia, communication, support, and presence) has a profound
impact on students' satisfaction and learning outcomes. Thus, identifying how students
and instructors perceive the impact of AI systems on their interaction is important, in order to
identify any gaps, challenges, or barriers preventing AI systems from achieving their
intended potential and risking the safety of these interactions. To address this need for
research, this study examined students' and instructors' perceptions of AI
systems in online learning. Findings show that participants envision that adopting AI systems
could enable personalized learner–instructor interaction at scale, but at
the risk of violating social boundaries. Although AI systems have been positively
recognized for improving the quantity and quality of communication, for providing just-in-
time, personalized support for large-scale settings, and for improving the feeling of
connection, there were concerns about responsibility, agency, and surveillance issues.
These findings have implications for the design of AI systems to ensure explainability
and to preserve human agency. The contributions of
this study include the design of AI system storyboards which are technically feasible, the
capture of students' and instructors' concerns of AI systems through Speed Dating, and practical implications for
maximizing the positive impact of AI systems while minimizing the negative ones. At the same time,
over-reliance can negatively affect critical thinking, creativity, and independent learning.
Despite the support AI can provide for students, students were concerned about responsibility issues that could
arise when AI’s unreliable and unexplained answers lead to negative consequences. For
instance, when communicating with an AI Teaching Assistant, the black-box nature of the
AI system leaves no choices for students to check whether the answers from AI are right
or wrong (Castelvecchi, 2016). Accordingly, students believe they would have a hard time
deciphering the reasoning behind an AI’s answer. This can result in serious responsibility
issues if students apply an AI’s answers to their tests but instructors mark them as wrong.
Moreover, students would find more room to argue for their marks because of AI's
unreliability. Acknowledging that AI systems cannot always provide the right answer, students also wanted explainability, which
refers to the ability to offer human-understandable justifications for the AI's output or procedures.
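The explainability requirement described here can be illustrated with a small sketch: the system returns not just an answer but a human-readable justification and a confidence score, so a student can decide whether to trust it or verify with the instructor. The types, fields, and threshold below are hypothetical, not the API of any real tutoring system:

```python
# Toy model of an "explainable" AI answer: output plus justification plus
# confidence. All names and the 0.8 confidence threshold are illustrative.
from dataclasses import dataclass

@dataclass
class ExplainedAnswer:
    answer: str          # the AI's output
    justification: str   # human-understandable reason for the output
    confidence: float    # 0.0-1.0; low values signal "verify this"

def present(a: ExplainedAnswer) -> str:
    flag = "" if a.confidence >= 0.8 else " [low confidence - please verify]"
    return f"{a.answer}{flag}\nWhy: {a.justification}"

ans = ExplainedAnswer("x = 4", "Subtract 3 from both sides of x + 3 = 7.", 0.95)
print(present(ans))
```

Even this toy version addresses the black-box concern in the passage: the student sees the reasoning and a signal of reliability instead of a bare answer.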
AI, through simulations and virtual labs, can also make learning more immersive and enjoyable. Students
benefit from personalized instruction, enhanced critical thinking skills, and better preparation for the
evolving demands of the digital age, contributing to a more dynamic and adaptive
workforce. However, challenges like data privacy concerns and potential overreliance on AI must be addressed.
Careful considerations are crucial to maximize the benefits of AI and mitigate potential drawbacks,
ensuring a positive and equitable impact on students' overall learning and development.
AI can also increase student engagement. Virtual tutors and AI-driven tools offer real-time support, enhancing critical
thinking skills and independent learning. However, when it comes to negative effects,
over-reliance on AI may hinder the development of essential non-cognitive skills and human interactions crucial for social
and emotional development. Ultimately, whether AI is considered good or bad for students
depends on how it is used, calling for a balanced approach. When utilized thoughtfully, AI can enhance the learning experience, providing
valuable support and personalized instruction. Yet, vigilance is crucial to address potential
risks and ensure that AI serves as a positive force in education, promoting equitable
access, student well-being, and the cultivation of diverse skills essential for success in
the modern world. The increasing use of artificial intelligence (AI) in higher education is also reshaping
how students engage with their academic and personal lives. However, the impact of AI
on student well-being remains underexplored. This review synthesizes current
literature to assess how AI affects student well-being, focusing on mental health, social
interactions, and academic experiences. While AI offers benefits such as personalized
learning, mental health support, and improved communication efficiency, it also raises
concerns regarding digital fatigue, loneliness, technostress, and declines in social skills and emotional
intelligence, leading to social isolation and anxiety. Furthermore, issues such as data
privacy and algorithmic bias pose challenges in educational
environments. The review highlights the need for balanced AI integration that supports
both academic success and student well-being, advocating for further empirical studies.
As AI becomes more embedded in education, it is crucial to develop strategies that mitigate its negative effects while
maximizing its benefits. The integration of artificial intelligence (AI) into higher education is reshaping how students engage with academic
content and spend their free time, yet its impact on their well-being remains
underexplored. Despite the growing use of AI in both academic tasks and personal
activities, empirical studies on its effects on student well-being are notably scarce. This
study addresses this gap by conducting a mini-review that seeks to synthesize the limited
experimental and empirical evidence available on this critical issue. While the small
number of studies reflects the early stages of research in this field, it is vital to establish
a clear understanding of what is currently known. By doing so, this mini-review lays the
groundwork for understanding how AI affects students' mental health, social interactions, and overall well-being in higher
education. Conducting this review is timely and necessary to create a foundation for
future research as the use of AI continues to expand.
Theoretical Framework
According to the singularity concept, if machines were able to achieve singularity, then “human
affairs, as we know them, could not continue.” Exactly how or when we arrive at this era
is highly debated. Some futurists regard the singularity as an inevitable fate, while others
actively work to prevent the creation of a digital mind beyond human oversight.
Constructivist learning theory emphasizes the active construction of knowledge by learners (Blikstein & Worsley, 2016; Siemens & Long, 2011). AI
technologies can support this through problem-solving scenarios and simulations. For instance, Intelligent Tutoring Systems (ITS)
provide individualized guidance and feedback as students work through problems (Anderson et al., 1995). This aligns with the principles of constructivism, where learners
build understanding through their own activity. Research
highlights the effectiveness of AI in promoting active learning, emphasising its role in
accommodating individual learners and their unique cognitive processes (Dede, 2010). AI in education excels at creating
personalized learning pathways based on individual student needs (Russell & Norvig, 2010). Machine learning algorithms
can tailor content and pacing, which fosters a more student-centric approach, where learners have the autonomy to explore
topics at their own pace. Research in this domain underscores the positive impact of
such personalization on knowledge acquisition, reinforcing the constructivist tenet that learners actively shape their
own learning.
The development of artificial intelligence (AI) clearly reveals that brain science has resulted in breakthroughs
in AI, such as deep learning. At present, although the developmental trend in AI is strong, there remains a large gap between machine
and human intelligence. It is urgent to establish a bridge between brain science and AI
research, including a link from brain science to AI, and a connection from knowing the
brain to simulating the brain. The first steps toward this goal are to explore the secrets of
the brain; to draw the connection diagram of the brain; and to integrate neuroscience experiments with theory,
models, and statistics. Based on these steps, a new generation of AI theory and methods
can be studied, and a subversive model and working mode can be developed that moves from machine perception toward machine cognition.
The Technology Acceptance Model shows that the acceptance and use of information
technologies can bring immediate and long-term benefits at organisational and individual
levels, such as improved performance, financial and time efficiency, and convenience
(Foley Curley, 1984; Sharda, Barr & McDonnell, 1988). The potential of technology to
deliver benefits has long motivated IS management research to examine users' willingness
to adopt new technologies. The adoption of technology became of primary importance in the 1980s, which coincided with
the growth of the use of personal computers. However, a major stumbling block in the
development of research on the adoption of personal computing was the lack of
empirical insight into users' responses to information system performance. Before TAM,
several streams of research had sought to advance IS-related research (e.g. Benbasat, Dexter & Todd, 1986; Robey & Farrow,
1982; Franz & Robey, 1986). Research had emphasised the importance of factors such
as users’ involvement in the design and implementation of information systems (Robey &
Farrow, 1982; Franz & Robey, 1986). A second stream of research had been underpinned
by a human-factors perspective when it came to evaluating and refining system design and characteristics (Gould & Lewis, 1985;
Good et al., 1986). Those studies had widely used subjective performance perception
scales but neglected the validation of the quality of those measures. As a result, the
correlation of those subjective measures with actual use had not been sufficiently
significant to confirm their internal and external validity (De Sanctis, 1983; Ginzberg,
1981; Schewe, 1976; Srinivasan, 1985). Hence, there was a need to develop reliable
measures linking system characteristics and system use. The Theory of Reasoned Action (TRA), developed by
Ajzen and Fishbein (Ajzen, 2011) was used to predict the attitudinal underpinnings of
behaviours across a wide range of areas. However, the generic nature of TRA stimulated
a great deal of discussion on the theoretical limitations of the application of the model in
the IS field (Davis, Bagozzi & Warshaw, 1989; Bagozzi, 1981). The model did not measure
variables specific to technology use. Hence, researchers had to identify the factors salient
to the utilisation of technology and information systems. To address the limitations related
to the lack of a theoretical model and scales to measure the acceptance of technology,
Davis (Davis, 1989) developed the technology acceptance model (TAM) based on TRA.
The model's underpinning logic was that in the context of technology utilisation,
behavioural intention was shaped not by a generic attitude but by
specific beliefs related to technology use. The goal of TAM was to become a general model of the determinants of technology acceptance.
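TAM's core logic — that intention to use a technology is driven by specific beliefs (perceived usefulness and perceived ease of use) rather than a generic attitude — can be sketched as a simple weighted model. The weights below are invented for illustration; real TAM studies estimate such relationships from survey data using regression or structural equation modelling:

```python
# Illustrative-only sketch of TAM's structure: behavioural intention as a
# weighted combination of perceived usefulness (PU) and perceived ease of
# use (PEOU), both on a 1-7 Likert scale. The 0.6/0.4 weights are assumptions.

def behavioural_intention(pu: float, peou: float,
                          w_pu: float = 0.6, w_peou: float = 0.4) -> float:
    """Predict intention to use a technology from the two TAM beliefs."""
    return w_pu * pu + w_peou * peou

# A respondent who finds a tool very useful (6) but hard to use (3):
print(round(behavioural_intention(6, 3), 2))  # → 4.8
```

The point of the sketch is the shape of the model, not the numbers: the inputs are technology-specific beliefs, exactly the refinement TAM made to the generic TRA.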
Definition of Terms
SILOED - (of a system, process, department, etc.) isolated from others.
GRASP - to seize or hold something firmly with the hand; also, a person's understanding or interpretation of something.
VIVIDLY - in a way that produces powerful feelings or strong, clear images in the mind.
HIGHER EDUCATION (HE) - education that takes place after secondary school, typically at a college or university.
LEVERAGING - using something that you already have in order to achieve something new or better.
CURRICULA - courses of study, or the sets of lessons and materials used to teach a subject.
References
Amer, M., Daim, T., & Jetter, A. (2013). A review of scenario planning. Futures, 46, 23–40.

Atanassova, I., Bertin, M., & Mayr, P. (2019). Editorial: Mining scientific papers: NLP-enhanced bibliometrics. Frontiers in Research Metrics and Analytics. https://doi.org/10.3389/frma.2019.00002

Auger, J. (2013). Speculative design: Crafting the speculation. Digital Creativity, 24(1), 11–35.

Badampudi, D., Wohlin, C., & Petersen, K. (2015). Experiences from using snowballing and database searches in systematic literature studies. In Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering (pp. 1–10).

Baker, T., Smith, L., & Anissa, N. (2019). Educ-AI-tion rebooted? Exploring the future of artificial intelligence in schools and colleges. Nesta. https://www.nesta.org.uk/report/education-rebooted/

Bates, T., Cobo, C., Mariño, O., & Wheeler, S. (2020). Can artificial intelligence transform higher education? International Journal of Educational Technology in Higher Education. https://doi.org/10.1186/s41239-020-00218-x

Bayne, S. (2015). Teacherbot: Interventions in automated teaching. Teaching in Higher Education, 20(4), 455–467.

Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B., & Tanaka, F. (2018). Social robots for education: A review. Science Robotics, 3(21). https://doi.org/10.1126/scirobotics.aat5954

Brevini, B. (2020). Black boxes, not green: Mythologizing artificial intelligence and omitting the environment. Big Data & Society, 7(2).

Canzonetta, J., & Kannan, V. (2016). Globalizing plagiarism & writing assessment: a case study of Turnitin. The Journal of Writing Assessment, 9(2). http://journalofwritingassessment.org/article.php?article=104

Dunne, A., & Raby, F. (2001). Design noir: The secret life of electronic objects. New York: Springer Science & Business Media.

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.35184

Følstad, A., Skjuve, M., & Brandtzaeg, P. (2019). Different chatbots for different purposes: Towards a typology of chatbots to understand interaction design. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11551 LNCS (pp. 145–156). Springer Verlag.

Future TDM. (2016). Baseline report of policies and barriers of TDM in Europe. https://project.futuretdm.eu/wp-content/FutureTDM_D3.3-Baseline-Report-of-Policies-and-Barriers-of-TDM-in-Europe.pdf

Gabriel, A. (2019). Artificial intelligence in scholarly communications: An Elsevier case study. Information Services & Use, 39(4), 319–333.

Griffiths, D. (2015). Visions of the future, horizon report. LACE project. http://www.laceproject.eu/visions-of-the-future-oflearning-analytics/

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial Intelligence in Education. The Center for Curriculum Redesign, Boston, MA.

Hussein, M., Hassan, H., & Nassef, M. (2019). Automated language essay scoring systems: A literature review. PeerJ Computer Science. https://doi.org/10.7717/peerj-cs.20

Inayatullah, S. (2008). Six pillars: Futures thinking for transforming. Foresight, 10(1), 4–21.

Jarke, J., & Breiter, A. (2019). Editorial: the datafication of education. Learning, Media and Technology, 44(1), 1–6.

JISC. (2019). The intelligent campus guide: Using data to make smarter use of your university or college estate. https://www.jisc.ac.uk/rd/projects/intelligent-campus

Jones, E., Kalantery, N., & Glover, B. (2019). Research 4.0 Interim Report. Demos.

Jones, K. (2019). “Just because you can doesn’t mean you should”: Practitioner perceptions of learning analytics ethics. Portal, 19(3), 407–428.

Jones, K., Asher, A., Goben, A., Perry, M., Salo, D., Briney, K., & Robertshaw, M. (2020). “We’re being tracked at all times”: Student perspectives of their privacy in relation to learning analytics in higher education. Journal of the Association for Information Science and Technology. https://doi.org/10.1002/asi.24358

Lowendahl, J.-M., & Williams, K. (2018). 5 Best Practices for Artificial Intelligence in Higher Education. Gartner research note.

Luckin, R. (2017). Towards artificial intelligence-based assessment systems. Nature Human Behaviour, 1(3), 1–3.

Luckin, R., & Holmes, W. (2017). A.I. is the new T.A. in the classroom. https://howwegettonext.com/a-i-is-the-new-t-a-inthe-classroom-dedbe5b99e9e

Luckin, R., Holmes, W., Griffiths, M., & Pearson, L. (2016). Intelligence Unleashed: An argument for AI in Education. Pearson. https://www.pearson.com/content/dam/one-dot-com/one-dot-com/global/Files/about-pearson/innovation/open-ideas/intelligence-Unleashed-v15-Web.pdf

Maughan, T. (2016). The hidden network that keeps the world running. https://datasociety.net/library/the-hidden-network-that-keeps-the-world-running/

McDonald, D., & Kelly, U. (2012). The value and benefits of text mining. England: HEFCE.

Min-Allah, N., & Alrashed, S. (2020). Smart campus—A sketch. Sustainable Cities and Society. https://doi.org/10.1016/j.scs.2020.102231

Nathan, L. P., Klasnja, P. V., & Friedman, B. (2007). Value scenarios: a technique for envisioning systemic effects of new technologies. In CHI’07 extended abstracts on human factors in computing systems (pp. 2585–2590).

Nurshatayeva, A., Page, L. C., White, C. C., & Gehlbach, H. (2020). Proactive student support using artificially intelligent conversational chatbots: The importance of targeting the technology. EdWorkingPaper, Annenberg University. https://www.edworkingpapers.com/sites/default/fles/ai20-208.pdf

Page, L., & Gehlbach, H. (2017). How an artificially intelligent virtual assistant helps students navigate the road to college. AERA Open. https://doi.org/10.1177/2332858417749220