Conference Paper · May 2023
DOI: 10.21125/edulearn.2023.0971
AI-CHATBOT-INTEGRATION IN CAMPUS-MANAGEMENT-SYSTEMS
S. Bieletzke
Trainings-Online GmbH (GERMANY)
Abstract
The question is how large language models can be used reasonably in teaching, administration or research by university students, lecturers or administrative staff.
This article shows the necessity and the possibility of integrating a text intelligence like ChatGPT into a campus management system. By using the application programming interface (API), the free inquiry prompt can become a mixed inquiry whose essential parts and boundaries are defined and controlled by the university. The university can define how known data, requested data and established data are combined and used in an active dialogue with defined dialogue rules for different university services.
The article first presents an empirical study in which 5000 students were addressed by a chatbot. It shows that many of the questions asked could have led to legal problems for the university if answered incorrectly. Based on this, a prototypical project is presented in which, instead of using a free chat, students communicate indirectly with the artificial intelligence (AI) via the Campus-Management-System (CMS) as a mediator. It is shown how a professional prompt can lead to controllable dialogues based on the university's rules and data. Practical implementations such as "learn question of the day", "subject-specific motivation of the student", "question about the examination regulations" or "help in finding a creative title" are presented. Though the examples are specifically realized for TraiNex ([Link]), the description is meant to be general enough to provide insights for other systems.
Keywords: Chatbot, Campus management system, university, chatGPT, AI, GPT, use cases
1 INTRODUCTION
Universities have been testing the use of chatbots for a long time [1, 2]. Rule-based script chatbots in particular have been in focus. These script chatbots respond to a question within a prepared script and have a limited, well-defined area of expertise. They typically react to certain keywords in a chat, leading to a pre-planned dialogue. The user is then guided through a menu-driven dialogue to a final answer. However, the user experience with these chatbots can often be disappointing [3].
Chatbots with artificial dialogue intelligence raise hopes for better service-oriented applications. These kinds of chatbots have been developed based on successes in machine learning, which means that their rules are derived from the automatic analysis of a large amount of data and the patterns contained within that data. A simple text pattern could be that "The cat drinks" is mostly followed by the word "milk" or "water". However, it is important to note that chatbots using the models known as GPT (Generative Pre-trained Transformer), like ChatGPT, should not be confused with general artificial intelligence (AI) or technological singularity [4], as sometimes depicted in hype articles. Nevertheless, the language models behind them are very powerful in text analysis. Additionally, the dialogue appears more natural and human-like because the responses are based on a larger dataset, which expands the chatbots' capabilities. Real conversations can be created by considering previous parts of the dialogue, making people feel understood.
2 EMPIRICAL STUDY ON CHATBOT-USE BY UNIVERSITY-STUDENTS
What questions would students ask if they had a chatbot to help them during their studies? This was the
topic of a study in November 2022. 5000 German students were greeted by a chatbot in their campus
management system for a week during their daily studies. The chatbot said "Hello there! What's your
name?" and asked, pretending to be very competent, if it could help them with their studies. [3]
When a question was asked, the chatbot politely reminded students of its limitations in answering
questions and ended the conversations. All questions were gathered and analyzed. It was found that
34% of the questions were about "study content", such as "What should I learn today?", "How can I
cheat better?" or "Is a Master's degree worth it?". Technical questions were raised in 18% of the cases,
such as "How does the e-Library work?". Another 17% of participants asked questions related to
"university organization", such as "Do you have the e-mail address of the Exam Office?". [3]
Figure 1: Fake Chatbot-Support in empirical study inside Campus-Management-System
It is noticeable that only around 10% of the 5000 users interacted with the chatbot, and only 25% of those asked a question (2.4% of the total). A post-survey showed that in November 2022 many students did not consider a chatbot competent enough for advice, so they did not start using it. However, the questions asked by the students who trusted the seemingly competent chatbot are important for developing future AI chatbots [2].
After analyzing the collected questions, it was found that about half of them could have caused legal problems for the university if the chatbot had answered them incorrectly. For example, a student asked the question "How long should the homework paper be?". If the chatbot had answered "20 pages are enough", even though this contradicts the examination regulations, it could be considered legally problematic for the university. [2]
Answers regarding specific learning content could also cause legal difficulties. If the chatbot answers
the question "What is ethics?" with a definition that is outdated and rejected in the materials provided by
the responsible teaching staff, it could result in a poor rating for the affected student by the teacher and
afterward complaints by the student.
Because many questions are legally questionable to answer, the categorization of chatbots by interaction capabilities has been expanded by the additional dimension "accessible data". This is because the legal certainty of chatbot responses depends on the accessible data stock. Targeted optimization, enlargement or even limitation of the data stock could increase legal certainty. To ensure the legal certainty of responses, it may be advantageous if chatbot responses do not come from an open but from an only partly open or closed data stock, as the following examples illustrate. [2]
In a situation like position 1 of Figure 2, the student could ask "How long does an oral exam take?". The scripted chatbot would branch into a dialogue script based on the recognized keyword "oral exam" and ask through a menu whether a bachelor's colloquium or a practical lecture is meant. After the student selects an answer option from the menu, the chatbot ultimately quotes the relevant passage from the internal examination regulation or refers to this source. Altogether, in this situation the quality of information is good but the interaction with the chatbot can be considered poor.
If we do the same with an AI chatbot, as in position 9, the interaction is good and human-like in a natural dialogue. But it can be a legal problem if the AI chatbot gives a general answer based on its general training data. This may not be helpful to the student and could even lead to legal problems if the answer is mistaken for an official statement from the specific university.
Figure 2: Chatbot-Typology by data stock and interaction type [1]
While rule-based chatbots are considered to have weak interactions, they can be legally secure due to
their closed data sets. On the other hand, AI chatbots are highly interactive but can lead to legal
uncertainties when their data sets are open.
To be used in universities,
• chatbots that operate based on defined rules (script bots) need to improve in terms of their interaction possibilities (development path A) or
• AI chatbots need to be trained on a set of data that can be controlled (development path B).
We will now proceed with development path B.
3 INTERFACE FOR CREATING CONTROLLABLE CHATBOTS
The question is how to best utilize AI chatbots based on GPT-language models in teaching,
administration, and research at a university while avoiding traps and drawbacks. When used effectively,
these chatbots should benefit all parties involved, including students, teachers, administrative staff, and
management.
Currently, users' experiences with ChatGPT at universities are often limited to its use on the OpenAI website. This involves making a free-text request like a phrase or a sentence to get an answer, and possibly follow-up questions and responses, which come from a data stock the AI was trained on in the past.
This type of direct ChatGPT use can be unfavorable for academic purposes, because the university cannot specifically control the communication request, and incorrect information from the general training data could be output. Additionally, there could be disadvantages for students who have valid privacy concerns and do not want to be pushed to sign up for ChatGPT. In Germany, asking students to use ChatGPT would currently not comply with the General Data Protection Regulation (GDPR) and would therefore be unlawful.
One solution could be to use the application programming interface (API) for the AI language model. This API allows universities to access the AI capabilities of the GPT model in a standardized and seamless way, without the user having to register with ChatGPT. Through the API, the campus management system (CMS) acts as a mediator between the user and the GPT model. The mediator oversees the input, prepares the processing, and controls the output of the communication. [5]
Access is easy: the user selects a function within the Campus Management System, a system they already know very well. They enter the necessary information and receive a response after a few seconds, which may include the possibility of a follow-up question. The entire dialogue takes place within the Campus Management System and the user has no direct interaction with ChatGPT.
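To make the mediator role concrete, the request flow can be sketched in code. This is a minimal illustration, not the TraiNex implementation: the function names, the post-processing text and the injected `send` transport are assumptions, and a real deployment would make an HTTPS call to the GPT API inside `send`.

```python
def build_mediated_request(system_prompt: str, user_input: str,
                           model: str = "gpt-3.5-turbo") -> dict:
    """Compose the API payload inside the CMS: the university-controlled
    system prompt frames every request before the user's text is attached."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},  # defined by the university
            {"role": "user", "content": user_input},       # collected inside the CMS
        ],
    }

def mediate(user_input: str, system_prompt: str, send) -> str:
    """The CMS oversees input, processing and output; `send` is the transport
    (an HTTPS call to the GPT API in production, injected here for testing)."""
    payload = build_mediated_request(system_prompt, user_input)
    answer = send(payload)
    # Output control: the CMS can post-process before the user sees the text
    return answer + " (non-binding, automatically generated)"
```

Because the user only ever talks to the CMS, no ChatGPT registration is needed and the university keeps control of both ends of the dialogue.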
The university can control dialogue opening, prompt layout and other dialogue aspects according to
Table 1:
Table 1: Controllable dialogue aspects (aspect: example)
• Which users get what kind of services: students may ask for AI title suggestions if they are in the final semester
• Which data is requested from the user as part of the input: field of study and 3 keywords are needed
• Which other university data or task-specific data is used as input: build the answer on the university guide to scientific research work
• In which way the AI should do the processing: create 3 titles and 3 theses with possible falsification
• How the general world knowledge of the AI is weighted in relation to the prescribed data: each title suggestion should be more creative than the previous one
• How the AI should handle unknown problems or unknown data details: do not hallucinate persons, websites or other sources
• What role the AI sees itself in: be a friendly, competent and experienced college professor
• Whether the AI passively waits for questions or actively seeks dialogue: actively ask students whether they already have a title
• What kind of output is wanted: scope (max. 50 words per thesis), tone of voice (scientific, undergraduate level)
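The aspects in Table 1 can be expressed as a small configuration that the CMS renders into the system prompt. The keys and phrasings below are illustrative assumptions, not the actual TraiNex configuration:

```python
# Each controllable aspect from Table 1 becomes one instruction line of the
# generated prompt; keys and wordings are illustrative only.
DIALOGUE_ASPECTS = {
    "role":       "Be a friendly, competent and experienced college professor.",
    "data_basis": "Build the answer on the university guide to scientific research work.",
    "processing": "Create 3 titles and 3 theses with possible falsification.",
    "weighting":  "Each title suggestion should be more creative than the previous one.",
    "unknowns":   "Do not hallucinate persons, websites or other sources.",
    "output":     "Max. 50 words per thesis; scientific tone, undergraduate level.",
}

def render_system_prompt(aspects: dict) -> str:
    """Join the controlled aspects into the instruction block sent via the API."""
    return "\n".join(aspects.values())
```

Keeping the aspects in one declarative structure makes it easy for the university to adjust a single dialogue rule without rewriting the whole prompt.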
The creation of the prompt is central because it is the initial set of instructions that pushes the AI in the right direction. We distinguish between the spontaneous poor prompts that a user enters in the ChatGPT dialog box and the professional prompt that can be used via a GPT API.
3.1 Professional-Prompt vs. Poor-Prompt
In the following, the difference between a poor prompt and a professionally created prompt used through an API will be explained. For example, imagine that someone is in the final stage of their Bachelor's degree and only has a few keywords regarding their topic. The goal is to come up with a good academic title. One option is to use the free ChatGPT version to get some title ideas quickly. However, for high-quality results, they would need to create a well-designed prompt that, for example, takes into account their school's requirements. They would need to test and optimize this prompt before using it for their request. If the prompt is poor and not good enough, the results will also be inadequate. Many users will give up in resignation and start to disbelieve in the abilities of AI.
It is perhaps underestimated that prompting is a challenging media skill. Some people are already talking about the profession of prompt engineer. Professional prompters will create very long prompts to guide the AI to an optimal result. "Long" means that the initial prompt can consist of 150 initial sentences which lead to a set of questions, instructions and a successful dialogue including a final solution. The structure of such professional prompts needs to be developed by professionals, but it can then be used algorithmically within a campus management system and adapted automatically for each user.
3.2 Professional-Prompt-Generating in the Service TitleFinder
To make the prompt more professional, the campus management system combines known, requested and specified data. According to Figure 3, shown on the right side, the process is as follows:
1) The Campus Management System knows which student has to write a thesis.
2) Within the Campus System, such students are given the option to click on the "Find Title" service.
3) The student is asked for "3 keywords for the planned work", as well as "the number of pages" and an
existing working title.
4) The Campus Management System already knows the field of study (e.g. psychology) and the degree
(e.g. Bachelor).
5) For example, the university has specified "no question marks in the title" or the structure system.
6) When the student has answered the questions, the professional prompt is created in the background
by combining the requested, known, and university-specific details, as well as the task-specific
instructions for finding a title. The prompt consists of about 150 different instructions.
7) The Query Engine API is used by the CMS to query the GPT-AI with the prompt, and the result is
delivered back to the CMS TraiNex.
8) The Campus Management System can roughly check the text response for words that are not allowed
or add additions like "non-binding".
9) The requested title suggestion is displayed to the student within the Campus Management System.
10) The student can check the titles and, if necessary, adjust them and have a work outline generated
for a title chosen by the student, which roughly meets the requirements of the university (e.g. in
accordance with the structure system). [4]
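Steps 4 to 8 of the process above can be sketched in code. The function names, the rule list and the word filter are illustrative assumptions; the real professional prompt consists of about 150 instructions, not the handful shown here.

```python
def assemble_title_prompt(known: dict, requested: dict, rules: list[str]) -> str:
    """Combine data the CMS already knows (step 4), data requested from the
    student (step 3) and university-specific rules (step 5) into one prompt."""
    parts = [
        f"The student studies {known['field']} ({known['degree']} level).",
        f"Keywords for the planned work: {', '.join(requested['keywords'])}.",
        f"Planned length: {requested['pages']} pages.",
        *rules,  # e.g. "No question marks in the title."
        "Suggest 3 academic titles, each with a falsifiable thesis.",
    ]
    return " ".join(parts)

FORBIDDEN_WORDS = {"guarantee"}  # illustrative blocklist for the rough check

def check_response(text: str) -> str:
    """Step 8: roughly check the text response and add the 'non-binding' note."""
    if any(word in text.lower() for word in FORBIDDEN_WORDS):
        raise ValueError("response contains a disallowed word")
    return text + " (non-binding suggestion)"
```

The student only sees the final, checked output inside the CMS; the assembled prompt itself never leaves the background.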
Figure 3: Process of creating and processing a professional prompt [4]
The student does not leave the campus management system TraiNex in any of the steps, and the AI is only involved in step 7, the dotted purple area. The user does not see the prompt or know that an AI was used in the background to generate the answer. The user cannot engage in a free-form conversation: technically, it is only a simple question-and-answer interaction, not a continuous dialogue.
A free version of the TitleFinder is available as TitleMate on the iOS store, or as a website at [Link] The relevant screens of the TitleFinder/TitleMate app for entering information and displaying results are shown in Figure 4.
Figure 4: Service of finding a scientific title, here: iOS-TitleMate App
3.3 Other Scenarios for the API
By first taking the "chat" out of "ChatGPT" and then creating its own chat, based on CMS data and realized via the API, the university can quickly generate many different solutions. The university chatbot can be customized in different ways to improve the user's conversation experience. One way is for the chatbot to actively engage with the user, try to maintain a longer conversation, ask the user appropriate questions, and even follow the dialogue steps attention-interest-desire-action. The prompt should determine how the conversation flows, what role the chatbot plays, what its goal is, and when the conversation ends. Some successful examples have already been tested and realized using the few-shot method. This method involves providing the AI/GPT model with the data it needs at the start of the dialogue.
3.3.1 Student-Applicant-Advisor-Chat
When people who are interested in studying visit the public website of the university and select a specific study program, the chatbot guides them into a conversation about that program. The chatbot asks about the reasons for their interest and their relevant hobbies. It introduces the study contents that match their hobbies or expectations. The chatbot may encourage them to explore the program further by mentioning typical courses, career fields, costs, application processes, and more information about the university. The chatbot only answers questions about the selected study program and does not provide general information, hallucinate websites, or imagine people, places, or costs. Example of a chatbot answer after the user mentioned that he is active on TikTok: "Yes, great, your experience in TikTok can definitely be a good start for the study of Media Communication, but it is important to note that the program covers a wide range of topics beyond social media … Are you interested in …?"
3.3.2 Learn-script-quiz/Question of the Day
The chatbot nudges a student to start a dialogue and quickly directs the conversation towards a topic related to the student's studies. In the background, the chatbot is automatically given relevant data, like a small chapter from a current textbook of the student's courses. The chatbot is also given instructions on how to develop a question based on this material. The student is asked the question after an introduction, and the answer is evaluated and possibly discussed. The student is praised or encouraged to engage more with the material. For example: "Hello Albert, do you remember that your exam on international trade will be soon? Today I have an exciting question for you about international trade. Please explain the difference between import and export!"
3.3.3 Motivation Chat
The chatbot talks to students on the homepage of the campus management system during times when they may need extra motivation. The chatbot knows the student's name, gender, preferred language, field of study and next five classes. Its job is to act as a good friend from a higher semester and to motivate the student, reflect positively on their classes, and give practical tips on time and study management. The chatbot starts with a general question about the student's field of study (position 1 in Figure 5) and then tries to talk about one of their specific classes or asks if certain topics are being covered. It will not answer general questions (position 3 in Figure 5), but it might give tips on efficient studying or talk about job opportunities (position 4 in Figure 5). If there are any legal questions, the chatbot says it cannot answer them and advises the student to check the CMS TraiNex. The conversation ends after about five interactions.
Figure 5: Example of motivational chat
This Motivation Chat was also created for employees. Administrative staff are addressed shortly after lunchtime for a Motivation Chat. The prompt provides the person's first name, department, and approximate age to adjust the tone accordingly. The AI asks about their current task, provides encouragement, praise, or comfort, and gives tips for more efficient work organization. The university can set a general or employee-specific motivation or learning goal, even a kind of agenda, such as "Tasks should always be done precisely", "Time tracking needs to be used correctly" or "increase loyalty". The conversation ends politely after about five interactions.
With some additional effort it would be possible for the university to track how much people use the motivation chat, either by how often or how well they use it. However, this would rightly make users feel uncomfortable about the service and would not meet privacy rules. So it is important to ensure that specific conversations are never saved and therefore never become trackable by university management. This needs to be guaranteed from the start by a never-store-without-exception technology in such a service.
4 EXPANDING THE USED TEXT-DATA
To access larger amounts of data, one has to go beyond the few-shot learning method used in the examples above. In the API few-shot approach used so far, the GPT-AI is given the necessary data for the dialogue. For example, when chatting with the TitleFinder, the GPT-AI receives a few paragraphs from the university's "scientific work" course, namely "Creating theses", "Falsification vs. verification" and "How to structure a numerical outline". For the Motivation Chat, the paragraphs on "motivation in studying" and "time management in studying" are used as data. Each paragraph is about 3 pages long, with a length of about 6000 characters or 2000 syllables/tokens. This is the maximum amount of data that can initially be passed if the following dialogue should have around 5 questions. [6]
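A simple guard in the CMS can check whether the selected paragraphs still fit this budget before the prompt is built. The 6000-character figure is taken from the text above; the constant and function names are assumptions for illustration.

```python
# Practical few-shot budget from the figures above: roughly 6000 characters
# (about 2000 tokens) of context data, so that the following dialogue still
# has room for around 5 questions.
FEW_SHOT_CHAR_LIMIT = 6000

def fits_few_shot(paragraphs: list[str]) -> bool:
    """True if the concatenated context data stays within the few-shot budget."""
    return sum(len(p) for p in paragraphs) <= FEW_SHOT_CHAR_LIMIT
```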
4.1 Embedding Instead of the Few-Shot Method
To use text data that is longer than 6000 characters, such as a complete course textbook or an examination regulation, a different approach must be taken. The approach called "fine-tuning" would be a solution if one wants to expand the general training data used by ChatGPT. [6] For use via the API, the "embedding" approach is better because the data used remains under the control of the university.
Embedding requires breaking the text manually into paragraphs, each with a title and a suitable question about that paragraph. The text is then tokenized by an algorithm, which means splitting it into tokens (roughly syllable-sized units). Next it is vectorized by a special algorithm, a specific preparation of the text for text analysis and word similarity. The final document consists only of numerical vectors instead of letters. It is stored as a vector model on the university's server. The vector document is like a compiled version of the original text and is used by the GPT-AI to recognize similarities between text sections. To answer questions, the GPT-AI only needs the vector model, which is called by its name.
In all requests, this model (such as the examination regulations) can be referred to. The prompt itself is still constructed as previously described, with only the "data basis to be used" part referring to the vector model. This process is not easy; it is therefore only suitable for documents that are too long for few-shot learning and for which the effort pays off due to a high volume of requests, and which are unchangeable over time, such as a standard study guide or university-wide examination regulations.
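The retrieval idea behind the embedding approach can be illustrated with a toy sketch. A real system would use the GPT embedding API to produce dense vectors; the bag-of-words vectorizer below is only a stand-in to show how the stored vector model is matched against a question and how only the best-matching paragraph enters the prompt.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Toy stand-in for an embedding model: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[word] * b[word] for word in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def build_vector_model(paragraphs: list) -> list:
    """Store each manually titled paragraph together with its vector."""
    return [(title, text, vectorize(text)) for title, text in paragraphs]

def most_similar(model: list, question: str) -> tuple:
    """At query time, return the best-matching paragraph for the prompt."""
    question_vec = vectorize(question)
    return max(model, key=lambda item: cosine(item[2], question_vec))
```

The "compiled" vector model is built once; each request then only pays for one similarity search plus a normal, budget-sized prompt.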
4.2 Example from the Administrative Sector
A request regarding the university-wide examination regulations can be realized through embedding, since few-shot learning is not suitable for including documents such as 30-page exam regulations as text data. The document has therefore been completely tokenized/vectorized and can be accessed via a unique model name through the prompt. The "compiled" exam regulations can be used in any prompt. They could be made accessible to students through a "Question to the Exam Office Chatbot" service, where the bot might act like an experienced fellow student from a higher semester, maintaining a youthful yet correct but non-binding style. Or they could be made accessible to administrative staff as support for legally binding statements, as described now.
For example, a German administrative assistant from the examination office could receive an e-mail in
English from a student. The e-mail arrives within the Campus Management System and can be pre-
processed by the administrative assistant by clicking "Translate into German" with the option
"Summarize: What is the main question?". From the result, the administrative assistant concludes, for
example, that the answer can be found in the examination regulations. They click "Answer the main
question", select the option "Use examination regulations 2023", provide relevant additional information
such as "inquirer is studying via distance learning" and choose the option "quote relevant passages of
official document".
Again, one can request, process and output the whole question-answer text inside the Campus Management System without knowingly using a GPT-AI. Admin staff can even ask for more information in their responses and create new response suggestions. The response can also immediately be formatted in an e-mail writing style, translated into the school's communication language (e.g. English), and sent securely through the Campus Management System while automatically conforming to a formal yet friendly communication style.
A problem is that, according to the European General Data Protection Regulation (GDPR), no personal data may be transmitted to services in the USA. This scenario, "examination regulations request", can therefore only be used in Germany if the request is anonymized in such a way that no personal data is contained, e.g. no surname and no matriculation number.
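A minimal anonymization step in the CMS could strip exactly the personal data mentioned above before the request leaves the university. This sketch only handles a known surname, a known matriculation number and a guessed digit pattern; a GDPR-compliant deployment would need a far more thorough review than this illustration.

```python
import re

def anonymize(text: str, surname: str, matriculation_no: str) -> str:
    """Replace known personal data, plus number patterns that look like
    matriculation numbers, before sending the request to a US-hosted API."""
    text = text.replace(surname, "[surname]")
    text = text.replace(matriculation_no, "[matriculation-no]")
    # Also catch bare 6-8 digit numbers (assumed matriculation format).
    return re.sub(r"\b\d{6,8}\b", "[matriculation-no]", text)
```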
5 CONCLUSIONS
For many years, algorithmic intelligence, developed by humans, has been supporting students,
teachers, and administrators in dealing with problems that require mathematical or well-structured
operations, such as room assignments, workload distribution, exam registration, and evaluation. [1,5,7]
Algorithmic intelligence is safe to use and legally secure in these fields because it produces the same results for identical requests. For example, a final grade is always calculated using the same principle and not in a creative way. In addition, the algorithms can be tailored very precisely. And, sometimes most importantly, with algorithmic intelligence it can be clarified how an answer or decision was reached.
Generative language models, however, can be very helpful for text-based tasks, especially if they are poorly structured or require creativity. Yet there are problems with using ChatGPT directly, including privacy concerns, a lack of prompt media literacy among users, and the fact that university-specific data is not well known to the AI. Only by using the API is it possible to integrate AI language models into university software systems so that new solutions and services can be developed for the university.
As shown, using the API enables the university to control the AI with a specific prompt. This professional prompt is made up of parts that are well known, gathered through questioning, and tailored to the university's specific needs. Additionally, the API allows the university to expand the data used with university-specific information, or to limit the request to only specific data. It makes sense to use the API integrated into the Campus Management System because the CMS is controlled and monitored by the university and has knowledge of relevant data and situations. When a GPT-AI is integrated into a campus management system like TraiNex, all members of a university can access it through the web without having to register or worry about their privacy.
To express it in a “mathematical”-formula:
((ChatGPT - Chat) + API) * CMS = University-Chat-System.
The question is how quickly text-based AI technology will develop. Soon it will be able to recognize and interpret images and diagrams, generate them, and search for data on the web in seconds. If the language model of ChatGPT can be installed locally on a university server and the data protection issue is solved, it is likely that AI-based services will become widespread at universities.
The AI does not disrupt universities or existing campus management systems, but rather adds to and improves services and functions. [8] For the CMS TraiNex it is estimated that around 30% of the features can be improved. Even these 30% can be crucial in the competition among campus management systems and among universities. Therefore, the integration of AI into suitable services in campus management systems should be done carefully, but quickly, to relieve the administration, improve teaching and learning, and elevate the university to a new level.
ACKNOWLEDGEMENTS
The empirical study was realized and financed by Trainings-Online E-Portals GmbH, Bielefeld/Germany (TrOn), which is an associated partner in the European Union Erasmus+ project "HYBOT - Enhancing hybrid teaching in higher education through chatbots". Trainings-Online GmbH develops and operates the campus management system TraiNex ([Link]). All examples were planned, developed and realized by Trainings-Online. All examples are successfully integrated in the Campus-Management-System TraiNex as an available module for German universities. TraiNex is a pioneer in the field of including artificial-intelligence functionality in campus management systems by using the GPT-API. More info at [Link]
REFERENCES
[1] Mai, V., Bauer, A., Deggelmann, C., Neef, C., & Richert, A. (2022). AI-Based Coaching: Impact of a Chatbot's Disclosure Behavior on the Working Alliance and Acceptance (Vol. 13518). Springer Nature Switzerland.
[2] Nichol, A. (2018). "The next generation of AI assistants in enterprise", O'Reilly Radar, [Link] (accessed 10 Mar. 2023)
[3] Bieletzke, S., & Kronsbein, P. (2022). "Empirische Studie zu hochschulischen Chatbot-Einsatzmöglichkeiten" [Empirical study on chatbot use cases at universities], [Link]Einsatzmoglichkeiten (accessed 25 Apr. 2023)
[4] Bieletzke, S. (2020). "Singularität: Point-of-no-Return zur Utopie oder Dystopie?" [Singularity: point of no return to utopia or dystopia?], [Link] (accessed 25 Apr. 2023)
[5] Bieletzke, S. (2023). "Integration von KI-Chatbots via API in Campus-Management-Systeme von Hochschulen" [Integration of AI chatbots via API into campus management systems of universities], [Link]Chatbots_via_API_in_Campus-Management-Systeme_von_Hochschulen (accessed 25 Apr. 2023)
[6] OpenAI (2022). OpenAI API documentation, [Link] (accessed 28 Apr. 2023)
[7] Kuhail, M. A., Alturki, N., Alramlawi, S., & Alhejori, K. (2022). "Interacting with educational chatbots: A systematic review", Education and Information Technologies, pp. 1-46.
[8] Weßels, D. (2020). "Digitale Disruption und Künstliche Intelligenz - Hochschulen im Dornröschenschlaf?" [Digital disruption and artificial intelligence: universities in a slumber?], [Link]Hochschulen_im_Dornroschenschlaf (accessed 3 Apr. 2023)