
International Journal of Computer Applications (0975 - 8887)

Volume 181 - No.43, March 2019

Intelligent Tutoring Systems: A Comprehensive Historical Survey with Recent Developments

Ali Alkhatlan
Department of Computer Science
University of Colorado Colorado Springs
1420 Austin Bluffs Pkwy, Colorado Springs CO 80918

Jugal Kalita
Department of Computer Science
University of Colorado Colorado Springs
1420 Austin Bluffs Pkwy, Colorado Springs CO 80918

ABSTRACT
This paper provides interested beginners with an updated and detailed introduction to the field of Intelligent Tutoring Systems (ITS). ITSs are computer programs that use artificial intelligence techniques to enhance and personalize automation in teaching. This paper is a literature review that provides the following: First, a review of the history of ITS along with a discussion of the interface between human learning and computer tutors and how effective ITSs are in contemporary education. Second, the traditional architectural components of an ITS and their functions are discussed along with approaches taken by various ITSs. Finally, recent innovative ideas in ITS systems are presented. This paper concludes with some of the authors' views regarding future work in the field of intelligent tutoring systems.

General Terms
Computer Assisted Instruction, Tutoring Systems

Keywords
Tutoring systems, intelligent tutoring systems, artificial intelligence

1. INTRODUCTION
From the earliest days, computers have been employed in a variety of areas to help society, including education. The computer was first introduced to the field of education in the 1970s under the aegis of Computer Assisted Instruction (CAI). Efforts at using computers in education were presented by Carbonell in the 1970s. He claimed that a CAI system could be endowed with enhanced capabilities by incorporating Artificial Intelligence (AI) techniques to overcome its limitations [1].

In 1984, a study conducted by Bloom [2] showed that learners who studied a topic under the guidance of a human tutor, combined with traditional assessment and corrective instruction, performed two standard deviations (sigma) better than those who received traditional group teaching. Researchers in the field of AI saw a solid opportunity to create intelligent systems that provide effective tutoring tailored to individual students' needs and thereby enhance learning [3]. Researchers found a new and inspiring goal: they studied the human tutor and attempted to absorb and adapt what they learned into Intelligent Computer-Assisted Instruction (ICAI) or Intelligent Tutoring Systems (ITS) [4].

Self, in a paper published in 1990, claimed that ITSs should be viewed as an engineering design field. Therefore, ITS design should be guided by methods and techniques appropriate for design [3][5]. Twenty years after Self's claim, ITSs had become a growing field with signs of vitality and self-confidence [3].

Intelligent tutoring systems motivate students to perform challenging reasoning tasks by capitalizing on multimedia capabilities to present information. ITSs have successfully been used in all educational and training markets, including homes, schools, universities, businesses, and governments. One of the goals of ITSs is to better understand student behaviors through interaction with students [4].

ITSs are computer programs that use AI techniques to provide intelligent tutors that know what they teach, whom they teach, and how to teach. AI helps simulate human tutors in order to produce intelligent tutors. ITSs differ from other educational systems such as Computer-Aided Instruction (CAI). A CAI system generally lacks the ability to monitor the learner's solution steps and provide instant help [6]. For historical reasons, much of the research in the domain of educational software involving AI has been conducted under the name of Intelligent Computer-Aided Instruction (ICAI). In recent decades, the term ITS has often been used as a replacement for ICAI. The field of ITS is a combination of computer science, cognitive psychology, and educational research (Figure 1). The fact that ITS research draws on three different disciplines warrants careful consideration of the major differences in research goals, terminologies, theoretical frameworks, and emphases among ITS researchers. Consequently, ITS researchers are required to have a good understanding of these three disciplines, resulting in competing demands. Fortunately, many researchers have stood up to meet this challenge [7][8][9].

2. RELATED SURVEY PAPERS
The field of ITS has a long history of productive research and continues to grow. There have been a number of well-known surveys to keep researchers, new and old, updated. In this section, we list these surveys with a few key points about each. These surveys can be divided into two main categories. The first category belongs to the surveys that present a general discussion of ITSs. The second
category belongs to the surveys that specialize in a specific dimension of ITS.

Fig. 1: The Domain of ITS (Adapted from [8]).

A well-known survey belonging to the first category was published in 1990 by Nwana [8]. The survey identifies the components of ITSs and describes the evolution from Computer-Assisted Instruction, covering some of the popular ITSs of that era. Another survey on ITS was published in 1994 by Shute et al. [10]. This is a more in-depth survey regarding the history of ITS, ITS evaluation, and the future of ITSs as seen at the time. Finally, in-depth case studies were published by Woolf et al. in 2001 [11] for the purpose of presenting the intelligent capabilities of ITSs when interacting with students. Four tutors were used to exhibit these abilities. The authors ended by discussing evaluations and some critical development issues for ITSs of the time.

The other survey category is more concerned with reviewing a specific dimension of ITSs. Authoring tools in ITSs were reviewed by Tom Murray in 2003 [12]. The paper is an in-depth summary and analysis of authoring tools in ITSs along with a characterization of each authoring tool. Another example of a specific topic-based survey is one on conversational ITSs [13]. The history of constraint-based tutors was reviewed in 2012 by Mitrovic [14]. The paper concentrated on the history and advanced features that have been implemented in tutoring systems. Other survey papers in this category cover dimensions such as the behavior of ITSs [6] and the behavior of ITSs in ill-defined domains [15].

3. WHY THIS SURVEY
The study of ITSs is a considerable research area as it involves a large number of researchers working on topics that have strong relations with other disciplines. Interested beginners to this field may struggle to understand the basic aspects and methodologies of ITSs. Difficulties for beginners include understanding how an ITS generally works, the AI technologies involved and their functions, learning theories and their uses, the main ITS types and how they differ in terms of interaction behaviors, and the importance of ITSs in education and how effective they are.

The main goal of this work is to provide interested beginners with an updated, in-depth and demystifying introduction to the field. It is an extensive literature review that presents the main aspects of ITSs in a limited number of pages and directs the reader to the appropriate old and recent references. The paper neither aims to focus on a specific topic or dimension of ITSs, nor does it delve into details that may require hundreds of pages. After understanding ITSs' main concepts and behaviors, a reader can move on to extensively detailed, but slightly dated, sources such as Woolf [16] to take the next steps.

4. HUMAN TUTORS VS. COMPUTER TUTORS
A number of studies have shown the effectiveness of one-on-one human tutors [17][18][2]. When students struggle with difficulties in understanding concepts or exercises, the most effective choice is to seek a one-on-one tutor. There are a variety of features that human tutors are able to provide to students. Good human tutors allow students to do as much work as possible while guiding them to keep them on track towards solutions [19]. Of course, students learning by themselves can also increase their knowledge and reasoning skills. However, this may consume much time and effort. A one-on-one tutor allows the student to work around difficulties by guiding them to a strategy that works and helping them understand what does not. In addition, tutors usually promote a sense of challenge, provoke curiosity, and maintain a student's feeling of being in control. Human tutors give hints and suggestions to students rather than giving them explicit solutions. This motivates students to overcome challenges. Furthermore, human tutors are highly interactive in that they give constant feedback to students while the students are solving problems. In order to enable an ITS to give feedback similar to that given by a human tutor, we must ensure that it interacts with students as human tutors do. This leads to the question of how to make an ITS deal with students as effectively as human tutors.

When modeling ITSs, a student's problem solving processes must be monitored step by step. By keeping track of the steps incrementally, it is possible to detect if a student has made a mistake so that the system can intervene to help the student recover. Feedback can be provided when mistakes are made, and hints can be given if students are unsure of how to proceed. One technique used for tracing a student's problem solving is to match the steps a student takes against a rule-based domain expert. In the model tracing technique, the system monitors and follows a student's progress step by step. In case the student makes an error or a wrong assumption, the system intervenes to give explanatory feedback, a hint, or a suggestion that allows the student to diagnose the error. Otherwise, the system silently follows the student's progress. Many experiments have shown how model tracing facilitates learning performance in many educational areas, such as the visual presentation in the Geometry Tutor and the graphical instruction in the LISP tutor (GIL) [19]. Indeed, model tracing tutoring systems support students' learning of the target domain [19][20].

5. EFFECTIVENESS OF ITS
An important question to answer is whether or not ITSs are really effective in providing the learning outcomes they claim to obtain. There have been a number of meta-analysis efforts to investigate the effectiveness of ITSs. The following presents a few recent such efforts with their findings.

A meta-analysis was conducted by VanLehn in 2011 for the purpose of comparing the effectiveness of computer tutoring, human tutoring, and no tutoring [20]. In this analysis, computer tutors were characterized based on the granularity of the user interface interactions, including answer-based, step-based, and substep-based tutoring systems. The analysis included studies published between 1975 and 2010, with 10 comparisons drawn from 28 evaluation studies. The study found that human tutoring raised test scores by an effect size of 0.79 compared to no tutoring; thus it is not as effective as the 2.0 found by Bloom earlier [2]. Moreover, it was found that step-based tutoring (0.76) was almost as effective as human tutoring, whereas substep-based tutoring was only 0.40 as effective compared to no tutoring.
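The effect sizes these meta-analyses report are standardized mean differences (Cohen's d). As a quick illustration with invented score data (not drawn from any of the studies cited here), the computation is:

```python
import math

def cohens_d(treatment, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled SD."""
    mt = sum(treatment) / len(treatment)
    mc = sum(control) / len(control)
    # Unbiased sample variances, pooled by degrees of freedom
    vt = sum((x - mt) ** 2 for x in treatment) / (len(treatment) - 1)
    vc = sum((x - mc) ** 2 for x in control) / (len(control) - 1)
    pooled = math.sqrt(((len(treatment) - 1) * vt + (len(control) - 1) * vc)
                       / (len(treatment) + len(control) - 2))
    return (mt - mc) / pooled

# Hypothetical post-test scores: tutored vs. untutored group
tutored = [82, 75, 90, 71, 88]
untutored = [70, 65, 78, 60, 72]
print(round(cohens_d(tutored, untutored), 2))  # -> 1.62
```

An effect size of 0.79 therefore means the average tutored student scored about 0.79 pooled standard deviations above the average untutored student.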

VanLehn's findings suggest that tutoring researchers should focus on ways to improve computer tutoring in order to approach Bloom's finding of a 2.0 effect size for human tutoring compared to no tutoring.

The meta-analysis conducted by Steenbergen-Hu and Cooper in 2013 analyzed the effectiveness of ITSs on K-12 students' math learning [21]. This empirical research examined 26 reports comparing the effectiveness of ITSs with that of regular classroom instruction. Their finding was that ITSs did not have a significant effect on student learning outcomes when used for a short period. However, the effectiveness appeared to be greater when an ITS was used for one full school year or longer. In addition, the effects appeared to be greater on general students than on low achievers.

The meta-analysis by Ma et al. [22] was conducted in 2014 for the purpose of comparing the learning outcomes of those who learn by using ITSs with those who learn in non-ITS learning environments. Their goal was to verify the effect sizes of ITSs taking into account factors such as type of ITS, type of instruction (individual, small-group, or large-group human instruction, etc.), subject domain (chemistry, physics, mathematics, etc.), and other factors. Ma et al. analyzed 107 effect size findings from 73 separate studies. The ITS environment was associated with greater learning achievement than teacher-led, large-group instruction (effect size 0.42), non-ITS computer-based instruction (0.57), and textbooks or workbooks (0.35). On the other hand, there was no considerable difference between learning outcomes from ITSs and from individualized human tutoring (-0.11) or small-group instruction (0.05). Ma et al. reported that ITSs achieved higher education outcomes than other forms of instruction except small-group human tutoring. In addition, the ITS effect varied as features and characteristics of ITSs, student attributes, domain knowledge, and other factors varied.

Finally, the meta-analysis produced by Kulik and Fletcher in 2015 [23] compared the learning effectiveness of ITSs with conventional classes across 50 studies. 92% of the studies indicated that students who interacted with ITSs outperformed those who received traditional class instruction. In 39 of the 50 studies, improvement gains reached a median effect size of 0.66, which is considered moderate to strong. However, the effect was weak for standardized tests, where the effect size was 0.13.

Because there is no general agreement on the effectiveness of ITSs, questions come up for researchers to answer: How effective are ITSs really? What are the critical factors that affect learning in ITSs? What possible changes can be made to improve ITSs?

6. ARCHITECTURE OF ITS
ITSs vary greatly in architecture. It is very rare to find two ITSs based on the same architecture. There are three types of knowledge that ITSs possess: knowledge about the content that will be taught, knowledge about the student, and knowledge about teaching strategies. Additionally, an ITS needs communication knowledge in order to present the desired information to the students. Consequently, the traditional 'typical' ITS has four basic components: the domain model, which stores domain knowledge; the student model, which stores the current state of an individual student in order to choose a suitable new problem for the student; and the tutor model, which stores pedagogical knowledge and makes decisions about when and how to intervene. The intervention can use different forms of interaction: Socratic dialogs, hints, feedback from the system, etc. Finally, the user interface model gives access to the domain knowledge elements. Figure 2 shows the traditional architecture of ITSs [3][8][24].

Fig. 2: Traditional Architecture of ITS (Adapted from [8]).

In addition, even though ITSs differ greatly in their internal structures and components and contain a wide variety of features, their behaviors are similar in some ways, as stated by VanLehn [6]. According to VanLehn, ITSs behave in similar ways in that they involve two loops, named the inner loop and the outer loop. The outer loop mainly decides which task the student should practice next among the available tasks. The decision takes place based on the student's history of knowledge and background. The inner loop is responsible for monitoring the student's solution steps within a task by providing appropriate pedagogical intervention such as feedback on a step, hints on the next step, assessment of knowledge, and review of the solution.

The goal of this section is to describe these components along with their functions.

6.1 Domain Model
The expert knowledge, the domain expert, or the expert model represents the facts, concepts, problem solving strategies, and rules of the particular domain to be taught, and provides ITSs with the knowledge of what they are teaching. The material and detailed knowledge are usually derived from experts who have years of experience in the domain to be taught. It is important to mention that finding what to teach is the goal of the domain model; it is separate from the control information (how to teach), which is represented by the tutoring model [25]. The domain expert fulfills a double function. Firstly, it acts as the source of the knowledge to be presented to students through explanations, responses and questions. Secondly, it evaluates the student's performance. In order to accomplish these tasks, the system must be able to present correct solutions to problems so that the student's answers can be compared to those of the system. In case the ITS is required to guide the student in solving problems, the expert model must be able to generate sensible and multiple solution paths to help fill the gaps in the student's knowledge. The expert model can also provide an overall progress assessment of students by establishing specific criteria against which to compare knowledge [8][26][27][28].

An ITS must have a knowledge base system which contains information on what will be taught to the learners.
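As a concrete, hypothetical illustration of the domain expert's double function, a toy rule-based expert for one tiny task family (solving a*x = b) can both generate the correct solution steps and grade a student's answer against them; the task and rules below are invented for illustration:

```python
# Toy rule-based domain expert: produces correct solution steps for
# a*x = b and evaluates a student's answer against the correct one.
# The domain is illustrative, not taken from any particular ITS.

def expert_solve(a, b):
    """Generate the correct solution steps for a*x = b, with a != 0."""
    return [f"divide both sides by {a}", f"x = {b / a}"]

def evaluate(a, b, student_answer):
    """Second function of the expert: compare the student's answer
    with the system's own solution."""
    correct = b / a
    return "correct" if abs(student_answer - correct) < 1e-9 else "incorrect"

print(expert_solve(2, 6))   # -> ['divide both sides by 2', 'x = 3.0']
print(evaluate(2, 6, 3.0))  # -> correct
print(evaluate(2, 6, 4.0))  # -> incorrect
```

A real domain model would hold many such rules and be able to produce multiple solution paths, but the shape is the same: generate, then compare.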

The need for suitable knowledge representation (KR) languages must be considered in representing and using this knowledge. The principles to consider when choosing KR languages are the expressivity of the language, the inference capacity of the language, the cognitive plausibility of the language, and the pedagogical orientation of the language [3]. Hatzilygeroudis and Prentzas made the first efforts to define and analyze the requirements for knowledge representation in ITSs [29]. Various knowledge representation and reasoning schemes have been used in ITSs. These include symbolic rules, fuzzy logic, Bayesian networks, and case-based reasoning, as well as hybrid representations such as neuro-symbolic and neuro-fuzzy approaches. More details on example ITS systems along with the knowledge representation languages they use can be found in [3].

The following explains three traditional ITS approaches for representing and reasoning with domain knowledge. Two types of domain knowledge models used frequently in ITSs are the cognitive model and the constraint-based model. The third approach incorporates an expert system in the ITS [14].

6.1.1 Cognitive Model. The cognitive model is a traditional approach to modeling the domain knowledge in ITSs. It has been used in a family of successful ITS systems [30]. The tutors that use a cognitive model in designing the tasks in the domain have been called Cognitive Tutors. Cognitive tutors are effective, and several scientific studies have found that they improve student learning and demonstrated learning outcomes [31]. They have been fielded in a variety of scientific domains such as algebra, physics, geometry and computer programming [32]. Cognitive tutors use a cognitive model to provide students with immediate feedback. The goal of this approach is to provide a detailed and precise description of the relevant knowledge in a task domain, including principles and strategies for problem-solving. A rule-based model generates a step-by-step solution, supporting students in a rich problem-solving environment that gives feedback on the correctness of each step in the solution and can keep track of many approaches (strategies) to the final correct answers. Not only are the correct solutions represented, but also the common mistakes that students usually make (Bug Libraries), as shown in Figure 3 [33][34].

Fig. 3: Production and Buggy Rules for Computing the Size of an Angle (Adapted from [34]).

Cognitive tutors have been built based on the ACT-R theory of cognition and learning [3]. The underlying principle of ACT-R is the distinction between explicit and implicit knowledge. Procedural knowledge is considered implicit whereas declarative knowledge is explicit. Declarative knowledge consists of facts and concepts in the form of a semantic net or similar network of concepts linking what are called chunks. In contrast, procedural memory represents knowledge of how we do things in the form of production rules written in IF-THEN format. Thus, chunks and productions are the basic forms of an ACT-R model [4][35].

In order to use cognitive models to facilitate tutoring, an algorithm called model tracing has been used. The tutor assesses the student's solution by comparing the student's solution steps against what the model would do for the same task. If the student action is the same as the model action, it is deemed correct; otherwise it is not. An error is hypothesized when a student step does not match any rule or matches one or more of the buggy rules [7]. Each production rule that generates the matching action can be interpreted as a skill possessed by the student. So, over time, the model is able to evaluate the skills that have been mastered by the student (knowledge tracing). Thus, knowledge tracing is used to monitor the skills that students have acquired from solving a problem [35][36].

The knowledge tracing model called Cognitive Mastery Learning is one of the most popular methods for estimating the probability that a student knows each skill [37]. The model continuously assesses the probability that a student has acquired each skill, taking into account four parameters per skill. Cognitive Mastery Learning is known to produce a significant improvement in learning, and it has a long history of application. Educational data mining approaches such as Learning Factors Analysis (LFA) [38] and Performance Factors Analysis (PFA) [39] have been used to further improve ITSs using this model.

Despite the fact that cognitive tutors have led to impressive student learning gains in a variety of domains, these model tracing tutors have not been widely adopted in educational or other settings such as corporate training. Building complete and optimal cognitive tutors requires software pieces such as an interface, a curriculum, a learner interaction management system, and a teacher reporting package. Additionally, the process needs a team of professionals working together, resulting in high cost and long development time. These two requirements have limited the practical use of such tutors [40]. To reduce the cost of building model tracing tutors, authoring tools with some of these capabilities have been built. An example is the Cognitive Tutor Authoring Tools (CTAT) [41]. CTAT has been used to create full ITSs without programming. This has led to a new ITS paradigm called Example-Tracing Tutors [41].

6.1.2 Constraint-based Model (CBM). Constraint-based modeling was proposed by Ohlsson in 1992 to overcome difficulties in building the student model [14]. Since then, CBM has been used widely in numerous ITSs to represent instructional domains, students' knowledge, and higher-level skills. CBM is based on Ohlsson's theory of learning from performance errors, resulting in a new methodology for representing knowledge using constraints which cannot be violated during problem solving. It differs from model tracing, which generates all possible solution paths using production rules. CBM can be used to represent both domain and student knowledge. This model has been used to design and implement efficient and effective learning environments [14][42].

The fundamental idea behind CBM is that constraints represent the basic principles and facts of the underlying domain which a correct solution must follow [43]. The observation here is that all correct solutions to any problem are similar in that they do not violate any domain principles or "constraints".
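This idea can be sketched in code: each constraint pairs a relevance test with a satisfaction test, and a solution is acceptable when no relevant constraint is unsatisfied. The fraction-addition domain and the two constraints below are invented for illustration:

```python
# Minimal sketch of constraint-based diagnosis: a constraint is a
# (name, relevance, satisfaction) triple of predicates over a solution.
# The fraction-addition domain and its constraints are illustrative only.

constraints = [
    # If the solution adds fractions, the denominators used must match.
    ("use a common denominator",
     lambda s: s["op"] == "add",
     lambda s: s["d1"] == s["d2"]),
    # If denominators match, the numerators must be added.
    ("add the numerators",
     lambda s: s["op"] == "add" and s["d1"] == s["d2"],
     lambda s: s["num"] == s["n1"] + s["n2"]),
]

def diagnose(solution):
    """Return names of violated constraints (relevant but not satisfied)."""
    return [name for name, relevant, satisfied in constraints
            if relevant(solution) and not satisfied(solution)]

# Student added 1/4 + 2/4 but multiplied the numerators
attempt = {"op": "add", "n1": 1, "d1": 4, "n2": 2, "d2": 4, "num": 2}
print(diagnose(attempt))  # -> ['add the numerators']
```

Note that the system never enumerates solution paths: any solution that violates no relevant constraint is accepted, whatever strategy produced it.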

Instead of representing both correct and incorrect spaces as in model tracing, it is sufficient to capture only the domain principles [14].

A constraint takes the form of an ordered pair (Cr, Cs), where Cr is the relevance condition and Cs is the satisfaction condition, so the constraint follows the form:

If <relevance condition> is true,
Then <satisfaction condition> had better also be true.

The relevance condition may contain simple or compound tests to specify features of student solutions, whereas the satisfaction condition is an additional test that has to be met in order for the student's solution to be correct [32].

The CBM approach was proposed to avoid some limitations of model tracing tutors. First, the nature of CBM's knowledge representation as constraints allows for creativity. The system accepts a student's solution, even if that solution is not represented in the system, as long as it does not violate any constraints. On the contrary, model tracing limits students' solutions to the ones stored in the model. Thus, the idea that different students might use different strategies or beliefs to reach their results is accommodated. Second, creating a bug library as used in model tracing requires a lot of time, since the types of mistakes can be vast. Consequently, CBM concentrates on domain principles that every correct solution must meet. CBM's hypothesis in this regard is that all correct solutions share the same features, so it is enough to represent the correct space by capturing domain principles. In case of student errors, the system can advise the student on the mistake without needing to represent it. Finally, for some instructional tasks categorized as ill-defined (for details see [15]), it may even be impossible to follow the steps of the correct solutions, because the runnable models are expressed as sets of production rules for both the expert and the student. CBM avoids this limitation and can handle some ill-defined tasks [42].

6.1.3 Expert Approach. The third approach for representing and reasoning with domain knowledge consists of integrating an expert system in an ITS. This is considered a broad approach in ITS since several expert system formalisms can be used, such as rule-based systems, neural networks, decision trees and case-based reasoning. An expert system mimics the ability of an expert in terms of decision making and modeling skills, and solves a problem [44]. The advantage of the expert system approach is its ability to represent and reason with a broader domain, unlike constraint-based and cognitive models, which work with limited domains [15].

Fournier-Viger et al. [15] showed that an expert system approach should provide for two modalities. First, the expert system should be able to generate expert solutions and then compare these solutions with the learner's solutions. GUIDON [45] is an example of an ITS that uses this modality. The second modality for using an expert system approach in ITSs is to compare ideal solutions with the learners' solutions. Some examples of systems that meet the second modality are AutoTutor [46] and DesignFirst-ITS [47].

Despite the fact that the expert system approach is powerful, it faces some limitations, as noted by [15]: "(1) developing or adapting an expert system can be costly and difficult, especially for ill-defined domains; and (2) some expert systems cannot justify their inferences, or provide explanations that are appropriate for learning".

6.2 Student Model
It would be difficult for an ITS to succeed without some understanding of the user. The student model represents the knowledge and skills of the student dynamically. Just as domain knowledge must be explicitly represented so that it can be communicated, the student model must be represented likewise. Ideally, the student model should store aspects of the student's behavior and skills in such a way that the ITS can infer the student's performance and skills.

According to Nwana, the uses of the student model can be classified into six different types [8]. The first type is corrective in that it enables removing bugs in a student's knowledge. The second type is elaborative in that it fills in the student's incomplete knowledge. The third type is strategic in that it assists in adapting the tutorial strategy based on the student's actions and performance. The fourth type is diagnostic in that it assists in identifying errors in the student's knowledge. The fifth type is predictive in that it assists in understanding the response of the student to the system's actions. The sixth and final type is evaluative in that it assists in evaluating the student's overall progress.

The student model acts as a source of information about the student. The system should be able to infer unobservable aspects of the student's behavior from the model. It should reconstruct misconceptions in the student's knowledge by interpreting the student's actions. The representation of the student model is likely to be based on the representation of domain knowledge. The knowledge can be separated into elements, with evaluations of mastery incorporated into the student model. This allows the system to compare the state of the student's knowledge with that of the expert. As a result, instruction can be adapted to exercise the weaknesses in the student's skills. It should be noted that incomplete knowledge is not necessarily the source of incorrect behavior. The knowledge to be taught can evolve, which presents a challenge to the tutoring system. It is for this reason that explicit representations of a student's supposed incorrect knowledge must be included in a student model so that remediation can be performed as necessary. An important feature of the student model is that it is executable or runnable. This allows for prediction of a particular student's behavior in a particular context. This ultimately allows this important architectural component of an ITS to interact appropriately with the student. These interactions may include correction of misconceptions, personalized feedback, suggestions for learning a particular item, etc. [8][9].

Designing a student model is not an easy mission. It should be based on responses to certain questions: What does the student know? What types of knowledge will the student need to solve a problem? It is from such questions that the methodology for designing a student model should be derived. It is first necessary to identify the knowledge that the student has gained in terms of the components that are integrated with the mechanism. It is secondly necessary to identify the understanding level of the student vis-a-vis the functionality of the mechanism. It is finally necessary to identify the pedagogical strategies used by the student to arrive at a problem's solution. These must be taken into consideration in the development of the student model [27].

There are several kinds of student characteristics that should be taken into consideration. In order to build an efficient student model, the system needs to consider both static and dynamic characteristics of students. Static characteristics include information such as email, age, and mother tongue, and are set before the learning process starts, whereas dynamic characteristics come from the behavior of students during their interaction with the system [48][49].

the behavior of students during the interaction with the system [48][49]. According to [48], the challenge is to find the relevant dynamic characteristics of an individual student in order to adapt the system for each student. The dynamic characteristics include knowledge and skills, errors and misconceptions, learning styles and preferences, affective and cognitive factors, and meta-cognitive factors. The term knowledge here refers to the knowledge that has been acquired by the student previously, while learning styles or preferences refer to how the student prefers to perceive the learning material (e.g., graphical representations, audio materials or text). Affective factors include the emotional characteristics of the students, such as being angry, happy, sad, or frustrated. Cognitive factors refer to the cognitive features of students, for instance, attention, ability to learn, and ability to solve problems and make decisions. Meta-cognitive aspects involve attitude and ability for help-seeking, self-regulation, and self-assessment [48][50].

Several approaches have been used to build the student model. The following subsections discuss some approaches that have been found in the literature.

6.2.1 Overlay Model. The overlay model was invented in 1976 by Stansfield, Carr and Goldstein. It is one of the most popular student models, and it has been used by many tutoring systems. It assumes that student knowledge is a subset of domain knowledge. If the student’s behavior differs from that of the domain, the difference is considered a gap in the student’s knowledge. As a result, the goal is to eliminate this gap as much as possible [48][51]. Consequently, the domain contains a set of elements, and the overlay model indicates a set of masteries over these elements. A simple overlay model uses a Boolean value to indicate whether an individual student knows an element or not. In the modern overlay model, a qualitative measure is used to indicate the level of student knowledge (good, average or poor). The advantage of this model is that it allows the representation of student knowledge to be as large as necessary. However, the disadvantage is that the student may take a different approach to solving a problem. The student may also have different beliefs in the form of ‘misconceptions’ that are not stored in the domain knowledge [48].

Carmona and Conejo in 2004 proposed a learner model in their MEDEA system, which is a framework to build open ITSs [52]. The classical overlay model was used to represent the knowledge and attitudes of the students in MEDEA. The learner model was divided into two sub-models: the attitude model and the learner knowledge model. The attitude model contains static information about the students (users’ personal and technical characteristics, users’ preferences, etc.). These features were collected directly from the student before the learning processes take place. The learner knowledge model was responsible for the student’s knowledge and performance. These features were updated during the learning processes. For each domain concept, the learner model stores an estimation of the knowledge level of the student on this concept [51][52].

InfoMap is designed to facilitate both human browsing and computer processing of the domain ontology in a system. It uses the overlay model combined with a buggy model to identify deficient knowledge [53]. Another ITS that applies the overlay model for the student model is ICICLE (Interactive Computer Identification and Correction of Language Errors). The system’s goal is to employ natural language processing to tutor students on grammatical components of written English. ICICLE uses a student overlay model to capture the user’s mastery of various grammatical units and thus can be used to predict the grammar rules he or she is most likely using when producing language [54]. Kumar in 2006 used the overlay student model in an ITS for computer programming called DeLC (Distributed eLearning Center) for distance and electronic teaching [55]. It used the overlay student model to capture the level of understanding of the user. However, it also used another modeling approach, named the stereotype approach, to model learners’ manner of access to training resources, their preferences, habits and behaviors during the learning process [56].

LS-Plan is a framework for personalization and adaptation in e-learning systems. It uses a qualitative overlay model based on Bloom’s Taxonomy. LS-Plan also uses a bug model to detect misconceptions of the users [57]. PDinamet, a web-based adaptive learning system for the teaching of physics in secondary school, uses an overlay student model to store concepts that the student has already learned or has not learned yet. Consequently, the tutor can recommend to an individual student a certain topic by taking into account the student’s skill level and the learning activities the student has already participated in within PDinamet [58].

The overlay model takes into account neither the incorrect knowledge that the student has nor the student’s cognitive needs, preferences and learning styles. This is the reason why most personalized tutoring systems combine the overlay model with other approaches such as stereotypes, fuzzy logic, machine learning, and perturbation [48].

6.2.2 Stereotypes Model. Another widely used approach for student modeling is in terms of stereotypes. The stereotype approach in student modeling began with a system called GRUNDY by Rich [59]. According to Rich, “A stereotype represents a collection of attributes that often co-occur in people. They enable the system to make a large number of plausible inferences on the basis of a substantially smaller number of observations. These inferences must, however, be treated as defaults, which can be overridden by specific observation” [60][59].

The main assumption in stereotypes is that it is possible to group all possible users based upon certain features they typically share. Such groups are called stereotypes. A new user will be assigned to a specific stereotype if his/her features match this stereotype. Most ITSs give students freedom to choose, meaning that the student chooses his/her own learning path in the courseware. As a consequence, students may study material that is too hard or too easy for them, or skip learning certain courseware elements. Besides generating, selecting and sequencing material for the students, ITSs should take into consideration the current knowledge of the students. As they reduce cognitive overload as well as provide individualized guidance for the learning and teaching process [61], stereotypes are particularly important for overcoming the problem of initializing a student model, by assigning the student to a certain group of students. The system might ask the user some questions to initialize its student model [48].

For example, let us consider a system that teaches the Python programming language. The system might start interactions with students by asking questions in order to discover the stereotype this student belongs to. A related question that could be asked is whether the student is an expert in C++ programming. If the student is an expert in C++, the system would infer that this student knows the basic concepts in programming such as loops, while loops and nested loops. Consequently, the system will assign this particular student to a stereotype whose members know these basic programming concepts [60].
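The stereotype-triggering example above, combined with the overlay bookkeeping described in the previous subsection, can be sketched in a few lines. The stereotype names, trigger question, default mastery values, and update rule below are illustrative assumptions for the sketch, not details taken from any of the surveyed systems.

```python
# Sketch: seeding an overlay student model from a stereotype.
# Stereotype names, concepts, and default masteries are hypothetical.

STEREOTYPES = {
    # stereotype -> default mastery per concept, on a 0.0-1.0 scale
    "novice":     {"loops": 0.0, "while_loops": 0.0, "nested_loops": 0.0},
    "cpp_expert": {"loops": 0.9, "while_loops": 0.9, "nested_loops": 0.8},
}

def assign_stereotype(knows_cpp: bool) -> str:
    """Map the answer to the trigger question onto a stereotype."""
    return "cpp_expert" if knows_cpp else "novice"

def init_overlay(stereotype: str) -> dict:
    """Seed the overlay model with the stereotype's default masteries.
    Following Rich, these are defaults that later evidence may override."""
    return dict(STEREOTYPES[stereotype])

def observe(overlay: dict, concept: str, correct: bool, step: float = 0.2) -> None:
    """Override a default using evidence from an actual student answer."""
    delta = step if correct else -step
    overlay[concept] = min(1.0, max(0.0, overlay[concept] + delta))

# A student who reports C++ expertise starts with high default masteries,
# but a wrong answer on nested loops lowers that specific estimate.
student = init_overlay(assign_stereotype(knows_cpp=True))
observe(student, "nested_loops", correct=False)
```

A real system would replace the single Boolean trigger with a richer questionnaire and keep revising the defaults as evidence accumulates, in the spirit of the combined stereotype-plus-overlay designs (e.g., DeLC) discussed above.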
Many adaptive tutoring systems have used the stereotype approach to student modeling, often combining it with other student modeling approaches [60]. INSPIRE is an ITS for personalized instruction. The stereotype approach is used to classify knowledge on a topic into one of four levels of proficiency: Insufficient, Rather Insufficient, Rather Sufficient, Sufficient. Besides stereotypes, the fuzzy logic technique is used to deal with student diagnosis [62]. Another ITS using stereotypes is Web-PVT, which teaches the passive voice in English using the Web Passive Voice Tutor. Machine learning and stereotypes were used to tailor instruction and feedback to each individual student. The initialization of the model for a new student is performed using a novel combination of stereotypes and a distance-weighted k-nearest neighbor algorithm [63].

AUTO-COLLEAGUE, an adaptive computer-supported collaborative learning system, aims to provide a personalized and adaptive environment for users to learn UML [64]. AUTO-COLLEAGUE uses a hybrid student model that combines stereotypes and the perturbation approach, to be discussed next. The stereotypes are concerned with three aspects of the user: the level of expertise, the performance type and the personality. Another ITS that uses stereotypes for student modeling is CLT. CLT teaches C++ iterative constructs (while, do-while, and for loops). The triggers for the stereotypes used in CLT are verbal ability, numerical ability, and spatial ability, each of which can be rated low, medium or high [65].

According to Chrysafiadi et al., the advantage of the stereotype technique is that knowledge about a particular user is inferred from the related stereotype(s) as much as possible, without explicitly going through the knowledge elicitation process with each individual user [48]. In addition, maintaining information about stereotypes can be done efficiently with low redundancy. On the other hand, the disadvantages of stereotypes are that stereotypes are constructed based upon external characteristics of users and on subjective human judgment, usually of a number of users/experts. It is common that some stereotypes do not represent their members accurately. Therefore, many researchers have pointed out the issue of inaccuracy in stereotypes. Stereotypes suffer from two additional problems. First, the users must be divided into classes before the interactions with the system begin, and as a result, some classes might not exist. Second, even if a class exists, the designer must build the stereotype, which is time consuming and error-prone.

6.2.3 Perturbation Model. The perturbation student model is an extension of the overlay model. Besides representing the knowledge of the students as a subset of the expert’s knowledge (overlay model), it also includes possible misconceptions, which allows for better remediation of student mistakes [66]. The perturbation model incorporates misconceptions or lack of knowledge, which may be considered mal-knowledge or incorrect beliefs [67]. According to Martins et al., the perturbation model can be obtained by replacing correct rules with wrong rules [66]. When applied, they produce the answers given by the student. Since there can be several reasons for a student’s wrong answer (several wrong rules in student knowledge before the beginning of the interaction with the student to acquire knowledge and after interaction with specialized knowledge), the system proceeds to generate discriminating problems and presents them to the student to identify the wrong rules that this user has.

The mistakes that students make are usually stored in what is termed the bug library. The bug library is built either by collecting the mistakes that students make during interaction with the system (enumeration) or by listing the common misconceptions that students usually have (generative technique). This model gives better explanations of student behavior than the overlay model. However, it is costly to build and maintain [49].

Many adaptive tutoring systems have used the perturbation technique for their student model. Hung and his colleagues in 2005 used the perturbation model (also called the buggy model), with 31 types of addition errors and 51 subtraction errors, to help the system analyze and reason with students’ mistakes [53]. LeCo-EAD uses the perturbation model to represent students’ incorrect knowledge to provide personalized feedback and support to distant students in real time [68]. The perturbation model was also used by Surjono and Maltby in combination with stereotypes and the overlay model to perform a better remediation of student mistakes [69]. Baschera and Gross also used a perturbation student model in 2010 for the purpose of spelling training to better diagnose students’ errors [44].

6.2.4 Constraint Based Model. The constraint based model (CBM) was first published for short-term student modeling and the diagnosis of the current solution state. CBM uses constraints to represent both domain and student knowledge [48]. The process of diagnosing the student’s solution consists of matching the relevance conditions of all constraints to the student’s solution; the satisfaction conditions of all relevant constraints are then matched as well. The system checks each step taken by the student, diagnoses any problem, and provides feedback to the student when there is an error. The feedback informs the student that the solution is wrong, indicates the part of the solution that is wrong, and then specifies the domain principle that is violated [14]. According to Mitrovic et al., important advantages of CBM are that it does not require a runnable expert module, leading to computational simplicity; it does not require extensive studies of students’ bugs; and it does not require complex reasoning about possible origins of student errors [70]. These advantages have led researchers to apply the CBM approach to their tutoring systems in a variety of domains.

SQLT-Web is a web-enabled ITS for the SQL database language. It diagnoses the student’s solution and adapts feedback to his/her knowledge and learning abilities [71]. J-LATTE, which is an ITS that teaches a subset of the Java programming language, uses the CBM approach in the student model. When the student submits his/her solution, the student modeler evaluates it and produces a list of relevant, satisfied and (possibly) violated constraints [72][73]. INCOM is a system which helps students of a logic programming course at the University of Hamburg. Weighted constraints are used to achieve accuracy in diagnosing students’ solutions [74]. EER-Tutor is another system that teaches database concepts and adapts the CBM student model to represent the student’s level of knowledge [14].

6.2.5 Cognitive Theories. The use of cognitive theories for the purpose of student modeling and error diagnosis leads to effective tutoring systems, as many researchers have pointed out. A cognitive theory helps interpret human behavior during the learning process by trying to understand human processes of thinking and understanding. The Human Plausible Reasoning (HPR) Theory [75] and the Multiple Attribute Decision Making (MADM) Theory [76] are some cognitive theories that have been used in student modeling [48].

Human Plausible Reasoning (HPR) is a theory which categorizes plausible human inferences in terms of a set of frequently recurring inference patterns and a set of transformations on these
patterns. In particular, it is a domain-independent theory, originally based on a corpus of people’s answers to everyday questions [75]. A system that uses HPR in student modeling is RESCUER, an intelligent help system for UNIX users. The set of HPR transformations is applied to statements to generate different possible interpretations of how a user may have come to the conclusion that the command s/he typed is acceptable to UNIX [77]. Another system that uses HPR to model the student is F-SMILE. F-SMILE stands for File-Store Manipulation Intelligent Learning Environment, and it aims to teach novice learners how to use file-store manipulation programs. The student model in F-SMILE captures the cognitive state as well as the characteristics of the learner, and identifies possible misconceptions. The LM Agent in F-SMILE uses a novel combination of HPR with a stereotype-based mechanism to generate default assumptions about learners until it is able to acquire sufficient information about each individual learner [78].

Another cognitive theory which has been used to build student models is Multiple Attribute Decision Making (MADM) [76]. MADM makes preference decisions (e.g., evaluation, prioritization, or selection) among available alternatives that are usually characterized by multiple, usually conflicting, attributes. Web-IT is a Web-based intelligent learning environment for novice adult users of a Graphical User Interface (GUI) that manipulates files, such as the Windows 98/NT Explorer. Web-IT uses MADM in combination with an age stereotype to dynamically provide personalized tutoring [79]. A novel mobile educational system has been developed by Alepis and Kabassi, incorporating bi-modal emotion recognition based on two modes of interaction, the mobile device microphone and keyboard, through a multi-criteria decision making theory to improve the system’s accuracy in recognizing emotions [80].

6.2.6 Bayesian Networks. Another well-known and established approach for representing and reasoning about uncertainty in student models is Bayesian networks [48]. A Bayesian network (BN) is a directed acyclic graph containing random variables, which are represented as nodes in the network. A set of probabilistic relationships between the variables is represented as arcs. The BN reasons about the situation it models, analyzing action sequences, observations, consequences and expected utilities [49]. Regarding the student model, components of students such as knowledge, misconceptions, emotions, learning styles, motivation and goals can be represented as nodes in a BN [48].

BNs have been shown to be powerful and multi-purpose when modeling any problems that involve knowledge. Bayesian networks have attracted attention from theoreticians and system designers not only because of their sound mathematical foundation, but also for being a natural way to represent uncertainty using probabilities. Therefore, BNs have been used in many different domains, such as medical diagnosis, information retrieval, bioinformatics, and marketing, for many different purposes, such as troubleshooting, diagnosis, prediction, and classification [49]. Those who are interested in using Bayesian networks can use tools such as GeNIe [81] and SMILE [82] for the easy creation of efficient BNs.

Andes is an ITS providing help in the domain of Newtonian physics [83][84]. The student model in Andes uses Bayesian networks to carry out long-term knowledge assessment, plan recognition, and prediction of the students’ actions during problem solving. Another student model that uses a BN is in Adaptive Coach Exploration (ACE), an intelligent exploratory learning environment for the domain of mathematical functions. The student model is capable of providing tailored feedback on a learner’s exploration process, also detecting when the learner is having difficulty exploring [85]. A Bayesian student model also has been implemented in the context of an Assessment-Based Learning Environment for English grammar, where it is used by pedagogical agents to provide adaptive feedback and adaptive sequencing of tasks [86]. A Bayesian student model is also used in E-teacher to provide personalized assistance to e-learning students, with the goal of automatically detecting a student’s learning style [87]. A Dynamic Bayesian network was used by Conati and Maclaren to recognize a high level of uncertainty regarding multiple user emotions by combining information on both the causes and effects of emotional behavior [88]. Similarly, a Dynamic Bayesian network was implemented in PlayPhysics to reason about the learner’s emotional state from cognitive and motivational variables using observable behavior [89]. TELEOS (Technology Enhanced Learning Environment for Orthopedic Surgery) used a Bayesian student model to diagnose the students’ knowledge states and cognitive behaviors [90]. A Bayesian student model was also applied in Crystal Island, a game-based learning environment in the domain of microbiology, to predict student affect by modeling students’ emotions [91].

6.2.7 Fuzzy Student Modeling. In general, learning and determining the student’s state of knowledge are not straightforward tasks, since they are mostly affected by factors which cannot be directly observed and measured, especially in ITSs where there is a lack of real-life interaction between a teacher and students. One possible approach to dealing with uncertainty is fuzzy logic, introduced by Zadeh in 1965 as a methodology for computing and reasoning with subjective words instead of numbers [48]. Fuzzy logic is used to deal with uncertainty in real-world problems caused by imprecise and incomplete data as well as human subjectivity [92]. Fuzzy logic uses fuzzy sets that involve variables with uncertain values. A fuzzy set is described by variables and values such as “excellent”, “good” and “bad” rather than a Boolean value “yes/no” or “true/false”.

A fuzzy set is determined by a membership function, expressed as U(x) [93]. The value of the membership function U(x) is called the degree of membership or membership value, and lies between 0 and 1. The use of fuzzy logic can improve the learning environment by allowing intelligent decisions about the learning content to be delivered to the learner as well as the tailored feedback that should be given to each individual learner [48]. Fuzzy logic can also diagnose the level of knowledge of the learner for a concept, and predict the level of knowledge for other concepts that are related to that concept [92]. Chrysafiadi and Virvou, in 2012, performed an empirical evaluation of the use of fuzzy logic in student modeling in a web-based educational environment for teaching computer programming. The result of the evaluation showed that the integration of fuzzy logic into the student model increases the learner’s satisfaction and performance, improves the system’s adaptivity and helps the system make more reliable decisions [94]. The use of fuzzy logic in student modeling is becoming popular since it overcomes computational complexity issues and mimics human-like reasoning [48][93].

6.3 Tutor Model

An ITS provides personalized feedback to an individual student based upon the traits that are stored in the student model. The tutor model, or the pedagogical module as it is alternatively called, is the driving engine for the whole system [95]. This model performs several tasks in order to behave like a human tutor that can decide how to teach and what to teach next. The role of the
tutor model is not only to provide guidance like a tutor but also to make the interaction of the ITS with the learner smooth and natural [96]. The pedagogical module should be able to answer questions such as whether the student should be presented a concept, a lesson, or a test next. Other tasks include deciding how to present the teaching material to the student, evaluating student performance, and providing feedback to the student [43][96].

Indeed, the pedagogical module communicates with all other components in the system, the expert model and the student model, and acts as an intermediary between them [97]. When a student makes a mistake, the pedagogical module is responsible for providing feedback to explain the type of error, re-explain the usage of the relevant rule and provide help whenever the student needs it [98]. The tutor must also decide what to present next to the student, such as the topic or the problem to work on. To do so, the pedagogical model must consult the student model to determine the topics on which the student needs to focus. The decisions that this model makes are based on the information about the student stored in the student model and the information about the learned content which the expert model stores [74].

The pedagogical module is responsible for the interaction between the student and the system in case the student needs help at any given step, and for remediation of the student’s errors. It does so by giving a sequence of feedback messages (e.g., hints), or suggesting that the student study a certain topic to increase learning performance. All ITSs embed the pedagogical module to control interaction with the students. The following subsections present some pedagogical techniques which have been used for the purpose of delivering content and making interventions when needed in ITSs.

6.3.1 Decision Making in Cognitive Tutor and Constraint Based Systems. Model Tracing Tutors (MTTs) (Cognitive Tutors), specifically their 2nd generation architecture [99], give three types of feedback to students: flag feedback, buggy messages, and a chain of hints. Flag feedback informs the student of a correct or wrong answer by using a color (e.g., green = correct, red = wrong). A buggy message is attached to a specific incorrect answer the student has provided to inform the student of the type of error s/he has made.

In case the student needs help, s/he can ask for a hint to receive the first hint from a chain of hints, which includes suggestions to make the student think. The student can request further hints to get more specific advice on what to do, and when the whole chain of hints has been delivered, eventually, the system tells the student exactly what to type [33]. CBM tutors such as KERMIT and SQL-Tutor, ITSs that teach conceptual database design, provide six levels of feedback to the student: correct, error flag, hint, detailed hint, all errors and solution. The correct level simply tells the student whether the answer is correct or incorrect. The error flag indicates the type of construct (e.g., entity or relationship) that contains the error. Hint and detailed hint are feedback messages that are generated from the violated constraint. The complete solution is displayed at the solution level [95][100].

Besides providing feedback to remedy students’ errors, personalized guidance can also be given to help students, as Kenny and Pahl have done in SQL-Tutor [101]. They offer the student advice and recommendations about subject areas that a particular student needs to focus on. The decision about the particular areas recommended by the system is made by collecting data from the student model. The pedagogical model retrieves the information on all errors made by the student from the student model to make the decision.

Model Tracing Tutors have developed teaching strategies and interactions between the system and the student to reach the level of performance of experienced human tutors. However, many researchers have criticized model tracing tutors because an MTT needs a strategic tutor [99]. According to them, an MTT should encourage students to construct their own knowledge instead of telling it to them. In other words, students can learn better if they are engaged in a dialog that helps them construct their knowledge themselves instead of being hinted toward inducing the knowledge from problem-solving experiences.

A 3rd generation model tracing tutor, named Ms. Lindquist, using what is called an Adding Tutorial Model, was the first model tracing tutor designed to be more human-like in caring when participating in a conversation. Ms. Lindquist could produce probing questions, positive and negative feedback, follow-up questions in embedded subdialogs, and requests for explanation as to why something is correct [99][102].

DEPTHS, which is an ITS for learning software design patterns, implements a curriculum planning model for selecting appropriate learning materials (e.g., concepts, content units, fragments and test questions) that best fit the student’s characteristics [97]. DEPTHS is able to decide on the concepts that should be added to the concept plan of a particular student, along with a detailed lesson and a test plan for that concept. Each time the student’s performance significantly changes, the concept plan is created from scratch. The decision to add a new concept to the concept plan is made according to the curriculum sequence stored in the expert model and the performance of the student and his/her current knowledge stored in the student model [97].

6.3.2 Tutorial Dialog in Natural Language. Human tutors use conversational dialogs during tutoring to deliver instruction. Early ITSs were not able to provide the use of natural language, discourse, or dialog-based instruction. However, many modern ITSs use natural language [103]. The aim of this sub-section is to present how tutorial dialog techniques can be used to build interaction environments in ITSs, along with some well-known dialog-based ITSs found in the literature.

AutoTutor is a natural language tutoring system that has been developed for multiple domains such as computer literacy, physics, and critical thinking [46]. AutoTutor is a family of systems that has a long history. AutoTutor uses strategies of human tutors such as comprehension strategies, meta-cognitive strategies, self-regulated learning and meta-comprehension [104][46]. In addition, AutoTutor incorporates learning strategies derived from learning research, such as Socratic tutoring, scaffolding-fading, and frontier learning [103]. Benjamin et al. claim that the use of discourse in ITSs can facilitate new learning activities such as self-reflection, answering deep questions, generating questions and resolving conflicting statements [46].

In AutoTutor, the pedagogical interventions that occur between the system and students are categorized as positive feedback, neutral feedback, negative feedback, pump, prompt, hint, elaboration and splice/correction [105]. Latent Semantic Analysis (LSA) is used in AutoTutor as the backbone to represent computer literacy knowledge. The modules of AutoTutor are different from the traditional modules that have been identified and used in cognitive and constraint-based tutors. The fact that AutoTutor uses language and discourse has led to the use of novel architectures (for more details on the architecture of AutoTutor see [105]). AutoTutor incorporates a variety of computational architectures and learning methodologies,
Table 1. : Tutoring Tactics in CIRCSIM-Tutor Adopted From [111] 6.3.3 Spoken Dialogue. It is well-known that the best human
tutors are more effective than the best computer tutors [114]. The
Plan Tactics
main difference between human and computer tutors is the fact that
human tutors predominantly use spoken natural language when in-
Tutor Ask the student a series of questions. teracting with learners. This raises the question of whether making
Give answer Ask the student to explain their answer. the interaction more natural, such as by changing the modality of
the computer tutor to spoken natural language dialogue, would de-
Hint Remind the student (“Remember that....”).
crease the advantage of human tutoring over computer tutoring. In
Acknowledge 4 possible cases (see below). fact, the majority of dialogue-based ITSs use typed student input.
However, many potential advantages of using speech-to-speech in-
teraction in the domain of ITSs have been found in the litera-
ture [114] [115]. One advantage is in terms of self-explanation,
which gives the student a better opportunity to construct his/her
and has been shown to be very effective as a learning technology
knowledge [115]. For instance, Hauptmann et al. showed that self-
[46].
explanation happens more often in speech than in typed interaction
Atlas is an ITS that uses natural language dialogs to increase [116]. Another advantage is that speech interaction provides a more
opportunities for students to construct their own knowledge [106]. accurate student model. Students use meta-communication strate-
The two main components of Atlas are APE [107], the Atlas Plan- gies such as hedges, pauses, and disfluencies, which allow the tu-
ning Engine and CARMEL [108], the natural language understand- tor to infer more information regarding student understanding. The
ing component. APE is responsible for constructing and generating following will discuses some computer tutors, which implement
coherent dialogues while CARMEL understands and analyzes stu- spoken dialogue [114] [115].
dent’s answers. Another conversational ITS is DeepTutor, an ITS
ITSPOKE is an ITS which uses spoken dialogue for the pur-
developed for the domain of Newtonian physics [109]. A frame-
pose of providing spoken feedback and correcting misconceptions
work called learning progressions (LPs) used by the science edu-
[117]. The student and the system interact with each other in En-
cation research community is integrated as a way to better model
glish to discus the student’s answers. ITSPOKE uses a microphone
students’ cognition and learning. The system implements conversa-
as an input device for the student’s speech and sends the signal to
tional goals to accurately understand the student at each turn by an-
the Sphinx2 recognizer [118]. Litman et al. showed that ITSPOKE
alyzing the interaction that occurs between the system and student.
is more effective than typed dialogue; however, there was no evi-
Conversational goals such including coaching students to articu-
dence that ITSPOKE increases student learning [114]. In addition,
late expectations, correcting students’ misconceptions, answering
it was clear that speech recognition errors did not decrease learning.
students’ questions, feedback on students’ contributions, and er-
ror handling [13]. In order to understand a student’s contributions Another spoken ITS is SCoT (Spoken Conversational Tutor).
while interacting with DeepTutor, a semantic similarity task needs SCoT’s domain is shipboard damage control, which refers to fires,
to be computed based on the quadratic assignment problem (QAP) flood and other critical situations that happen aboard Navy vessels
[110]. An efficient branch and bound algorithm was developed to [115]. Pon-Barry et al. suggested several challenges that ITS devel-
model QAP to reduce the explored space in search for the optimal opers should be aware of when developing spoken language ITSs.
solution. First, repeated critical feedback from the tutor such as You made
this mistake more than once and We discussed this same mistake
CIRCSIM-Tutor is a tutoring system in the area of cardiovas-
earlier cause negative effect. This suggests further work on better
cular physiology that incorporates natural language dialogue with
understanding and use of more tact in correcting user’s misconcep-
the learner by using a collection of tutoring tactics that mimic ex-
tions. Second, even though the accuracy of speech recognition is
pert human tutors [111] [112]. It can handle different syntactic con-
high, small recognition errors can make the tutor less effective.
structions and lexical items such as sentence fragments and mis-
spelled words. Tutoring tactics in CIRCSIM-Tutor are categorized
into four major types as illustrated in Table 1. Theses evolved from
the major types of tactics used in repeated pedagogical interven- 7. CURRENT DEVELOPMENTS IN ITS
tions: ask the next question, evaluate the user’s response, recognize
the user’s answer, and if the answer is incorrect either provide a hint In recent years, numerous effective and successful ITSs have been
or the correct answer. The architecture of CIRCSIM-Tutor contains built. We have presented many such systems in prior sections of this
the following: a planner, a text generator, an input understander, a paper. This section is intended to take a glance at a few significant
student model, a knowledge base, a problem solver and a screen recent systems and key areas of research focus at the current time.
manager [111]. CIRCSIM-Tutor showed significant improvement
in students from pre-test to post-test. The input understander mech- 7.1 Affective Tutoring System
anism of the system was able to recognize and respond to over 95%
of students’ inputs. Evens et al. suggest the use of APE for planning Affective Tutoring Systems (ATS) are ITSs that can recognize hu-
in tutoring sessions. man emotions (sad, happy, frustrated, motivated, etc.) in different
Dialog based ITSs have the same main goal as traditional ITSs, ways [119]. It is important to incorporate the emotions of students
which is to increase the level of engagement and learning gains. in the learning process because recent learning theories have es-
However, dialog based ITSs can use different dimensions of eval- tablished a link between emotions and learning, with the claim
uation in classifying learner’s responses, comprehending learner’s that cognition, motivation and emotion are the three components
contributions, modeling knowledge, and generating conversation- of learning [120][121]. Over the last few years, there has been a
ally smooth tutorial dialogs. D’Mello and Graesser [113] conducted great amount of interest in computing the learner’s affective states
a study to describe how dialog based ITSs can be evaluated along in ITSs and studying how to respond to them in effective ways
these dimensions using AutoTutor as a case study. [122].
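Such affect detectors are typically built as classifiers over interaction-log features. The sketch below is illustrative only: the feature names, training values, and interventions are invented for the example rather than drawn from any particular system.

```python
# Illustrative affect detector: classify a learner's state from simple
# interaction-log features, then pick a pedagogical response.
# Features and training data are invented; real systems learn such
# detectors from labeled logs of student behavior.

from math import sqrt

# Each example: (seconds_per_step, hint_requests, errors_in_row) -> state
TRAINING = [
    ((4.0, 0, 0), "flow"),
    ((5.0, 1, 0), "flow"),
    ((45.0, 0, 0), "boredom"),
    ((60.0, 1, 0), "boredom"),
    ((12.0, 4, 3), "frustration"),
    ((15.0, 5, 4), "frustration"),
]

def centroids(examples):
    """Average the feature vectors observed for each affective state."""
    sums, counts = {}, {}
    for features, state in examples:
        acc = sums.setdefault(state, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[state] = counts.get(state, 0) + 1
    return {s: tuple(v / counts[s] for v in acc) for s, acc in sums.items()}

def classify(features, cents):
    """Nearest-centroid classification by Euclidean distance."""
    def dist(a, b):
        return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(cents, key=lambda s: dist(features, cents[s]))

INTERVENTION = {  # one pedagogical response per detected state
    "boredom": "present a more engaging or harder task",
    "frustration": "offer encouragement and a similar, easier problem",
    "flow": "do not interrupt",
}

cents = centroids(TRAINING)
state = classify((50.0, 0, 0), cents)  # slow steps, no hints, no errors
print(state, "->", INTERVENTION[state])
```

The mapping from detected state to intervention mirrors the responses described in the systems below (stimulate a bored student, encourage a frustrated one); production detectors use far richer features and learned models rather than hand-set centroids.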


Affective tutors use various techniques to enable computers to recognize, model, understand and respond to students' emotions in an effective manner. Knowing the emotional state of the student provides information on the student's psychological state and offers the possibility of responding appropriately [119]. A system can embed devices to detect a student's affective or emotional states; these include PC cameras, PC microphones, special mice, and neuro-headsets, among others. These devices identify physical signals such as facial images, voice, mouse pressure, heart rate and stress level. The signals are then sent to the system to be processed, and the emotional state is obtained in real time. The ATS objective is to change a negative emotional state (e.g., confused) into a positive emotional state (e.g., committed) [72].

In [123], learners' affective states are detected by monitoring their gross body language (body position and arousal) as they interact with the system. An automated body pressure measurement system is also used to capture the learner's pressure. The system detects six affective states: confusion, flow, delight, surprise, boredom and neutral. If the system realizes that the student is bored, the tutor stimulates him or her by presenting engaging tasks. If frustration is detected, the tutor offers encouraging statements or corrects the information the learner is having difficulty with. Experiments suggest that boredom and flow might best be detected from body language, although the face plays a significant role in conveying confusion and delight.

Jraidi et al. present an ITS that acts differently when the student is frustrated [124]. For example, it may provide problems similar to ones on which the student has been successful. In case of boredom, the system provides an easier problem to motivate the student again, or a more difficult problem if the current problem seems too easy. Another approach used in the system to respond to student emotions integrates a virtual pedagogical agent called a learning companion to allow affective real-time interaction with learners. This agent can communicate with the learner as a study partner when solving problems, or provide encouragement and congratulatory messages, appearing to care about the learner. In other words, these agents can provide empathic responses that mirror the learner's emotional states [72].

Wolf and colleagues also implement an empathetic learning companion that reflects the last expressed emotion of the learner, as long as the emotion is not negative, such as frustration or boredom [125][126]. The companion responds in full sentences, providing feedback with voice and emotion. The presence of someone who appears to care can be motivating to learners. Studies show that students who use the learning companion increase their math understanding and level of interest, and show reduced boredom. Another affective tutoring system that uses an empathetic companion to respond to learner emotion is a system that practices interview questions with users [127]. The system perceives the user's emotion by measuring skin conductance and then takes appropriate actions. For instance, the agent displays concern for a user who is aroused and has a negatively valenced emotion, e.g., by saying "I am sorry that you seem to feel a bit bad about that question". Their study shows that users receiving feedback with empathy are less stressed when asked interview questions.

7.2 Cultural Awareness in Education

In recent years, special attention is being paid to the issues that arise in the context of delivering education in a globalized society [128]. Researchers in the field of ITS and learning technologies are increasingly concerned about how learning technology systems can be adapted across a diversity of cultures. Nye in 2015 [129] addressed the barriers faced by ITSs entering the developing world. Barriers such as lack of student computing skills and problems arising from the multiplicity of languages and cultures were presented along with existing solutions. An analysis of students' help-seeking behaviors in ITSs across different cultures was conducted by Ogan et al. [130].

Models of help-seeking behaviors during learning have been developed based on datasets of students in three different countries: Costa Rica, the Philippines, and the United States. Ogan et al. find that help-seeking behavior is not substantially transferable across cultures. This finding suggests the need to replicate research to understand student behaviors. Mohammed and Mohan [131] take a first step toward tackling this issue. Their system provides learners with some control over their cultural preferences, including problem description, feedback, and the presentation of images and hints. Deployment of such systems has given researchers the opportunity to experimentally investigate phenomena surrounding the social acceptability of non-dominant language use in education, and its effects on learning.

7.3 Game-based Tutoring Systems

The novelty of an ITS and its interactive components is quite engaging when they are used for short periods of time (e.g., hours), but can become monotonous and even annoying when a student is required to interact with an ITS for weeks or months [132]. The underlying idea of game-based learning is that students learn better when they are having fun and are engaged in the learning process.

Game-based tutoring systems engage learners to interact actively with the system, thereby making them more motivated to use the system for a longer time [133]. Whereas the ITS principles maximize learning, the game technologies maximize motivation. Instead of learning a subject in a conventional and traditional way, students play an educational game that integrates game strategies with curriculum-based content. Although there is no overwhelming evidence supporting the effectiveness of educational game-based systems over computer tutors, it has been found that educational games have advantages over traditional tutoring approaches [134][135]. Moreno and Mayer [136] summarize the characteristics of educational games that make them enjoyable: interactivity, reflection, feedback, and guidance.

To enhance both engagement and learning, Rai and Beck implemented game-like elements in their math tutor [137]. The system provides a math learning environment in which students engage in a narrated visual story, helping story characters solve problems in order to move the story forward, as shown in Figure 4. Students receive feedback and bug messages as when using a traditional tutor. The study found that students are more likely to interact with the version of the math tutor that contains game-like elements; however, the authors suggest adding more tutorial features to a game-like environment for higher levels of learning.

Fig. 4: Math-learning Environment with Game-like Elements [137]

Another tutoring system that uses an educational game approach is Writing Pal (W-Pal), which is designed to help students across multiple phases of the writing process [138]. Crystal Island is a narrative-centered learning environment in biology, where students attempt to discover the identity and source of an infectious disease on a remote island. The student (player) is involved in a scenario of meeting a patient and attempts to perform a diagnosis. A study of educational impact using a game-based system by Lester et al. [139] found that students answered more questions correctly on the post-test than the pre-test, and this finding was statistically significant. Additionally, there was a strong relationship between learning outcomes, in-game problem solving and increased engagement [139].

7.4 Adaptive Intelligent Web Based Educational System (AIWBES)

Adaptive Intelligent Web Based Educational Systems (AIWBES), or adaptive hypermedia, provide an alternative to the traditional, just-put-it-on-the-web approach to developing web-based educational courseware. An AIWBES adapts to the goals, preferences, and knowledge of individual students during their interaction with the system [140].

The area of ITSs inspired early research on adaptive educational hypermedia, which combines ITSs and educational hypermedia. During the development of the early ITSs, the concern was to support students in solving problems and to overcome the lack of learning material; the required knowledge was acquired by developers attending lectures or reading textbooks. As computers became more powerful, ITS researchers integrated ITS features with the learning material. Many research groups have found that combining hypermedia systems with an ITS can lead to more functionality than traditional static educational hypermedia [141].

A number of systems have been developed under the category of AIWBES. ELM-ART (ELM Adaptive Remote Tutor) is a WWW-based ITS supporting the learning of programming in Lisp. It has been used in distance learning not only to deliver course material from the textbook, but also to provide problem-solving support. Adaptive navigation through the material was implemented to support learning by individual students. The system classifies the content of a page as ready to be learned or not ready to be learned because some prerequisite knowledge has not yet been learned [142]. In addition, links are sorted by relevance to the current student state, so students know which are the most similar situations or most relevant web pages. When the student enters a page that contains a chunk of prerequisite knowledge still to be learned, the system alerts the student about the prerequisite and suggests additional links to textbook and manual pages about it. If the student struggles with understanding some content or solving a problem, he or she can use the help button [142]. Empirical studies have shown that hypermedia systems in conjunction with tutoring tools can be helpful for self-learners [143]. Other adaptive intelligent hypermedia systems that have been used by hundreds of students include AHA! [144] and InterBook [145], which have been shown to help students learn faster and better [146].

7.5 Collaborative Learning

Current educational research suggests that collaborative, or group-based, learning increases the learning performance of a group as well as individual learning outcomes [147][148]. In a collaborative learning environment, students learn in groups via interactions with each other: asking questions, explaining and justifying their opinions, explaining their reasoning, and presenting their knowledge [149]. A number of researchers have pointed out the importance of a group learning environment and how effective it is in terms of learning gain [150].

Recently, there has been a rise of interest in implementing collaborative learning in tutoring systems to realize the benefits of interaction among students during problem solving. Kumar and Rose, in 2011, built the intelligent interactive tutoring systems CycleTalk and WrenchTalk, which support collaborative learning environments in the engineering domain [24]. Teams of two or more students work on the same task when solving a problem. The authors conducted a number of experiments to investigate the effectiveness of collaborative learning and how to engage students more deeply in instructional conversations with the tutors, using teaching techniques such as Attention Grabbing, Ask when Ready and Social Interaction Strategies. It was found that students who worked in pairs learned better than students who worked individually [151][152]. Another tutoring system that supports collaborative learning, for teaching mathematical fractions, is described in [153].

7.6 Data Mining in ITSs

Data mining, or knowledge discovery in databases as it is alternatively called, is the process of analyzing large amounts of data for the purpose of extracting and discovering useful information [154]. Data mining has been used in the field of ITSs for many different purposes. For instance, it has been used to identify learners who game the system. Gaming the system, an off-task behavior, is defined as "attempting to succeed in the environment by exploiting properties of the system rather than by learning the material and trying to use that knowledge to answer correctly" [155]. Identifying situations where the system has been gamed has been a focus for many researchers in recent years. Additional discussions on mining student datasets can be found in [156][157].

Another use of data mining in ITSs is to detect student affect. Detecting students' affective states can potentially increase engagement levels and learning outcomes, as stated by Baker et al. [157]. For example, classification methods have been used to build automated detectors that predict student states, including boredom, engaged concentration, frustration, and confusion [158]. Similarly, classification methods have been used to detect affect such as joy and distress [159].

Another use of data mining is to automatically discover a partial problem space from logged user interactions, rather than relying on traditional techniques where domain experts provide the knowledge. As an example, clustering methods, including sequential pattern mining [160] and association rule discovery [161], are used in RomanTutor [162] to extract the problem space and support tutoring services [163]. Interested readers are referred to [164] for more details.


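Many of the ITSs discussed in this survey maintain probabilistic models of what a student knows. One classic formulation, not specific to any single system described here, is Bayesian Knowledge Tracing (BKT), which updates the estimated probability that a skill is mastered after each observed response; the parameter values in this sketch are illustrative only.

```python
# Bayesian Knowledge Tracing (BKT): update P(skill mastered) after each
# observed response. Parameter values below are illustrative.

P_INIT = 0.2     # P(L0): prior probability the skill is already known
P_TRANSIT = 0.1  # P(T): probability of learning the skill at each step
P_SLIP = 0.1     # P(S): probability of a wrong answer despite mastery
P_GUESS = 0.25   # P(G): probability of a right answer without mastery

def bkt_update(p_known, correct):
    """One BKT step: condition on the response, then apply learning."""
    if correct:
        evidence = p_known * (1 - P_SLIP)
        posterior = evidence / (evidence + (1 - p_known) * P_GUESS)
    else:
        evidence = p_known * P_SLIP
        posterior = evidence / (evidence + (1 - p_known) * (1 - P_GUESS))
    # Whether or not the answer was right, the student may have learned.
    return posterior + (1 - posterior) * P_TRANSIT

p = P_INIT
for correct in [True, True, False, True]:
    p = bkt_update(p, correct)
print(round(p, 3))
```

A tutor can use such an estimate to decide when a skill is mastered (e.g., p above 0.95) and move the student on, which is the role the Bayesian student models in the authoring tools below play.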
7.7 Authoring Tools

ITS research teams have been interested in simplifying the building process for ITSs by making the authoring of ITSs more accessible and affordable to designers and teachers. Authoring tools in the domain of ITSs can be categorized along different dimensions, such as tools that require programming skills versus those that do not, pedagogy-oriented versus performance-oriented [12], or paradigm-specific, such as tools for model tracing systems and constraint-based tutors [165].

SModel [166] and Tex-Sys [167] fall into the category of tools that require programming skills. SModel is a Bayesian student modeling component that provides services to a group of agents on the CORBA platform. Tex-Sys is another example in the same category; it provides a generic module for designing ITS components (domain model and student model) for any given domain.

Examples of authoring tools categorized as pedagogy-oriented are REDEEM [168] and CREAM-Tools [169]. Pedagogy-oriented tools concentrate on how to deliver and sequence a package of content [165]. REDEEM provides reusability of existing domain material and then provides authoring tactics on how to teach this material, such as the sequencing of contents and learning activities. Similarly, CREAM-Tools focuses on the operations required to develop curriculum content, taking into account aspects of the domain as well as pedagogical and didactic requirements. Performance-oriented tools concentrate on the learner's performance, providing a rich environment in which learners practice skills and receive system responses [165]. Examples of authoring tools in this category are Demonstr8 [170], XAIDA [171] and Knowledge Construction Dialog (KCD) [172].

In recent years, there has been great interest in building authoring tools that are specific to certain paradigms and do not require programming skills, in order to allow sharing of components across ITSs and reduce development costs [165]. Cognitive Tutor Authoring Tools (CTAT) [173] provides a set of authoring tools specific to model tracing tutors and example tracing tutors [41]. CTAT provides step-by-step guidance for problem-solving activities as well as adaptive problem selection based on a Bayesian student model. The Authoring Software Platform for Intelligent Resources in Education (ASPIRE) is a paradigm-specific authoring tool for constraint-based models [174]. ASPIRE supports authoring the domain model, enabling subject experts to easily develop constraint-based tutors. Another authoring tool in this category is the AutoTutor Script Authoring Tool (ASAT) [175], which facilitates developing components of AutoTutor and integrating conversations into learning systems. Finally, the Generalized Intelligent Framework for Tutoring (GIFT) is a framework and set of tools for developing intelligent and adaptive tutoring systems [176]. GIFT supports a variety of services, including domain knowledge representation, performance assessment, course flow, a pedagogical model and a student model.

8. DISCUSSION

ITSs are educational systems that attempt to adapt to the special needs of individual learners. What makes ITSs different from other educational systems is their ability to keep track of the cognitive states of individual students and respond appropriately. ITSs have received ample attention in disciplines such as cognitive science, education and computer science. The ultimate goal is to mimic expert human tutors in the way they teach and interact with learners. The following paragraphs present some shortcomings of ITSs from the authors' point of view.

People with special needs generally have a slower learning pace; therefore, special attention should be paid to investigating how an ITS can be specialized to improve their learning skills, say reading and writing skills. ITSs have already proven their pedagogical effectiveness and helped improve learners' outcomes. Consequently, ITSs are likely to be helpful to adults or children with special needs in their quest to achieve their learning goals. Obviously, one must incorporate proven successful strategies for teaching such individuals into the models as an ITS is constructed. Effective and targeted ITSs or ITS modules are likely to be of great assistance in teaching individuals with cognitive disabilities such as Down syndrome, traumatic brain injury, or dementia, as well as those with less severe cognitive conditions such as dyslexia, attention deficit disorder and dyscalculia.

Data mining in the context of ITSs has drawn significant attention recently, since the findings can be used to elaborate learning outcomes in many ways, as discussed in the previous section. It would be interesting and beneficial if ITS research had access to long-term data following a cohort of students over many years. For example, it may be worthwhile to track students who benefit from their interactions with ITSs during middle or high school (early education stage) through later education stages and beyond. Researchers would keep track from an early age until graduation from college, including major, Grade Point Average (GPA), and other variables. However, this is likely to be prohibitively expensive.

For instance, students interacting with an ITS in the mathematics domain during early stages of education could be tracked until their college graduation. This data could then be analyzed in depth to find relations, if any, between the ITS data and the learning outcomes in math-related majors years later, considering GPA as a measure of proficiency. Based on the results of such analysis, we may be able to suggest to future students whether math-related majors are appropriate disciplines for them to pursue. To the best of our knowledge, no such datasets yet exist. Several benefits could be gained from this direction of research. First, there are hundreds of ITSs in a variety of subject domains, so the findings might extend to other disciplines as well. Second, this direction of research may increase the popularity of ITSs as an educational tool for assisting students' decisions about which majors to pursue. Finally, it may provide enough time to significantly improve students' skills, so that they graduate ready with the basic skills of their chosen major.

9. CONCLUSION

The gap between human tutors and software tutors in the form of ITSs is narrowing, but not remotely closed. Many different models exist for representing knowledge, teaching styles and student knowledge; each model has its benefits and shortcomings. Hybrid models have also been created to enhance and strengthen traditional models. Even with the many unanswered questions that continue to surround the principles of human thought and learning, many ITSs have been implemented and tested. ITSs show promise for possibly standardizing and implementing aspects of human learning, but still have many limitations to overcome. The close marriage of ITSs with AI and psychology shows continued promise for the advancement of ITSs. While no ITS to date possesses the cognitive awareness of an actual human tutor, the availability, readiness, and consistency of ITSs may make them a competitive alternative to human tutors in the future, where cost, time, and scale are the friends of the ITS.

10. REFERENCES

[1] J. Carbonell, "AI in CAI: An artificial-intelligence approach to computer-assisted instruction," IEEE Transactions on Man-Machine Systems, vol. 11, no. 4, pp. 190–202, 1970.
[2] B. S. Bloom, "The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring," Educational Researcher, vol. 13, no. 6, pp. 4–16, 1984.
[3] R. Nkambou, "Modeling the domain: An introduction to the expert module," in Advances in Intelligent Tutoring Systems, ser. Studies in Computational Intelligence, R. Nkambou, J. Bourdeau, and R. Mizoguchi, Eds. Springer Berlin Heidelberg, 2010, no. 308, pp. 15–32, DOI: 10.1007/978-3-642-14363-2_2.
[4] "Handbook of human-computer interaction," in Handbook of Human-Computer Interaction (Second Edition), M. G. H. K. L. V. Prabhu, Ed. North-Holland, 1997, pp. 1551–1582.
[5] J. Self, "Theoretical foundations for intelligent tutoring systems," Journal of Artificial Intelligence in Education, vol. 1, no. 4, pp. 3–14, 1990.
[6] K. VanLehn, "The behavior of tutoring systems," International Journal of Artificial Intelligence in Education, vol. 16, no. 3, pp. 227–265, 2006.
[7] M. Elsom-Cook, Intelligent Computer-Aided Instruction Research at the Open University. Educational Resources Information Center, 1987.
[8] H. S. Nwana, "Intelligent tutoring systems: an overview," Artificial Intelligence Review, vol. 4, no. 4, pp. 251–277, 1990.
[9] E. Wenger, Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to the Communication of Knowledge. Morgan Kaufmann Publishers Inc., 1987.
[10] V. J. Shute and J. Psotka, "Intelligent tutoring systems: Past, present, and future," DTIC Document, Tech. Rep., 1994.
[11] B. P. Woolf, J. Beck, C. Eliot, and M. Stern, "Growth and maturity of intelligent tutoring systems: A status report," in Smart Machines in Education. MIT Press, 2001, pp. 99–144.
[12] T. Murray, "An overview of intelligent tutoring system authoring tools: Updated analysis of the state of the art," in Authoring Tools for Advanced Technology Learning Environments. Springer, 2003, pp. 491–544.
[13] V. Rus, S. D'Mello, X. Hu, and A. Graesser, "Recent advances in conversational intelligent tutoring systems," AI Magazine, vol. 34, no. 3, pp. 42–54, 2013.
[14] A. Mitrovic, "Fifteen years of constraint-based tutors: what
[17] J. Anania, The Effects of Quality of Instruction on the Cognitive and Affective Learning of Students. University of Chicago, 1981.
[18] ——, "The influence of instructional conditions on student learning and achievement," Evaluation in Education, vol. 7, no. 1, pp. 1–92, 1983.
[19] D. C. Merrill, B. J. Reiser, M. Ranney, and J. G. Trafton, "Effective tutoring techniques: A comparison of human tutors and intelligent tutoring systems," The Journal of the Learning Sciences, vol. 2, no. 3, pp. 277–305, 1992.
[20] K. VanLehn, "The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems," Educational Psychologist, vol. 46, no. 4, pp. 197–221, 2011.
[21] S. Steenbergen-Hu and H. Cooper, "A meta-analysis of the effectiveness of intelligent tutoring systems on college students' academic learning," Journal of Educational Psychology, vol. 106, no. 2, p. 331, 2014.
[22] W. Ma, O. O. Adesope, J. C. Nesbit, and Q. Liu, "Intelligent tutoring systems and learning outcomes: A meta-analysis," Journal of Educational Psychology, vol. 106, no. 4, p. 901, 2014.
[23] J. A. Kulik and J. Fletcher, "Effectiveness of intelligent tutoring systems: a meta-analytic review," Review of Educational Research, vol. 106, p. 0034654315581420, 2015.
[24] C. Conati, "Intelligent tutoring systems: New challenges and directions," in IJCAI, vol. 9, 2009, pp. 2–7.
[25] R. Mizoguchi, "Student modeling in ITS," Emerging Technologies in Education, vol. 8, pp. 35–48, 1995.
[26] J. Ong and S. Ramachandran, "Intelligent tutoring systems: Using AI to improve training performance and ROI," Stottler Henke Associates, Inc., online [Link] shai.com/papers/ITS using AI to improve training performance and ROI.pdf, 2003.
[27] E. Sierra, R. García-Martínez, Z. Cataldi, P. Britos, and A. Hossian, "Towards a methodology for the design of intelligent tutoring systems," Research in Computing Science Journal, vol. 20, pp. 181–189, 2006.
[28] F. S. Gharehchopogh and Z. A. Khalifelu, "Using intelligent tutoring systems in instruction and education," in 2nd International Conference on Education and Management Technology, vol. 13. IACSIT Press, Singapore, 2011, pp. 250–254.
[29] I. Hatzilygeroudis and J. Prentzas, "Knowledge representation requirements for intelligent tutoring systems," in Intelligent Tutoring Systems. Springer, 2004, pp. 87–97.
[30] M. C. Desmarais and R. S. Baker, "A review of recent advances in learner and skill modeling in intelligent learning environments," User Modeling and User-Adapted Interaction, vol. 22, no. 1-2, pp. 9–38, 2012.
[31] K. R. Butcher and V. Aleven, "Using student interactions
we have achieved and where we are going,” User Modeling to foster rule–diagram mapping during problem solving in
and User-Adapted Interaction, vol. 22, no. 1-2, pp. 39–72, an intelligent tutoring system.” Journal of Educational Psy-
2012. chology, vol. 105, no. 4, p. 988, 2013.
[15] P. Fournier-Viger, R. Nkambou, and E. M. Nguifo, “Build- [32] V. Kodaganallur, R. R. Weitz, and D. Rosenthal, “A com-
ing intelligent tutoring systems for ill-defined domains,” in parison of model-tracing and constraint-based intelligent tu-
Advances in intelligent tutoring systems. Springer, 2010, toring paradigms,” International Journal of Artificial Intelli-
pp. 81–101. gence in Education (IJAIED), vol. 15, pp. 117–144, 2005.
[16] B. P. Woolf, Building intelligent interactive tutors: Student- [33] V. Aleven, “Rule-based cognitive modeling for intelligent
centered strategies for revolutionizing e-learning. Morgan tutoring systems,” in Advances in intelligent tutoring sys-
Kaufmann, 2010. tems. Springer, 2010, pp. 33–62.

14
International Journal of Computer Applications (0975 - 8887)
Volume 181 - No.43, March 2019

[34] A. Mitrovic, K. R. Koedinger, and B. Martin, “A compara- [50] V. Aleven, B. Mclaren, I. Roll, and K. Koedinger, “Toward
tive analysis of cognitive tutoring and constraint-based mod- meta-cognitive tutoring: A model of help seeking with a cog-
eling,” in User Modeling 2003, ser. Lecture Notes in Com- nitive tutor,” Int. J. Artif. Intell. Ed., vol. 16, no. 2, pp. 101–
puter Science, P. Brusilovsky, A. Corbett, and F. d. Rosis, 128, 2006.
Eds. Springer Berlin Heidelberg, 2003, no. 2702, pp. 313– [51] C. Carmona and R. Conejo, “A learner model in a distributed
322, DOI: 10.1007/3-540-44963-9 42. environment,” in Adaptive Hypermedia and Adaptive Web-
[35] G.-M. Baschera and M. Gross, “Poisson-based inference for Based Systems. Springer, 2004, pp. 353–359.
perturbation models in adaptive spelling training,” Interna- [52] M. Trella, R. Conejo, D. Bueno, and E. Guzmán, “An au-
tional Journal of Artificial Intelligence in Education, vol. 20, tonomous component architecture to develop WWW-ITS,”
no. 4, pp. 333–360, 2010. in Proceedings of the Workshops on Adaptive Systems for
[36] P. A. Jaques, H. Seffrin, G. Rubi, F. de Morais, C. Ghilardi, Web-Based Education. Malaga, 2002, pp. 69–80.
I. I. Bittencourt, and S. Isotani, “Rule-based expert systems [53] C.-H. Lu, C.-W. Wu, S.-H. Wu, G.-F. Chiou, and W.-L. Hsu,
to support step-by-step guidance in algebraic problem solv- “Ontological support in modeling learners’ problem solv-
ing: The case of the tutor Pat2Math,” Expert Systems with ing process,” Journal of Educational Technology & Society,
Applications, vol. 40, no. 14, pp. 5456–5465, 2013. vol. 8, no. 4, pp. 64–74, 2005.
[37] A. T. Corbett and J. R. Anderson, “Knowledge tracing: Mod- [54] L. N. Michaud and K. F. McCoy, “Empirical derivation of
eling the acquisition of procedural knowledge,” User model- a sequence of user stereotypes for language learning,” User
ing and user-adapted interaction, vol. 4, no. 4, pp. 253–278, Modeling and User-Adapted Interaction, vol. 14, no. 4, pp.
1994. 317–350, 2004.
[38] H. Cen, K. Koedinger, and B. Junker, “Learning factors [55] A. N. Kumar, “Using enhanced concept map for student
analysis–a general method for cognitive model evaluation modeling in programming tutors.” in FLAIRS Conference,
and improvement,” in International Conference on Intelli- 2006, pp. 527–532.
gent Tutoring Systems. Springer, 2006, pp. 164–175. [56] T. Glushkova, “Adaptive model for user knowledge in the
[39] P. I. Pavlik Jr, H. Cen, and K. R. Koedinger, “Performance e-learning system,” in Proceedings of the 9th International
factors analysis–a new alternative to knowledge tracing.” Conference on Computer Systems and Technologies and
Online Submission, 2009. Workshop for PhD Students in Computing. ACM, 2008,
[40] S. B. Blessing, S. B. Gilbert, S. Ourada, and S. Ritter, “Au- p. 78.
thoring model-tracing cognitive tutors,” International Jour- [57] C. Limongelli, F. Sciarrone, M. Temperini, and G. Vaste,
nal of Artificial Intelligence in Education, vol. 19, no. 2, p. “Adaptive learning with the LS-plan system: a field evalua-
189, 2009. tion,” Learning Technologies, IEEE Transactions on, vol. 2,
[41] R. Nkambou, J. Bourdeau, and V. Psyché, “Building intelli- no. 3, pp. 203–215, 2009.
gent tutoring systems: An overview,” in Advances in Intelli- [58] E. Gaudioso, M. Montero, and F. Hernandez-Del-Olmo,
gent Tutoring Systems. Springer, 2010, pp. 361–375. “Supporting teachers in adaptive educational systems
[42] A. Mitrovic and S. Ohlsson, “Implementing CBM: SQL- through predictive models: A proof of concept,” Expert Sys-
Tutor After Fifteen Years,” International Journal of Artifi- tems with Applications, vol. 39, no. 1, pp. 621–625, 2012.
cial Intelligence in Education, pp. 1–10, 2015. [59] E. Rich, “Stereotypes and user modeling,” in User Models in
[43] S. Riccucci. (2008) Knowledge management in intelligent Dialog Systems, ser. Symbolic Computation, A. Kobsa and
tutoring systems - AMS tesi di dottorato - AlmaDL - univer- W. Wahlster, Eds. Springer Berlin Heidelberg, 1989, pp.
sit di bologna. 35–51, DOI: 10.1007/978-3-642-83230-7 2.
[44] G. Paviotti, P. G. Rossi, and D. Zarka, “Intelligent tutoring [60] J. Kay, “Stereotypes, student models and scrutability,” in In-
systems: an overview,” Pensa Multimedia, 2012. telligent Tutoring Systems, ser. Lecture Notes in Computer
Science, G. Gauthier, C. Frasson, and K. VanLehn, Eds.
[45] W. J. Clancey, “Use of Mycin’s rules for tutoring,” Rule- Springer Berlin Heidelberg, 2000, no. 1839, pp. 19–30, DOI:
Based Expert Systems. Addison-Wesley, Reading, pp. 19–2, 10.1007/3-540-45108-0 5.
1984.
[61] A. Grubisic, S. Stankov, and B. Zitko, “Stereotype student
[46] B. D. Nye, A. C. Graesser, and X. Hu, “AutoTutor and fam- model for an adaptive e-learning system,” in Proceedings
ily: A review of 17 years of natural language tutoring,” In- of World Academy of Science, Engineering and Technology,
ternational Journal of Artificial Intelligence in Education, vol. 76. World Academy of Science, Engineering and Tech-
vol. 24, no. 4, pp. 427–469, 2014. nology (WASET), 2013, p. 20.
[47] S. Moritz and G. Blank, “Generating and evaluating object- [62] M. Grigoriadou, H. Kornilakis, K. A. Papanikolaou, and
oriented designs for instructors and novice students,” Intelli- G. D. Magoulas, “Fuzzy inference for student diagnosis in
gent Tutoring Systems for Ill-Defined Domains: Assessment adaptive educational hypermedia,” in Methods and Applica-
and Feedback in Ill-Defined Domains., p. 35, 2008. tions of Artificial Intelligence, ser. Lecture Notes in Com-
[48] K. Chrysafiadi and M. Virvou, “Student modeling ap- puter Science, I. P. Vlahavas and C. D. Spyropoulos, Eds.
proaches: A literature review for the last decade,” Expert Springer Berlin Heidelberg, 2002, no. 2308, pp. 191–202,
Systems with Applications, vol. 40, no. 11, pp. 4715–4729, DOI: 10.1007/3-540-46014-4 18.
2013. [63] V. Tsiriga and M. Virvou, “Evaluation of an intelligent web-
[49] E. Millán, T. Loboda, and J. L. Pérez-de-la Cruz, “Bayesian based language tutor,” in Knowledge-Based Intelligent In-
networks for student model engineering,” Computers & Ed- formation and Engineering Systems, ser. Lecture Notes in
ucation, vol. 55, no. 4, pp. 1663–1683, 2010. Computer Science, V. Palade, R. J. Howlett, and L. Jain, Eds.

15
International Journal of Computer Applications (0975 - 8887)
Volume 181 - No.43, March 2019

Springer Berlin Heidelberg, 2003, no. 2774, pp. 275–281, IEEE International Conference on Advanced Learning
DOI: 10.1007/978-3-540-45226-3 38. Technologies-ICALT. Citeseer, 2002, pp. 144–149.
[64] K. Tourtoglou and M. Virvou, “User stereotypes concern- [79] K. Kabassi and M. Virvou, “Personalised adult e-training on
ing cognitive, personality and performance issues in a col- computer use based on multiple attribute decision making,”
laborative learning environment for UML,” in New Direc- Interacting with Computers, vol. 16, no. 1, pp. 115–132,
tions in Intelligent Interactive Multimedia, ser. Studies in 2004.
Computational Intelligence, G. A. Tsihrintzis, M. Virvou, [80] E. Alepis, M. Virvou, and K. Kabassi, “Mobile education:
R. J. Howlett, and L. C. Jain, Eds. Springer Berlin Hei- Towards affective bi-modal interaction for adaptivity,” in
delberg, 2008, no. 142, pp. 385–394, DOI: 10.1007/978-3- Third International Conference on Digital Information Man-
540-68127-4 40. agement, 2008. ICDIM 2008, 2008, pp. 51–56.
[65] S. Durrani and D. Durrani, “Intelligent tutoring systems and [81] N. Jongsawat, A. Tungkasthan, and W. Premchaiswadi, “Dy-
cognitive abilities,” in Proceedings of graduate colloquium namic data feed to Bayesian network model and SMILE web
on computer sciences (GCCS), 2010. application,” Bayesian Network edited by Dr. Ahmed Rabai,
[66] A. C. Martins, L. Faria, C. V. De Carvalho, and E. Car- pp. 155–166, 2010.
rapatoso, “User modeling in adaptive hypermedia educa- [82] M. J. SMILE, “Structural modeling, inference, and learning
tional systems,” Journal of Educational Technology & So- engine and genie: A development environment for graph-
ciety, vol. 11, no. 1, pp. 194–207, 2008. ical decision-theoretic models,” in Proceedings of the Six-
[67] L. Nguyen and P. Do, “Learner model in adaptive learning,” teenth National Conference on Artificial Intelligence (AAAI-
World Academy of Science, Engineering and Technology, 99), July, 1999, pp. 18–22.
vol. 45, pp. 395–400, 2008. [83] A. S. Gertner, C. Conati, and K. VanLehn, “Procedural help
[68] R. A. Faraco, M. C. Rosatelli, and F. A. Gauthier, “An in Andes: Generating hints using a Bayesian network student
approach of student modelling in a learning companion model,” in Proceedings of the Fifteenth National/Tenth Con-
system,” in Advances in Artificial Intelligence–IBERAMIA ference on Artificial Intelligence/Innovative Applications of
2004. Springer, 2004, pp. 891–900. Artificial Intelligence, ser. AAAI ’98/IAAI ’98. American
[69] H. D. Surjono and J. R. Maltby, “Adaptive educational hy- Association for Artificial Intelligence, 1998, pp. 106–111.
permedia based on multiple student characteristics,” in Ad- [84] E. Millán and J. L. Pérez-De-La-Cruz, “A Bayesian diagnos-
vances in Web-Based Learning-ICWL 2003. Springer, tic algorithm for student modeling and its evaluation,” User
2003, pp. 442–449. Modeling and User-Adapted Interaction, vol. 12, no. 2-3, pp.
281–330, 2002.
[70] A. Mitrovic, M. Mayo, P. Suraweera, and B. Martin,
“Constraint-based tutors: A success story,” in Engineering of [85] A. Bunt and C. Conati, “Probabilistic student modelling to
Intelligent Systems, ser. Lecture Notes in Computer Science, improve exploratory behaviour,” User Modeling and User-
L. Monostori, J. Vncza, and M. Ali, Eds. Springer Berlin Adapted Interaction, vol. 13, no. 3, pp. 269–309, 2003.
Heidelberg, 2001, no. 2070, pp. 931–940, DOI: 10.1007/3- [86] J.-D. Zapata-Rivera, “Indirectly Visible Bayesian Student
540-45517-5 103. Models.” in BMA, 2007.
[71] A. Mitrovic, “An intelligent SQL tutor on the web,” In- [87] S. Schiaffino, P. Garcia, and A. Amandi, “Eteacher: Provid-
ternational Journal of Artificial Intelligence in Education ing personalized assistance to e-learning students,” Comput-
(IJAIED), vol. 13, pp. 173–197, 2003. ers & Education, vol. 51, no. 4, pp. 1744–1754, 2008.
[72] R. Zatarain-Cabada, M. L. Barrón-Estrada, G. Alor- [88] C. Conati and H. Maclaren, “Empirically building and eval-
Hernández, and C. A. Reyes-Garcı́a, “Emotion recognition uating a probabilistic model of user affect,” User Modeling
in intelligent tutoring systems for android-based mobile de- and User-Adapted Interaction, vol. 19, no. 3, pp. 267–303,
vices,” in Human-Inspired Computing and Its Applications. 2009.
Springer, 2014, pp. 494–504. [89] K. Muoz, P. M. Kevitt, T. Lunney, J. Noguez, and L. Neri,
[73] J. H. A. Mitrovic, “J-LATTE: A constraint-based tutor for “PlayPhysics: An emotional games learning environment
java,” in 17th Intl. on Conf. on Computers in Education, for teaching physics,” in Knowledge Science, Engineering
2009, pp. 142–146. and Management, ser. Lecture Notes in Computer Science,
[74] N.-T. Le and W. Menzel, “UsingWeighted constraints to di- Y. Bi and M.-A. Williams, Eds. Springer Berlin Heidel-
agnose errors in logic programming - the case of an ill- berg, 2010, no. 6291, pp. 400–411, DOI: 10.1007/978-3-
defined domain,” Int. J. Artif. Intell. Ed., vol. 19, no. 4, pp. 642-15280-1 37.
381–400, 2009. [90] V. M. Chieu, V. Luengo, L. Vadcard, and J. Tonetti, “Student
[75] A. Collins, M. Burstein, and M. Baker, “Human plausible modeling in orthopedic surgery training: Exploiting symbio-
reasoning,” DTIC Document, Tech. Rep., 1988. sis between temporal Bayesian networks and fine-grained
didactic analysis,” International Journal of Artificial Intel-
[76] T. Gwo-Hshiung, “Multiple attribute decision making: meth- ligence in Education, vol. 20, no. 3, pp. 269–301, 2010.
ods and applications,” Multiple Attribute Decision Making:
[91] J. Sabourin, B. Mott, and J. C. Lester, “Modeling Learner
Methods and Applications, 2010.
Affect with Theoretically Grounded Dynamic Bayesian Net-
[77] M. Virvou and B. Du Boulay, “Human plausible reasoning works,” in Affective Computing and Intelligent Interac-
for intelligent help,” User Modeling and User-Adapted In- tion, ser. Lecture Notes in Computer Science, S. D’Mello,
teraction, vol. 9, no. 4, pp. 321–375, 1999. A. Graesser, B. Schuller, and J.-C. Martin, Eds. Springer
[78] M. Virvou and K. Kabassi, “F-smile: An intelligent multi- Berlin Heidelberg, 2011, no. 6974, pp. 286–295, DOI:
agent learning environment,” in Proceedings of 2002 10.1007/978-3-642-24600-5 32.

16
International Journal of Computer Applications (0975 - 8887)
Volume 181 - No.43, March 2019

[92] M. Danaparamita and F. L. Gaol, “Comparing Student [105] A. C. Graesser, K. Wiemer-Hastings, P. Wiemer-Hastings,
Model Accuracy with Bayesian Network and Fuzzy Logic in R. Kreuz, T. R. Group et al., “AutoTutor: A simulation of a
Predicting Student Knowledge Level,” International Journal human tutor,” Cognitive Systems Research, vol. 1, no. 1, pp.
of Multimedia and Ubiquitous Engineering, vol. 9, no. 4, pp. 35–51, 1999.
109–120, 2014. [106] C. P. Rosé, P. Jordan, M. Ringenberg, S. Siler, K. VanLehn,
[93] A. S. Drigas, K. Argyri, and J. Vrettaros, “Decade re- and A. Weinstein, “Interactive conceptual tutoring in atlas-
view (1999-2009): Artificial intelligence techniques in stu- andes,” in Proceedings of AI in Education 2001 Conference,
dent modeling,” in Best Practices for the Knowledge Soci- 2001, pp. 151–153.
ety. Knowledge, Learning, Development and Technology for
All, ser. Communications in Computer and Information Sci- [107] R. Freedman, “Plan-based dialogue management in a
ence, M. D. Lytras, P. O. d. Pablos, E. Damiani, D. Avison, physics tutor,” in Proceedings of the sixth conference on Ap-
A. Naeve, and D. G. Horner, Eds. Springer Berlin Heidel- plied natural language processing. Association for Com-
berg, 2009, no. 49, pp. 552–564, DOI: 10.1007/978-3-642- putational Linguistics, 2000, pp. 52–59.
04757-2 59. [108] C. P. Rosé, “A framework for robust semantic interpreta-
[94] K. Chrysafiadi and M. Virvou, “Evaluating the integration tion,” in Proceedings of the 1st North American chapter of
of fuzzy logic into the student model of a web-based learn- the Association for Computational Linguistics conference.
ing environment,” Expert Systems with Applications, vol. 39, Association for Computational Linguistics, 2000, pp. 311–
no. 18, pp. 13 127–13 134, 2012. 318.
[95] P. Suraweera and A. Mitrovic, “KERMIT: A constraint- [109] V. Rus, N. Niraula, and R. Banjade, “DeepTutor: An effec-
based tutor for database modeling,” in Intelligent Tutoring tive, online intelligent tutoring system that promotes deep
Systems, ser. Lecture Notes in Computer Science. Springer learning,” in Twenty-Ninth AAAI Conference on Artificial In-
Berlin Heidelberg, 2002, no. 2363, pp. 377–387, DOI: telligence, 2015.
10.1007/3-540-47987-2 41. [110] T. C. Koopmans and M. Beckmann, “Assignment prob-
[96] J. Bourdeau and M. Grandbastien, “Modeling tutoring lems and the location of economic activities,” Econometrica:
knowledge,” in Advances in Intelligent Tutoring Systems, journal of the Econometric Society, pp. 53–76, 1957.
ser. Studies in Computational Intelligence, R. Nkambou,
J. Bourdeau, and R. Mizoguchi, Eds. Springer Berlin Hei- [111] M. W. Evens, R.-C. Chang, Y. H. Lee, L. S. Shim, C. W.
delberg, 2010, no. 308, pp. 123–143, DOI: 10.1007/978-3- Woo, Y. Zhang, J. A. Michael, and A. A. Rovick, “Circsim-
642-14363-2 7. tutor: An intelligent tutoring system using natural language
dialogue,” in Proceedings of the fifth conference on Applied
[97] Z. Jeremić, J. Jovanović, and D. Gašević, “Student modeling
natural language processing: Descriptions of system demon-
and assessment in intelligent tutoring of software patterns,”
strations and videos. Association for Computational Lin-
Expert Systems with Applications, vol. 39, no. 1, pp. 210–
guistics, 1997, pp. 13–14.
222, 2012.
[98] L. Lesta and K. Yacef, “An intelligent teaching assistant sys- [112] M. Al Emran and K. Shaalan, “A survey of intelligent lan-
tem for logic,” in Intelligent Tutoring Systems, ser. Lecture guage tutoring systems,” in Advances in Computing, Com-
Notes in Computer Science. Springer Berlin Heidelberg, munications and Informatics (ICACCI, 2014 International
2002, no. 2363, pp. 421–431, DOI: 10.1007/3-540-47987- Conference on. IEEE, 2014, pp. 393–399.
2 45. [113] S. D’Mello and A. Graesser, “Design of dialog-based intel-
[99] N. T. Heffernan, K. R. Koedinger, and L. Razzaq, “Expand- ligent tutoring systems to simulate human-to-human tutor-
ing the model-tracing architecture: A 3rd generation intelli- ing,” in Where Humans Meet Machines. Springer, 2013,
gent tutor for algebra symbolization,” International Journal pp. 233–269.
of Artificial Intelligence in Education, vol. 18, no. 2, p. 153, [114] D. J. Litman, C. P. Rosé, K. Forbes-Riley, K. VanLehn,
2008. D. Bhembe, and S. Silliman, “Spoken versus typed human
[100] A. Mitrovic and B. Martin, “Evaluating the effectiveness and computer dialogue tutoring.” IJ Artificial Intelligence in
of feedback in SQL-tutor,” in International Workshop on Education, vol. 16, no. 2, pp. 145–170, 2006.
Advanced Learning Technologies, 2000. IWALT 2000. Pro-
[115] H. Pon-Barry, B. Clark, K. Schultz, E. O. Bratt, and
ceedings, 2000, pp. 143–144.
S. Peters, “Advantages of spoken language interaction in
[101] C. Kenny and C. Pahl, “Personalised correction, feedback dialogue-based intelligent tutoring systems,” in Intelligent
and guidance in an automated tutoring system for skills Tutoring Systems. Springer, 2004, pp. 390–400.
training,” International Journal of Knowledge and Learning,
vol. 4, no. 1, pp. 75–92, 2008. [116] A. G. Hauptmann and A. I. Rudnicky, “Talking to com-
[102] M. Mendicino, L. Razzaq, and N. T. Heffernan, “A compari- puters: an empirical investigation,” International Journal of
son of traditional homework to computer-supported home- Man-Machine Studies, vol. 28, no. 6, pp. 583–604, 1988.
work,” Journal of Research on Technology in Education, [117] D. J. Litman and S. Silliman, “Itspoke: An intelligent tu-
vol. 41, no. 3, pp. 331–359, 2009. toring spoken dialogue system,” in Demonstration papers at
[103] K. Brawner and A. Graesser, “Natural language, discourse, HLT-NAACL 2004. Association for Computational Linguis-
and conversational dialogues within intelligent tutoring sys- tics, 2004, pp. 5–8.
tems: A review,” Design Recommendations for Intelligent [118] X. Huang, F. Alleva, H.-W. Hon, M.-Y. Hwang, K.-F. Lee,
Tutoring Systems, p. 189, 2014. and R. Rosenfeld, “The sphinx-ii speech recognition system:
[104] D. J. Hacker, J. Dunlosky, and A. C. Graesser, Handbook of an overview,” Computer Speech & Language, vol. 7, no. 2,
metacognition in education. Routledge, 2009. pp. 137–148, 1993.

17
International Journal of Computer Applications (0975 - 8887)
Volume 181 - No.43, March 2019

[119] M. Ben Ammar, M. Neji, A. M. Alimi, and G. Gouard, “The [132] B. Kyun, Young, Gaming for Classroom-Based Learning:
Affective Tutoring System,” Expert Systems with Applica- Digital Role Playing as a Motivator of Study: Digital Role
tions, vol. 37, no. 4, pp. 3013–3023, 2010. Playing as a Motivator of Study. IGI Global, 2010.
[120] M. Spering, D. Wagener, and J. Funke, “The role of emo- [133] A. B. Raut, S. D. A. Uroojussama, U. Farheen, and A. An-
tions in complex problem-solving,” Cognition and Emotion, wari, “Game based intelligent tutoring system,” Interna-
vol. 19, pp. 1252–1261, 2005. tional Journal of Engineering Research and General Sci-
ence, vol. 3, no. 2, 2015.
[121] C. Frasson and P. Chalfoun, “Managing learner’s affective
states in intelligent tutoring systems,” in Advances in Intelli- [134] M. W. Easterday, V. Aleven, R. Scheines, and S. M. Carver,
gent Tutoring Systems. Springer, 2010, pp. 339–358. “Using tutors to improve educational games,” in Artificial
Intelligence in Education, ser. Lecture Notes in Computer
[122] R. S. Baker, S. K. D’Mello, M. M. T. Rodrigo, and Science, G. Biswas, S. Bull, J. Kay, and A. Mitrovic, Eds.
A. C. Graesser, “Better to be frustrated than bored: The Springer Berlin Heidelberg, 2011, no. 6738, pp. 63–71, DOI:
incidence, persistence, and impact of learners? cognitive– 10.1007/978-3-642-21869-9 11.
affective states during interactions with three different
computer-based learning environments,” International Jour- [135] G. T. Jackson and D. McNamara, “Motivational impacts of
nal of Human-Computer Studies, vol. 68, no. 4, pp. 223–241, a game-based intelligent tutoring system,” in Twenty-Fourth
2010. International FLAIRS Conference, 2011.
[136] R. Moreno and R. E. Mayer, “Role of guidance, reflection,
[123] S. D’Mello and A. Graesser, “Automatic detection of
and interactivity in an agent-based multimedia game.” Jour-
learner’s affect from gross body language,” Applied Artifi-
nal of Educational Psychology, vol. 97, no. 1, p. 117, 2005.
cial Intelligence, vol. 23, no. 2, pp. 123–150, 2009.
[137] D. Rai and J. E. Beck, “Math learning environment with
[124] I. Jraidi, P. Chalfoun, and C. Frasson, “Implicit strategies game-like elements: An incremental approach for enhanc-
for intelligent tutoring systems,” in Intelligent Tutoring Sys- ing student engagement and learning effectiveness,” in In-
tems, ser. Lecture Notes in Computer Science, S. A. Cerri, telligent Tutoring Systems, ser. Lecture Notes in Computer
W. J. Clancey, G. Papadourakis, and K. Panourgia, Eds. Science, S. A. Cerri, W. J. Clancey, G. Papadourakis, and
Springer Berlin Heidelberg, 2012, no. 7315, pp. 1–10, DOI: K. Panourgia, Eds. Springer Berlin Heidelberg, 2012, no.
10.1007/978-3-642-30950-2 1. 7315, pp. 90–100, DOI: 10.1007/978-3-642-30950-2 13.
[125] B. P. Woolf, I. Arroyo, D. Cooper, W. Burleson, and [138] P. E, Kristine, Exploring Technology for Writing and Writing
K. Muldner, “Affective tutors: Automatic detection of and Instruction. IGI Global, 2013.
response to student emotion,” in Advances in Intelligent
[139] J. C. Lester, E. Y. Ha, S. Y. Lee, B. W. Mott, J. P. Rowe, and
Tutoring Systems, ser. Studies in Computational Intelli-
J. L. Sabourin, “Serious games get smart: Intelligent game-
gence, R. Nkambou, J. Bourdeau, and R. Mizoguchi, Eds.
based learning environments,” AI Magazine, vol. 34, no. 4,
Springer Berlin Heidelberg, 2010, no. 308, pp. 207–227,
pp. 31–45, 2013.
DOI: 10.1007/978-3-642-14363-2 10.
[140] R. Peredo, A. Canales, A. Menchaca, and I. Peredo, “Intelli-
[126] I. Arroyo, D. G. Cooper, W. Burleson, B. P. Woolf, K. Muld- gent web-based education system for adaptive learning,” Ex-
ner, and R. Christopherson, “Emotion sensors go to school,” pert Systems with Applications, vol. 38, no. 12, pp. 14 690–
in Proceedings of the 2009 Conference on Artificial In- 14 702, 2011.
telligence in Education: Building Learning Systems That
Care: From Knowledge Representation to Affective Mod- [141] P. Brusilovsky, “Adaptive hypermedia: From intelligent tu-
elling. IOS Press, 2009, pp. 17–24. toring systems to web-based education,” in Intelligent Tu-
toring Systems, ser. Lecture Notes in Computer Science,
[127] H. Prendinger and M. Ishizuka, “The empathic companion: G. Gauthier, C. Frasson, and K. VanLehn, Eds. Springer
A character-based interface that addresses users’affective Berlin Heidelberg, 2000, no. 1839, pp. 1–7, DOI: 10.1007/3-
states,” Applied Artificial Intelligence, vol. 19, no. 3-4, pp. 540-45108-0 1.
267–285, 2005.
[142] P. De Bra, “Adaptive educational hypermedia on the web,”
[128] A. Ogan and W. L. Johnson, “Preface for the special issue on Communications of the ACM, vol. 45, no. 5, pp. 60–61,
culturally aware educational technologies,” Int J Artif Intell 2002.
Educ, vol. 25, pp. 173–176, 2015. [143] J. Eklund and P. Brusilovsky, “The value of adaptivity in hy-
[129] B. D. Nye, “Intelligent tutoring systems by and for the de- permedia learning environments: A short review of empiri-
veloping world: a review of trends and approaches for edu- cal evidence,” in Proceedings of Second Adaptive Hypertext
cational technology in a global context,” International Jour- and Hypermedia Workshop at the Ninth ACM International
nal of Artificial Intelligence in Education, vol. 25, no. 2, pp. Hypertext Conference Hypertext, vol. 98, 1998, pp. 11–17.
177–203, 2015. [144] P. D. Bra and L. Calvi, “Aha! an open adaptive hypermedia
[130] A. Ogan, E. Walker, R. Baker, M. M. T. Rodrigo, J. C. Sori- architecture,” New Review of Hypermedia and Multimedia,
ano, and M. J. Castro, “Towards understanding how to assess vol. 4, no. 1, pp. 115–139, 1998.
help-seeking behavior across cultures,” International Jour- [145] J. Eklund and P. Brusilovsky, “Interbook: an adaptive tutor-
nal of Artificial Intelligence in Education, vol. 25, no. 2, pp. ing system,” UniServe Science News, vol. 12, no. 3, pp. 8–
229–248, 2015. 13, 1999.
[131] P. Mohammed and P. Mohan, “Dynamic cultural contextu- [146] P. Brusilovsky, “Adaptive hypermedia for education and
alisation of educational content in intelligent learning envi- training,” in Adaptive Technologies for Training and Educa-
ronments using icon,” International Journal of Artificial In- tion, P. Durlach and A. Lesgold, Eds. Cambridge University
telligence in Education, vol. 25, no. 2, pp. 249–270, 2015. Press, 2012, pp. 46–68.

18
International Journal of Computer Applications (0975 - 8887)
Volume 181 - No.43, March 2019

[147] N. M. Dowell, W. L. Cade, Y. Tausczik, J. Pennebaker, and [161] R. Agrawal, H. Mannila, R. Srikant, H. Toivonen, A. I.
A. C. Graesser, “What works: Creating adaptive and intelli- Verkamo et al., “Fast discovery of association rules.” Ad-
gent systems for collaborative learning support,” in Intelli- vances in knowledge discovery and data mining, vol. 12,
gent Tutoring Systems. Springer, 2014, pp. 124–133. no. 1, pp. 307–328, 1996.
[148] Y. Lou, P. C. Abrami, and S. Apollonia, “Small group and [162] F. Kabanza, R. Nkambou, and K. Belghith, “Path-planning
individual learning with technology: A meta-analysis,” Re- for autonomous training on robot manipulators in space.” in
view of educational research, vol. 71, no. 3, pp. 449–521, IJCAI, 2005, pp. 1729–1731.
2001. [163] P. Fournier-Viger, R. Nkambou, and E. M. Nguifo, “A
[149] A. Soller, “Supporting social interaction in an intelligent col- knowledge discovery framework for learning task models
laborative learning system,” International Journal of Artifi- from user interactions in intelligent tutoring systems,” in MI-
cial Intelligence in Education (IJAIED), vol. 12, pp. 40–62, CAI 2008: Advances in Artificial Intelligence. Springer,
2001. 2008, pp. 765–778.
[150] Y. Hayashi, “Togetherness: Multiple pedagogical conversa- [164] R. S. d Baker, “Mining data for student models,” in Advances
Common questions


Cognitive models in ITS, such as those based on the ACT-R theory, assess and enhance student problem-solving skills by distinguishing between explicit (declarative) and implicit (procedural) knowledge. They use a method called model tracing to evaluate student solutions step by step against the cognitive model's predictions: if a student's action matches an action the model can produce, it is judged correct; otherwise an error is hypothesized. Over time, the system uses knowledge tracing to continuously assess and update the probability that a student has mastered a particular skill by attributing actions to specific production rules, allowing targeted feedback and support. This iterative process identifies gaps in student knowledge and provides interventions that promote mastery of problem-solving strategies.
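The knowledge-tracing update described above is usually formulated as Bayesian knowledge tracing. The sketch below shows one update step; the parameter values (slip, guess, and transition probabilities) are illustrative defaults, not fitted values from any particular tutor.

```python
def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """One Bayesian knowledge tracing step: revise P(skill known) from a
    single observed student action, then apply the learning transition."""
    if correct:
        # Correct answer: either the student knows the skill and did not
        # slip, or does not know it and guessed.
        evidence = p_known * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        # Wrong answer: either a slip despite knowing, or a genuine miss.
        evidence = p_known * p_slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    # Chance the skill was learned during this practice opportunity.
    return posterior + (1 - posterior) * p_transit
```

Applying the update across a sequence of observed actions drives the mastery estimate up after correct responses and down after errors, which is what lets the tutor target feedback at specific production rules.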

Affective computing in ITS, designed to recognize and respond to students' emotional states, significantly impacts learning and engagement by providing more responsive and individualized tutoring. By detecting emotions such as frustration, boredom, or engagement through affective sensors, an ITS can adjust its teaching strategies to improve motivation and maintain student interest. This adaptability allows the ITS to offer timely feedback and encouragement, mitigating negative emotions such as frustration, which can hinder learning, and reinforcing positive states that encourage deeper interaction with the material. Affective computing therefore supports a more holistic and empathetic learning environment, addressing not just the cognitive but also the emotional needs of students.
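At its simplest, the adaptation step reduces to a policy that maps a detected affective state to a tutoring response. The mapping below is purely hypothetical, meant only to make the mechanism concrete; real systems fuse many sensor features and use learned, not hand-written, policies.

```python
# Illustrative (hypothetical) mapping from a detected affective state
# to a tutoring response.
AFFECT_POLICY = {
    "frustration": "offer a hint and an encouraging message",
    "boredom": "increase task difficulty or switch activity",
    "confusion": "re-explain the current step with an example",
    "engagement": "continue with the current strategy",
}

def respond_to_affect(state: str) -> str:
    # Fall back to neutral tutoring when the state is unrecognized.
    return AFFECT_POLICY.get(state, "continue with the current strategy")
```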

Knowledge tracing plays a vital role in personalized learning within ITS by continuously assessing and updating a student's mastery of particular skills. By monitoring the probability that a student knows each skill, the ITS can tailor its interventions, such as hints, feedback, and new tasks, to match the student's current level of understanding. This ongoing assessment allows a dynamic learning path that responds to the student's progress, ensuring that teaching is both challenging and supportive at the appropriate level. Such personalization optimizes learning by focusing on areas that require improvement while reinforcing skills already mastered, thus enhancing overall educational outcomes.
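One way to picture how the mastery estimate drives intervention choice is a simple banded policy: the higher the estimated probability that the skill is known, the less support the next task provides. The thresholds and action names below are illustrative, not taken from any deployed tutor.

```python
def choose_intervention(p_known: float) -> str:
    """Pick a tutoring action from the current estimate that the
    student knows the skill (band boundaries are illustrative)."""
    if p_known < 0.3:
        return "worked example"      # re-teach: low mastery estimate
    if p_known < 0.7:
        return "scaffolded problem"  # practice with hints available
    if p_known < 0.95:
        return "independent problem" # consolidate without support
    return "advance to next skill"   # mastery reached
```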

When developing an ITS for culturally diverse global learning environments, several factors must be considered, including language and communication styles, valued educational practices, varying pedagogical preferences, and cultural norms regarding learning and authority. An ITS must be adaptable to different languages and dialects to ensure accessibility and understanding across cultures. The system should also accommodate teaching strategies that align with cultural learning preferences, such as group-based or individual learning scenarios. Furthermore, integrating culturally relevant content and examples makes the learning experience more relatable and effective. These considerations are crucial for creating educational technologies that genuinely support a global audience, embracing diversity in learning needs and styles.

Cognitive Mastery Learning in ITS distinguishes itself from traditional instruction by requiring mastery of individual skills rather than progressing on a time- or topic-based schedule. It continuously assesses students' capabilities and adjusts the level of challenge so that each skill is mastered before more complex topics are introduced. Unlike traditional methods, which may move forward without ensuring complete understanding, Cognitive Mastery Learning takes a personalized, data-driven approach that solidifies all foundational skills, leading to a more thorough and effective learning process.
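The "master prerequisites before advancing" rule can be sketched as a gate over an ordered curriculum. The 0.95 threshold is a common choice in the mastery-learning literature but is used here only for illustration, as are the topic and skill names.

```python
# Sketch of cognitive mastery learning: a topic is worked on until every
# one of its prerequisite skills passes the mastery threshold.
MASTERY_THRESHOLD = 0.95  # illustrative; tutors tune this value

def next_topic_to_work_on(curriculum, mastery):
    """curriculum: ordered list of (topic, [prerequisite skills]);
    mastery: dict mapping skill -> estimated P(known).
    Returns the first topic whose prerequisites are not yet mastered,
    or None when everything listed is mastered."""
    for topic, prereqs in curriculum:
        if any(mastery.get(s, 0.0) < MASTERY_THRESHOLD for s in prereqs):
            return topic
    return None

curriculum = [
    ("linear equations", ["solve ax=b"]),
    ("quadratics", ["solve ax=b", "factoring"]),
]
```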

Despite their demonstrated effectiveness in improving student learning across various domains, Cognitive Tutors face limited adoption in educational and corporate environments for several reasons. One significant barrier is the complexity and resources required to develop comprehensive cognitive models tailored to specific subjects and contexts. The expertise needed to design, implement, and maintain these systems is substantial, often requiring interdisciplinary collaboration, which can be a logistical challenge. In addition, there is often resistance to change in established educational practice and uncertainty about integrating these advanced technologies into traditional curricula. These barriers underscore the need for more accessible authoring tools and training programs to facilitate broader adoption of Cognitive Tutors.

The overlay model contributes to building effective student models by assuming that student knowledge is a subset of domain knowledge, which allows clear identification of knowledge gaps. By comparing a student's behavior and understanding with the domain model, the system can pinpoint precisely where the student's understanding diverges from expected mastery. Tutors can then focus on bridging these gaps by addressing specific misunderstandings or omissions, offering targeted support and resources to promote comprehensive learning. The simplicity and effectiveness of this method have made it widely applicable and influential in student modeling.
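Because the overlay model treats student knowledge as a subset of the domain's concepts, the knowledge gap is literally a set difference. A minimal sketch, with an invented toy domain:

```python
# Minimal overlay model: the student's knowledge is represented as a
# subset of the domain's concept set; the gap is the set difference.
domain_model = {"variables", "expressions", "equations", "factoring"}

def knowledge_gaps(student_known: set) -> set:
    """Concepts in the domain model the student has not yet mastered."""
    return domain_model - student_known
```

The tutor then targets instruction at the returned concepts, which is why the model is so simple to operationalize.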

A traditional ITS consists of four essential components: the domain model, the student model, the tutor model, and the user interface model. The domain model stores the expert knowledge to be taught, such as the facts, concepts, and rules of the particular domain. The student model records the current state of an individual student, allowing the system to choose suitable problems based on the student's history and background. The tutor model houses the pedagogical knowledge, determining when and how to intervene with teaching strategies such as dialogues, hints, or feedback. Finally, the user interface model presents information to the student and mediates interaction. Together these components monitor student progress, adapt to individual needs, and provide tailored educational experiences that enhance learning outcomes.
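The division of labor among the four components can be sketched as four minimal classes and one interaction cycle. All class names, the toy problem, and the feedback strings are illustrative, not drawn from any particular system.

```python
# Schematic of the four classic ITS components and one interaction cycle.

class DomainModel:
    """Expert knowledge: here, a toy mapping from problem to solution."""
    def __init__(self):
        self.facts = {"2x=6": "x=3"}
    def check(self, problem, answer):
        return self.facts.get(problem) == answer

class StudentModel:
    """Records the student's interaction history for later adaptation."""
    def __init__(self):
        self.history = []  # list of (problem, correct) pairs
    def update(self, problem, correct):
        self.history.append((problem, correct))

class TutorModel:
    """Pedagogical knowledge: decides what feedback to give."""
    def feedback(self, correct):
        return "Well done!" if correct else "Not quite; try isolating x."

class UserInterface:
    """Presents information to the student; a real UI would render it."""
    def show(self, message):
        return message

def tutoring_cycle(problem, answer, domain, student, tutor, ui):
    correct = domain.check(problem, answer)   # domain model judges the step
    student.update(problem, correct)          # student model is updated
    return ui.show(tutor.feedback(correct))   # tutor model picks feedback
```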

Intelligent Tutoring Systems can integrate Learning Factor Analysis (LFA) to refine and adapt learning strategies by identifying and evaluating the factors that affect learning success. LFA allows an ITS to analyze student interaction data to understand which factors, such as specific teaching strategies, task difficulty levels, and individual learning preferences, contribute most to achievement. The ITS can then dynamically adjust instructional methods, the sequence of learning activities, and feedback mechanisms to align with each student's needs and maximize learning effectiveness. This integration yields a more personalized experience in which instruction is informed by empirical evidence of what works best for individual learners across contexts.
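LFA is commonly built on a logistic model of success, the additive factor model: the log-odds of a correct response combine student ability, the easiness of each required skill, and a per-skill learning rate multiplied by prior practice. A sketch follows; all parameter values are illustrative, not fitted from data.

```python
import math

def afm_success_probability(theta, skills, beta, gamma, opportunities):
    """Additive factor model used in learning factor analysis:
    P(correct) = sigmoid(student ability
                         + sum over needed skills of
                           (skill easiness + learning rate * prior practice))."""
    logit = theta + sum(beta[k] + gamma[k] * opportunities[k] for k in skills)
    return 1.0 / (1.0 + math.exp(-logit))

# Illustrative parameters for a single hypothetical skill.
beta = {"slope": -1.0}   # easiness (negative = initially hard)
gamma = {"slope": 0.3}   # learning rate per practice opportunity
```

Fitting the beta and gamma parameters to interaction logs is what lets LFA rank which factors actually drive learning, and hence which tasks or strategies the tutor should favor.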

Designing a student model involves identifying the knowledge a student already possesses, understanding their ability to apply that knowledge in problem solving, and recognizing the strategies they employ. Challenges include accurately assessing both static and dynamic characteristics of students, such as knowledge level, learning preferences, and affective states. These challenges can be addressed by combining modeling approaches, such as the overlay model, which treats student knowledge as a subset of domain knowledge and aims to bridge identified gaps. Incorporating dynamic assessment methods that adapt to a student's interaction with the system further personalizes learning and improves the accuracy of the student model.
