HCI-BIT3107
Baguma Asuman
MScTE(CSC) (IUT, Bangladesh), BSc.IT (IUIU, Kampala Campus)
Email: [email protected] Phone: 0701988620
About the Course Unit
Contact Hours / Assessment
o Contact Hours = 45 Hours
o Coursework / Continuous Assessment = 30%
o Final Examination = 70%
Primary Texts/ Resources
1) Text-1: Alan Dix, Janet Finlay, Gregory Abowd, Russell Beale (2004) Human–Computer Interaction, Third Edition (Chapter 9)
2) Text-2: Helen Sharp, Yvonne Rogers, Jennifer Preece (2019) Interaction Design: Beyond Human-Computer Interaction, Fifth Edition (Chapter 14)
3) Text-3: Gerard Jounghyun Kim (2015) Human–Computer Interaction: Fundamentals and Practice.
EVALUATION OF HCI
DESIGNS/SYSTEMS
Outline
The Why, What, Where, and When of Evaluation
Evaluation through Expert Analysis
Evaluation through User Participation
The Why, What, Where, and When of Evaluation
Just imagine that you designed an app for
teenagers to share music, gossip, and photos.
You prototyped your first design and
implemented the core functionality.
Qn: How would you find out whether it would
appeal to them and whether they would use it?
Ans: You would need to evaluate it! But how?
The Why, What, Where, and When of Evaluation..
Evaluation is integral to the design process.
It involves collecting and analyzing data about users’ or
potential users’ experiences when interacting with a
design artefact such as a screen sketch, prototype, app,
computer system, or component of a computer system.
Evaluation focuses on both the usability of the system
(that is, how easy it is to learn and to use) and on the
users’ experiences when interacting with it (for
example, how satisfying, enjoyable, or motivating the
interaction is).
The Why, What, Where, and When of Evaluation..
There are many different evaluation methods.
Which one to use depends on the goals of the
evaluation.
There are 3 main goals of evaluation:-
i. To assess the extent and accessibility of the
system’s functionality
ii. To assess users’ experience of the interaction, and
iii. To identify any specific problems with the system.
The Why, What, Where, and When of Evaluation..
Why Evaluate?
User experience involves all aspects of the user’s
interaction with the product.
From a business and marketing perspective, well-
designed products sell.
Evaluation also enables problems to be fixed before
the product goes on sale – averting losses!
The Why, What, Where, and When of Evaluation..
What to Evaluate?
What to evaluate ranges from low-tech prototypes to complete
systems, from a particular screen function to the whole workflow,
and from aesthetic design to safety features. Examples:-
Developers of an ambient display may be interested in whether it
changes people’s behaviour.
Game app developers will want to know how engaging and fun
their games are compared with those of their competitors and
how long users will play them.
Government authorities may ask if a computerized system for
controlling traffic lights results in fewer accidents.
Developers of a hospital appointment app?
Developers of a passport application app?
https://passports.go.ug
The Why, What, Where, and When of Evaluation..
Where to Evaluate?
Where evaluation takes place depends on what is being evaluated.
Some characteristics, such as web accessibility, are generally
evaluated in a lab because it provides the control necessary to
investigate systematically whether all of the requirements are met.
User experience aspects, such as whether children enjoy playing
with a new toy and for how long before they get bored, can be
evaluated more effectively in natural settings, which are often
referred to as in-the-wild studies.
The Why, What, Where, and When of Evaluation..
When to Evaluate?
The stage in the product lifecycle when evaluation takes place depends
on the type of product and the development process being followed.
Example:-
The product being developed could be a new concept, or it could be an
upgrade to an existing product.
It could also be a product in a rapidly changing market that needs to be
evaluated to see how well the design meets current and predicted
market needs.
When evaluations are conducted during design to check that a product
continues to meet users’ needs, they are known as formative
evaluations.
Evaluations that are carried out to assess the success of a finished
product are known as summative evaluations.
EVALUATION APPROACHES
Evaluation through Expert Analysis
Evaluation should occur throughout the design process.
The first evaluation of a system should ideally be performed before
any implementation work has started.
If the design itself can be evaluated, expensive mistakes can be
avoided, since the design can be altered prior to any major resource
commitments.
Typically, the later in the design process that an error is discovered,
the more costly it is to put right and, therefore, the less likely it is to
be rectified.
However, it can be expensive to carry out user testing at regular
intervals during the design process, and it can be difficult to get an
accurate assessment of the experience of interaction from
incomplete designs and prototypes.
Hence the need for expert analysis!
Evaluation through Expert Analysis …
A number of methods have been proposed to evaluate
interactive systems through expert analysis.
These methods can be used at any stage in the
development process from a design specification, through
storyboards and prototypes, to full implementations,
making them flexible evaluation approaches.
They are also relatively cheap, since they do not require
user involvement.
However, they do not assess actual use of the system, but
rather only whether or not a system upholds accepted
usability principles.
Evaluation through Expert Analysis …
The four approaches to expert analysis are:-
i. Cognitive walkthrough.
ii. Heuristic evaluation.
iii. Model-based evaluation.
iv. Previous studies-based evaluation.
Evaluation through Expert Analysis …
Cognitive walkthrough.
The origin of the cognitive walkthrough approach to evaluation
is the code walkthrough familiar in software engineering.
Walkthroughs require a detailed review of a sequence of
actions.
In the code walkthrough, the sequence represents a segment
of the program code that is stepped through by the reviewers
to check certain characteristics (for example, that the coding
style is adhered to, that naming conventions for variables and
procedures are followed, and that system-wide invariants are
not violated).
Evaluation through Expert Analysis …
Cognitive walkthrough…
In the cognitive walkthrough, the sequence of actions
refers to the steps that an interface will require a user to
perform in order to accomplish some known task.
The evaluators then ‘step through’ that action sequence
to check it for potential usability problems.
Usually, the main focus of the cognitive walkthrough is to
establish how easy a system is to learn.
More specifically, the focus is on learning through
exploration.
Evaluation through Expert Analysis …
Cognitive walkthrough…
To do a cognitive walkthrough, you need four things:-
1) A specification or prototype of the system. It doesn’t
have to be complete, but it should be fairly detailed.
Details such as the location and wording for a menu
can make a big difference.
2) A description of the task the user is to perform on the
system. This should be a representative task that most
users will want to do.
Evaluation through Expert Analysis …
Cognitive walkthrough…
3) A complete, written list of the actions needed to
complete the task with the proposed system.
4) An indication of who the users are and what kind of
experience and knowledge the evaluators can assume
about them.
Evaluation through Expert Analysis …
Cognitive walkthrough…
Given this information, the evaluators step through the
action sequence (identified in item 3) to critique the
system and tell a believable story about its usability.
To do this, the evaluators try to answer the following
four questions for each step in the action sequence:-
i. Is the effect of the action the same as the user’s goal
at that point? e.g., if the effect of the action is to save a
document, is ‘saving a document’ what the user wants to
do?
Evaluation through Expert Analysis …
Cognitive walkthrough…
ii. Will users see that the action is available? e.g., will
users see the button or menu item that is used to produce
the action?
iii. Once users have found the correct action, will they know
it is the one they need? i.e., it is one thing for a button or
menu item to be visible, but will users recognize that it is
the one they are looking for to complete the task?
iv. After the action is taken, will users understand the
feedback they get? Will the feedback given be sufficient
confirmation of what has actually happened?
Evaluation through Expert Analysis …
Cognitive walkthrough…
It is vital to document the cognitive walkthrough to keep a
record of what is good and what needs improvement in the
design.
It is therefore a good idea to produce some standard
evaluation forms for the walkthrough.
The cover form would list the information in items 1–4 in the
first list above, as well as identifying the date and time of the
walkthrough and the names of the evaluators.
Then for each action (from item 3 on the cover form), a
separate standard form is filled out that answers each of the
four questions in the second list above.
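To make this record keeping concrete, the sketch below shows how the cover form and the per-action forms could be captured as simple Python data structures. The field names and layout are illustrative assumptions, not a prescribed format.

from dataclasses import dataclass, field
from typing import List

# The four walkthrough questions, asked of every action in the sequence.
QUESTIONS = (
    "Is the effect of the action the same as the user's goal at that point?",
    "Will users see that the action is available?",
    "Once users have found the correct action, will they know it is the one they need?",
    "After the action is taken, will users understand the feedback they get?",
)

@dataclass
class ActionForm:
    # One standard form per action (item 3 on the cover form).
    action: str
    answers: List[str]                                 # one answer per question above
    problems: List[str] = field(default_factory=list)  # usability problems noted

@dataclass
class CoverForm:
    # Cover form: items 1-4, plus the date and the evaluators' names.
    prototype: str         # item 1: the system specification or prototype
    task: str              # item 2: a representative user task
    actions: List[str]     # item 3: the complete, written action list
    users: str             # item 4: assumed user experience and knowledge
    date: str
    evaluators: List[str]
    action_forms: List[ActionForm] = field(default_factory=list)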
Evaluation through Expert Analysis …
Heuristic Evaluation
A heuristic is a guideline, general principle, or rule of thumb
that can guide a design decision or be used to critique a
decision that has already been made.
The general idea behind heuristic evaluation is that several
evaluators independently critique a system to come up with
potential usability problems.
It is important that there be several of these evaluators and
that the evaluations be done independently.
According to Jakob Nielsen and Rolf Molich, 3 to 5 evaluators
are sufficient (five evaluators usually discover about 75% of
the overall usability problems).
Evaluation through Expert Analysis …
Heuristic Evaluation …
To aid the evaluators in discovering usability
problems, Nielsen provides a set of 10 heuristics:-
i. Visibility of system status. For example, if a
system operation will take some time, give an
indication of how long and how much is complete.
ii. Match between system and the real world. The
system should speak the user’s language, with
words, phrases and concepts familiar to the user,
rather than system-oriented terms.
Evaluation through Expert Analysis …
Heuristic Evaluation …
iii. User control and freedom. Users often choose
system functions by mistake. Support undo and
redo.
iv. Consistency and standards. Users should not
have to wonder whether words, situations or
actions mean the same thing in different contexts.
v. Error prevention. Make it difficult to make errors.
Evaluation through Expert Analysis …
Heuristic Evaluation …
vi. Recognition rather than recall. Make objects,
actions and options visible. The user should
not have to remember information from one
part of the dialog to another.
vii. Flexibility and efficiency of use. Allow users
to tailor frequent actions.
Evaluation through Expert Analysis …
Heuristic Evaluation …
viii. Aesthetic and minimalist design. Dialogs should not
contain information that is irrelevant or rarely needed.
ix. Help users recognize, diagnose and recover from
errors. Error messages should be expressed in plain
language.
x. Help and documentation. Few systems can be used
with no instructions, so it may be necessary to provide
help and documentation.
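For use on an evaluation form, the ten heuristics can be encoded as a simple checklist so that each reported problem cites the heuristic it violates. A minimal Python sketch (the example problem is invented for illustration):

# Nielsen's ten heuristics as a checklist for tagging problems.
NIELSEN_HEURISTICS = (
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose and recover from errors",
    "Help and documentation",
)

# Example of one reported problem tagged with the violated heuristic:
problem = {
    "description": "No progress indicator while a report is generated",
    "heuristic": NIELSEN_HEURISTICS[0],  # visibility of system status
    "severity": None,                    # rated later on the 0-4 scale below
}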
Evaluation through Expert Analysis …
Heuristic Evaluation …
Each evaluator assesses the system and notes violations of
any of these heuristics that would indicate a potential
usability problem.
The evaluator also assesses the severity of each usability
problem, based on four factors:-
How common is the problem?
How easy is it for the user to overcome?
Will it be a one-off problem or a persistent one?
How seriously will the problem be perceived?
These can be combined into an overall severity rating on a
scale of 0–4:
Evaluation through Expert Analysis …
Heuristic Evaluation …
0 = I don’t agree that this is a usability problem at all
1 = Cosmetic problem only: need not be fixed unless
extra time is available on project
2 = Minor usability problem: fixing this should be
given low priority
3 = Major usability problem: important to fix, so
should be given high priority
4 = Usability catastrophe: imperative to fix this before
product can be released (Nielsen)
Evaluation through Expert Analysis …
Heuristic Evaluation …
Once each evaluator has completed their
separate assessment, all of the problems are
collected and the mean severity ratings
calculated.
The design team will then determine the ones
that are the most important and will receive
attention first.
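A minimal Python sketch of this pooling step is shown below. The problem descriptions and ratings are invented, and matching problems across evaluators is assumed to be done by identical descriptions:

from collections import defaultdict
from statistics import mean

# Each evaluator independently reports (problem, severity on the 0-4 scale).
evaluator_reports = [
    [("Save button hidden in overflow menu", 3), ("No undo after delete", 4)],
    [("No undo after delete", 3), ("Inconsistent date formats", 2)],
    [("No undo after delete", 4), ("Save button hidden in overflow menu", 2)],
]

# Pool the independent assessments: group all ratings by problem.
ratings = defaultdict(list)
for report in evaluator_reports:
    for problem, severity in report:
        ratings[problem].append(severity)

# Mean severity per problem, highest first: these receive attention first.
prioritised = sorted(
    ((mean(scores), problem) for problem, scores in ratings.items()),
    reverse=True,
)
for avg, problem in prioritised:
    print(f"{avg:.1f}  {problem}")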
Evaluation through Expert Analysis …
Model-based Evaluation
Certain cognitive and design models provide a means of
combining design specification and evaluation into the same
framework.
For example, the GOMS (goals, operators, methods, and
selection rules) model predicts user performance with a
particular interface and can be used to filter particular
design options.
Similarly, lower-level modeling techniques such as the
keystroke-level model provide predictions of the time users
will take to perform low-level physical tasks.
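As an illustration, the sketch below computes a keystroke-level model prediction in Python, using commonly cited approximate operator times; the exact values vary across studies and user populations.

# Keystroke-Level Model sketch: predict task time by summing operator times.
# Durations (in seconds) are common approximations, not definitive values.
KLM_TIMES = {
    "K": 0.28,  # press a key (average skilled typist)
    "B": 0.10,  # press or release a mouse button
    "P": 1.10,  # point at a target with a mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for an action
}

def klm_predict(sequence: str) -> float:
    # Sum the operator times for a sequence such as "MHPBB".
    return sum(KLM_TIMES[op] for op in sequence)

# Mentally prepare, home on the mouse, point at a menu item, click it:
print(f"Menu selection: {klm_predict('MHPBB'):.2f} s")
# Mentally prepare, then type a five-character word:
print(f"Typing a word:  {klm_predict('M' + 'K' * 5):.2f} s")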
Design methodologies, such as design rationale, also have a
role to play in evaluation at the design stage.
Evaluation through Expert Analysis …
Model-based Evaluation …
Design rationale provides a framework in which design
options can be evaluated. By examining the criteria
that are associated with each option in the design, and
the evidence that is provided to support these criteria,
informed judgments can be made in the design.
Dialog models can also be used to evaluate dialog
sequences for problems, such as unreachable states,
circular dialogs and complexity.
Models such as state transition networks are useful
for evaluating dialog designs prior to implementation.
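To illustrate, the sketch below checks a toy state transition network for unreachable states using a breadth-first search; the dialog states and events are invented for the example.

from collections import deque

# Toy state transition network: state -> {event: next_state}.
stn = {
    "main_menu": {"open": "file_dialog", "help": "help_screen"},
    "file_dialog": {"ok": "main_menu", "cancel": "main_menu"},
    "help_screen": {"close": "main_menu"},
    "admin_panel": {"back": "main_menu"},  # no transition leads here
}

def unreachable_states(network: dict, start: str) -> set:
    # Breadth-first search from the start state; any state never
    # visited cannot be reached by any dialog sequence.
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in network.get(queue.popleft(), {}).values():
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return set(network) - seen

print(unreachable_states(stn, "main_menu"))  # {'admin_panel'}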
Evaluation through Expert Analysis …
Previous Studies-based Evaluation
This expert analysis approach uses previous results as
evidence to support (or refute) aspects of the design.
It is expensive to repeat experiments continually and an
expert review of relevant literature can avoid the need to
do so.
It should be noted that experimental results cannot be
expected to hold arbitrarily across contexts.
The reviewer must therefore select evidence carefully,
noting the experimental design chosen, the population of
participants used, the analyses performed and the
assumptions made.
Evaluation through Expert Analysis …
Previous Studies-based Evaluation …
For example, an experiment testing the usability of a
particular style of help system using novice participants
may not provide accurate evaluation of a help system
designed for expert users.
The review should therefore take account of both the
similarities and the differences between the
experimental context and the design under
consideration.
This is why this is an expert review: expertise in the area
is required to ensure that correct assumptions are made.
Evaluation through User Participation
Techniques that are available for evaluation with users
are broadly distinguished as two distinct evaluation
styles:-
a) Those performed under laboratory conditions: Users
are taken out of their normal work environment to take
part in controlled tests, often in a specialist usability
laboratory.
b) Those conducted in the work environment or ‘in the
field’. Evaluation takes the designer or evaluator out
into the user’s work environment in order to observe
the system in action.
Evaluation through User Participation …
The techniques considered so far concentrate on evaluating a
design or system through analysis by the designer, or an expert
evaluator, rather than testing with actual users.
However, useful as these techniques are for filtering and refining
the design, they are not a replacement for actual usability testing
with the people for whom the system is intended: the users.
Approaches to evaluation through user participation include:-
i. Empirical or Experimental methods,
ii. Observational Methods,
iii. Query Techniques, and
iv. Methods that use physiological monitoring, such as eye tracking
and measures of heart rate and skin conductance.
Read about these four approaches in detail in Chapter 9 of Text-1.
-END-