
Chapter 5

Interaction design basics


5.1 Interaction design
Interaction design is about how the artifact produced is going to affect the
way people work: the design of interventions.

5.2 What is design?


Design: achieving goals within constraints.
Goals: the purpose of the design we are intending to produce.
Constraints: the limitations imposed on the design process by external factors.
Trade-off: choosing which goals or constraints can be relaxed so that
others can be met.

5.2.1 The golden rule of design


Understand your material: computers (limitations, capacities, tools, platforms)
and people (psychological, social aspects, human error).

5.2.2 To make error is human


It is the nature of humans to make mistakes and systems should be designed
to reduce the likelihood of those mistakes and to minimize the
consequences when mistakes happen.

5.2.3 The central message: the user


During design, always concentrate on the user.

5.3 The process of design


Requirements: Through observations and interviews, the features of
the system to be designed are mapped.
Analysis: Through various methods, the gathered requirements are
ordered to bring out key issues.
Design: Various design guidelines help you to move from what you want
to how to do it.
Iteration and prototyping: Try out early versions of the system with
real users.
Implementation and deployment: writing code, documentation and
make hardware.

5.4 User focus


Gather as much information as possible about the future users of the
system. Terminology:
Stakeholders: people affected directly or indirectly by a system
Participatory design: bringing a potential user fully into the design process
Persona: rich picture of an imaginary person who represents your core
user group

5.5 Scenarios
Scenarios are stories for design: rich stories of interaction sometimes
illustrated with storyboards.

5.6 Navigation design


5.6.1 Local structure
Much of interaction involves goal-seeking behavior, because users do not
know the system entirely. Therefore, the interface should always make
clear:
where you are, what you can do, where you are going or what will happen, and what the state of the system is.

Furthermore, in terms of the interaction:
Icons are not self-explanatory: they should be explained.
The different meanings of the same command in different modes should be clear.
The system should give feedback about the effect of an action.
In most information systems it is equally essential to know where you have been.

5.6.2 Global structure - hierarchical organization


Overall structure of an application: the way the various screens, pages or
physical device states link to one another. This can be done using hierarchy:
humans tend to be better at using this structure, as long as the hierarchy
does not go too deep.
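
As a rough illustration (not from the book), such a hierarchy can be thought of as a tree of screens; the Python sketch below uses invented screen names and simply measures how deep the tree goes.

site_structure = {
    "Home": {
        "Products": {"Laptops": {}, "Phones": {}},
        "Support": {"FAQ": {}, "Contact": {}},
        "About": {},
    }
}

def depth(tree: dict) -> int:
    # Number of screen levels below and including this one; keep it small
    # so that users do not get lost in the hierarchy.
    if not tree:
        return 0
    return 1 + max(depth(children) for children in tree.values())

print(depth(site_structure))  # -> 3 (Home, its sections, their sub-pages)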

5.6.3 Global structure - dialog


Dialog: the pattern of non-hierarchical interaction occurring when the user
performs a certain action, e.g. deleting a file.

5.6.4 Wider still


Style issues: we should conform to platform standards
Functionality issues: the program should conform to standard functions.
Navigation issues: we may need to support linkages between applications

5.7 Screen design and layout


5.7.1 Tools for layout
Grouping and structure: if things logically belong together, then we
should normally visually group them together.
Order of groups and items: the order on the screen should follow the
natural order for the user.
Decoration: decorations can be used to emphasize grouping.
Alignment: the proper use of alignment can help the user to
understand information in lists and columns quickly.
White space: white space can be used to separate blocks, highlight
structures etc.
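
As a hedged illustration of these layout tools, the sketch below uses Python's standard tkinter toolkit; the form and field names are invented.

import tkinter as tk
from tkinter import ttk

root = tk.Tk()
root.title("Order form")

# Grouping and decoration: related fields sit together in one labelled frame;
# white space is added through padding.
details = ttk.LabelFrame(root, text="Customer details", padding=10)
details.pack(fill="x", padx=10, pady=10)

# Order and alignment: fields follow the natural order; labels are
# right-aligned in one column, entry boxes left-aligned in the next.
for row, label in enumerate(("Name", "Address", "Phone")):
    ttk.Label(details, text=label).grid(row=row, column=0, sticky="e", padx=5, pady=2)
    ttk.Entry(details, width=30).grid(row=row, column=1, sticky="w", pady=2)

root.mainloop()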

5.7.2 User actions and control


For entering information, the same criteria dictate the layout. It is also very
important that the interface gives a clear clue about what to do; a uniform
layout helps here. Affordances (things that, for example by their shape,
suggest what to do with them) are sometimes helpful as well. It is, however,
not appropriate to depict a real-world object in a context where its normal
affordances do not work.

5.7.3 Appropriate appearance


The way of presenting information on screen depends on the
kind of information, the technologies available to present it
and the purpose for which it is used. We have an advantage
when presenting information in an interactive system in that it
is easy to allow the user to choose among several
representations, thus making it possible to achieve different
goals.
In an ideal design, the interface is both usable and
aesthetically pleasing. However, the look of the interface
should never come at the expense of usability; this happens
most often with excessive use of color and 3D.

Localization/internationalization: the process of making software suitable for different cultures and languages.
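
A minimal sketch of internationalization using Python's standard gettext module; the "messages" translation domain and the locale directory are assumptions.

import gettext

# Load translations for the chosen language from locale/<lang>/LC_MESSAGES/messages.mo;
# fall back to the original strings if no translation is installed.
translation = gettext.translation("messages", localedir="locale",
                                  languages=["nl"], fallback=True)
_ = translation.gettext

# Wrap every user-visible string so it can be translated per culture/language.
print(_("Save file"))
print(_("File saved successfully"))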

5.8 Iteration and prototyping


Formative evaluation: intended to improve designs.
Summative evaluation: verify whether the product is good enough.

In order for prototyping methods to work, you need to understand what is
wrong and how to improve it, and you also need a good starting point. If the
design is very complex, it is sometimes wise to start with various alternatives
and to drop them one by one during the design process.
Chapter 6

HCI in the software process

6.1 Software process


Software engineering: the subdiscipline that addresses the management
and technical issues of the development of software systems.
Software life cycle: the activities that take place from the initial concept
for a software system up until its eventual phasing out and replacement.

HCI aspects are relevant within all the activities of the software life cycle.

6.2 The software life cycle


6.2.1 Activities in the life cycle
Requirements specification: capture a description of what the eventual
system will be expected to provide. Requirements, formulated in natural
language, are translated to a more formal and unambiguous language.
Architectural design: how does the system provide the services expected
from it. In this part, the system is decomposed into components that can
be brought in from existing products or that can be developed from scratch.
Detailed design: a refinement of the component description provided by
the architectural design, made for each component separately.
Coding and unit testing: implementing the detailed design in an
executable programming language and testing the different components.
Integration and testing: integrating the different components into a
complete system and testing it as a whole. Sometimes the system is also
certified according to ISO standards.
Maintenance: all the work on the system after the system is released.

6.2.2 Validation and verification


Verification (designing the thing right) will most often occur within a single
life-cycle activity or between two adjacent activities. Validation of a design
(designing the right thing) demonstrates that, across the various activities, the
customer's requirements are satisfied. Because verification relates relatively
formal descriptions to one another, its proofs can themselves be fairly formal.
The validation proof, however, cannot be: there is a gap between the real world
and structured design, known as the formality gap. The consequence is that
there is always a certain subjectivity involved in validation.
6.2.3 Management and contractual issues
In management, focusing solely on the technical aspects of the software life cycle is
often insufficient. A broader perspective is needed, considering factors like marketability,
training needs, availability of skilled personnel, and other external concerns.

When managing the development process, the timing of various activities and the creation
of intermediate deliverables are crucial. These deliverables show progress to the customer.
While the technical life cycle is defined by stages of activity, the managerial perspective is
defined by the timing of documentation inputs and outputs.

6.2.4 Interactive systems and the software life cycle


The life cycle for development described above presents the process of design
in a somewhat pipeline order. In reality, the actual process is iterative: work in
one design activity affects work in any other activity both before or after it in
the life cycle. Not all of the requirements for an interactive system can be
determined from the start. During the design process, the system is made
more usable by having potential users test the prototypes and by observing
their behaviour. To do this, a clear understanding of human task performance
and cognitive processes is very important.

6.3 Usability engineering


Usability engineering focuses on clearly defining the criteria used to judge a
product's usability. In the software life cycle, a key aspect of usability
engineering is the inclusion of a usability specification within the requirement
specification. This specification highlights features of the user-system
interaction that enhance the product's usability. Various system attributes are
proposed as metrics for testing usability. For each attribute, six items are
defined to form the usability specification of that attribute:
Measuring concept: makes the abstract attribute more concrete by
describing it in terms of the actual product.
Measuring method: states how the attribute will be measured.
Now level: indicates the value for the measurement with the existing
system.
Worst case: the lowest acceptable measurement for the task.
Planned level: the target for the design.
Best case: the level which is agreed to be the best possible measurement
given the current state of development tools and technology.
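
As an illustration (not from the book), a usability specification for one attribute could be recorded as a simple data structure; the attribute and all numbers below are invented.

from dataclasses import dataclass

@dataclass
class UsabilitySpecification:
    attribute: str          # e.g. "learnability"
    measuring_concept: str  # the attribute expressed in terms of the actual product
    measuring_method: str   # how the attribute will be measured
    now_level: float        # value measured for the existing system
    worst_case: float       # lowest acceptable measurement
    planned_level: float    # target for the design
    best_case: float        # best measurement realistically achievable

spec = UsabilitySpecification(
    attribute="learnability",
    measuring_concept="time for a novice to complete a standard task",
    measuring_method="stopwatch timing during a lab session (seconds)",
    now_level=30.0, worst_case=40.0, planned_level=20.0, best_case=10.0)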

6.3.1 Problems with usability engineering


The major feature of usability engineering is the assertion of explicit usability
metrics early on in the design process which can be used to judge a system
once it is delivered. The problem with usability metrics is that they rely on
measurements of very specific user actions in very specific situations. At early
stages of design, the designers do not yet have the information to set goals for
measured observations. Another problem is that usability engineering provides
a means of satisfying usability specifications and not necessarily usability: the
usability metrics must be interpreted correctly.

6.4 Iterative design and prototyping


Iterative design: a purposeful design process which tries to overcome the
inherent problems of incomplete requirement specification by cycling through
several designs, incrementally improving upon the final product with each pass.
On the technical side, this is described by the use of prototypes. There are 3
main approaches to prototyping:
Throw-away: the knowledge gained from the prototype is used in the final
design, but the prototype is discarded.
Incremental: the final product is released as a series of components that
have been prototyped separately.
Evolutionary: the prototype is not discarded but serves as a basis for the
next iteration of the design.
Prototypes differ in how much functionality and performance they provide
relative to the final product. Their projected realism is important, since
prototypes are tested on real users. Because providing realism in prototypes
is costly, there are several problems on the management side:
Time: prototyping costs time which is taken away from the real design.
Therefore, there are rapid-prototyping techniques.
Planning
Non-functional features: some of the most important features, such as safety
and reliability, cannot be tested using a prototype.
Contracts: Prototyping cannot form the basis for a legal contract and must
be supported with documentation.

6.4.1 Techniques for prototyping


Storyboards: a graphical depiction of the outward appearance of the intended
system, without any accompanying system functionality.
Limited functionality simulations: Programming support for simulations
means a designer can rapidly build graphical and textual interaction
objects and attach some behaviour to those objects, which mimics the
system’s functionality. There are many techniques to build these
prototypes. A special one is the Wizard of Oz technique, in which the
system is controlled by human intervention.
High-level programming support: High-level programming languages allow
the programmer to abstract away from hardware specifics and to think in
terms that are closer to the way the input and output devices are perceived
as interaction devices. This support can also be provided by a user interface
management system, in which features of the interface can be designed apart
from the underlying functionality.
6.4.2 Warning about iterative design
First, design decisions made at the beginning of the prototyping process are
often wrong, and design inertia can be so great that an initial bad decision is
never overcome. Second, if a potential usability problem is discovered, it is
important to understand and solve the underlying cause of the problem, not
just its symptoms.

6.5 Design Rationale


DR is the information that explains why a computer system is the way it is,
including its structural and functional description. The benefits of DR:
DR provides a communication mechanism among the members of the
design team.
DR can capture the context of a design decision in order that a different
design team can determine if a similar rationale is appropriate for their
product.
Producing a DR forces the designer to deliberate more carefully about design
decisions.
Since there is usually no single 'best' design but several alternatives, the DR
clarifies the decisions; it also orders the, sometimes many, possible alternatives.
Capturing the context of a decision (e.g. the hardware) in the DR will help
when reusing the current design in future designs.

6.5.1 Process-oriented design rationale


DR is often represented using IBIS (issue-based information system), in
which a hierarchical, process-oriented structure is used: a root issue identifies
the main problem, and various descendant positions are put forth as potential
solutions. The relationship between an issue and a position can be supported
or refuted by arguments. IBIS can be notated textually or graphically.
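
As a rough sketch of this hierarchical structure, the Python classes below model issues, positions and arguments; the class names and the example issue are my own, not part of the IBIS notation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    text: str
    supports: bool  # True = supports the position, False = objects to it

@dataclass
class Position:
    text: str
    arguments: List[Argument] = field(default_factory=list)

@dataclass
class Issue:
    text: str
    positions: List[Position] = field(default_factory=list)
    sub_issues: List["Issue"] = field(default_factory=list)

# Example: a file-deletion issue with one candidate position and one argument.
root_issue = Issue("How should the user delete a file?")
drag = Position("Drag the file icon to a trash can")
drag.arguments.append(Argument("Visible and easily reversible", supports=True))
root_issue.positions.append(drag)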

6.5.2 Design space analysis


In this representation, the design space is initially structured by a set of
questions representing the major issues of the design. Options provide
alternative solutions to the question. Options can evoke criteria and new
questions and therefore the entire representation can also be hierarchically
visualized in a tree-graph.

6.5.3 Psychological design rationale


The purpose of PDR is to design within the natural task-artifact cycle of design
activity. When a new system becomes an artifact, observations often reveal it
supports tasks beyond those the designer intended. Understanding these new
tasks can inform requirements for future artifacts.

The first step in PDR is to identify the tasks the proposed system will address
and characterize these tasks through questions the user tries to answer. For
each question, scenarios of user-system behavior are created to support the
user. The initial system is then implemented with the functionality suggested
by these scenarios.

Once the system is in use, observations and designer reflection are used to produce
the actual design rationale for that version. By documenting the PDR, designers are
encouraged to become more aware of the natural evolution of user tasks and the artifact,
using the outcomes of one design to improve subsequent ones.
Chapter 7
Evaluation techniques
7.1 Evaluation
Evaluation should occur throughout the design life cycle, with the results
feeding back into modifications of the design. A distinction is made between
evaluation by the designer or a usability expert and evaluation that studies
actual use of the system.

7.2 Goals of evaluation


Evaluation has 3 main goals:
1. To assess the extent and accessibility of the system's functionality.
2. To assess the users’ experience of the interaction.
3. To identify any specific problems with the system.

7.3 Evaluation through expert analysis


The basic intention of expert analysis is to identify any areas that are likely to
cause difficulties because they violate known cognitive principles, or ignore
accepted empirical results. 4 approaches are considered here:
1. Cognitive walk-through
2. Heuristic evaluation
3. The use of models
4. Use of previous work

7.3.1 Cognitive walkthrough


CW is a detailed review of a sequence of actions, in this case, the steps that an
interface will require the user to perform in order to accomplish some known
task. The evaluators go through each step and provide a story about why that
step is or is not good for new users. To do a CW, you need four things:
1. A specification or prototype of the system
2. A description of the task the user is to perform on the system
3. A complete written list of the actions needed to complete the task with the
system
4. An indication of who the users are and what kind of experience and
knowledge the evaluators can assume about them.

For each step, the evaluators try to answer the following questions:

Is the effect of the action the same as the user's goal at that point?
Will the users see that the action is available?
Once the users have found the correct action, will they know it is the one
they need?

After the action is taken, will users understand the feedback they get?

7.3.2 Heuristic evaluation


A heuristic is a guideline or general principle that can guide a design decision
or be used to critique a decision that has already been made. Heuristic
Evaluation is a method for structuring the critique of a system using a set of
relatively simple and general heuristics. Several evaluators independently
critique a system to come up with potential usability problems. Each evaluator
assesses the system and notes violations of any of the following heuristics and
the severity of each of these violations based on four factors:

1. How common is the problem?
2. How easy is it for users to overcome?
3. Will it be a one-off problem or a persistent one?
4. How seriously will the problem be perceived?

The overall result is a severity rating on a scale of 0-4.

The 10 heuristics:

1. Visibility of system status
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose and recover from errors
10. Help and documentation
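
As a hedged sketch, one evaluator's findings could be recorded as below; the data structure and the example problem are invented, and only the heuristics and the 0-4 severity scale come from the notes above.

from dataclasses import dataclass

HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose and recover from errors",
    "Help and documentation",
]

@dataclass
class UsabilityProblem:
    description: str
    heuristic: str
    severity: int  # 0 (not a problem) .. 4 (usability catastrophe)

# One evaluator notes a violation of "visibility of system status".
problem = UsabilityProblem(
    description="No progress indicator while a report is being generated",
    heuristic=HEURISTICS[0],
    severity=3)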

7.3.3 Model-based evaluation


Certain cognitive and design models provide a means of combining design
specification and evaluation into the same framework.

7.3.4 Using previous studies in evaluation


A similar experiment conducted earlier can cut some of the costs of a new
design evaluation by reusing the data gained from it.

7.4 Evaluation through user participation


7.4.1 Styles of evaluation
1. Laboratory studies: users take part in controlled tests, often in a
specialist usability laboratory. The advantages are the advanced
laboratory equipment and the interruption-free environment. The
disadvantage is the lack of context, which may result in unnatural
situations.

2. Field studies: the user is observed using the system in their own work environment.
The advantage is the natural use of the system, which can hardly be achieved in the lab.
However, the interruptions that come with this natural situation may make
observation more difficult.

7.4.2 Empirical methods: experimental evaluation


Any experiment has the same basic form: the evaluator chooses a hypothesis
to test, which can be determined by measuring some attribute of participant
behavior. A number of experimental conditions are considered which differ only
in the values of certain controlled variables. Any changes in the behavioral
measures are attributed to the different conditions. Some factors in the
experiment must be considered carefully: the participants chosen, the
variables tested and manipulated and the hypothesis tested.

Participants should be chosen to match the expected user population as
closely as possible: they must be representative of the intended user
population. The sample size must also be large enough to be representative
of that population.

Variables come in two main types: those manipulated (independent) and
those measured (dependent). The values of the independent variable are
known as levels. More complex experiments may have more than one
independent variable.

Hypotheses are predictions of the outcome of an experiment, framed in
terms of dependent and independent variables, stating that a variation in the
independent variable will cause a difference in the dependent variable. The
aim of the experiment is to prove the hypothesis, which is done by disproving
the opposite null hypothesis.

Experimental design consists of different phases: the first stage is to
choose the hypothesis and define the dependent and independent variables. The
second step is to select the experimental method: between-subjects, in which
each participant is assigned to only one of the conditions, or within-subjects,
in which each participant performs under every condition.

Statistical measures: the data should first of all be saved, to enable
performing multiple analyses on the same data. The choice of statistical
analysis depends on the type of data and the questions we want to answer.
Variables can be classified as discrete (taking a finite number of values or
levels) or continuous (taking any value between a lower and an upper limit).
A number of tests can be applied to this data.
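
As an illustration of a between-subjects analysis, the sketch below compares task-completion times under two interface conditions with an independent-samples t-test; the numbers are invented and the scipy library is assumed to be available.

from scipy import stats

# Task-completion times in seconds (dependent variable) for two interface
# conditions, i.e. two levels of the independent variable; one group of
# participants per condition (between-subjects).
interface_a = [42.1, 39.5, 44.0, 41.2, 38.7, 45.3]
interface_b = [35.8, 33.2, 36.9, 34.5, 37.1, 32.8]

# Independent-samples t-test against the null hypothesis that the interface
# has no effect on completion time.
result = stats.ttest_ind(interface_a, interface_b)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
if result.pvalue < 0.05:
    print("Reject the null hypothesis: the conditions differ.")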

7.4.3 Observational techniques


Think aloud and cooperative evaluation
Think aloud is a form of observation in which the user is asked to talk through
what they are doing while being observed. It has the advantage of simplicity, but
the information provided is often subjective and may be selective. A variation
is cooperative evaluation, in which the user and evaluator work together to
evaluate the system.

Protocol analysis
Methods for recording user actions include paper and pencil, audio recording,
video recording, computer logging and user notebooks. In practice, a mixture
of the different methods is used. With recordings, the problem is transcription.
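
As a minimal sketch of computer logging, the Python snippet below appends timestamped user actions to a log file for later protocol analysis; the event names and file path are assumptions.

import json
import time

def log_action(action: str, detail: str = "", path: str = "session_log.jsonl") -> None:
    # Append one timestamped user action to a JSON-lines log file for later analysis.
    record = {"time": time.time(), "action": action, "detail": detail}
    with open(path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record) + "\n")

# Example calls while the participant works through a task:
log_action("menu_open", "File")
log_action("command", "Save As")
log_action("error_dialog", "invalid filename")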

Automatic protocol analysis tools


Using the Experimental Video Annotator, an evaluator can use predefined tags
to write an audio or video transcription in real time. The Workplace Project
tools support the same task while also allowing the analysis and synchronization
of information from different data streams. DRUM offers similar facilities.

Post-task walkthrough
A walkthrough after the observation reflects the participants' actions back to
them after the event. The participant is asked to comment on them and to answer
questions from the evaluator in order to collect missing information.

7.4.4 Query techniques


Queries provide direct answers from the user about usability questions, but the
information is often subjective.

Interviews provide a direct and structured way of gathering information and
can be varied to suit the situation. They should be planned in advance with a
basic set of questions, and may then be adapted to the specific user.

Questionnaires are less flexible than interviews: they are planned entirely in
advance. However, they can be used to reach a wider group and take less time
to administer. The styles of questions that can be included are: general
background questions, open-ended questions, scalars, multi-choice questions
and ranked questions. It is always wise to perform a pilot study to test the
questionnaire.

7.4.5 Evaluation through monitoring physiological responses


The approaches currently receiving the most attention are eye tracking and
physiological measurement.
Eye movements are believed to reflect the amount of cognitive processing a
display requires and, therefore, how easy or difficult it is to process. Eye
movements are based on movements between points of interest. Possible
measurements are the number of fixations (more –> less efficient search),
fixation duration (longer –> more difficult display) and scan path (indicating
areas of interest, search strategy and cognitive load). Physiological
measurements may be useful in determining the user's emotional response to an
interface. They involve attaching various probes and sensors to the user,
measuring heart activity, sweat gland activity, muscle activity and brain
activity. The disadvantage is that the readings are hard to interpret.
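
As a rough illustration, the eye-movement measures above can be derived from recorded fixations as in the sketch below; the fixation data are invented.

# Fixations as an eye tracker might report them: (x, y, duration in ms).
fixations = [
    (120, 80, 240), (300, 85, 410), (305, 200, 180), (150, 210, 520),
]

number_of_fixations = len(fixations)  # more fixations -> less efficient search
mean_fixation_duration = sum(d for _, _, d in fixations) / number_of_fixations  # longer -> harder display
scan_path = [(x, y) for x, y, _ in fixations]  # order of points of interest

print(number_of_fixations, round(mean_fixation_duration, 1))
print(scan_path)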

7.5 Choosing an evaluation method


Factors that distinguish different techniques:
Design vs implementation: the earlier in the process, the cheaper and quicker the
evaluation must be.
Laboratory vs field studies

Subjective vs objective: subjective evaluations require interpretation by the
evaluator and are easily used incorrectly. Objective evaluations provide
repeatable results, but sometimes less information.
Qualitative vs quantitative measurements
Information provided: the level of information required depends on the state of the
design process and influences the required method: the evaluation may concern a certain
part of the system or the system as a whole.
Immediacy of response: some methods record the user's behaviour at the time of the
interaction itself, while others rely on the user's recollection of events, which may be
incomplete or biased.
Intrusiveness: the more obvious the evaluation method is to the user, the more it may
influence the user's behaviour.
Resources: limits on resources and other practical restrictions may affect the design
of the evaluation.
7.5.1 A classification of evaluation techniques
