Testing
Michael Weintraub
Fall 2015
Unit Objective
• Understand what quality assurance means
• Understand QA models and processes
Definitions According to NASA
• Software Assurance: The planned and systematic set of activities that ensures that software life cycle processes and
products conform to requirements, standards, and procedures.
• Software Quality: The discipline of software quality is a planned and systematic set of activities to ensure quality is
built into the software. It consists of software quality assurance, software quality control, and software quality engineering. As
an attribute, software quality is (1) the degree to which a system, component, or process meets specified requirements, and (2)
the degree to which a system, component, or process meets customer or user needs or expectations [IEEE 610.12, IEEE
Standard Glossary of Software Engineering Terminology].
• Software Quality Assurance: The function of software quality that assures that the standards, processes, and
procedures are appropriate for the project and are correctly implemented.
• Software Quality Control: The function of software quality that checks that the project follows its standards,
processes, and procedures, and that the project produces the required internal and external (deliverable) products.
• Software Quality Engineering: The function of software quality that assures that quality is built into the software by
performing analyses, trade studies, and investigations on the requirements, design, code and verification processes and
results to assure that reliability, maintainability, and other quality factors are met.
• Software Reliability: The discipline of software assurance that 1) defines the requirements for software controlled
system fault/failure detection, isolation, and recovery; 2) reviews the software development processes and products for
software error prevention and/or controlled change to reduced functionality states; and 3) defines the process for measuring
and analyzing defects and defines/derives the reliability and maintainability factors.
• Verification: Confirmation by examination and provision of objective evidence that specified requirements have been
fulfilled [ISO/IEC 12207, Software life cycle processes]. In other words, verification ensures that “you built it right”.
• Validation: Confirmation by examination and provision of objective evidence that the particular requirements for a specific
intended use are fulfilled [ISO/IEC 12207, Software life cycle processes.] In other words, validation ensures that “you built
the right thing”.
From: http://www.hq.nasa.gov/office/codeq/software/umbrella_defs.htm
Software Quality Assurance
Доверяй, но проверяй
(Russian proverb, "Doveryay, no proveryay": Trust, but verify)
After all, they are the ones who asked for the system.
Testing is Computationally Hard
The space of possible inputs, hardware configurations, and environments is huge, and it is generally infeasible to test anything completely.
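To see the scale, a back-of-the-envelope sketch (Python, with a deliberately optimistic made-up throughput):

# Even a tiny interface defeats exhaustive testing:
# a function of two 32-bit integers has 2**64 input pairs.
pairs = 2 ** 64
tests_per_second = 1_000_000_000            # assume a billion tests per second
seconds_per_year = 60 * 60 * 24 * 365
years = pairs / (tests_per_second * seconds_per_year)
print(f"~{years:.0f} years to try every input pair")   # roughly 585 years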
Lots to Consider
• Component behavior
• Interactions between components
• System and sub-system behavior
• Interactions between sub-systems
• Negative path
• Behavior under load
• Behavior over time
• Usability
Two Approaches
Static techniques (reviews) and dynamic techniques (testing)
One Extreme: Jury/Peer Reviews
Before anything is accepted, someone other than
the creator must review it and approve it
• Single reviewer model
– Usually a “certified” / senior person
• Panel model
– Highly structured reviews
Value
• Second opinion on clarity, effectiveness, and efficiency
• Learning from others
• Avoids "board blindness" on seeing flaws
• Peer pressure to be neat and tie up loose ends
Review Meeting
• Moderator
• Scribe
• Author
• Review Panel: peers, experts, client(s)
Pair Programming
Lightweight Peer Reviews
Correctness
Analysis of the algorithm used
Many choices exist. Suppose you are deciding between bubble sort, quicksort, and merge sort. All will work (sort an array), but which will be the better code?
Bubble sort is very easy to write: just two loops. But it is slow on average, O(n²), so how big will n be? Memory is O(n), just the array itself, since it sorts in place. Quicksort averages O(n log n) but degrades to O(n²) in the worst case; merge sort is O(n log n) even in the worst case but needs O(n) extra memory.
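A minimal sketch of the "two loops" version (Python; illustrative only):

def bubble_sort(a):
    """Sort the list a in place: O(n**2) comparisons on average."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):           # the last i slots are already sorted
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                       # already sorted: stop early
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))           # [1, 2, 4, 5, 8]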
• Stress – test at or past design limits
Testing Flow (Dynamic Evaluation)
Unit Test → Integration Test → System Test → Client Operational Test → Acceptance → Installation
System testing covers functional, performance, and soak (operational readiness) testing.
Two Forms of Testing
• Testing against the specification: emphasizes adherence to the specifications.
• Unit testing: usually done by the developers building the component. Code bases often include the code and the unit tests as a coherent piece. Unit tests enable refactoring: after each small change, the unit tests can verify that a change in structure did not introduce a change in functionality.
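A minimal sketch of tests guarding a refactoring (Python unittest; mean is a hypothetical example function):

import unittest

def mean(xs):
    # Refactored from an explicit accumulation loop to sum();
    # the structure changed, the behavior must not.
    return sum(xs) / len(xs)

class TestMean(unittest.TestCase):
    def test_typical_values(self):
        self.assertEqual(mean([1, 2, 3]), 2)

    def test_single_value(self):
        self.assertEqual(mean([7]), 7)

    def test_empty_input_is_rejected(self):
        with self.assertRaises(ZeroDivisionError):
            mean([])

if __name__ == "__main__":
    unittest.main()

If the refactoring changes behavior, the suite fails immediately, which is exactly the guard described above.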
What Makes for a Good Test
Test Perspective
• Either addresses a partition of inputs or tests for common developer errors (see the sketch after this list)
• Automated
• Runs fast – to encourage frequent use
• Small in scope – test one thing at a time
• When a failure occurs, it should pinpoint the issue and not require much debugging
  – Failure messages help make the issue clear
  – Should not have to refer to the test to understand the issue
Tester Perspective
• Know why the test exists
  – Should target finding specific problems
  – Should optimize the cost of defining and running the test against the likelihood of finding a fault/failure
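One reading of "addresses a partition of inputs": pick one representative value per equivalence class, plus the boundaries between them. A sketch with a hypothetical classify_age function:

def classify_age(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    return "minor" if age < 18 else "adult"

assert classify_age(10) == "minor"      # partition: 0 <= age < 18
assert classify_age(30) == "adult"      # partition: age >= 18
assert classify_age(17) == "minor"      # boundary just below 18
assert classify_age(18) == "adult"      # boundary at 18
try:
    classify_age(-1)                    # partition: invalid input
except ValueError:
    pass
else:
    raise AssertionError("negative age should have been rejected")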
Organizing Testing
Test Plan
Describes test activities:
1. Scope
2. Approach
3. Resources
4. Schedule
Identifies:
• What is to be tested
• The tasks required to do the testing
• Who will do each task
• The test environment
• The test design techniques
• Entry and exit criteria to be used
• Risk identification and contingency planning
Test Suite
A set of test cases and scripts to measure answers. Often the postcondition of one test is used as the precondition for the next one, or the tests may be executed in any order.
• Negative testing: call the component so it will fail and check the failure reactions
Picking the Subset
• Selection based on company policy
• Every statement must be executed at least once
• Every path must be exercised (see the sketch below)
• Crafted by specific end-user use cases (scenario testing)
(Chart: defects found vs. software quality. With good tests, low-quality software yields many defects found and high-quality software few; with poor test coverage/quality, few defects are found either way.)
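The "every statement" and "every path" criteria differ more than they look. A sketch: two tests execute every statement of f yet cover only two of its four paths:

def f(a, b):
    x = 0
    if a:           # first independent branch
        x += 1
    if b:           # second independent branch
        x += 2
    return x

# Statement coverage: these two calls execute every line of f...
assert f(True, True) == 3
assert f(False, False) == 0
# ...but exercise only 2 of the 4 paths (TT and FF). Path coverage also needs:
assert f(True, False) == 1
assert f(False, True) == 2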
Defect Density
If the defect density for the next release's additional code is within the ranges of prior releases, it is a candidate for release, unless test or development practices have improved.
(Chart: defect density by release, Releases 1 and 2.)
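Defect density is usually computed as defects per KLOC; a sketch with made-up release numbers:

new_defects = 18
new_kloc = 4.5                          # thousand lines of code added this release
density = new_defects / new_kloc        # 4.0 defects/KLOC
prior_range = (3.0, 6.0)                # densities observed in prior releases (made up)
is_candidate = prior_range[0] <= density <= prior_range[1]
print(f"density={density:.1f} defects/KLOC, release candidate: {is_candidate}")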
Measuring Quality: Defect Seeding
Using a known quantity to infer the unknown
Challenges
1. Seeding is not easy. Placing the right kinds of bugs in enough of the code is hard.
   – Bad seeding, being too easy or too hard to find, creates a false sense of confidence in your reviews and testing.
     • Too easy: finding the seeds doesn't mean that most or all of the real bugs were found.
     • Too hard: danger of looking past the "Goodenov" line or hunting for things that aren't there.
2. Seeded code must be cleansed of any missed seeds before release. Post clean-up, the code must be retested to ensure nothing got accidentally broken.
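The arithmetic behind seeding (the slide does not state the formula; this is the standard estimate): if testing finds s of S seeded defects and n native defects, the total native count is estimated as n * S / s. A sketch with made-up counts:

S = 20          # defects deliberately seeded
s = 15          # seeded defects found during testing
n = 45          # native (real) defects found by the same testing
estimated_total = n * S / s             # 60 native defects estimated in all
still_latent = estimated_total - n      # about 15 estimated to remain
print(f"estimated native defects: {estimated_total:.0f}, latent: {still_latent:.0f}")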
Measuring Quality: Capture-Recapture
Applies an estimating technique used in predicting wildlife populations (Humphrey, Introduction to the Team Software Process, Addison-Wesley, 2000).
If there are multiple collectors, assign A to the highest collected number and set B to the rest of the collected defects. When multiple engineers find the same defect, count it just once.
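A sketch of the resulting estimate (the Lincoln-Petersen form, as Humphrey adapts it; the numbers are made up): with A and B as above and C the defects found by both, the estimated total is A * B / C.

A = 25          # defects found by the engineer who found the most
B = 20          # distinct defects found by the other engineers
C = 10          # defects found by both (each counted once)
estimated_total = A * B / C                   # 50 defects estimated in all
found_so_far = A + B - C                      # 35 distinct defects actually found
print(f"estimated total: {estimated_total:.0f}, "
      f"still undiscovered: {estimated_total - found_so_far:.0f}")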
Performance Testing
Measures the system's capacity to process load.
Involves creating and executing an operational profile that reflects the expected mix of uses.
• Performance: aims to assess compliance with non-functional requirements
• Stress: identifies defects that emerge only under load
• Endurance: measures reliability and availability
Ideally the system should degrade gracefully rather than collapse under load. Under load, other issues, such as protocol overhead and timing, take center stage.
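A minimal, hedged sketch of stepping up concurrency and watching throughput (Python; handle_request is a stand-in for the real system under test):

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    time.sleep(0.01)        # stand-in for real per-request work
    return True

def throughput(concurrency, requests=200):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(handle_request, range(requests)))
    return requests / (time.perf_counter() - start)   # requests per second

for level in (1, 10, 50):                              # step the load upward
    print(f"concurrency={level:3d}: {throughput(level):8.1f} req/s")

In a real harness the load levels would come from the operational profile, and the interesting signal is where latency and error rates bend rather than the raw numbers here.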